Saturday, December 11, 2010

Supporting Wikipedia

As a student, I find Wikipedia to be an amazing resource for knowledge, especially in the field of Computer Engineering.  The Wikimedia Foundation runs entirely on donations and needs the support of its users to keep the servers running.  To show my support, I donated $20 to help make sure Wikipedia remains a freely available resource for the entire world.

Monday, November 29, 2010

EC2 Micro Hosting Faster than Shared Host

I switched my hosting from a shared hosting provider to an Amazon EC2 Micro instance a while back. The biggest concern I had was making sure the limited resources of the micro instance were enough to run my site. As it turns out, it is more than enough. What the micro instance can't do is handle load that goes on for any extended time. My website doesn't receive too many hits, so this is the perfect use case.

The above graph isn't from my website, but it is from a PHP web application running on EC2. It comes from Google Webmaster Tools and shows the time spent downloading a page (in milliseconds) by the Google crawler. You can definitely see the switch in October from the shared host to the EC2 instance in the page response times.  During this time period, the average number of people accessing the server and the crawl rate stayed about the same.

The bottom line: I highly recommend an EC2 Micro instance for any small website.  While it does cost a little more than a shared host, the benefits definitely make it my top choice.  The downside to a micro instance is that once you have sustained high CPU usage for a certain amount of time, you are severely throttled.  As long as the usage is not sustained, your site will be very responsive.

Monday, November 22, 2010


If you know me, then this will seem very ironic.  Recently, after getting the idea from a friend, I created a website that provides random reasons why you should or shouldn't skip class.  The irony is that I personally never skip class.  It is really meant to be a joke site providing comical reasons for skipping or not skipping class.  The site didn't take very long to create, and as people provide me with more reasons, I will add them to the rotation.

The first version of the site was a PHP-based website that I had up and running in less than 20 minutes.  The second version is a little more robust and runs on Google App Engine, so I don't have to worry about the traffic bogging down my personal web server.  Still, this version of the site was finished in a few hours.  The main problem with a site like this is that people stay on one page and click refresh to see all of the random reasons, which is why I chose to go with App Engine.

It didn't take very long to create the site and since it is App Engine based there will be no real work or ongoing cost to maintain it.  Hopefully college students out there find some entertainment out of the site and share it with their friends.

Friday, October 15, 2010

The ABC's of Google

A is for Android
B is for Buzz
C is for Chrome
D is for Docs
E is for Earth
F is for Finance
G is for Groups
H is for Health
I is for Instant
J is for jQuery
K is for Knol
L is for Latitude
M is for Maps
N is for News
O is for Orkut
P is for Profile
Q is for Quick Search
R is for Reader
S is for Search
T is for Talk
U is for Uncle Sam
V is for Voice
W is for Webmaster
X is for XMPP
Y is for YouTube
Z is for Zeitgeist

Monday, October 11, 2010

Automated Documentation Generation

When it comes to the projects I am working on, I love tools that help me develop better software.  My favorite open source developer website is Ohloh, which provides insights into code statistics.  Since my projects are hosted on Google Code and GitHub, code browsing, issue tracking, and wiki pages are already covered. The missing link has been generating source code documentation such as Javadocs.

All of my projects are hosted in a Subversion or Git repository, so it takes just a single command to get the most up-to-date code.  Generating the documentation on my server allows for an automated approach to keeping up-to-date documentation on my website.

A few of my projects are written in Java, so generating Javadocs is exactly what I did.  The difficulty in this task was getting my server to compile the Android and App Engine projects.  After some tinkering, I managed to get the correct files and configuration set up on my server.  When executing the javadoc command, it is necessary that the project can be compiled.  The resulting documentation is excellent and matches the standard for almost all Java projects.

I have also created a few open source PHP web applications.  For these projects, phpDocumentor provided a way to generate documentation that is similar to Javadocs but specifically geared towards PHP.  The documentation for these PHP applications is not as robust as the Java documentation, mainly due to the nature and design of the language.  My PHP-based projects are lacking in function and class header comments, making these documents less useful.  In the future I may find some time to refactor and fully document these projects.

Lastly, I have some open source C# applications.  These are my best-documented applications because of my use of StyleCop, which enforces strict coding standards.  Older versions of Visual Studio had built-in functionality to generate HTML documentation; this was replaced by NDoc, which in turn was replaced by Sandcastle.  These solutions have the major downside of only running on Windows, while my website runs on Linux.  Given those limitations, I went with Doxygen to generate the C# documentation.

Doxygen is capable of generating HTML-based documentation from a variety of languages without the need to compile the code.  After some customization, Doxygen generates robust HTML documentation.  While it would be possible to generate the Java and PHP documentation using Doxygen as well, it is not as finely tuned for those languages.  For the specific requirements of documenting a C# application, it is the best solution.
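Doxygen reads its settings from a Doxyfile, and any tag left unset falls back to a default, so a scripted run only needs to override a few tags. Here is a minimal sketch; the project name and paths are hypothetical placeholders, not my actual configuration:

```shell
# Generate a default template if doxygen is available (the overrides
# below still produce a usable partial Doxyfile without it)
command -v doxygen >/dev/null && doxygen -g Doxyfile >/dev/null || true

# Append project-specific overrides; later assignments take precedence
cat >> Doxyfile <<'EOF'
PROJECT_NAME     = "MyCSharpApp"
INPUT            = src/
RECURSIVE        = YES
GENERATE_LATEX   = NO
OUTPUT_DIRECTORY = docs/
EOF

# Run non-interactively (a no-op here if doxygen is not installed)
command -v doxygen >/dev/null && doxygen Doxyfile || true
```

Because the overrides are appended after the generated template, re-running the script from cron always keeps the custom settings in effect.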

The current implementation regenerates the documentation for my open source projects every night from a single script.  My next goal is to write some logic that checks for updates every night but only regenerates the documentation if changes were made to the source.  Overall, I am very pleased to have found a solution for all of my projects and will continue to improve my commenting and documentation practices.

Monday, September 20, 2010

Switching to EC2 Hosting for my Personal Website

I had been using a GoDaddy shared hosting account for my personal website since I created it, but recently changed things up. With Amazon announcing their EC2 Micro instances, cloud-based hosting came within my price range. The cost of running the VM is $0.02 per hour, totaling around $14.40 per month. There are some other costs, including storage and bandwidth, but they will likely total less than $2.00 per month. What it comes down to is that I have my own personal install of Linux to host my website!

This does come with some downsides.  The VM is slow, specifically the CPU.  It comes with 613 MB of RAM, which is plenty for my purposes.  Since my website doesn't receive that many visits, speed is not a major concern of mine.  However, I have determined that the micro instance isn't powerful enough to handle my install of ThinkUp because the database is just too large.  This is disappointing, but I will create a new install on my Linux box and run it locally.

The reason I can justify paying twice as much for hosting is the benefit of having root on the Linux box.  While it means I have to do more Linux administration, it also means I can run whatever software I want!  Specifically, I have set up a personal SVN server that I am using for class projects.  I use both Google Code and GitHub for open source projects, but my class projects never had a home until now.  So far it has worked out with no problems, and WebSVN provides me with a useful way to browse and analyze my code.

I have migrated my backup script to the new setup and switched to using s3cmd to transfer files to Amazon S3.  With the additional services provided on the server, the script now includes a backup of the SVN repositories and the server configuration files.
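The extended backup can be sketched as a short script; every path and the bucket name below are hypothetical placeholders, not my actual setup:

```shell
#!/bin/bash
# Sketch of the nightly EC2 backup: archive each piece, bundle into one
# file, and ship it to S3. All paths and the bucket are placeholders.
WORK=backup_work
mkdir -p "$WORK"

# Archive the SVN repositories and server configuration (each step is
# skipped quietly if the directory does not exist on this machine)
[ -d /var/svn ] && tar -czf "$WORK/svn-repos.tar.gz" -C /var/svn . || true
[ -d /etc/apache2 ] && tar -czf "$WORK/apache-config.tar.gz" -C /etc apache2 || true

# Bundle everything into a single file for transfer
tar -cf backup.tar -C "$WORK" .

# Ship it off-site with s3cmd (a no-op here if s3cmd is not installed)
command -v s3cmd >/dev/null && s3cmd put backup.tar s3://my-backup-bucket/backup.tar || true
```

Run from cron, this gives the same hands-off behavior as the original shared-host script, just with the extra server pieces included.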

While this does mean that I will be paying more, the additional features should justify the cost.

Monday, August 30, 2010

Almost there...

This is my last fall semester as a college student.


Wednesday, August 4, 2010

Automated Generation of Javadocs for Open Source Android Applications

While Java is not my favorite language, it has its benefits. With one Android application already posted on the market and another application in development, I decided to start using Javadocs a little more seriously. The process of generating Javadocs is not that complicated using Eclipse, but that is not the solution I wanted. My goal was to automate the generation and posting of the docs as the source code changed.

My shared web host was suitable for hosting the HTML files, but not for running javadoc to actually generate new documents. The solution I came up with was to automatically download the latest code from my public repository, generate the Javadocs, and then upload the HTML files to my shared host. The computer executing this code is a Linux virtual server that I have sitting around. The process is actually very simple:
  1. Clean up any files from the previous run of the script
  2. Download the latest source code from the repository using svn export. Notably, you can use svn export with GitHub since they support accessing repositories over the SVN protocol. Awesome!
  3. Generate the javadocs based off of the freshly downloaded code using the desired parameters.
  4. Copy the newly generated javadocs to the desired server. For my purposes, secure copy was the best solution. With my server's public key installed on the shared host, I was able to log into the remote box without prompting for a user name and password.
The final step in the process is to simply run the script nightly with a cron job, and the Javadocs will always be up to date. Since I was generating documentation for Android applications, it was important that the Android jar file be located on the server and that the javadoc command be made aware of its location. Without this jar file, the generated Javadocs would be incomplete. In general, the Java code must actually compile on the computer where the javadoc command is run.
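The nightly scheduling comes down to a single crontab entry along these lines (the script and log paths are hypothetical placeholders):

```shell
# m h dom mon dow  command  -- runs every night at 2 a.m. and appends output to a log
0 2 * * * /home/user/bin/generate-javadocs.sh >> /home/user/logs/javadoc.log 2>&1
```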

Here is the bash script with the file paths changed to protect the innocent:

cd /path/to/files/docs/
rm -rf ampted.svn
rm -rf ampted

svn export https://example.com/svn/ampted/trunk   # hypothetical URL; substitute your repository
mv trunk ampted.svn

JAVADOCHEADER='<a target="_top" href="">Android Mobile Physical Therapy Exercise Documenter</a>'
JAVADOCFOOTER="Generated on `date`"
javadoc -private -header "$JAVADOCHEADER" -footer "$JAVADOCFOOTER" -d /path/to/files/docs/ampted/ -sourcepath /path/to/files/docs/ampted.svn/android/src/ -subpackages com.AMPTedApp -classpath /path/to/files/lib/android.jar

scp -r /path/to/files/docs/ampted user@sharedhost:/path/to/files/docs/   # hypothetical destination

This process has one point where it could definitely be improved: it always overwrites the Javadocs even if the source code did not change. A check that compares the revision number of the previously generated documentation against the repository, and only regenerates and uploads a new copy when the repository is newer, would eliminate this wasted effort. That is not a major concern for small projects, but may need to be fixed as my projects grow in size.
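The revision check could be sketched like this, assuming a Subversion repository; the URL and file names are placeholders:

```shell
#!/bin/bash
# Sketch: record the last-built revision and skip regeneration when
# nothing changed. REPO_URL and STATE_FILE are hypothetical placeholders.
REPO_URL="https://example.com/svn/ampted/trunk"
STATE_FILE=last_revision

# True when the current revision differs from the one recorded on disk
needs_rebuild() {
    current="$1"; state="$2"; last=""
    [ -f "$state" ] && last=$(cat "$state")
    [ "$current" != "$last" ]
}

# Ask the repository for its latest revision (skipped if svn is missing)
if command -v svn >/dev/null; then
    current=$(svn info "$REPO_URL" 2>/dev/null | awk '/^Revision:/ {print $2}')
    if needs_rebuild "$current" "$STATE_FILE"; then
        # ... regenerate the javadocs and scp them to the host here ...
        echo "$current" > "$STATE_FILE"
    fi
fi
```

The state file costs one extra line per run and turns the nightly job into a no-op whenever the repository is unchanged.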

You can see the docs for AMPTed, a project that is in the very early stages of development.

Nothing to do, so what will I accomplish?

There are two and a half weeks before my last fall semester starts, and I have very little to do. I have a few things on my calendar, but it is generally empty. So, what am I going to do with all of this free time? Simple: write lots and lots of code. Actually, my plan is to work on several projects while I still have the time; it just happens that most of these projects involve writing code.

The main projects that I will be working on include creating DPX Answers for DyKnow Panel Extractor, creating the foundation for AMPTed App, fixing bugs and making small improvements to OpenNoteSecure, and implementing the NAESC Conference registration website. These are the high-level items on my to-do list. I plan on making a low-level to-do list to get my ideas organized.

It is rare for me to have this much free time, so I plan on putting myself to work and making some major progress. I've already managed to write quite a bit of code, including some major improvements to other projects that are not on the above list. I just need to find a nice quiet place to sit down, start working, and not move until I have a plan of action.

Monday, July 26, 2010

Some Web Server Management and a Plan for Backups

It has been quite a while since I spent time administering my personal websites. My sites are hosted on GoDaddy's shared hosting, which isn't as bad as some of the reviews make it out to be. The big thing I have been putting off is implementing a reliable, automated backup system. My previous strategy was to simply dump the databases and copy down all of the files once a month, if I remembered. It would not be easy to replace the content of my websites if it were lost.

The first step in developing my backup strategy was to clean up my content and current installs. I deleted some web applications and code that I had been playing around with but never did anything with. Once that was done, I made a backup of everything by hand and upgraded all of my web apps to the latest versions. I was then ready to develop my automated system.

The next step was to get a local backup onto the web server itself. This was done through SSH shell access and a shell script that performs all of the necessary steps. Two things need to be backed up: the databases and the actual files. The MySQL databases are simple to back up by running mysqldump, compressing the output, and storing it to a file. The files can be backed up with a simple tar command, which also compresses them down to a reasonable size.

Once all of my databases and files were compressed and organized, I packaged them up into another tar file, producing a single-file final backup. This script was set to run as a cron job, and the automated backup process was halfway complete. The only thing left was to find a way to transfer the backup off-site.
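The steps above can be sketched as a minimal script; the database name, credentials, and paths are hypothetical placeholders, not my real values:

```shell
#!/bin/bash
# Sketch of the nightly local backup on the shared host.
STAMP=$(date +%F)
WORK="backup-work-$STAMP"
mkdir -p "$WORK"

# 1. Dump and compress the MySQL database (skipped if mysqldump is absent)
command -v mysqldump >/dev/null && \
    mysqldump -u backup_user -pSECRET mysite_db | gzip > "$WORK/mysite_db.sql.gz" || true

# 2. Archive and compress the site files (skipped if the path is absent)
[ -d /home/user/public_html ] && \
    tar -czf "$WORK/files.tar.gz" -C /home/user public_html || true

# 3. Package everything into one final single-file backup
tar -cf "backup-$STAMP.tar" "$WORK"
```

Pointing a cron job at a script like this produces the dated, single-file backup that the off-site transfer step then picks up.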

My first thought was to copy the backup to my personal Linux server, and I eventually automated this using scp. While it worked, it just didn't seem to be the solution I was looking for. The solution I went with was to store the backups on Amazon S3 using s3-bash. S3 provides very cheap storage and is easily accessed using open source tools, which made the process of transferring files painless. My estimates place the total cost of the backups stored on S3 at less than $0.40 a month!

Deciding to use a paid service meant it would not be logical to store all of my backups indefinitely, so I needed a plan for how long to keep each backup, along with some way to delete backups that were no longer needed. The solution I came up with was extremely simple. The backup script runs every night and generates and transfers the complete backup, about 45 MB, to the S3 servers. The backup created on the first of each month is kept for a year, protecting me against long-term problems. Additionally, I keep a backup for each day of the week, protecting me against short-term data loss. After 12 months of operation, I will have a total of 19 backup files that continue to be replaced as time goes on. The old backups never need to be explicitly deleted, because uploading a file with the same key (or file name) overwrites the older version, thereby deleting the old backup.
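The key names for this rotation can be computed directly from date, which makes the overwrite-by-key scheme almost free; the bucket name below is a placeholder:

```shell
#!/bin/bash
# Weekday names give 7 rotating daily keys; month names give 12 rotating
# monthly keys: at most 19 files total. "my-backups" is a placeholder bucket.
DAY_KEY="backup-$(date +%A).tar"            # e.g. backup-Monday.tar
echo "daily key: $DAY_KEY"
# s3cmd put backup.tar "s3://my-backups/$DAY_KEY"

# On the first of the month, also store the long-term monthly copy
if [ "$(date +%d)" = "01" ]; then
    MONTH_KEY="backup-$(date +%B).tar"      # e.g. backup-July.tar
    echo "monthly key: $MONTH_KEY"
    # s3cmd put backup.tar "s3://my-backups/$MONTH_KEY"
fi
```

Since each upload reuses an existing key, the old copy is replaced in place and no separate deletion step is ever needed.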

My backup script has only been running for a few days, but I am very pleased with the results. I still want to do some testing to ensure that my backups are comprehensive, but initial inspection reveals no problems. This set-it-and-forget-it approach is exactly what I was hoping to implement.

Friday, July 9, 2010

The Finish Line is in Sight - Summer 2010

This semester is unique in the number of projects and papers I will have completed. Two of the projects are for the classes I am taking this semester, and the other two are papers that I hope to have published. I've already heard good news about one of the papers and will learn the fate of the other in August. First, my class projects...

OpenNoteSecure is an Android application that I created for my CECS 564 Cryptology term paper. The goal of the application was to store information securely on an Android phone using encryption. The paper I have written about the project, Storing Encrypted Plain Text Files Using Google Android, is almost finished, and I will post it to my website after the semester is over. This is the first Android application I have created, and it is available on the Android Market. I have learned a lot about using cryptology libraries and developing Android applications through this project.

This semester I am also taking IE 563 Experimental Design, a class that has proved to be very useful. My project for that class was testing the accuracy of Windows' handwriting recognition software. The source code I used (other than the database schema) was included as part of DPX and is available in the Subversion repository. The paper about the project, tentatively titled An Analysis of Type II Errors Using Windows Handwriting Recognition on Individual Words and Numbers, covers the statistical analysis of the data I collected from handwriting samples. After this semester is over, I also plan on posting this paper, along with the source code, executable, and some instructions, to my website.

The paper that I coauthored with three of my classmates, comparing my group's capstone project to another similar project, was accepted by CGames 2010. Our paper, Comparing Multiple Game Engine Designs To Develop A Unified, Abstract Layer For Supporting Multiple Game Play Scenarios, discusses a method for abstracting the common elements of our game engines and proposes a new design pattern. Both projects were unique in that they implemented game engines, and the similarities and differences between the two are very interesting. We will attend the conference here in Louisville, where we will present our paper. Luckily, the conference falls after the semester is over, so there will be time to prepare our presentation and attend.

Lastly, and probably what I am most proud of, is the paper that I submitted to WIPTE 2010. Titled A Method For Automating The Analysis Of Tablet PC Ink Based Student Work Collected Using DyKnow Vision, my paper discusses a tool I developed, DPX Grader, for automatically extracting handwritten scores from panels. My goal is to use what I learned from the handwriting recognition analysis in the class project mentioned above to develop a tool I am calling DPX Answers, which extends what I have done with DPX Grader and aims to semi-automate the grading of student work submitted using a Tablet PC. This is rather ambitious, but I am excited to start major development during the break between semesters. I hope to present this application alongside DPX Grader at WIPTE 2010, should my paper be accepted.

There is just over a full week left in the semester, and I am not finished yet. My class projects are incomplete, and a few other smaller projects and homework assignments stand between me and the end of the semester. Back to work!

Thursday, July 1, 2010

My First Android Application: OpenNoteSecure

Yesterday I published my first Android application, OpenNoteSecure (which is open source), to the Android Market. It is a simple application that stores encrypted text files on your phone using AES or DES. I built it for my Cryptology project this semester as a demonstration of securely storing information on a phone.

I will eventually have a paper analyzing the security of storing information on an Android phone using the encryption implemented in the application. Once I finish the paper at the end of the semester, I will post it online. It is a fairly simple project, but learning how to develop Android applications has been worth the time.

You can download the application using the QR code. To my surprise it has already been downloaded a few times. If someone finds this application useful it will have been worth the effort.

Wednesday, June 2, 2010

The State of Development

I have been doing quite a bit of development lately, and since most of what I do is open source I thought I would provide some type of update. I have managed to keep up development on some of my projects, but others have not received any attention.

Development on this project essentially stopped after the spring semester was over. (Such is the development cycle of a capstone project.) However, Card Surface is a fairly functional application that is actually playable. While it is not a fully polished project, it does work. Version 0.0.3 is available for download and includes Blackjack. I am not sure what is in store for the future of this game, but I am very proud of the work that we have done.

This is the most active project I have been working on lately, as the deadline for WIPTE 2010 paper submissions quickly approaches. My focus has been on two new components. The first is a complete rewrite of the deserialization method used to read DyKnow files. My new approach maps the XML components directly onto class properties and uses the built-in Microsoft serialization functions. I have also created a collection of unit tests to validate the accuracy of my implementation on a variety of DyKnow files. The second is DPX Grader, a new program that reads in and analyzes graded student work. DPX Grader is able to recognize text and output a CSV file. This new application is the focus of the paper I will submit to WIPTE.

This project has mostly been at a standstill since I announced it on my blog. It is installed on SSSC's server but has not been used yet. Hopefully it finds some use in the new fiscal year.

Two main improvements have been made to Seeker recently. The first is a modification to the assignment algorithm that increased the previous-contract restriction from one to four, meaning you can't receive a contract on someone you have had a contract on within your last four contracts. The other improvement is to the leaderboard: monthly and semesterly leaderboards are now available for each month and semester, respectively.

There really has not been any work on this tool recently. My plan is to submit improvements to it as a capstone project for the CECS department this fall. It may be a long shot, but a large number of improvements need to be made, and the generalization of some components and the removal of some of the views need to be completed.

This tool has had no major development recently other than the release of a new version of the application. There are a number of outstanding issues and several possible improvements, but it is not high on my to-do list.

This is my first attempt at an Android application, and it is very early in the development cycle. I was actively developing it and learning the Android SDK up until the start of the summer semester, and I hope to resume development when I find some free time. This is also my first project hosted on GitHub.

What is Next?
There is one new application that I will start developing in the next two months. SSSC is hosting the 2011 NAESC National Conference, and a website to manage registration needs to be developed. While PHP and MySQL are my obvious choice for a web application, I have been looking at Google App Engine as a possibility. This is likely not going to be a big project and will have a very focused goal, but it will definitely be open source.

Back to work...

Thursday, May 13, 2010

Proud University of Louisville Graduate

Four years after entering college, and actually slightly ahead of schedule for Speed School, I graduated with my Undergraduate degree in Computer Engineering and Computer Science on May 8, 2010 from the University of Louisville Speed School of Engineering.


Wednesday, April 28, 2010

One Week with an Android

I have been using my Nexus One for about a week now, so I actually have some opinions! First things first: my favorite app by far is AppBrain. AppBrain syncs the list of all of the applications installed on my phone to the web. I am able to see other people's lists and can even queue up new installs from their site.

What I miss about my iPhone

I will admit, there are a few things my iPhone did better than my Nexus. The first thing I miss is a handful of applications that are not yet on Android. This is not a major problem, but still something I miss. The other thing I miss is Audible books; however, I have heard rumors they are working on bringing Audible to the Android platform.

I am also trying to forget how my iPhone worked. It has been said that it is easier to use an Android phone if you have never used an iPhone. I am frustrated with how many taps it takes to place a phone call: it took a total of 7, including unlocking the phone, on my iPhone, while on the Nexus I have been up to 20 depending on what menu I am in. Spending more time with the phone will definitely make these kinds of frustrations go away.

What I love about my Nexus

The best feature is by far the various Google applications. Sadly, I have not used my phone as much as I would have liked. Every waking moment for the past week has been dedicated to completing my capstone project.

The biggest change I have made so far is giving up my Zune for listening to podcasts; I have started using Google Listen for all of my podcast consumption. Needing a stand-alone podcatcher has been one of my biggest complaints about using a Zune or an Apple product.

The Conclusion

I am happy with my decision to go with an Android phone and really like the Nexus One. My next step is to start developing apps for it. I have the SDK downloaded and have started looking at some sample projects. Hopefully I will have something useful within a few months. I also have some exciting long term plans to develop an Android application!

Monday, April 26, 2010

Capstone Project: CardSurface

After four months and 15,031 lines of C# code, my capstone project is finally complete!

CardSurface is a card game engine designed to work on a multitouch screen and allow players to play a card game in a somewhat natural way. The engine itself is client-server based and allows multiple table clients to access the same server. Face-down cards are viewable on mobile devices through a web-based interface.

Our project took an entire semester's worth of work, but it ended up being a huge success! We implemented not only a GUI-based table client using the Microsoft Surface SDK, but also a command-line client that is able to connect to the server and play a game.

Our engine is designed to support any turn-based card game. For our demonstration we implemented Blackjack, although Poker would be a better showcase of the engine's features.

The biggest accomplishment of our engine is that the client itself has no knowledge of the game it is playing; it requires the server to provide an updated game state after each move or action performed in the game. Additionally, we support multiple clients and multiple games running on the same server!

There is a lot of room for improvement with this project, and we probably only finished half of what we would have liked to accomplish. However, it is definitely a project I am proud of. The entire code base is available open source as card-surface on Google Code.

I also want to thank Aaron and Kyle one last time for being amazing partners on this capstone project. We definitely went above and beyond what was required and created something that will live on past this project.

Monday, April 19, 2010

Thunder over Louisville 2010

Not only do I attend Thunder every year now, I blog about it (Thunder 2008 & Thunder 2009).

Just to get all of the linking out of the way first, all of my photos are in my Flickr Thunder over Louisville 2010 set and in three Facebook albums: Facebook Thunder over Louisville Album 1 of 3, Facebook Thunder over Louisville Album 2 of 3, and Facebook Thunder over Louisville Album 3 of 3.

I really wish I had time to compose a more detailed blog post, but being the last week of classes means I am crunched for time. Instead, here are some of my favorite pictures:


I have a lot of really good pictures from the air show this year. The weather was perfect for taking pictures!

A historical prospective of our military

Almost looks like something out of a movie

We had to stand there for a few minutes to get the picture, but it was worth it.

I shot fully manual this year and remembered my tripod. I think the pictures turned out better than in previous years.

Some more fireworks...

And even more fireworks...

Tuesday, April 13, 2010

Abandoning my iPhone for Android?

I purchased an iPhone 3G shortly after it was released. It was my first smart phone, and my experience has been something of a love-hate relationship. See What I like about my iPhone 3G and What I hate about my iPhone 3G. While there is nothing wrong with my phone (it has survived undamaged), I have the urge to upgrade. I have a problem with gadgets and need to move on to something new.

My current collection of mobile gadgets includes my iPhone 3G, my Zune 80, and an iPod Touch that gets no use. I use my Zune for podcasts but would be willing to move to another device if it were better than what I currently use. I really should sell my iPod Touch or give it to someone as a present. Just to throw this into the mix: I don't have any plans to buy an iPad, but a good Android tablet would be hard to resist.

The Nexus One, purchased unlocked from Google and working on AT&T, costs $529. That is a lot of money, but since it does not come with a contract, I am willing to pay that price. For how much I use my phone, cost is not a factor.

It really comes down to the decision of what phone to buy. Right now I am thinking about clicking the checkout button on the Nexus One and jumping ship over to the Android platform. This is partially motivated by my desire to develop mobile applications.

Is the Nexus One the right choice for me? If I don't get a good argument against the Nexus One, I'll hand over the cash.

Friday, March 26, 2010

Organization Budget and Finance

It was about a year ago that I started working on my Student Council Attendance tool for Speed School Student Council. This tool was designed to solve the problem of accurately tracking member attendance and to make my job as DoA easier. After successfully putting that tool into use, I was ready for my next challenge: creating a way to transparently track our council's budget, including how we spend our money. I ended up spending the majority of my spring break this year working on my newest open source PHP tool, Organization Budget and Finance.

This is my first web-based application that I designed in such a way that it could very easily be used by other groups or individuals. The design makes no special assumptions about SSSC and simply attempts to fill a very specific need. This application is not designed to be a complete financial tool, or even to be used to balance a budget or an account. I am hopeful that someone will come across this tool and find a use for it.

This tool is used for allocating funds and tracking receipts for specific line items. I have implemented almost all of the major features and hope to have our new DoF put this tool into use very soon. The tool is based around the concept of a line item. This is a budgetary item that can have any number of sub line items, nested as many levels deep as desired or necessary. This essentially creates a tree structure which represents the budget itself. The top level is designed to be used for each year's budget. The next level is used for all of the major events or funded items. The successive levels provide additional detail on how funds are allocated. Multiple funding sources can be allocated to each line item. Any number of receipts can also be associated with line items and represent money spent. There is no direct connection between the funds and the receipts, meaning the allocated funds are treated as essentially one pile of money that receipts deduct from.
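The budget structure described above can be sketched as a small tree of line items. This is only an illustrative sketch in Python (the actual tool is a PHP web application, and all names and amounts here are hypothetical): allocations pool together across an item and its sub line items, and receipts simply deduct from that one pool.

```python
# Hypothetical sketch of the line-item tree: each item can have sub items,
# funding sources (allocations), and receipts (money spent).

class LineItem:
    def __init__(self, name):
        self.name = name
        self.children = []   # sub line items, nested arbitrarily deep
        self.sources = []    # funds allocated to this item
        self.receipts = []   # money spent against this item

    def add_child(self, name):
        child = LineItem(name)
        self.children.append(child)
        return child

    def allocated(self):
        # Allocations pool together: this item's sources plus all descendants'.
        return sum(self.sources) + sum(c.allocated() for c in self.children)

    def spent(self):
        return sum(self.receipts) + sum(c.spent() for c in self.children)

    def remaining(self):
        # Receipts are not tied to specific sources; they just deduct
        # from the pooled allocation.
        return self.allocated() - self.spent()

year = LineItem("2010 Budget")
expo = year.add_child("E-Expo")
expo.sources.append(500.00)            # e.g. a council allocation
expo.receipts.extend([120.00, 35.50])  # e.g. name tags, supplies
print(year.remaining())  # 344.5
```

The recursive totals are why the tree can go as many levels deep as needed: any level of the budget can be summarized the same way.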

The design approach for this tool was to keep it as simple as possible, specifically with respect to the database. However, some interesting features have been built on top of this core set of information. Receipts, sources, and line items can be made private. The reason for this is that not all information should be made publicly available, at least initially. The obvious use of this feature would be hiding items that are not yet finalized, such as next year's budget, or hiding receipts that have not yet cleared the account or whose amount has not been confirmed.

Other features include the ability to search the database for specific receipts or line items. This will be especially useful when trying to find out how much was spent on something in a previous year. The budget pages use custom CSS formatting which allows for easy printing and avoids all of the fancy styling that is part of the web site. Lastly, the entire database can be downloaded in a single click as an XML file for easy backup and data portability.

Why put in all of this work for this tool? In the end it boils down to transparency. I strongly believe that SSSC will benefit from more transparency. It starts with this tool being open source and ends with our budget being available for anyone to look at. In the end, not many people will care how much we spent on pizza at Fall Festival or how much the E-Expo name tags cost for all of the council members. However, certain council members will care about this information, and having an accurate record is priceless.

Our historical records with respect to finances have mostly been lost to time. The real judge of my success will be time. What will the state of these records be 10 years from now? Hopefully I remember to look back and see if I was successful.

Thursday, March 18, 2010

Becoming an Open Source Developer

I have been writing computer programs for a very long time. My first experience was writing Logo programs when I was in elementary school. I have come a long way since then. In high school I began programming in PHP and was able to make functional applications. In college, as a Computer Engineering & Computer Science student, I have completed a wide variety of programming projects, but only recently have I started releasing my code as open source.

Right now I have six projects that I have started, all hosted on Google Code. All of my projects are listed on my website under open source projects. Three of these projects are written in C# and three are PHP web applications. Two of them are school projects that I worked on with other classmates.

Why do I release my code as open source? There are two main reasons. The first is that Google Code provides a Subversion server for storing code, a bug tracker, and a wiki for documentation. The second is that there is no benefit to keeping the code closed source. Releasing the code as open source maximizes the possible benefit of the code that I write. In some cases this is simply making the application available for others to use, but in other cases the actual source code could be used and extended by other individuals in the future. Additionally, I hope that by making these projects open source they may live on past the time when I can focus on their development.

At this point in my career as a programmer, my skill set and reputation are the most important assets I can focus on improving. Any possible monetary benefit of keeping the code I write closed source is offset by the benefits of contributing to the open source community. One of the websites I recently discovered is Ohloh, where I promptly created a profile that includes all of my contributions to open source projects.

Up to this point I have only contributed to open source projects that I started myself. However, I want to start contributing to other projects, at least in a small way. The projects I am interested in use Git, and I am still in the process of learning how to use Git effectively. I will admit that I am partial to Subversion and really like using Google Code. Using Git has been frustrating on Windows, but once I have the time I will start expanding my contributions.

No matter what my future holds for me, I know that developing open source code will be something I continue to do.

Monday, March 8, 2010

NAESC 2010 Best Website

Every now and then the hard work that I do gets recognized. I was not able to attend this year's NAESC National Conference in Austin, Texas, but I did apply for the best publication/website award. As it turns out, we won! So I don't have to explain everything, here is the application we turned in to the conference:

1. Describe the publication or web site's purpose, audience, and design.

The Speed School Student Council website serves as the main public face for the council. The main components of the website include individual member history, a list of events and descriptions, a calendar of events, a photo gallery, and council documents. The website uses the Drupal content management system to provide an easy-to-use navigation structure for browsing all of the content. A new addition to the website is historical meeting minutes dating back to 1948. These newly digitized records, amounting to 1,900 pages of content, are now available. Newly created documents are managed in Google Docs and embedded directly in the website. This allows for easy collaboration and a much easier interface for members to modify and update the content.

2. What is the value of the publication or web site?
The biggest innovation of the SSSC website is the attendance records, committees, and achievements. The information displayed on the website is obtained from a web-based application called Student Council Attendance, an open source application developed by a member of SSSC. All of the members of the council are tracked using this system. Attendance at meetings is taken and automatically made available on the website. This includes automated reporting as members fall into bad standing. Committees and their members are also tracked and available on the website. A fun aspect of the website is Council Achievements, which are based on achievements awarded in video games. Members are awarded achievements for serving on the council and doing various tasks, including attending major events.

3. Please link to a digital copy or image of the publication or the web site.

One of our amazing freshmen, Jeremy Bozarth, gave a presentation on how we use our website to promote our council and encourage involvement. Our website has a wide number of features that are not present on many other councils' websites. While I have not heard all of the details yet, my understanding is that attendance for the presentation was very high and it was an amazing presentation!

Here are the slides from the presentation:

Wednesday, February 17, 2010

Grooming my Social Graph

Google Buzz piqued my interest in my social graph. I knew about Google's Social Graph API, but I had never looked into it before. It turns out that it uses XHTML Friends Network, XFN for short, to declare links between various services. The entire system is built on top of the existing web and uses links that are already in place. When a person links to another person's page, they describe the relationship using the "rel" attribute. There are several defined values for describing your relationship with a link, such as friend, met, colleague, and me.

Depending on how you want to describe your relationship, different values are used in the rel attribute. You can also use multiple values to describe a single relationship. For example, if you have met someone and are their friend, you may use rel="met friend" in a link.

Right now there are a growing number of sites that are using their existing data and simply declaring these relationships. The power comes when big companies such as Google index these pages and can then surface and analyze this information.

The only real way to use this type of information at this point is to correct your social graph. When I looked at mine using Google's Site Connectivity tool, I noticed that my profiles were not all linked together. When you enter the URL of one of your profiles on a service such as Twitter or Digg, or the URL of your personal home page or blog, it uses the public data to connect to all of your other services. Even though I had a lot of established links, my graph simply was not fully connected.

What I needed to do was update all of my profiles so they provided a two-way link using the "me" relationship. This meant each of my profiles on the various services needed to link to one of the pages that I was claiming as me; I chose my blog because it is where I have the most up-to-date content. This is just half of the link that is required. A link from my blog would need to be traced back to each profile. This way both ends are in agreement and can substantiate the same claim. The link does not need to be direct and can pass through several nodes. For example, if my Twitter account claims my blog, my blog claims my Google profile, and my Google profile claims my Twitter account, this provides the same authority as two profiles claiming each other.
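The indirect-claim idea boils down to simple graph reachability. Below is an illustrative sketch in Python (not code from any real XFN tool, and the page names are hypothetical): each page's rel="me" links form a directed graph, and two pages verify each other when each can reach the other by following claims.

```python
# Each key claims the pages in its list via rel="me" links.

def reachable(graph, start, goal):
    """Follow rel="me" claims transitively from start, looking for goal."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page == goal:
            return True
        if page in seen:
            continue
        seen.add(page)
        stack.extend(graph.get(page, []))
    return False

def mutually_verified(graph, a, b):
    # Both directions must hold: a can reach b AND b can reach a.
    return reachable(graph, a, b) and reachable(graph, b, a)

# The cycle from the text: Twitter -> blog -> Google profile -> Twitter.
claims = {
    "twitter": ["blog"],
    "blog": ["google-profile"],
    "google-profile": ["twitter"],
}
print(mutually_verified(claims, "twitter", "blog"))  # True
```

Even though the blog never links directly back to Twitter, the claims can be traced around the cycle in both directions, which is why the chain carries the same authority as a direct two-way link.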

After adding and updating links between all of my profile pages my graph was corrected, but was not updated. Luckily Google provides a tool to force a recrawl of updated pages to correct the graph. The pages you can recrawl are those claimed on your Google Profile, so if you want to correct a link just add it and force the update. I did this for several pages that I then removed from my profile. They were linked to on other places in my social graph, but I did not want them listed on my Google profile.

While it is not yet useful, as more sites support XFN, this may change the way we use the Internet. There are valid concerns about privacy, but the potential applications are very exciting. I'll leave the explaining up to Google:

Monday, February 15, 2010

Google Buzz and the Not-So-Live Web

Google has realized that people want everything to happen instantly, and this is most true when it comes to communication. Instant delivery, however, is not an area where Google's own products shine, even though they have probably done more for instant communication than any other company on the Internet. The product that comes to mind, which no one really cares about, is PubSubHubbub: a protocol that lets instant communication scale in a decentralized way.

What it comes down to is alerting other services when an action is performed. In my case this is most often posting a status message on Twitter. Facebook manages to import this message as my status in only a few seconds. Occasionally it doesn't work, but the delay is typically very small.

Now, one of the benefits of Buzz is that you can import feeds from other services. One of the first things I did was import my Tweets into Buzz. Twitter is great because the things you say are typically time sensitive. Right now, messages take many hours to be imported into Buzz. So when I say I'm watching TV at 9 PM on Twitter, it gets sent out to the world on Buzz at 3 AM and my friends think I'm crazy. You would think that at the very least they would use the post time included as part of the Atom feed.

The real problem probably doesn't boil down to technology. The problem is more likely companies and openness. Twitter believes it has value in keeping their data locked down, at least slightly. This is not going to be the case moving into the future. Users are going to be in control of their data, and if we want two services to integrate and they do not, something is going to have to give.

Wednesday, February 10, 2010

Google Buzz Further Bifurcates the Conversation

I post my status messages to Twitter and have them imported as my Facebook status. I occasionally have people @ reply to me on Twitter, but more often I have people like and comment on my Facebook status messages. With Google Buzz I will have my Tweets imported and syndicated (along with blog posts). This adds another place where my posts will be commented on and liked. While Google was very smart in integrating Gmail with the service to guarantee a large user base out of the gate, it didn't solve the underlying problem and actually made the current environment worse.

If Google Buzz provided some way to bring together the reply to Twitter messages and Facebook status messages along with comments on blog posts and other platforms and combined them with messages posted directly on Buzz, they would have something going for them. However, this doesn't appear to be the case.

While everyone says Buzz competes with Facebook and Twitter they are missing the point. With regard to Facebook, the value of the company lies in the information they know about how people are connected to each other. Google wants this information and this can be seen in their other services such as Google Profile. Buzz will provide an accelerated way for Google to gain information about who we are connected to and this is really the center of the product.

As long as Buzz integrates with a large number of other Google products seamlessly, it will do more good than harm. From how I have seen my friends use the product so far, it looks like a large number of Google Profiles will be filled out as people use the service. This opens the door for social search. This will definitely be a service to keep an eye on as the web continues to evolve.

Saturday, January 30, 2010

Tablet PC Handbook

I have been working for the past two months on a new project. This is my first attempt to launch a website that is dedicated to a specific topic. The Tablet PC Handbook is a wiki-based living book that I have been writing and developing as a resource for information about Tablet PCs. I was hoping to have it more developed by this point, but I have made available what I currently have. Moving forward I will continue to develop more content and fill in the gaps that still exist on the site. There is an associated blog and Twitter account for the project.

Check out the Tablet PC Handbook at

Monday, January 25, 2010

Seeker: Reinventing Assassins

In the past, Speed School Student Council hosted a game called Assassins that was played at Speed School. The concept was fairly simple, and it was an easy, social game that allowed players to have fun during the semester. The rules were straightforward: you were given a target and a secret. The targets were assigned in a circle such that when you found your target, you then had to find their target. The system kept everyone honest by requiring you to enter your target's secret to confirm the "kill." In the end there would be only two people left who would need to find each other. Despite the violent name, Assassins is harmless and is more like a big game of hide and seek.

The concept of the game is brilliant, but there are some major shortcomings. The main problem is that the game was designed to be played with pencil and paper: every player was given an index card with their target, when they found their target they took that person's index card, and in the end the player with the most cards won. However, Assassins in practice was played on the web, requiring players to memorize or carry their password with them and go back to a computer to tell the system they had found their target.

Less than two weeks ago I was talking to Mike and Alex about the game, and it turned out that if we slightly changed the rules, some interesting things happened. We played out what would happen and it seemed to work. The first problem is the circle of targets: in the end, two people have each other. Additionally, the targets are not secret, because the person you eliminated has no reason to keep quiet. Another problem is that once you are found you are out of the game; no more fun for you. Lastly, requiring players to use a website to progress in the game takes you out of the experience. I believe I managed to fix all of these shortcomings with Seeker.

The first change is to issue contract targets at random. The first side effect is that multiple people can have the same target. No problem: the first one to reach the target gets credit. When issuing a contract there are some limitations: you won't be issued someone who has you as their target, you won't be issued your previous target, and you won't be issued someone who is not in the game. This still provides a large degree of randomness in the game. When you are eliminated, you fail your contract, and if someone reaches your target before you do, you fail your contract as well. Since contracts continue to be issued, this is not a problem. When you are eliminated or found, you are only out of the game for a short period, in this case 24 hours. This respawn time is taken straight from the way video games work. To keep the game moving, contracts expire after 72 hours.
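The contract-issuing rules above can be sketched in a few lines. This is an illustrative Python sketch, not code from Seeker itself (which is a PHP application), and the function and player names are hypothetical: pick a random target, excluding yourself, your previous target, and anyone currently hunting you.

```python
import random

def issue_contract(player, players, active_contracts, previous_target=None):
    """Pick a random valid target for player.

    active_contracts maps seeker -> target for contracts currently in play.
    Returns None if no valid target exists.
    """
    # Never target yourself or your previous target.
    excluded = {player, previous_target}
    # Never target someone whose current contract names you.
    excluded |= {seeker for seeker, target in active_contracts.items()
                 if target == player}
    # Only players still in the game are candidates.
    candidates = [p for p in players if p not in excluded]
    return random.choice(candidates) if candidates else None

players = ["alice", "bob", "carol", "dave"]
contracts = {"bob": "alice"}  # bob is currently hunting alice
target = issue_contract("alice", players, contracts, previous_target="carol")
print(target)  # "dave" is the only eligible target here
```

Because multiple seekers may draw the same target, the first to reach the target gets credit and the others' contracts eventually fail or expire, which keeps contracts flowing without the rigid circle of the original game.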

The last problem is how to play when you are not around a computer. While some players have smart phones, it is not a guarantee. However, a large fraction of college students have text messaging plans. ZeepMobile provides a free, web-based text messaging API that was used to allow players to play the game. Letting players get their current secret and target, and complete contracts, via text message makes the game more transparent to the player. Additionally, players get text messages when they receive a new contract, are eliminated, or fail a contract.

I managed to code the entire game in a single weekend and we started playing the next week. It is still somewhat a work in progress and I am still working a few of the bugs out, but it seems to be working. The game is open source and you can see the code at seeker-game on Google Code. The best part about Seeker is how simple the concept is. At the core, there is only a table of users and a table of contracts. The most complicated part is keeping the game state updated. Once that is ironed out the rest is up to the players.

Tuesday, January 19, 2010

UofL Ekstrom Library's August 2009 Flood Collection

In August 2009 there was a major flood in Louisville. With my trusty camera I hiked to UofL's campus with some friends and took a lot of pictures along the way. I posted them to my blog (Old Louisville Flood, Flood Damage) shortly after the flood. I posted all of the photos to Flickr under a Creative Commons license and submitted them to the library for their digital archive.

The library posted the August 2009 Flood collection recently and it is worth taking a look at. However, if you only want to look at a few photos, look at the photos I submitted to the collection.
