Here at Lextech, we have internal training events a couple of times a month. We take occasional surveys to find out which technologies people are interested in learning about, then figure out who among us knows them or might be interested in picking them up. These teachers are then cajoled into giving a class with the promise of free food. The classes take the form of either a brief overview during lunch or a more in-depth look in the evening.
We’ve all been very excited about the release of the iPhone SDK (official and otherwise), so an evening training event covering that was a no-brainer. After a brief discussion, it became clear that I would be the easiest one to talk into leading the class since I respond well to flattery.
I decided that the focus of the class should be creating extremely intuitive applications that take full advantage of the iPhone's distinctive user interface. Coincidentally, I had just finished up a project that involved controlling a pan & tilt camera from a PC, and this struck me as the perfect application for the iPhone.
Rather than throw an entire application at people and then try to go through all the different pieces, I thought it would be best to take an incremental approach. I presented the application in essentially the same way that I created it: one layer at a time. For the sake of clarity, each layer was shown as a separate Xcode project that built upon the previous one.
These are the different layers I used, in the order I built them:

- Live video display
- A network layer for sending camera commands
- Touch-based pan & tilt controls
- Bonjour auto-configuration
The first task was to display live video from the camera we would be using. This seemed like a great layer to start with, since it provides an immediate usefulness all by itself. It would also be quick and easy to implement, or so I thought.
The output from the video cameras is analog, so we use a video server (an Axis 241Q) to digitize and stream it. It works great and is easy to use; just set it up and enter the URL in QuickTime.
The server software that controls the pan & tilt cameras is Java-based. Fortunately, we already had an experimental AJAX interface to the Java server. All I had to do was create an NSURLConnection to talk to the existing web interface.
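The commands themselves are just HTTP requests against that web interface. As a rough sketch of the idea (in Java, to match the server side; the `/camera/move` path and the `pan`/`tilt` parameter names are invented for illustration, not the real interface):

```java
import java.net.URI;

// Hypothetical command builder for the camera server's web interface.
// The endpoint path and parameter names are illustrative only; the real
// AJAX interface differs.
public class CameraCommand {
    private final String host;
    private final int port;

    public CameraCommand(String host, int port) {
        this.host = host;
        this.port = port;
    }

    /** Build the URL for a relative pan/tilt move, in degrees. */
    public URI moveUri(double panDegrees, double tiltDegrees) {
        String query = "pan=" + panDegrees + "&tilt=" + tiltDegrees;
        return URI.create("http://" + host + ":" + port + "/camera/move?" + query);
    }

    public static void main(String[] args) {
        CameraCommand cmd = new CameraCommand("10.0.1.5", 8080);
        System.out.println(cmd.moveUri(15.0, -5.0));
    }
}
```

On the phone, the equivalent URL would simply be handed to an NSURLConnection.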
Now we get to the part that really shines on the iPhone: the user interface. Our standard GUIs for these cameras have buttons for moving left and right (pan) and up and down (tilt). The obvious iPhone enhancement was to implement these controls with finger flicks and drags, just like scrolling in the native applications. I just had to override the default touch behavior and send commands to the server through the network layer.
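The core of that override is a small bit of arithmetic: turning the drag's pixel delta into a relative pan/tilt move. Here is a minimal sketch, again in Java purely for illustration; the degrees-per-pixel constant is an invented tuning value, and in the real app this logic would live inside the view's touch handlers:

```java
// Hypothetical mapping from a touch drag to camera motion. The scale
// factor (degrees of camera motion per pixel of drag) is an invented
// tuning constant, not a value from the real app.
public class DragToPanTilt {
    static final double DEGREES_PER_PIXEL = 0.25; // assumed tuning value

    /**
     * Convert a drag delta (in pixels) into a relative {pan, tilt} move
     * in degrees. Screen y grows downward, so the tilt axis is flipped:
     * dragging up tilts the camera up.
     */
    public static double[] panTiltFor(double dx, double dy) {
        return new double[] { dx * DEGREES_PER_PIXEL, -dy * DEGREES_PER_PIXEL };
    }

    public static void main(String[] args) {
        double[] move = panTiltFor(40, -20); // a drag to the right and up
        System.out.println("pan=" + move[0] + " tilt=" + move[1]);
    }
}
```

Each computed move is then sent to the server through the network layer described above.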
The metaphorical cherry-on-top for ease of use was auto-configuration. Instead of the user needing to know what cameras were available and, even worse, having to type in the addresses and other configuration data, we used Bonjour to make it all automagical.
Apple provides some very nice Java interfaces for Bonjour. It was quite simple to set up the Java server to broadcast information about the cameras using a custom service type. The application is set up as a listener for that service, and when the user launches it they are presented with a real-time list of all the cameras available on the local network. All they have to do is tap the one they want to control.
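The per-camera details ride along in the service's TXT record, which DNS-SD defines as a sequence of length-prefixed key=value strings (RFC 6763). Here is a sketch of that encoding in Java; the keys are invented examples, and in the real server the registration itself would go through Apple's com.apple.dnssd classes:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of building a DNS-SD TXT record: each key=value pair becomes one
// length-prefixed string (per RFC 6763). The keys used below ("name",
// "addr") are invented examples of per-camera configuration data.
public class TxtRecord {
    public static byte[] encode(Map<String, String> pairs) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Map.Entry<String, String> e : pairs.entrySet()) {
            byte[] entry = (e.getKey() + "=" + e.getValue())
                    .getBytes(StandardCharsets.UTF_8);
            if (entry.length > 255) {
                throw new IllegalArgumentException("TXT entry too long");
            }
            out.write(entry.length);          // one-byte length prefix
            out.write(entry, 0, entry.length); // then the key=value bytes
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        Map<String, String> info = new LinkedHashMap<>();
        info.put("name", "Lobby Cam");
        info.put("addr", "10.0.1.5");
        System.out.println(encode(info).length + " bytes");
    }
}
```

The resulting bytes are handed to the registration call along with the service type, and every listening phone sees the camera's details the moment it appears on the network.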
The class went even better than I could have hoped. We all learned a bunch of cool stuff that's going to give us a real head start on developing iPhone applications. To try out their newly acquired skills, the class attendees even extended the application I presented, adding a pinch interface for zooming the camera. I was a bit surprised when, once they'd fixed their compilation errors, it worked perfectly on the first run. It was a pretty impressive demonstration of the platform's unique capabilities.
And really, this is what I hope to see more of on the iPhone. Applications that don’t just blandly replicate existing desktop functionality, but instead provide tools conceived within the framework of the revolutionary iPhone interface. Tools that exceed desktop functionality.