Monday, January 28, 2008

Text Expansion: Wasting time trying to save time

Perhaps nothing is more irritating than trying to set up some time-saving software, having problems, and wasting lots of time resolving them. That's one reason why I found OS X so compelling when I first started using it; for the most part things just worked. Rather than fighting with the computer just to get the proper tools in place, I could actually get things done.

For reasons I'll go into some other time, I have been searching for quite a while for ways to speed up my text input. My most recent endeavor was based on the idea of using text expansion to minimize the number of keystrokes I have to enter. As a special education teacher I had worked with Co:Writer from Don Johnston, which does a fine job of text prediction as letters and sentences are typed. Unfortunately, a single license is $325. Since that is far too rich for my blood, I decided to set up a system of abbreviations myself. That can't be too hard, right? Guess again.

I found three programs that work as text expanders for OS X: Typinator, TypeIt4Me, and TextExpander. All are available as free trials, with full licenses costing €19.95, $27, and $29.95, respectively.

All three programs work the same way. They run in the background, watching your keystrokes. When you type a space, a punctuation mark, or another defined key, the program compares the keyboard buffer to the list of abbreviations you have defined. If there is a match, it backspaces over what you have just typed, copies the expansion onto the clipboard, and pastes it into place. A sound can also play when this happens.

To give you an example, I have "ty" defined as a shortcut for "thank you". When I start a new word with "ty" (typing it right after a space or other delimiter) and then type, say, a period, the text expansion program backspaces three times, deleting the "ty" and the period. Then it copies the expansion and the period onto the clipboard and pastes them into place, effectively replacing "ty." with "thank you.". It may sound complicated, but it's really not.
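To make the mechanics concrete, here is a minimal sketch in Python of the matching step just described. The snippet table and delimiter set are my own illustrative examples, not any of these programs' actual internals, and a real expander operates on live keystrokes rather than a string.

```python
# Conceptual sketch of how a text expander resolves an abbreviation.
# SNIPPETS and DELIMITERS are illustrative, not any real program's data.
SNIPPETS = {"ty": "thank you"}
DELIMITERS = set(" .,!?;:\n")

def expand(buffer: str) -> str:
    """Given the keystroke buffer ending in a delimiter, return the
    text that should replace it (or the buffer unchanged)."""
    if not buffer or buffer[-1] not in DELIMITERS:
        return buffer  # no delimiter typed yet, nothing to check
    word, delim = buffer[:-1], buffer[-1]
    # Only the portion after the previous delimiter is compared.
    for d in DELIMITERS:
        word = word.rsplit(d, 1)[-1]
    if word in SNIPPETS:
        # "Backspace" over the abbreviation, then "paste" the
        # expansion followed by the delimiter that triggered it.
        return buffer[: -len(word) - 1] + SNIPPETS[word] + delim
    return buffer

print(expand("I said ty."))  # → "I said thank you."
```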

The product pages for each program tend to emphasize the use of abbreviations for larger snippets of repetitive text like form letters. My usage goal was a little different. I wanted to use very short abbreviations for very common, but sometimes also very short, words. As I learned in teacher school, if you know the 100 most common words in the English language, you can read (or write) 50% of all elementary text. One of the popular lists of words by frequency is Fry's First 100, named for its creator, Edward Fry. I figured that would be a good starting place for my abbreviations. Of course that is where the trouble started.

The Fry 100 Word List

I began simply enough. I had a text file of Fry's List with one word per line. The programs all had options for importing text files, so I started typing abbreviations after each word, with a comma in between. If a word was only one letter (like "I") or not easily abbreviated (like "in"), I deleted it from the list. For very common and short words I used one-letter abbreviations ("t" for "the", "n" for "and", etc.).

Unbeknownst to me, there were several problems with this. First, the programs would accept tab-delimited but not comma-delimited text. I had to search and replace all my commas with tabs; not too big a deal. Next, however, I discovered that I had put the abbreviation and the expanded text in the reverse order. I didn't want to retype all of that (though it probably would have been faster in the long run), so I found a simple Java program that read in a comma-delimited file and wrote it back out, and modified it to swap the columns. Unfortunately, after all of this I still had a problem with the text encoding: the text expansion programs would not accept Unicode, so I had to resave the file in a different encoding.
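For anyone facing the same chore, the column swap and comma-to-tab conversion can be sketched in a few lines of Python (I used Java at the time; this is just an equivalent illustration, and the sample words are made up):

```python
def convert(lines):
    """Turn 'word,abbreviation' lines into tab-delimited
    'abbreviation<TAB>word' lines, skipping blank lines."""
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        word, abbrev = line.split(",", 1)
        out.append(f"{abbrev}\t{word}")
    return out

sample = ["the,t", "and,n", "", "because,bc"]
print("\n".join(convert(sample)))
```

To sidestep the encoding problem, the result could then be written out with a non-Unicode encoding, e.g. `open("abbrevs.txt", "w", encoding="mac_roman")`; which encodings the expansion programs actually accept I only worked out by trial and error.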

After all this conversion I finally had my abbreviations loaded into TextExpander. The program installs itself as a System Preferences pane and has a nice interface with some advanced features. You can decide on a snippet-by-snippet basis whether to type the delimiter and how to treat uppercase letters. I started using the program while writing emails and blogging. Whenever I encountered a new word that I use a lot, I would add a snippet if there was not one already. It was gratifying to hear the little beeps as I typed, knowing that I was saving keystrokes each time the sound played.

But my troubles were not over. For some reason my one-letter abbreviations were not working. It turns out that TextExpander and Typinator set a minimum of two letters for an abbreviation. While TextExpander correctly highlighted my snippets in red if I accidentally created duplicates, it did not flag the one-letter snippets. This limitation eliminated much of the benefit of text expansion as I was using it. Fortunately, TypeIt4Me allows single-letter abbreviations, but changing programs led to another problem.

I had used TextExpander for a while and added some 50 new expansions. Once again I had new abbreviations to transfer into a different program. TypeIt4Me's "Open File..." would let me choose the TextExpander file, but no new words would appear. I took a look at the two programs' abbreviation files; both are plain-text XML in the standard Apple plist format, even using the same names for most attributes. However, TypeIt4Me capitalizes the first letter of each name while TextExpander does not, and XML is case-sensitive. In this case, close did not count.

In my stubborn refusal to do data entry when something is already in a computer, I ended up with another time-consuming solution. I took the TextExpander XML file and used XSLT to parse out each abbreviation and expansion and write them to a tab-delimited text file for import into TypeIt4Me. I'll try to be an optimist and imagine that somewhere, someday, all this foolishness of mine will be useful to someone else.
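The same extraction could be sketched without XSLT using Python's standard plistlib module. Note that the key names below ("snippetsTE", "abbreviation", "plainText") are illustrative stand-ins; the real files' attribute names (and, as noted above, their capitalization) vary between the two programs.

```python
import plistlib

# A toy TextExpander-style XML plist, with made-up key names.
SAMPLE_PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>snippetsTE</key>
  <array>
    <dict>
      <key>abbreviation</key><string>ty</string>
      <key>plainText</key><string>thank you</string>
    </dict>
  </array>
</dict></plist>"""

def to_tab_delimited(plist_bytes, list_key="snippetsTE"):
    """Pull abbreviation/expansion pairs out of an XML plist and emit
    tab-delimited lines suitable for import elsewhere."""
    data = plistlib.loads(plist_bytes)
    return "\n".join(
        f"{snip['abbreviation']}\t{snip['plainText']}"
        for snip in data[list_key]
    )

print(to_tab_delimited(SAMPLE_PLIST))  # → ty<TAB>thank you
```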

I have since gotten TypeIt4Me set up to my liking. I have a shortcut key to toggle it on and off, and another to add a new abbreviation. My abbreviation file has grown to over 200 items. I have also learned not to type too fast after a replacement is triggered, or sometimes I end up typing in the middle of the copy and paste.

TypeIt4Me has a nice feature where it tracks the number of expansions done and keystrokes saved. As you can see below, it will be a while before I make up the hours spent mucking around with these programs, but I did get to polish up my Java and XML knowledge and eventually solve my problems.
TypeIt4Me shows how many keystrokes have been saved.

Friday, January 25, 2008

C64 Movie Maker for the 21st Century, Review of Anime Studio

Back in "the day" (1985 to be exact) my friends and I had the state-of-the-art gaming machine, the Commodore 64. In addition to classic games like Alternate Reality, Archon, M.U.L.E., and the Ultima series there was a great animation program for us aspiring young movie makers entitled, fittingly, Movie Maker by Electronic Arts.

Movie Maker character editor.

That simple 8-bit program running on a machine with 64k of RAM (yes, 64 kilobytes, not even one megabyte) allowed us to draw the different stages of movement of an animated character, situate them over a background we could also draw, and specify changes to make them come alive. Despite the low resolution it was a step up, both in productivity and movement quality, from the Betamax camcorder stop-motion animation we were also attempting.

In recent years I have returned to those roots for the occasional hobbyist animated short. My default methodology evolved around the tools at hand, Photoshop and iMovie. I would start with a scanned drawing of a complete scene and chop up the characters into Photoshop layers, moving, rotating, or scaling them as needed. Then I would make the appropriate layers visible and save the image off as a JPEG to be imported into iMovie. The occasional transition or Ken Burns effect, some sounds and music, and voilà, a movie.

While working on a similar project of late I began to long for the good old days of computer animation. I imagined a program that would simply allow me to place images as objects in a scene, move, rotate, or scale them, and take a snapshot to use as a frame of the movie. It'd be even better if it could interpolate the results for smooth animation. My final wish would be to slice up the image of a character into parts that could be moved independently.

I was happy to find that Anime Studio by e frontier makes all of the preceding not only possible, but surprisingly easy. The program is available for Mac OS X and Windows and comes in two versions: the basic version sells for $50, and a Pro version, which adds some 3d capability, sells for $200. I reviewed the downloadable demo (30-day trial) of Anime Studio 5.5 on a PowerBook G4, 667 MHz, with 1 GB of RAM running OS X version 10.4.11.

The Good

For basic animation, AS made things easy. Simply import the image of your character or draw one from scratch using the vector drawing tools provided. Then add a skeleton layer, defining the bones by simply clicking and dragging to draw them. The bones are automatically associated with the body part they are drawn over. Click on the timeline below to choose a frame, then move the character by manipulating its bones. The program immediately interpolates the intermediate frames. Using the right and left arrow keys you can scrub through the new animation.
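The tweening the program performs between those keyframes amounts to interpolation. As a conceptual sketch (not Anime Studio's actual algorithm, which I have no insight into), linear interpolation of a bone angle between two keyed frames looks like this:

```python
def lerp(a0, a1, t):
    """Linearly interpolate between two values; t runs from 0 to 1."""
    return a0 + (a1 - a0) * t

def bone_angle_at(key0, key1, frame0, frame1, frame):
    """Compute a bone's angle at `frame`, given keyframed angles at
    frame0 and frame1. A conceptual sketch of tweening only."""
    t = (frame - frame0) / (frame1 - frame0)
    return lerp(key0, key1, t)

# A bone keyed at 0° on frame 0 and 90° on frame 10 sits at 45° on frame 5.
print(bone_angle_at(0.0, 90.0, 0, 10, 5))  # → 45.0
```

Real animation packages typically use smoother curves (ease-in/ease-out) rather than straight lines, but the principle of filling in the intermediate frames is the same.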

It is equally easy to do other effects like moving, resizing, or rotating a layer, or moving or zooming the camera. Each time a change is made a new dot appears on the timeline. These can be selected and moved around to refine the animation or deleted if a mistake was made.
Anime Studio's user interface.

More advanced features are also available. Objects are arranged in layers, which can be given a Z value to determine their order as they pass by each other or the camera moves. They can also be set up as masks much like you would use in Photoshop. This can enable pretty sophisticated interactions.

One great feature of AS is the ability to add a soundtrack to your animation. I used Quicktime Pro to create a simple voice-over and added it to my project. The waveform of the audio immediately appears overlaid on the timeline. This makes it much easier to sync the character's movement to the sound file.

For lip-syncing, AS uses the concept of Switch Layers. These allow more than one image to be associated with a single position and switched out as needed. To generate the mapping between mouth positions and sound, a free program called Papagayo is available. This is an open-source GPL application supported by Lost Marble software. I'm not sure what the relationship is between e frontier and Lost Marble (if any), but one of the tutorials uses Papagayo for lip-syncing. I was not incredibly impressed with the results, nor with the fact that only the small selection of included mouths can be used by the program, but it is nice that this feature exists at all in a low-end product.

The tutorials built into the AS user manual go through most aspects of the animation process, logically building from the very basics to more advanced topics. All of the files are included in the Anime Studio directory, so it is also possible to skip some of the steps and simply load the finished product to see how it all came together. After spending a few hours with the tutorials I felt pretty comfortable with the program.

There are more tutorials and user forums available online, which is plenty to get you started. The 30-day trial is fully functional except for the ability to export the final video. A preview export is available, but it stamps a "Free Demo" watermark on the file.

In addition to the tutorial files, the program has the option of importing sample characters, props, backgrounds, and more, that are included with the program. The sample below was created using an included background and characters. The explosion movie came from one of the tutorials. I added the beaker and cylinder and the sound. Putting this short animation together only took a few minutes.

Sample movie.
The Bad

I do have some gripes with Anime Studio. In perhaps 30 hours of use the program crashed three or four times. Granted, my test system is rather old and underpowered, but it can be very frustrating to work on a project for some time and then lose that work due to a crash.

Even less forgivable is that the tutorials, while having excellent content, frequently refer to options and features not available in the demo. For example, they tell you to do a preview to see gradient shading, but this option does not exist. The worst part is that e frontier has been aware of the documentation problems for over a year (via messages on their forums) and hasn't bothered to fix the problems. Some of the later tutorials are entirely focused on options that only exist in the Pro version.

As far as I could tell, the option for exporting a sample movie was completely undocumented. I found on the forums that pressing F5 does the trick; however sometimes this keystroke was ignored. I believe the secret was to make sure a preview was not already open in another window before pressing F5.

The Ugly

For all its goodness, Anime Studio also has a lot of simple interface problems that, if fixed, would make it much more pleasant to use. In general, the windows and icons look like they came from a 1990s OS 9 application. The dialogs are not standard OS X Cocoa windows, and the icon images are not only ugly but also very confusing. Some of the tools' icons are virtually identical. I found it easier to memorize the keyboard shortcuts for each tool than to try to decipher the corresponding icons.

Anime Studio icons and dialog window.

Similarly minor but annoying issues were the program's handling of zoom and the layout of the windows. By default there are windows for Tools, Styles, the Timeline, and the main display window. On my 1024x768 display, three of these four overlap. If I am using the timeline, I cannot see the play, rewind, etc. controls in the main window. To resize the main window I have to hide or resize the timeline first. The preview is resized along with the main window, so maximizing it causes it to take up the whole screen and be obscured by the other windows. I much prefer the Photoshop or Safari style where a maximized window simply expands until everything is visible. This would, for me, make the zoom tool much more intuitive.

Obviously maintaining a cross-platform application has its tradeoffs. In this case the look and feel are decidedly un-Mac-like.


On the whole, I found Anime Studio very easy to use for 2d animation. It was good enough, in fact, that I will be purchasing the basic version. Sometimes I went into the app with a quick idea I wanted to try out and would find myself several hours later still at the computer, working on other ideas that had occurred to me. Being able to move drawings with the skeleton effects, sync with a soundtrack, put objects on layers, and see the whole thing smoothly animated is exactly what I was looking for. If you have similar cravings, I recommend giving Anime Studio 5 a try!

Thursday, January 24, 2008

Before Touch Screens, Multitouch Mice

As much as I would like for there to be a sub-$1,000, tablet-like, touch-screen Mac, the economics of it just don't work yet. The company Axiotron previewed its Modbook over a year ago and just started shipping them (supposedly). Still, the price of $2,300-2,500 is prohibitive. A Wacom 12.1" touch LCD runs a grand and weighs over four pounds. Unfortunately, I don't think there is enough magic at Apple Labs to deliver the product I crave; however, an intermediate step may be entirely plausible and could ship soon. Imagine grafting together a slightly rounder, flatter Mighty Mouse, a MacBook Air trackpad, and the guts of a Wii controller.

The ideal device that I envision is admittedly a bit ambitious and futuristic, but there are variations on the theme that keep it more practical. First, imagine an iMac G3 "puck mouse" (shudder) without the cord or button. Overlay on this surface the multi-touch, gesture-sensitive trackpad that debuted recently on the Air. For just moving the cursor around, it is much more convenient to have something physically moving than to try to rub a trackpad just the right way; that is where the mouse nature comes into play. Because of its roundness, it would be better if the mouse were inertially sensitive rather than relying on optical movement over a surface; that is where the Wii-like internals would be used. The orientation wouldn't affect the direction of cursor movement: you could move it around without worrying about which way it is facing, avoiding the annoying tendency of the puck mouse to turn in your hand. Eventually, this could lead to hand-held devices being moved in 3d space, though at that point gestures would have to be handled differently.

For the current iteration, however, the surface of the mouse would register taps (mimicking the behavior of standard mouse buttons) but would also allow the use of iPhone gestures: swiping side to side, pinching and expanding, or rotating. Since these gestures are based more on what is currently selected than on the mouse position, it makes sense for that sensitivity to be layered on top of the means of moving the cursor rather than coupled with it.

If an inertially sensitive, orientation-independent version is too ambitious for now, it would be equally plausible to base the design on a slightly flattened Mighty Mouse rather than the puck mouse. This would maintain the standard mouse directionality, and the device could come corded or wireless. It would also eliminate the need for the hardware and software to handle Wii-like position sensing. The basic idea of overlaying gesture sensitivity would be the same.

It may look a little clunky, but the multitouch mouse would provide a new level of interactivity to the Mac interface. It would also leverage the work done on the iPhone and Touch interface and get users used to the "standard" Apple gestures. Until we can get fully touch sensitive notebook or tablet screens, the multitouch mouse would be a welcome step forward.

Friday, January 18, 2008

Television 2.0: Can Apple TV replace cable?

When I watched the Stevenote the other day I was suddenly struck with a notion. Over 125,000 Podcasts, a huge number of YouTube videos, a thousand movies, and 350 TV shows are available through Apple TV's new interface. Meanwhile, Elgato recently released a new version of their EyeTV software for TiVo-like functionality on a Mac. Do the economics and content availability make enough sense that people could replace cable?

The first question is whether there is enough content for this idea to be feasible. Further, how much can be viewed for free, and how much would have to be purchased as movies and TV shows? The numbers above seem impressive. Add the fact that an Elgato tuner will record HD video (which all broadcast signals are supposed to be by February 2009) and stream it to the Apple TV, and you have all broadcast networks, all video Podcasts and iTunes U content, and all of YouTube available for free. The answer then boils down to how much of the free content is of interest to an individual, and how much of what they want is only available on cable networks. If a person is big on movies, they may find the 1,000 available on iTunes to be pretty paltry compared to NetFlix's 90,000 DVDs for rent, plus 5,000 available for viewing on demand (on Windows only).

If content is not an issue, the question remains whether the cost makes sense. Of course markets vary, so I will use my personal experience to estimate the costs involved. First off, cable. Most cable companies currently offer bundle deals with basic cable, internet, and home phone in the neighborhood of $100. Assuming that you can use all those services, that puts the cost of basic cable around $35. Add another $10 for DVR rental (TiVo service costs $12, plus the initial purchase of the hardware), and you have around $45 per month. This buys you programming on all the non-premium channels, which can be recorded and played back easily via DVR. It does not include any pay channels like HBO or any HD programming, and it is probably a very conservative estimate for a cable bill.

In contrast, the initial hardware expense of the Apple TV/Elgato solution is around $330 (40 GB Apple TV: $229, Elgato EyeTV Hybrid: $99). This assumes an existing Mac and wireless high-speed internet. Shows from cable networks cost up to $1.99 per episode. A daily show like, well, The Daily Show, can be purchased as a multi-pass for $9.99 per 16 episodes. Obviously the point at which a person breaks even, if ever, depends on how much paid programming they consume. The average American watches around 30 hours of TV per week [link]. The $45 basic cable expense would not go very far on the iTunes Store toward acquiring 120 hours of programming per month. For four-hour-a-day television viewers, then, this will not be a solution.

My viewing habits, and those of most of my friends, are much more modest. There are two or three shows I try to catch regularly and maybe ten others that I know I enjoy but don't make a point of seeing. In addition, I probably watch two movies per week on average. I believe I could easily satisfy my video entertainment cravings for $40 per month. This would buy me four movie rentals ($2.99 × 2 + $3.99 × 2 ≈ $14), a month of The Colbert Report ($10), and eight assorted cable programs ($1.99 × 8 ≈ $16). This would total about 20 hours of television, plus as much broadcast TV, Podcast, and YouTube'd video as I care for. The leftover $5 per month would take about five and a half years to pay off the initial hardware investment ($330 / $5 per month = 66 months).
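The break-even arithmetic, spelled out (all figures are my own estimates from above; using the exact hardware prices rather than the rounded $330 shifts the answer slightly):

```python
# Monthly iTunes spending: 2 standard rentals, 2 new-release rentals,
# one Daily Show-style multi-pass, and 8 assorted cable episodes.
monthly_itunes = 2.99 * 2 + 3.99 * 2 + 9.99 + 1.99 * 8

hardware = 229 + 99          # 40 GB Apple TV + EyeTV Hybrid = $328
cable_monthly = 45           # basic cable + DVR rental estimate

savings = cable_monthly - round(monthly_itunes)      # $5/month
months_to_break_even = hardware / savings

print(round(monthly_itunes))      # → 40
print(months_to_break_even)       # → 65.6 (about five and a half years)
```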

Would it be worthwhile? The best thing about moving toward the Apple TV/Elgato solution would be an increased democratization of media consumption. I'm sure there are many worthwhile Podcasts that are just waiting to educate and entertain me. Likewise, I have spent hours browsing YouTube videos with perhaps less education and more entertainment. The cable TV shows and movies would be accessible on demand. The broadcast programming would be predetermined by having been recorded, but once captured would also be on-demand. Suddenly the world of video entertainment changes from flipping through "100 channels with nothing on" to making choices from thousands of available options. The old favorites would always be there, but a new breed of programming by independent film companies or John Doe's with camcorders would be equally accessible. For someone like me who values education, diversity, and novelty, this could be the prescription for a video media revolution.

Tuesday, January 15, 2008

Macworld 2008 Predictions and Reality (Tooting my own horn)

The long anticipated Macworld Steve Jobs keynote has come and gone. Now is the time when people go back to last week's columns and review the accuracy of their prognostications. Since my article was right on, I am happy to do so.

A fairly large part of the speech, and a good number of my predictions, involved Apple TV. Indeed, it is gaining the ability to buy and rent movies directly, and the price was also reduced (to $229, squarely in the middle of my $199-$250 prediction). The answer to the question of content was surprisingly positive. Eleven studios signed up! That's far more than anticipated. I also did not expect the availability of HD content, but I know this will be applauded in many quarters. Unfortunately, there were no announcements about non-purchase TV shows, but maybe those networks will fall into line if movies take off. One other great selling point with the Apple TV and iPod/iPhone pairing is that rented movies will transfer between them, even remembering where they were paused.

Next up, I predicted mention of the iPhone SDK and possible third party app announcement. We did get a mention of the release of the developers kit and then new software updates including GPS-like functionality on Google Maps. As expected there was no new iPhone model or pricing announced.

About new laptops I said, "Make it slimmer, lighter, maybe update the color design.... A fairly minor new feature that could dramatically change the usability of the laptops would be the incorporation of the iPhone style multitouch to the MacBook's trackpad," and, "If, as rumors suggest, it does not have a DVD drive or conventional hard drive (using Flash instead), this would make for a very slim, light system indeed." The MacBook Air announcement was right on the money, if I do say so myself. The solid state drive is optional (and very expensive), but available.

The Mac Pro was actually updated before Macworld, and hasn't received a visual makeover, but otherwise this prediction also came true.

The unexpected announcement was Time Capsule, an AirPort Base Station with a built-in hard drive. Priced at $299 or $499 for a half or full TB, this should prove to be a popular product.

Overall, my expectations were met and slightly exceeded. The biggest disappointment? Whatever happened to "One more thing"?

Friday, January 11, 2008

Tech Stock Insanity

It sits there, waiting quietly, a testament to over-exuberance, maybe greed on the down side or idealism on the positive side. I stumble upon it from time to time and reminisce about what might have been. What I am referring to is my My Yahoo! stock market widget with my portfolio from 1999.

At the time I had a lot of extra money to invest. Following stocks became a hobby of mine. Being in IT myself, tech companies were the main ones I investigated and invested in: Yahoo, Cisco, Intel... I got in right before the bubble burst. Within two months or less the values had doubled. It seemed the market would keep rising forever. Then, less than a year from my initial purchases, all of the stocks had plummeted below where I had bought them. Some still have not recovered their November 1999 value.

Overall, the stock market has been a great investment opportunity. Broader indexes put the annualized return of the market around 10% since its inception. Isolating the tech-heavy Nasdaq, however, shows a different story. In March 2000, the Nasdaq composite index passed 5,000. In the nearly eight years since then, it has not broken 3,000. What is the reason for this volatility? I have some theories but more questions than answers.

First, I would guess that most day traders are computer savvy. Therefore, they are more likely to invest in tech stocks. These individual investors may be subject to letting the hype and excitement surrounding new technological products or businesses overshadow their true value. Part of this theory could be tested by comparing the individual versus institutional holdings of tech and other stocks over time. I believe the tech stocks would generally have higher individual holdings but also would show greater variability in this percentage over time as traders en masse buy and sell on whims.

Another reason for volatility seems to be the passing of arbitrary milestones. For example, the late '99 Yahoo stock price went from under $100 per share to over $200. Once that somehow magic number was passed, people began to question whether that valuation was really justified. Many people began to get worried or simply had seen enough appreciation to want to cash in, and the massive sell-offs began.

The stupidity of this is that price per share is really an arbitrary number. Double the number of shares and suddenly the price drops by half; this is exactly what happens when a stock splits. Companies also often buy back stock in order to increase the individual share value. There is really no difference between a stock going from $75 to $80 and one going from $94 to $100. These milestones make no objective difference but seem to have a great effect on traders' actions. Again, this could be tested fairly easily by looking at transaction volume at key values.
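To make concrete why the per-share price is arbitrary: a split multiplies the share count and divides the price, leaving the company's market capitalization, and every holder's total stake, exactly unchanged.

```python
def split(shares, price, ratio=2):
    """Apply an n-for-1 stock split: more shares at a proportionally
    lower price, with identical total market value."""
    return shares * ratio, price / ratio

shares, price = 1_000_000, 200.0
new_shares, new_price = split(shares, price)

print(shares * price)          # → 200000000.0 (market cap before)
print(new_shares * new_price)  # → 200000000.0 (unchanged after the split)
```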

Another very annoying tendency is for a company to come out with an earnings report that beats projections, only for the market to lower the stock's value because it didn't beat estimates by enough, or for some other unknown reason. I don't understand how a company can lose 5% or more in market valuation for making the profits that everyone expected.

Despite these problems and annoyances I am back in the market in a small way. I have one stock-based IRA remaining. It often seems best to do the opposite of my inclination, but then those few times I guess correctly have me thinking that somehow this foolishness will all make sense, and I can figure it out. I just have to remind myself that it is ultimately irrational and hopefully only invest money I can afford to lose.

Wednesday, January 9, 2008

Hello, Computer?? Apple can you hear me?

There is a classic (among geeks at least) scene in Star Trek IV where the crew of the Enterprise has traveled back in time to the late 20th century. Chief Engineer Scott sits down in front of a Mac computer and says, "Computer. Computer?" Getting no response, Dr. McCoy helpfully hands Scott the mouse, which he holds like a microphone: "Hello, Computer??" The befuddled 1980s Earthling standing by finally tells him to just use the keyboard ("How quaint"). [Watch on YouTube] While the Mac was not able to respond to voice input, it did bring the mouse-driven graphical user interface revolution to the masses.

Moreover, when Steve Jobs introduced the Macintosh over 20 years ago, his demonstration, in part, "let the computer do the talking" via a synthesized voice reading a short speech. This was quite a feat in 1984. However, very little has changed in speech synthesis between then and now. Why has there been so little advancement in voice-based computer interfaces in over two decades? Are the factors finally in place for the next interface revolution to truly put the Personal in PCs? The answer may be yes, and the company poised to lead that change is once again Apple.

The answers to the first question are many. Primarily, voice-based interfaces have stagnated not due to technology constraints but because of a lack of demand. The niche market has been served by software vendors like Dragon Systems (now owned by Nuance), who have been able to do voice recognition since the days when the 486 processor ruled. Current examples showing the feasibility of voice recognition include voice dialing, available on even the most inexpensive cell phones, and the Sync system for making phone calls and controlling digital music players in Ford cars. The lack of demand on desktop systems in the past is largely due to the fact that the majority of computer use took place in the cubicle farms of the American office. Voice interfaces would not fit very effectively into that environment.

As more people spend more time online at home, speech-based interactions make more sense. In addition, many people compose numerous emails, and blog, chat, or Twitter daily. All of these applications would be well served by dictation software. Further, the generation of people who have grown up with computers keeps growing. While older people, as a general rule, may be less comfortable with technology, kids and young adults have no aversion to talking to a machine. Perhaps the time is finally right for someone to take this seemingly logical next step in computer interfaces.

If the time is now the company may be Apple. Buoyed by an amazing chain of products since Steve Jobs regained the helm, Apple has shown a repeated ability to take existing technologies and polish and package them in a user-friendly way that brings them to more people. The iPod and iTunes have done it for digital music, OS X for Unix, and now the iPhone/iPod Touch for mobile computing. History has shown Apple to have an interest in improving the user experience. Another major advantage is control of the hardware and software environment and a commitment to open source. Apple has long included built-in microphones on its laptops and all-in-ones. Tweaking these for noise reduction or other speech enhancements would be fairly easy. If they set their engineers to the task, speech could become an intrinsic part of Mac OS.

This is the key to my argument. I don't purport to have done an exhaustive review of the available add-ons that can make a computer voice-activated. Far from it. But that is because this technology should not be an add-on. If I can edit a photo, listen to digital music, browse the internet, and write formatted text using a stock installation of an operating system, I should just as easily be able to search for a file, cue up a song, navigate to a web site, or dictate text without using the keyboard. I'm not saying that the computer should understand complex natural language or that the mouse and keyboard would be replaced entirely. I would be happy to follow a set format for commands, enunciate clearly, and separate each word from the others.
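A rigid "verb object" grammar like the one I'd happily accept is simple to sketch. The snippet below is purely illustrative (the command names and the `handle` function are my own inventions, not any real Apple API): it assumes the speech engine has already produced a text transcript and simply matches it against a fixed command format.

```python
# Hypothetical sketch: dispatch already-transcribed speech against a
# fixed "verb object" command format. The command set is invented for
# illustration; no real speech-recognition API is involved here.

COMMANDS = {
    "open": lambda target: f"opening {target}",
    "play": lambda target: f"playing {target}",
    "find": lambda target: f"searching for {target}",
}

def handle(transcript: str) -> str:
    """Match a transcript against the fixed 'verb object' grammar."""
    words = transcript.strip().lower().split(maxsplit=1)
    if len(words) != 2 or words[0] not in COMMANDS:
        return "command not recognized"
    verb, target = words
    return COMMANDS[verb](target)

print(handle("Play Abbey Road"))      # -> playing abbey road
print(handle("make me a sandwich"))   # -> command not recognized
```

The point of the rigid grammar is that nothing here needs natural-language understanding; a tiny lookup table suffices, which is exactly why a set command format is such a modest ask.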

Mac OS X even includes some support out of the box for voice recognition and computer speech (my computer tells me the time every half hour). The problem is that these features are not highlighted as the way to interact with the computer. Until there is a keynote where Steve Jobs uses spoken commands in a demonstration, or an Apple ad campaign that shows users talking to their computers, consumers (and therefore developers) won't take speech seriously. But if it were suddenly put forward as part of the human interface guidelines, a whole new breed of more usable applications could take hold, and the next generation of computer interface could develop. If only Apple is listening.

Sunday, January 6, 2008

Managing Macworld 2008 Expectations

Tom Hanks is supposedly signed for the highest sum ever paid an actor to reprise his role as Robert Langdon in the film adaptation of Dan Brown's Angels and Demons. I shuddered as I read that news the other day. You see, I've never walked out in the middle of a movie, but The Da Vinci Code is about as close as I have come. Since the new prequel has the same star and director, I won't be seeing it. It's not even that the movie was necessarily horrific. The main problem was that, having really enjoyed the book, I had very high expectations for the movie. As I have come to realize is so often the case, I had set myself up for disappointment by expecting something more or different than what was delivered.

A similar, though less intense, experience was last year's Macworld keynote. As an avid follower of Apple news and technology I was looking forward to announcements that just didn't pan out. The event turned out to be almost all iPhone: no new Macs, no super surprise features for Leopard, not even a Beatles music distribution deal. Sure, the iPhone looked cool, but it was six months away and cost more than I could afford. The things I was looking forward to didn't materialize.

With that experience in mind, here are my down-to-earth expectations for Macworld 2008, and unlike everyone and their sister regurgitating the same predictions, I even have some new ideas to contribute.

Staying down to earth, I will start with what is already known. The iTunes store will get movie rentals. The details to be filled in include how many movies will be available, how much they will cost, and how long the rental period will be. Related to this is a likely update to AppleTV. While it was announced as a mere hobby, it's possible for this simple set-top box to start making inroads into the American living room. The key factors to address are price and content. Price is easy enough to take care of. With a year of Moore's law since its introduction, I expect that AppleTV will be repriced in the $199-$250 range. Content, on the other hand, is a bigger question. Certainly being able to browse, buy, or rent movies directly from the unit should be fairly trivial to add. Hopefully enough studios are on board to make the selection worthwhile. If Apple could deliver TV shows on demand (maybe a 24-hour window) for, say, 50 cents to a dollar, plus movie rentals for $2-3, many people could virtually replace their cable service with on-demand programming for less money.
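To put rough numbers on that claim, here is a quick back-of-the-envelope sketch. All of the figures (episodes watched, rental count, the $60 cable bill) are my own assumptions, not anything Apple or a cable company has published:

```python
# Back-of-the-envelope comparison: hypothetical à-la-carte on-demand
# pricing vs. a typical cable subscription. All figures are assumed.

def monthly_on_demand(shows, show_price, rentals, rental_price):
    """Total monthly cost of à-la-carte TV episodes plus movie rentals."""
    return shows * show_price + rentals * rental_price

# Assume a household watches 30 TV episodes at $0.75 each and
# rents 4 movies at $2.50 each in a month.
on_demand = monthly_on_demand(30, 0.75, 4, 2.50)  # 22.50 + 10.00 = 32.50
cable = 60.00  # rough 2008-era monthly cable bill, assumed

print(f"on demand: ${on_demand:.2f} vs cable: ${cable:.2f}")
```

Under those assumptions a moderate viewer comes out well ahead of the cable bill, which is the whole appeal of pricing shows under a dollar.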
Unfortunately, where the AppleTV really shines for many users is the elimination of physical discs for watching DVD video. This requires technically breaking copyright law with a program like Handbrake. Apple can't really advertise this great selling point without incurring the wrath of the film industry.

Almost certainly the Software Development Kit for third-party applications will get some attention. It has been promised for a February release. I could see Steve Jobs bringing out some developers who have had preview access to show off new iPhone functions. Maybe a Google app like Google Earth?
I don't expect a new model or change in pricing. People are still coveting the current iPhone, so why mess with it?

Notebook Updates
The idea of a subnotebook, superthin laptop, or Newton-like Mac tablet is getting a lot of attention on rumor sites. The MacBook line could use some revamping, but in keeping my expectations in line, I'm not looking for anything dramatic. Imagine the change from the Titanium PowerBook to aluminum. Make it slimmer, lighter, maybe update the color design (most new Apple products tout the black and silver styling of the iMac). A fairly minor new feature that could dramatically change the usability of the laptops would be the incorporation of iPhone-style multitouch into the MacBook's trackpad. While a touch screen would be much more dramatic, the added price and practical difficulties, like keeping it clean, make that concept less likely.
One area that everyone seems to be missing is the shrinking of recent Apple keyboards. The most recent wireless keyboard in particular is little more than ultra-thin keys atop an ultra-thin sheet of aluminum. It has even dispensed with the numeric keypad and separate cursor keys. I mention this because I could see a thinner MacBook having more of the guts of the system behind the screen, like an iMac, with the ultra-thin keyboard flipping down. If, as rumors suggest, it does not have a DVD drive or a conventional hard drive (using Flash instead), this would make for a very slim, light system indeed.

Mac Pro
Just based on the age of the current Mac Pro design it would make sense for there to be an update. Little has been said about this, so speculation is basically up to the imagination. I could see a new, smaller case and modest speed increases. I won't be in the market for a Mac Pro any time soon, so this is not that important to me.

Overall I have set some conservative, realistic expectations for the upcoming keynote. The main thing I'm hoping for is much more Mac in this year's Macworld. I'll be anxiously awaiting "one more thing" and, unlike with some movie franchises, even if not every expectation is met I will keep watching year after year.

Saturday, January 5, 2008

Nintendo Wii: Good but Not Too Good

Sales reports have consistently shown the Nintendo Wii to be leading the pack when it comes to current generation game consoles. Back when the Wii was just the conceptual "Revolution" I predicted and hoped that it would indeed revolutionize gaming with its user interface innovations. The Wii has been successful because it is good but not too good.

The most obvious interpretation of this statement involves the price/performance trade-offs that console manufacturers face. While Sony and Microsoft chose to continue escalating the technical specifications of their hardware, Nintendo took a middle-ground approach. The Wii outputs 480p widescreen rather than true HD, let alone 1080 resolution. It has a DVD drive but doesn't play movies (let alone Blu-ray or HD DVD). In all respects the system has less power than the competition, but by choosing lower hardware requirements Nintendo was able to deliver a more affordable, smaller console.

A less apparent application of good but not too good is an aspect of human nature that I believe will foretell near-term advances in virtual reality (VR). On the commentary for one of the early CGI movies (it may have been Shrek, but I don't recall) the animators talk about a phenomenon, now commonly called the "uncanny valley," whereby people started to dislike the characters if they became too close to real. It seems the human mind is happy to place itself in a state of suspended disbelief when what it is experiencing is clearly unbelievable. We don't watch a Road Runner cartoon and complain that there is no way the coyote could survive that fall. The problem for movie makers occurred as animated characters started approaching reality. At that point people would look at them and know that something was "not right" but not necessarily be able to put their finger on it. The computer graphics had passed the threshold of being obviously fake but had not yet reached the point of being believable. They were too good for their own good and actually had to be made less realistic.

The same logic can be applied to virtual reality and the Wii. Nobody would claim that waving around a remote control truly gives you the same experience as swinging a tennis racket at a ball or slicing a goblin with a sword. Yesterday I was reading about haptic interfaces. The Webopedia article states, "For example, in a virtual reality environment, a user can pick up a virtual tennis ball using a data glove. The computer senses the movement and moves the virtual ball on the display. However, because of the nature of a haptic interface, the user will feel the tennis ball in his hand through tactile sensations that the computer sends through the data glove, mimicking the feel of the tennis ball in the user's hand." This is certainly far above what the Wii's controller offers. Will this be the next generation of gaming? I don't think so, and the reason is that it defies the good but not too good philosophy. When games start to mimic tactile sensations, they butt up against the "close to reality but just not right" barrier. I'm sure such a device would be interesting to try, but in order to lose ourselves in the experience of a game, just like with a movie, we either need to be in a clearly non-real environment or so totally immersed that it is difficult to distinguish what is and is not real.

It's been over a decade since Pixar introduced us to full-length CGI animation with Toy Story, and movies are just now approaching the use of fully realistic human characters. While VR has also been in development for decades, the Wii gaming console is definitely the largest real-world application of virtual reality concepts. Before the next level of immersive VR is achieved, the industry will have to overcome the problem of being too close to reality without being close enough. In my opinion this will likely take the next ten years. In the meantime there is plenty of opportunity using the current technology for unbelievable games to be incredibly fun.