Quick demonstration of Kinect controlling Google Street View with Adobe AIR:


  • Rotate body right: rotate image right
  • Rotate body left: rotate image left
  • Walk (knee up): progress forward
  • Lean up: look up
  • Lean down: look down
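The gesture mapping above boils down to a decision function over skeleton joints. AS3OpenNI is ActionScript, so this TypeScript sketch only shows the shape of the logic; the joint names, the coordinate convention (y up, z away from the sensor), and every threshold are my own illustrative assumptions, not the AS3OpenNI API.

```typescript
type Vec3 = { x: number; y: number; z: number };

interface Skeleton {
  leftShoulder: Vec3;
  rightShoulder: Vec3;
  leftKnee: Vec3;
  rightKnee: Vec3;
  head: Vec3;
  torso: Vec3;
}

type Action =
  | "rotate-right"
  | "rotate-left"
  | "step-forward"
  | "look-up"
  | "look-down"
  | "idle";

function detectAction(s: Skeleton): Action {
  // Body rotation: turning right brings the left shoulder closer to the
  // sensor (smaller z) than the right shoulder.
  const twist = s.rightShoulder.z - s.leftShoulder.z;
  if (twist > 0.15) return "rotate-right";
  if (twist < -0.15) return "rotate-left";

  // Walking in place: either knee raised close to torso height.
  if (Math.max(s.leftKnee.y, s.rightKnee.y) > s.torso.y - 0.2) {
    return "step-forward";
  }

  // Leaning: the head shifts toward or away from the sensor relative
  // to the torso.
  const lean = s.head.z - s.torso.z;
  if (lean > 0.1) return "look-up"; // leaning back
  if (lean < -0.1) return "look-down"; // leaning forward

  return "idle";
}
```

In the real demo this function would run on every skeleton frame and drive the Street View pan/tilt/move calls; keeping it pure like this makes the gesture thresholds easy to tune and test.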

This is a quick demonstration of leveraging the Kinect for a NUI with Street View. The example above uses AS3OpenNI (shout out) with Google Street View in Adobe AIR.

There is a new tutorial on The Tech Labs that deals with runtime skinning of Adobe AIR applications and performance considerations. I wrote the tutorial to expose some general best practices for runtime skinning and to detail performance considerations. I hinted at the idea of native application skinning, but I wanted to dedicate a blog post to that topic, which follows below. The basic idea behind native application skinning is going beyond service-based applications to truly integrated desktop applications that leverage OS-specific user experience expectations. When we look at the applications we use day to day, we develop expectations about how they look, function, and interact with us. It becomes easy to tell the difference between a widget and a desktop application. The main factors for integration are color and style, integration of per-OS user interaction paradigms, and visual controls and implicit hot keys. Let's look at a couple of Twitter clients to give a visual description of what I mean.
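As a rough illustration of what the tutorial means by runtime skinning (in Flex the actual loading would go through something like StyleManager.loadStyleDeclarations), here is a minimal, language-neutral sketch in TypeScript. All names are hypothetical; the cache is the performance consideration in miniature: fetch a skin bundle once, reuse it on every later switch.

```typescript
// A "skin" here is just a named bundle of style values resolved at
// runtime, so the look can change without recompiling the application.
type Styles = Record<string, string>;

class SkinManager {
  private cache = new Map<string, Styles>();

  // `load` stands in for the expensive runtime fetch (e.g. a CSS or
  // SWF style bundle pulled off disk or the network).
  constructor(private load: (name: string) => Styles) {}

  getSkin(name: string): Styles {
    let skin = this.cache.get(name);
    if (!skin) {
      skin = this.load(name); // pay the cost once per skin
      this.cache.set(name, skin);
    }
    return skin;
  }
}
```

The design point is that switching skins at runtime should cost a map lookup after the first load, which is exactly the kind of performance consideration the tutorial talks about.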

Above we see three applications – Tweetie (Mac client), TweetDeck (AIR client), and Twitterrific (Mac client). When visually describing an application, we want to concentrate on how it provides experiences and expectations similar to the other applications we use on our desktop. Tweetie is a fully integrated native application that keeps with Mac UI and UX. It leverages a large header to add user controls such as search and navigation, and it has a preferences menu and multiple hot keys that can be accessed from the toolbar. Application controls and configuration preferences are basic expectations of an application's user. When we look at TweetDeck, we see a SWF wrapped in a standard chrome window from AIR. These native windows are OS specific, but you lose the ability to leverage the header for user controls and advanced application name management. When using the application, the first-time user has to hover over icons to get descriptions, and the hot keys are not available on the toolbar. When examining Twitterrific, we have tons of hot keys that are unknown unless you read the documentation. There is no preferences integration, and it can really be defined as more of a widget than a desktop application. So we see that the platform on which an application is built does not define how it meets native integration. We as AIR developers can do anything and are really unlimited in the experiences we create with our applications. This idea of unlimited design is good and bad: the good is that we can create truly brilliant applications; the bad is that it takes more time and development just to reach the level of integrated native style and expectations.

The idea is that Adobe AIR developers should be creating applications like Tweetie that really leverage the native UI and complement user workflow. It is on us as developers to leverage the work that Microsoft and Apple have already done in creating user expectations and workflows, and use it to create a truly integrated user experience. I can say that the current way we use the computer is not the best, but at the same time it is what we use, and unless you are creating something revolutionary, you should build your applications on user-expected experiences. The New York Times desktop reader application highlights how an AIR application can be integrated successfully on the desktop:

Adobe did an excellent job of leveraging the header to manage user controls and really created a sweet application name style. The user notification on updating of articles is awesome; the only thing I would like to see changed is the window controls, which always seem disabled. This shows an enterprise application that meets my expectations as a user. I would like to see hot key support in the toolbar, but this is a great step and example.

I want to see more and more AIR applications that are not just widgets, dashboard apps, or SWFs wrapped in native chrome, but instead are an extension of the native OS – fully functional, integrated, kick-ass applications. I have been working on a skin library to support OS X, XP, and Vista in Flash. Each has its own challenges, but I am looking to see whether what I am talking about above makes sense, or whether I am just a crackpot. If you are interested, drop me a line or write a rebuttal telling me I do not know what I am talking about – I look forward to the challenge.
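A library like the one described would first have to decide which platform skin to apply. A minimal sketch, assuming the OS string comes from something like AIR's flash.system.Capabilities.os (passed in here so the logic stays testable); the function name and the generic fallback are illustrative:

```typescript
type Skin = "osx" | "xp" | "vista" | "generic";

// Map the runtime's OS description to one of the supported skins,
// falling back to a generic skin on unrecognized platforms.
function selectSkin(osString: string): Skin {
  const os = osString.toLowerCase();
  if (os.indexOf("mac") !== -1) return "osx";
  if (os.indexOf("xp") !== -1) return "xp";
  if (os.indexOf("vista") !== -1) return "vista";
  return "generic";
}
```

Keeping the fallback explicit matters: an AIR app can land on platforms the skin library was never drawn for, and a generic skin degrades more gracefully than a wrong one.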

Being both a developer and a designer, I have recently come to the conclusion that chances are someone has done it before and someone has used it before. What this means is that many of us know the websites we like, the tools we use, and the interactions we expect. The development community has built thousands of paradigms that encapsulate user needs and provide distinct user patterns. We have compiled a knowledge base on our users over the years in order to develop better, stronger, faster applications. Some companies have a UX team and a UI team, treating each as a separate entity. By today's definition, the UX is based on the UI and the experience that the UI provides to the user. When I think of this I wonder: does the user not already have an expectation of their experience before they even begin? Are these expectations not in line with other users' expectations and with how users experience other applications? I understand that the RIA is a new way to experience web applications, but it comes with the expectation that the browser experience will be similar to what users expect from the desktop.

These questions have me trying to discern a new development strategy that incorporates what we already know about the UX and builds the UI off of that, not vice versa. It is just a term for now, and really a simple hybrid, but I would like to develop a framework around this strategy: using predefined interactions and expected movements / results as the prime motivation for how an application is designed and developed – the interactions are there, now let's build the logic. I would like to see Flash Catalyst as a first step, but I think this also depends on the company, developer mindsets, and development practices. When designing the UX and UI, we understand that we are providing an application where the user expects (1-2-3-4) and, when executing (1-2-3-4), expects (a-b-c-d). So when the user initiates (a), there may be an expectation that (e-f-g) is ready and instant – an expectation of more from less. This is the ability to define the design from expectations that follow discernible patterns.

When creating a search application, the user expects an input box and a “search” button. When the button is clicked, the user expects the input box and button to still be present and the results to load below them. There is no expectation of how the results load (the motion or effects of adding them), but users expect the results to load quickly and the first result to be the most important (key). The results are paginated, with a total showing how many articles or pages are available for the search. These are the user's expectations, and for a good user experience they must be met. Google has taken this one step further by instantly adding new articles on the searched topic (live) – this is the (e-f-g).

We do not develop an experience that is off-center from basic paradigms, nor do we develop interactions that create a new experience outside current paradigms, unless we educate the user about the advantages and efficiencies. Educating the user about a new experience is key to creating a new expectation the user can bring to the interface, rather than the user forming an expectation only by using the interface. By following this pattern you can develop a core structure that can be built upon to promote advanced user interactions. This does go against the idea of user trial and error – but for something like touch screen integration, while the user expects to use a finger instead of a cursor or keyboard, they must be educated through advertising, friends, or documentation on how any other interaction should work – pinch and rotate. This means a company must develop from predefined expectations unless it is willing to educate and inform. It does not mean all paradigms are correct – most really are not – but they have become user expectations and should be treated as such.
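Those search expectations can be encoded directly in a small results model: most important result first, paginated, with a visible total. A minimal sketch in TypeScript; the names (Result, paginate) are illustrative and the point is that the model mirrors the expectation, not any particular UI:

```typescript
interface Result {
  title: string;
  relevance: number;
}

interface ResultsPage {
  items: Result[];
  page: number;        // 1-based page number shown to the user
  totalPages: number;  // "page X of Y" expectation
  totalResults: number; // "about N results" expectation
}

function paginate(results: Result[], page: number, perPage = 10): ResultsPage {
  // Most important (key) result first, as the user expects.
  const sorted = results.slice().sort((a, b) => b.relevance - a.relevance);
  const totalPages = Math.max(1, Math.ceil(sorted.length / perPage));
  const start = (page - 1) * perPage;
  return {
    items: sorted.slice(start, start + perPage),
    page,
    totalPages,
    totalResults: sorted.length,
  };
}
```

A UI built on this model meets the baseline expectations for free; the (e-f-g) step – live insertion of new results – could then be layered on without breaking them.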