Version 2 of the Motion Nexus plugin has been rebuilt from the ground up. You now have a choice between Kinect for Windows and OpenNI. We are very excited to offer the OpenNI integration, and the NiTE skeletal tracking is truly impressive! We have separated the plugins in order to give you the advantages that each has to offer, and rebuilt them with performance, usability, and extensibility as priorities. We have removed cloud and Facebook integration and will be offering these as a separate premium library; contact support@motionnexus.com for more information. With version 2 we are also offering an ANE for Adobe AIR applications for the best possible performance. The plugin now offers a log and raw-feed window so you can easily see information about the plugin environment and also see what the camera sees. We are excited to be working with the PrimeSense and Kinect teams, and look forward to future libraries and integrations.

The new architecture uses interfaces at its core to support development of third-party libraries that are not dependent on one environment or the other. This structure allows, for example, gesture and UI libraries to be created that work in both environments. Support for multiple users in gaming and interaction environments has been added, along with many new events and utilities to ease development. Thanks for using Motion Nexus, and happy holidays!

The 1.3.6 update included some new functionality that I want to highlight with code snippets and show how to use.

RGB Image

One of the most requested features, this provides a Bitmap image directly from the Kinect camera’s color video. The image is pushed at 640×480, which is the native resolution supported by the hardware. You will use the same MotionNexusCameraEvent as with depth – the only change is in the plugin settings.

//You must turn off the depth image, or the plugin will default to depth
var settings:MotionNexusPluginSettings=new MotionNexusPluginSettings(MotionNexusPluginSettings.USER_LOCAL);
settings.rgbImageEnabled=true;
MotionNexus.pluginSettings=settings;
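
As a sketch of how a frame might be consumed (the event constant RGB_IMAGE and the bitmapData property are assumptions here – check the SDK docs for the exact names), a listener could look like this:

//Reuse one Bitmap on stage and swap its data each frame
import flash.display.Bitmap;

var view:Bitmap = new Bitmap();
addChild(view);

MotionNexus.addEventListener(MotionNexusCameraEvent.RGB_IMAGE, onRgbImage);

function onRgbImage(event:MotionNexusCameraEvent):void
{
	//event.bitmapData is a hypothetical property name for the 640x480 frame
	view.bitmapData = event.bitmapData;
}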

Player Masking

Player masking is essentially a green screen: the depth image with the user cut out. The user is rendered in a solid color and removed from the background, leaving a transparent image. Masked images use the same MotionNexusCameraEvent as depth and RGB.

var settings:MotionNexusPluginSettings=new MotionNexusPluginSettings(MotionNexusPluginSettings.USER_LOCAL,true,false,false,true);
settings.playerMaskEnabled=true;
//settings.playerMaskSettings=[Red,Green,Blue,Alpha];
settings.playerMaskSettings=[120,180,200,1];
MotionNexus.pluginSettings=settings;
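
Because the background pixels are transparent, the masked frame can be composited over any backdrop. A minimal sketch (the PLAYER_MASK constant and bitmapData property are assumptions, and backdrop stands in for your own background image or video):

import flash.display.Bitmap;

addChild(backdrop);            //your own background layer
var maskView:Bitmap = new Bitmap();
addChild(maskView);            //masked user renders on top

MotionNexus.addEventListener(MotionNexusCameraEvent.PLAYER_MASK, onMaskFrame);

function onMaskFrame(event:MotionNexusCameraEvent):void
{
	//transparent background lets the backdrop show through
	maskView.bitmapData = event.bitmapData;
}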

Mirroring

You now have the ability to control the mirroring of the skeleton, depth image, and RGB image. This can be done from the plugin settings.

var settings:MotionNexusPluginSettings=new MotionNexusPluginSettings(MotionNexusPluginSettings.USER_LOCAL,true,false,false,true);
settings.mirrorDepthImage=true;
settings.mirrorSkeleton=true;
MotionNexus.pluginSettings=settings;

Skeleton Events

Skeleton add and remove are now dispatched as events to help you build a clean user experience.

MotionNexus.addEventListener(MotionNexusSkeletonEvent.USER_SKELETON_ADDED, onSkeletonAdded);
MotionNexus.addEventListener(MotionNexusSkeletonEvent.USER_SKELETON_UPDATED, onSkeletonUpdated);
MotionNexus.addEventListener(MotionNexusSkeletonEvent.USER_SKELETON_REMOVED, onSkeletonRemoved);
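
The handlers can keep a lookup of active users so add and remove stay in sync. This is a sketch only – the event.skeleton and trackingId property names are assumptions:

import flash.utils.Dictionary;

var skeletons:Dictionary = new Dictionary();

function onSkeletonAdded(event:MotionNexusSkeletonEvent):void
{
	//track the new user by id (assumed property names)
	skeletons[event.skeleton.trackingId] = event.skeleton;
}

function onSkeletonRemoved(event:MotionNexusSkeletonEvent):void
{
	//drop the user when tracking ends
	delete skeletons[event.skeleton.trackingId];
}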

Speech Recognition Language support

This gives you the ability to provide speech recognition in multiple language scenarios. In order for this to work correctly you will need to download and install the corresponding speech pack http://www.microsoft.com/en-us/download/details.aspx?id=29864. After you have the correct speech pack installed you will want to reference the language identifier from here – http://msdn.microsoft.com/en-us/library/dd318693(VS.85).aspx.

When using the identifier you will need to use the base value. For example, English (United States) is listed as 0x0409, but instead you pass in 409. You will also want to look over the grammar rules to create a correct grammar file – http://msdn.microsoft.com/en-us/library/ms723632(v=vs.85).aspx
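
For reference, a minimal XML grammar following the SRGS rules linked above might look like this (the phrases are placeholders for your own commands):

<grammar version="1.0" xml:lang="en-US" root="commands"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="commands" scope="public">
    <one-of>
      <item>start</item>
      <item>stop</item>
      <item>pause</item>
    </one-of>
  </rule>
</grammar>

The root attribute names the rule the recognizer starts from, and xml:lang should match the language pack you installed.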

var settings:MotionNexusPluginSettings=new MotionNexusPluginSettings(MotionNexusPluginSettings.USER_LOCAL,true,false,false,true);
settings.speechRecognitionEnabled=true;
settings.speechRecognitionGrammarFile='Location of grammar file';
settings.speechRecognitionLanguage = '409';
MotionNexus.pluginSettings=settings;

Speech recognition is a great way for users to leverage verbal commands to control games and application interfaces. The current release supports US English speech and leverages XML grammar files for configuration. Speech recognition is supported over the cloud and is a great way to control NUI applications. The 1.3.5 update also includes a trackingId for face models, so when using the cloud you can tell who is who for face and speech interaction between remote users. This is just another great feature of the Motion Nexus platform, and we have some really great things in development that we are extremely excited about and will demo soon.

This is a quick video to highlight the power of the Motion Nexus platform and what it provides to developers. Yes, we offer a wrapper style SDK for the Microsoft Kinect platform, but we also offer so much more. With the Motion Nexus platform you can stream skeleton data from the Kinect to phones, TVs, macs, tablets and just about anything where AIR or Flash is supported. Not only can you stream Kinect data, but you can combine multiple streams so that skeletons can interact with each other from remote locations. We want to empower developers to push the boundaries of interactivity and social engagement. We are very excited about the platform and where we are going, many more great things to come!

We have just posted an update to the library and plugin that includes support for avateering and contains security enhancements. It is just a blast to experience an avatar mirroring your movements. You will have to tweak settings based on the avatar due to bone differences, but with the rotation processor you have 3D rotation vectors for all joints. We look forward to feedback, and there are more great additions coming soon. With the release of face tracking in version 1.3.1 and now avateering, we are excited to see what developers build. We have already seen an awesome image viewer using face tracking, and the Flash community is one of the best! If you have some demos, please tweet or email us – thanks!


With the release of version 1.3.1 of the library, we are providing access to face tracking. We are providing over 100 face tracking points that include rotation, translation, and animation motions. This data is provided locally and, just like all functions of Motion Nexus, through the cloud. In the video below the face seems to lag a little, but that is mostly the screen capture – the actual results are fun and real time. If you have not already, sign up for Motion Nexus, because we have some more really awesome tools and libraries coming soon!

Example of the Motion Nexus avateering library, currently in early beta. It leverages Flare3D with a Mixamo skeleton, in the hope of simply switching out the skeleton and setting some transform values based on avatar size. To learn more and get introduced to Motion Nexus, check out MotionNexus.com and sign up for the early beta – we are making Kinect development life easy.

Today is the private beta release of the Motion Nexus platform. The Motion Nexus platform empowers developers to build Kinect applications in Adobe Flash that support local and cloud integration. Cloud integration enables multiple skeleton interactions from multiple remote users.

Here are some highlights of the v1.3 release:

  • Simple and concise API for managing application configuration and deployment
  • Full 20-joint Kinect skeleton with velocity, RGB points, depth points, joint rotation, and tracking state
  • Easily switch between local, local with cloud participants, and cloud with cloud participants
  • Depth image 320×240 support in local mode
  • Near mode enabled – upper-body skeleton with 10 joints
  • Facebook API integration
  • Consumer-driven plugin – we manage the environment, so all the developer needs is the SWC library and you’re ready to develop
  • Developer skeleton samples, Facebook integration examples, and much more…

Sign up now – links to the SDK and downloads will be emailed later in the week. We want to provide the best platform for developers and have some really great tools and libraries in development that we are really excited about. This is a great time for developers, and an amazing time for users, who can interact and engage with applications and each other in a natural manner.