
The NUI Group's First Google Summer of Code

Thursday, January 8, 2009



The Natural User Interface Group (NUI Group) is an interactive media group focused on research and creation of open source machine sensing techniques, such as voice/handwriting/gesture recognition and touch computing, to benefit artistic and educational applications. Additionally, the NUI Group is a worldwide community offering a collaborative environment for developers who are interested in learning and sharing new Human Computer Interaction methods and concepts. Last year, we were chosen to participate in Google Summer of Code™ 2008 and worked with 7 students, 6 of whom successfully completed their projects. It was a great opportunity to bring students into the world of open source human computer interaction, and we were very excited by the results.

Stanislaw Zabramski worked on the multi-physics project. His main goal was to create a multi-touch sensitive application for two-dimensional graphic visualizations of a few basic concepts of physics, especially mechanics. His work is meant to be used by primary school pupils as a simple educational entertainment tool, making them familiar with physics in a more creative environment. Young users can actively participate in the learning process by designing and testing their own simulations in a visually catchy, cartoon-style environment. The basic multi-touch enabled prototype application has been developed using Flash and ActionScript, and you can take a closer look at the interface in this screenshot:



We are looking forward to Stanislaw's release of the final version later this year.

Ashish Kumar Rai wrote QMTSim, a multi-touch input simulator for the Tangible User Interface Objects (TUIO) protocol. Ashish developed this new simulator to allow fast development and debugging of multi-touch applications. TUIO is a versatile protocol designed specifically to meet the requirements of table-top tangible user interfaces. While there is a Java-based TUIO simulator, it does not utilize the full capabilities of the protocol, and only rudimentary applications can be developed using it. Ashish's implementation of QMTSim has many advantages over the Java TUIO simulator, including user-defined touch point movement paths, an animation timeline, and support for simulations of pinching and zooming. Further, QMTSim provides opacity control to make the simulator transparent and keep it above the application, giving the impression of touching the application itself. Ashish has recorded three videos on his project, including an introduction and two screencasts.
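For readers unfamiliar with TUIO: it rides on top of Open Sound Control (OSC), so a simulator like QMTSim essentially emits OSC bundles describing touch points over UDP (port 3333 by default). As a rough illustration of the kind of traffic involved, here is a minimal Python sketch using the python-osc package; the library choice, the single-cursor values, and the helper name are assumptions for illustration, not QMTSim's code:

```python
# Minimal sketch of the kind of TUIO "2Dcur" traffic a simulator emits.
# Assumes the python-osc package; QMTSim itself is a separate application.
from pythonosc import osc_bundle_builder, osc_message_builder, udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 3333)  # default TUIO port

def send_cursor(session_id, x, y):
    """Send one TUIO 2Dcur frame describing a single touch point."""
    bundle = osc_bundle_builder.OscBundleBuilder(osc_bundle_builder.IMMEDIATELY)

    alive = osc_message_builder.OscMessageBuilder(address="/tuio/2Dcur")
    alive.add_arg("alive")
    alive.add_arg(session_id)
    bundle.add_content(alive.build())

    # set <session_id> <x> <y> <x_velocity> <y_velocity> <acceleration>
    set_msg = osc_message_builder.OscMessageBuilder(address="/tuio/2Dcur")
    for arg in ("set", session_id, x, y, 0.0, 0.0, 0.0):
        set_msg.add_arg(arg)
    bundle.add_content(set_msg.build())

    fseq = osc_message_builder.OscMessageBuilder(address="/tuio/2Dcur")
    fseq.add_arg("fseq")
    fseq.add_arg(-1)  # -1: no frame sequence tracking in this sketch
    bundle.add_content(fseq.build())

    client.send(bundle.build())

send_cursor(1, 0.25, 0.75)  # one touch at normalized screen coordinates
```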

Alessandro De Nardi worked on the Grafiti project, a general infrastructure for managing multi-touch, multi-user gesture recognition in table-top applications. Grafiti is a C# framework built on top of the C# TUIO client and designed to support third-party modules for specialized gesture recognition algorithms. A set of modules for the recognition of some basic gestures is included. You may want to check out Alessandro's demo video:



Thomas Hansen developed Graphics Processing Unit (GPU) accelerated blob tracking for multi-touch user interfaces (and other blob tracking needs, for that matter) as part of his gpuTracker project. Video signals are processed by the GPU to provide real-time tracking of blobs. gpuTracker is aimed specifically at tracking blobs such as those created by displays using Frustrated Total Internal Reflection (FTIR) or Diffused Illumination (DI). Check out the image of video input from a GPU-enabled blob tracker:



Seth Sandler worked on the tbeta project, a blob tracking application for multi-touch screens built using optical, image-processing-based techniques like FTIR and DI. The application is written in C++ and uses openFrameworks. Some of the most interesting features include an input video image filter chain, a quick camera switcher, dynamic mesh calibration for fisheye lenses, image reflection, and a GPU mode that allows for integration with the aforementioned gpuTracker code developed by Thomas Hansen. Most importantly, tbeta is cross-platform and already works on Mac, Linux and Windows.
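The filter chain in tbeta is configured interactively in its GUI; as a rough Python/OpenCV analogue of what such a chain does to each camera frame (the specific filters, gain, and thresholds below are assumptions for illustration, not tbeta's code):

```python
# Rough analogue of a tbeta-style input filter chain using OpenCV/NumPy.
# Each incoming camera frame is cleaned up before blob tracking runs on it.
import cv2
import numpy as np

def filter_chain(frame, background):
    """Background subtraction, smoothing, amplification, then threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Remove the static background captured while nothing touches the surface.
    diff = cv2.subtract(gray, background)
    # Smooth out sensor noise.
    smoothed = cv2.GaussianBlur(diff, (5, 5), 0)
    # Amplify faint touches (gain of 3.0 is an assumed value).
    amplified = np.clip(smoothed.astype(np.float32) * 3.0, 0, 255).astype(np.uint8)
    # Keep only pixels bright enough to be real touches.
    _, binary = cv2.threshold(amplified, 60, 255, cv2.THRESH_BINARY)
    return binary

cap = cv2.VideoCapture(0)                              # assumed camera index
_, first = cap.read()
background = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)   # reference frame

_, frame = cap.read()
touch_mask = filter_chain(frame, background)
cap.release()
```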



Daniel Lelis Baggio wrote EHCI (Enhanced Human Computer Interface), a webcam image processing library built on top of OpenCV that generates events from the user's head, hand and body movements. The library is also intended to track objects so that augmented reality applications can be built. In order to enhance human computer interaction, the application uses a single webcam and does not require the use of either FTIR or DI techniques. Besides tracking positions, the library can also provide higher-level events, such as the 3D position of the user's hand or head. You can get a better feeling for Daniel's work by watching his EHCI videos on YouTube.
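Since EHCI builds on OpenCV, a hedged sketch of the kind of single-webcam head detection it layers events on top of might look like this; the stock Haar cascade and the camera index are illustrative assumptions rather than EHCI's actual detector:

```python
# Sketch of single-webcam head detection with OpenCV's bundled Haar cascade.
# EHCI layers higher-level events (head pose, hand position) on top of
# detections like these; the cascade used here is an illustrative choice.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed camera index
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A downstream application could turn this rectangle into a
        # "head moved" event, which is the role EHCI plays.
        print("head at", x + w // 2, y + h // 2)
```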

Many congratulations to our students and many thanks to our mentors for making our first Summer of Code such a wonderful experience!