[This post has been updated to include more from TechFest, including videos.]
From technology that allows people to use the palms of their hands as telephone touchscreens to avatars that can look and sound like you while speaking languages you never learned, TechFest 2012 showcased some of the leading advances in computer science that Microsoft researchers are working on.
Microsoft holds TechFest each year — a sort of high-level science fair in which researchers from the company’s far-flung labs display some of their projects.
Microsoft spends about $9 billion a year on research and development and has about 850 Ph.D.-level researchers in labs worldwide. That’s only 1 percent of the company’s employees, but, according to Microsoft, it’s the largest computer science research organization in the world.
Today, during the public preview, about 20 demos were on display. Tomorrow and Thursday, 150 demos will be open for viewing by Microsoft employees; up to 7,000 are expected.
The projects this year fell into two main themes: merging the virtual and natural worlds, and systems that collect big data and then analyze and display it in useful ways. The projects ranged from early development to ready to deploy, and some will find their way into Microsoft products that will be released commercially.
Here are some of the interesting projects we saw:
Wearable Multitouch Projector: This technology turns various surfaces — say, your hand or a notebook or a wall — into a multitouch screen. The user simply wears a depth-sensing and projection system on her shoulder. The system then projects a screen onto the surface, giving that surface the capabilities of a touchscreen or mouse. So a user is no longer tied to a small screen; she could use the palm of her hand or a notebook as a telephone touchscreen while completing the call on, for instance, a Bluetooth headset.
Here’s a video of the Wearable Multitouch Projector from Seattle Times photographer Steve Ringman:
[do action=”brightcove-video” videoid=”1492598501001″][/do]
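A common way such depth-sensing systems decide whether a finger is touching a projected surface is to compare the fingertip’s depth reading against the depth of the surface behind it. Here is a minimal, hypothetical sketch of that idea; the threshold, the flat-surface setup, and the function name are illustrative assumptions, not details of Microsoft’s actual system.

```python
import numpy as np

def detect_touch(depth_map, surface_depth, finger_xy, threshold_mm=10):
    """Hypothetical touch test: the fingertip at pixel (x, y) counts as
    'touching' if its depth is within threshold_mm of the surface depth
    at the same pixel. Values are in millimeters from the sensor."""
    x, y = finger_xy
    finger_depth = depth_map[y, x]
    return bool(abs(finger_depth - surface_depth[y, x]) <= threshold_mm)

# Toy example: a flat surface 800 mm from the shoulder-worn sensor.
surface = np.full((480, 640), 800.0)
frame = surface.copy()

frame[240, 320] = 795.0   # fingertip 5 mm above the surface: a touch
print(detect_touch(frame, surface, (320, 240)))  # True

frame[240, 320] = 760.0   # fingertip 40 mm above: hovering, not touching
print(detect_touch(frame, surface, (320, 240)))  # False
```

In the real system the surface need not be flat (a palm curves), so the per-pixel surface map, rather than a single distance, is what makes arbitrary surfaces usable.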
Holoflector: This interactive, augmented-reality mirror allows users to see their — and others’ — reflections in the mirror while graphics are superimposed onto the life-sized reflection. So you could stand in front of the translucent (one-way) mirror, which is placed three feet away from an LCD panel. A Kinect motion sensor tracks your skeletal movements while images of you are rendered onto the LCD panel. Graphics — images of, say, a bouncing ball — can then be superimposed on the reflection you see, making it look like you’re bouncing a (virtual) ball.
Here’s a video of Holoflector from Seattle Times photographer Steve Ringman:
[do action=”brightcove-video” videoid=”1492637926001″][/do]
Turn a Monolingual TTS into Mixed Language: Researchers record 20 minutes of audio and video of a person speaking and, from that, break the head movements and speech down into fragments that can be used to create an avatar that can speak in different languages. The demo used an avatar of Craig Mundie, Microsoft’s chief research and strategy officer. Using Mundie’s recorded English sounds, the researchers were able to type in text for Mundie’s avatar to speak in English. Then, separately, they typed in text for Mundie’s avatar to speak in Mandarin Chinese.
Here’s a video of Turn a Monolingual TTS into Mixed Language from Seattle Times photographer Steve Ringman:
[do action=”brightcove-video” videoid=”1492598502001″][/do]
What’s NUI? Explorations in Naturalness: Among the demonstrations at this booth was a project using Kinect to let surgeons in an operating room use gestures — waves of the hands or points of a finger — to manipulate 3-D images. Starting this spring, vascular surgeons at Guy’s & St. Thomas’ hospital in London, which has been working with Microsoft Research on the project, will be using the technology.
Here’s a video of a NUI demo from Seattle Times photographer Steve Ringman:
[do action=”brightcove-video” videoid=”1492670138001″][/do]
IllumiShare: It looks like a desk lamp, but the lamp allows two people in different places to work together, virtually, on the same surface. It works like this: Each person has an IllumiShare “lamp.” The lamp lights up the surface at which it is pointed, and anything on that surface — whether drawn on it or a physical object — can be seen by the other person. What’s happening is that the lamp shade hides a camera and a projector. The camera captures video of the local workspace and sends it to the remote space, while the projector projects video of the remote workspace onto the local space, according to the project write-up. This allows people in different parts of the world to draw together, write on a whiteboard together, or even play with real toys together.
Here’s a video of IllumiShare, from Microsoft Research:
[do action=”custom_iframe” width=”640″ height=”360″ src=”http://www.youtube.com/embed/ewmw8fUTa0Y″][/do]
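The effect of that camera-and-projector loop — each desk showing its own marks plus the other desk’s marks — can be sketched in a few lines. This is an illustrative stand-in that blends two grayscale “ink” images with an element-wise maximum; the names and the blend rule are assumptions for the sketch, not Microsoft’s actual pipeline, which works optically on live video.

```python
import numpy as np

def shared_view(local_ink, remote_ink):
    """Hypothetical composite of two workspace images (0 = blank paper,
    255 = ink). Keeping the brighter 'ink' value per pixel mimics the
    remote workspace being projected on top of the local one."""
    return np.maximum(local_ink, remote_ink)

# Toy 4x4 workspaces: Alice draws a line along the top row,
# Bob draws a line down the first column.
alice = np.zeros((4, 4), dtype=np.uint8)
bob = np.zeros((4, 4), dtype=np.uint8)
alice[0, :] = 255
bob[:, 0] = 255

combined = shared_view(alice, bob)
# Both users now see the same union of marks: 4 + 4 - 1 shared corner cell.
print(int(combined.sum()) // 255)  # 7
```

Because each station only projects the *remote* feed, neither lamp re-captures its own projection, which is what keeps the loop from feeding back on itself.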