Welcome to Microsoft Pri0: that’s Microspeak for top priority, and it’s where you’ll find news and observations from Seattle Times technology reporter Janet I. Tu.
August 14, 2013 at 11:47 AM
When Microsoft announced the Xbox One at the E3 gaming conference in June, it said it would be launching in 21 countries.
Now, the company has cut that number to 13.
July 16, 2013 at 6:13 PM
A sign language translator that uses the Kinect motion sensor.
A platform that lets city planners keep in touch with neighborhood residents during development projects.
Those were among the dozens of projects that Microsoft researchers, as well as teams of university students, demonstrated Tuesday during Microsoft Research Faculty Summit’s DemoFest.
June 10, 2013 at 2:46 PM
Microsoft announced the Xbox One’s price today at E3: $499 in the U.S. That price includes the Xbox One console (which has a 500GB hard drive, Blu-ray player and built-in Wi-Fi), the new Kinect, an Xbox One Wireless Controller and a 14-day free trial of Xbox Live Gold for new members.
Here’s a roundup of the news coming out of the event:
My Seattle Times colleague, Brier Dudley, writes about Microsoft’s press event this morning, in which the company also disclosed the roster of games that will be available when the Xbox One launches.
Here’s The Verge talking about how digital purchases on Xbox One will use real currency instead of Microsoft Points.
Here’s Engadget’s roundup of details on Xbox One hardware, software services and games.
Xbox One is Microsoft’s long-awaited successor to its Xbox 360 gaming console. It marks another step in the company’s goal of making the Xbox a living room entertainment center, not just a gaming console, and also another step forward in Microsoft’s evolution into a devices-and-services company.
May 23, 2013 at 11:14 AM
Coming on the heels of the announcement of the Xbox One with new Kinect sensor, Microsoft announced today that it will also deliver a new Kinect for Windows sensor and software development kit sometime next year.
The Kinect for Windows sensor, which allows developers to add motion- and voice-sensing technologies to computers, will be built on a shared set of technologies with the new Kinect sensor, according to Microsoft. The new Kinect for Windows sensor will include higher fidelity (including a high definition color camera and new noise-isolating multi-microphone array), an expanded field of view, improved skeletal tracking, and new active infrared, according to a company blog post.
Microsoft did not specify price or give a more specific timeframe for general availability of the Kinect for Windows but said it would have more to share at its Build developers conference in late June.
March 18, 2013 at 8:15 AM
Microsoft today is releasing a new version of its Kinect for Windows software development kit (SDK).
At the recent TechFest, Microsoft’s annual “science fair” in which its advanced researchers show what they’re working on, it was clear that Microsoft is putting heavy emphasis on its Kinect voice- and motion-sensing technology. The Kinect was used in everything from detecting when a user steps away from a giant digital touchboard to 3-D scanning.
TechFest was also where we heard about some of the features that are now being included in this new SDK, including Kinect Fusion, which creates 3-D renderings by fusing together multiple images from the Kinect; and the ability for the Kinect to read hand gestures in addition to larger, skeletal motions.
The ability to read hand gestures appears to fall under the new SDK’s Kinect Interactions, which gives developers tools to create more natural user interfaces, including grip-to-pan and push-to-press button capabilities, as well as ways to accommodate multiple users.
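To give a sense of what an interaction like push-to-press involves under the hood: it boils down to detecting when a tracked hand moves toward the screen by more than some threshold. The actual SDK exposes this as higher-level events; the sketch below is purely illustrative, in Python, and none of its names or thresholds come from the real Kinect Interactions API.

```python
def detect_press(hand_depths, push_threshold=0.15):
    """Detect a push-to-press gesture from a sequence of hand
    depth readings (distance from the sensor, in meters).

    A "press" is registered when the hand moves toward the
    sensor by more than push_threshold over the sequence.
    Names and thresholds are illustrative only; the real
    Kinect Interactions API exposes higher-level events.
    """
    if not hand_depths:
        return False
    start = hand_depths[0]
    # Closest-to-sensor reading seen over the window.
    closest = min(hand_depths)
    return (start - closest) >= push_threshold

# A hand moving from 0.90 m to 0.70 m from the sensor: a press.
print(detect_press([0.90, 0.85, 0.78, 0.70]))  # True
# Small jitter only: no press.
print(detect_press([0.90, 0.88, 0.89, 0.87]))  # False
```

In practice the SDK does considerably more than this, such as smoothing out jitter and distinguishing a deliberate press from incidental motion, which is exactly the kind of work these toolkit interactions save developers from reimplementing.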
This version of Kinect for Windows is “the most significant update to the SDK since we released the first version a little over a year ago,” Bob Heddle, director of Kinect for Windows, wrote in an official blog post.
March 6, 2013 at 10:41 AM
From Kinect 3-D scanning to big data mapping, Microsoft researchers give glimpse of company’s future
[This story is running in the print edition of The Seattle Times March 6, 2013.]
From a smartphone app capable of capturing 3-D scans to interactive whiteboards to a browser-based program allowing users to build a predictive model in minutes, the preview Tuesday of Microsoft’s TechFest 2013 was full of cool stuff.
But the demos were about more than just cool. Taken together, they gave a broad yet cohesive view of three areas of the future that Microsoft is concentrating on:
• Natural user interface — meaning interacting with computing devices using touch, speech or gestures.
• Big data — synthesizing and making useful large amounts of information.
• Machine learning — the ability of computers to learn.
TechFest is the company’s annual science fair at which its advanced researchers from around the world demonstrate what they’re working on.
On Tuesday, a handful of the approximately 150 demonstrations were shown to some customers, partners and the media. On Wednesday and Thursday, thousands of Microsoft employees are expected to attend.
Microsoft employs some 850 Ph.D.-level researchers worldwide — about half in the U.S. — and spends about $9 billion a year on research and development.
That makes Microsoft the No. 1 computer science-research organization in the world, according to Rick Rashid, Microsoft’s chief research officer.
The company, though, has been criticized over whether it gets a good return on its heavy R&D investment.
Rashid addressed the issue during his keynote address Tuesday morning, saying that Microsoft Research generates about a quarter of the company’s patents, provides “early warning” on new technologies, and has seen its work end up in almost all Microsoft products.
Indeed, a few of the projects on view are expected to be included in upcoming Microsoft releases; others are still in the prototype stage.
Many of the demonstrations featured work on natural user interfaces, especially those allowing a user to interact with a big display screen.
Researcher Michel Pahud, for instance, is working on an interface that allows people to use touch and a pen at the same time on a digital whiteboard.
A sensor can also detect when a user steps a few feet away from the board, allowing the presenter to use her smartphone as a controller for the display.