A new visual, multilingual search tool developed at the University of Washington’s Turing Center is being presented today at the Machine Translation Summit in Copenhagen.
The goal is to help people who don’t speak major languages, said Oren Etzioni, a UW computer science professor, in a news release:
“We want to serve the vast number of people who don’t speak one of the major languages. As the Internet becomes more widely available outside of the major industrialized nations, it becomes increasingly important to serve people who don’t speak English, French or Chinese.”
“PanImages” uses tagging, online image collections and translation tools to improve search results for people who speak languages that aren’t well-served by today’s online services.
The service automatically translates search terms into about 300 languages and returns images from Google and Flickr, according to the release:
PanImages promises to help people who speak languages that have a small Web presence. Imagine you are a Zulu speaker looking for a picture of a refrigerator, Etzioni said. You type the Zulu word for refrigerator (“ifriji”) into an image search and get two results. The same search using PanImages generates 472,000 hits. In a test of so-called minor languages, PanImages was able to find 57 times more results, on average, than a Google image search.
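The release doesn’t describe PanImages’ internals, but the approach it outlines — translate the query into many languages, fan each translation out to existing image-search services, and merge the hits — can be sketched roughly as below. The toy translation table, the `search_images` helper, and the backend names are illustrative placeholders, not PanImages’ actual code or APIs.

```python
from collections import OrderedDict

# Toy translation table standing in for PanImages' multilingual dictionary
# (the real system covers roughly 300 languages and 2.5 million words).
TRANSLATIONS = {
    "refrigerator": {
        "en": "refrigerator",
        "zu": "ifriji",          # Zulu, the example from the release
        "fr": "réfrigérateur",
        "es": "refrigerador",
    },
}

def search_images(term, backend):
    """Placeholder for a call to an image-search backend (e.g. Google
    Images or Flickr); a real client would return a list of result URLs."""
    raise NotImplementedError("plug in a real image-search client here")

def pan_image_search(query, backends=("google", "flickr")):
    """Fan the query's translations out to each backend and merge results."""
    results = OrderedDict()          # de-duplicates URLs, keeps first source
    translations = TRANSLATIONS.get(query, {"en": query})
    for lang, term in translations.items():
        for backend in backends:
            try:
                for url in search_images(term, backend):
                    results.setdefault(url, (lang, term, backend))
            except NotImplementedError:
                pass                 # no real backend is wired up in this sketch
    return results
```

A working version would swap `search_images` for real image-search clients and back the translation table with a full multilingual dictionary, grown through user contributions as the article notes.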
PanImages includes around 300 languages and 2.5 million words, but it’s designed to grow through user contributions of words and translations.