How do you wow the crowd of uber geeks gathered in Seattle this week for the Society for Industrial and Applied Mathematics (SIAM) Conference on Parallel Processing for Scientific Computing?
IBM’s trying with a new algorithm for processing the enormous pools of data the world generates these days, pools that are drowning scientists and their supercomputers.
The company says the algorithm, developed by its Zurich-based researcher Costas Bekas, reduces the complexity and cost of analyzing huge datasets by two orders of magnitude.
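The announcement doesn’t spell out the math, but a standard way to get that kind of reduction is randomized (stochastic) estimation, which approximates global properties of a huge matrix from a few dozen random probes instead of ever forming the full object. Here’s a minimal, purely illustrative sketch in Python using the well-known Hutchinson trace estimator, a member of that family; this is not IBM’s code, and every name and number in it is hypothetical:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=30, rng=None):
    """Estimate trace(A) using only matrix-vector products with A.

    Each Rademacher probe z satisfies E[z^T A z] = trace(A), so
    averaging over a few dozen probes converges without ever
    forming the n-by-n matrix A explicitly.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # random +/-1 probe vector
        total += z @ matvec(z)
    return total / num_probes

# Demo: estimate the trace of an implicit matrix A = B B^T with a
# handful of matvecs, rather than the O(n^2) work of building A.
n = 2000
B = np.random.default_rng(0).standard_normal((n, 50))
matvec = lambda v: B @ (B.T @ v)   # computes A @ v without building A
print(hutchinson_trace(matvec, n, num_probes=50))
print(np.sum(B * B))               # exact trace(B B^T), for comparison
```

The win is that cost scales with the number of probes and the price of a matrix-vector product, not with the size of the full matrix, which is how approaches in this family can shave orders of magnitude off an analysis.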
From the announcement:
“The new method was tested on the fourth-largest supercomputer in the world, and what would normally have taken a day was crunched in 20 minutes. In terms of energy savings, the analysis required 700 kilowatts total, compared with 52,800 kilowatts total.”
Testing was done on a Blue Gene/P system in Germany. The setup accurately validated 9 terabytes of data in less than 20 minutes, a process that would ordinarily take more than a day.
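As a rough sanity check on the quoted figures (taking the announcement’s “kilowatts total” numbers at face value as comparable energy totals, even though energy is usually stated in kilowatt-hours):

```python
# Back-of-the-envelope check of the announcement's numbers.
energy_ratio = 52800 / 700       # ~75x less energy consumed
time_ratio = (24 * 60) / 20      # a full day vs. 20 minutes: 72x faster
print(round(energy_ratio, 1), round(time_ratio, 1))  # 75.4 72.0
```

The two ratios line up, as you’d expect when the same machine simply runs for a fraction of the time.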
Bekas is presenting his findings this afternoon at the conference, held at the Grand Hyatt.
Here’s a clever picture IBM provided of Bekas writing part of the algorithm on glass: