Splitting Up Search

NEWS
Nov 12, 2009

By Duncan Graham-Rowe, Technology Review
Originally published on Friday, November 6, 2009

Searching the Web could become faster for users and much more efficient for search companies if search engines were split up and distributed around the world, according to researchers at Yahoo.

Currently, search engines are based on a centralized model, explains Ricardo Baeza-Yates, a researcher at Yahoo's Labs in Barcelona, Spain. This means that a search engine's index, the core database that lists the location and relative importance of information stored across the Web, as well as additional data such as cached copies of content, is replicated within several data centers at different locations. The tendency among search companies, says Baeza-Yates, has been to operate a relatively small number of very large data centers across the globe.

Baeza-Yates and his colleagues devised another way: a "distributed" approach, with both the search index and the additional data spread out over a larger number of smaller data centers. With this approach, smaller data centers would contain locally relevant information and a small proportion of globally replicated data. Many search queries common to a particular area could be answered using the content stored in a local data center, while other queries would be passed on to different data centers.

"Many people have talked about this in the past," says Baeza-Yates. But there was resistance, he says, because many assumed that such an approach would be too slow or too expensive. It was also unclear how to ensure that each query got the best global result, not just the best that the local center had to offer. A few start-up companies have even launched peer-to-peer search engines that harness the power of users' own machines, but this approach hasn't proven very scalable.
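The data layout described above can be sketched as follows. This is an illustrative toy model, not Yahoo's actual system: each data center holds a region-specific index plus a small replicated slice of globally popular content, and a query that finds nothing at its home center is passed on to the others. All names and data are hypothetical.

```python
class DataCenter:
    """Toy data center holding a local index and a replicated global slice."""

    def __init__(self, name, local_index, replicated_index):
        self.name = name
        self.local_index = local_index            # region-specific documents
        self.replicated_index = replicated_index  # small globally replicated subset

    def search(self, term):
        # Answer from local content, falling back to the replicated subset.
        return self.local_index.get(term, []) + self.replicated_index.get(term, [])


# Toy data: a small globally replicated set shared by two centers.
popular = {"news": ["global-news-portal"]}
barcelona = DataCenter("bcn", {"paella": ["bcn-food-guide"]}, popular)
new_york = DataCenter("nyc", {"bagels": ["nyc-deli-list"]}, popular)
centers = [barcelona, new_york]


def route(term, home):
    # Locally common queries are served by the home center; anything it
    # cannot answer is forwarded to the other data centers.
    hits = home.search(term)
    if hits:
        return home.name, hits
    for dc in centers:
        if dc is not home and dc.search(term):
            return dc.name, dc.search(term)
    return None, []
```

With this layout, `route("paella", barcelona)` is answered locally, `route("news", barcelona)` is answered from the replicated slice, and `route("bagels", barcelona)` is forwarded to the New York center.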
To achieve a workable distributed system, Baeza-Yates and his colleagues designed it so that statistical information about page rankings could be shared between the different data centers. This allows each data center to run an algorithm that compares its results with those of the others: if another data center can give a statistically better result, the query is forwarded to it.

Full story available at http://www.technologyreview.com/web/23892/.
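The forwarding rule just described might look like the sketch below. It is a rough illustration under stated assumptions, not the researchers' actual algorithm: here the shared "statistical information" is simply the best document score each center advertises for a term, and a query is forwarded when another center's advertised score beats the best local result.

```python
# Hypothetical shared statistics: the best relevance score each data
# center holds for each query term (values are made up).
shared_stats = {
    "bcn": {"football": 0.9, "opera": 0.4},
    "nyc": {"football": 0.6, "opera": 0.8},
}


def answer_or_forward(term, home, local_best_score):
    """Return the name of the center that should answer the query.

    The home center keeps the query unless another center's shared
    statistic indicates a statistically better result.
    """
    best_center, best_score = home, local_best_score
    for center, stats in shared_stats.items():
        if center != home and stats.get(term, 0.0) > best_score:
            best_center, best_score = center, stats[term]
    return best_center
```

Under these toy numbers, an "opera" query arriving in Barcelona (local best 0.4) would be forwarded to New York, while a "football" query would stay local.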