Web2 Search Engines: Why They Suck Now

In the last blog post, we discussed how search engines became the foundation of the internet industry. Today, let’s dig deeper into the topic, but from a more critical angle.

When AltaVista, Yahoo Search, and other search engines were born in 1995, there were only 23,500 websites and 45 million internet users. Today, there are more than 1 billion websites. The Internet has grown more than 40,000-fold.

Unfortunately, search engines themselves have stayed largely the same. Guess when the page in the screenshot below was captured. One year ago? No! Five years ago? No! The answer is twenty years ago!

However, the absence of change does not mean that no change is needed. Current search engines are outdated. For example:

[Image: Google search results, 2002]

Current Search Engines Overload Users with Information

Web2 holds too much data for users. For any single query, there are millions of relevant web pages. Come on! That is too much. Redundant information, outdated data, and spam are everywhere. Users neither need nor can digest that much information.

Search Engines Do Not Organize Information

Throwing an endless stream of information directly at users is not that helpful. People are forced to waste an unbelievable amount of time organizing and extracting knowledge from all the mess. Yet current search engines focus on listing all the information rather than encouraging developers to organize it. It is like a bookstore that never categorizes its books: readers have to find everything on their own.

Web2 Search Engines Are Not Suitable for the AI Era

Developing and maintaining a search-engine product in the Web2 era is labor-intensive. Developers have to do everything themselves: produce data, fetch the data they need, store it, process it, design the user interface, and so on. The development cost is much higher than you would expect. The problem was already pointed out in 1999, when Tim Berners-Lee proposed the idea of the Semantic Web. The hope was that the Internet could become readable not only by humans but also by machines, so developers could be relieved of the dirty work of cleaning and sorting information. However, the way Web2 organizes and stores data cannot fulfill that mission.
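To see what “readable to machines” means in practice, here is a minimal sketch in Python (the article and its field values are hypothetical) of how a page could describe itself in schema.org JSON-LD, the kind of machine-readable annotation the Semantic Web envisioned:

```python
import json

# A hypothetical news article described with the schema.org vocabulary
# (JSON-LD). A machine can read the title, author, and date directly from
# labeled fields, with no HTML-parsing guesswork required.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example Headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2023-01-15",
    "articleBody": "Body text of the article...",
}

print(json.dumps(article, indent=2))
```

If every page shipped annotations like this, a search engine could ingest knowledge directly instead of reverse-engineering markup.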

The Ecosystem of Current Search Engines Is Closed

Search engines were born of, and grew up on, an open Internet, but they soon “betrayed” that “open” spirit. Everything is closed now: closed indexes, closed algorithms, closed user data, and closed ad systems. A closed ecosystem may help the giants make money, but it hurts the industry as a whole. Those giants “steal” users’ private data to monetize it without sharing any revenue, their products are not customized enough to fit diverse user needs, and there are plenty of black-box tricks that might be played on us.

Surprisingly, Search Engines Only Carry a Small Fraction of the Web

Google indexes less than 10% of all the content on the Web. Because of the drawbacks of Web2, developers and users are taking their data elsewhere. Believe it or not, you cannot use Google, Bing, or any other search engine to easily find most of that information. The unchanged mechanism for indexing, processing, and encoding data prevents search engines from reaching data in other ecosystems, e.g., the deep web (not the dark web) and dapps on Web3.

Do Google and other search engines recognize these limitations? Of course they do. Then why don’t they make any changes? Let’s look at the two images below.

Above is a recently published article from The New York Times. What a well-designed and well-formatted page! As humans, our brilliant brains pick out the structured data: title, date, author, and body text. Unfortunately, machines are not as smart as we are. What a machine sees on the same page is the code shown in the image below. Engineers working on search engines have to interpret the page’s code into structured data so that the engine can extract, sort, and rank information.

I guess you might feel the same way I do here: are you kidding me? What’s worse, every website codes its pages in a different format, so each search engine has to complete the code-to-structured-data interpretation on its own, which is extremely redundant and inefficient.
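To make that redundancy concrete, here is a minimal sketch of the kind of extraction code each search engine ends up writing for itself. It assumes the third-party BeautifulSoup library, and the CSS selectors are hypothetical: they match one imagined site’s markup and would break on any other.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical selectors for one imagined news site. Every site codes its
# pages differently, so this logic has to be rewritten per site, by every
# search engine, independently.
def extract_article(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.select_one("h1").get_text(strip=True),
        "author": soup.select_one(".byline-name").get_text(strip=True),
        "date": soup.select_one("time")["datetime"],
        "body": " ".join(p.get_text(strip=True) for p in soup.select("article p")),
    }

sample = """
<html><body><article>
  <h1>Example Headline</h1>
  <span class="byline-name">Jane Doe</span>
  <time datetime="2023-01-15">Jan. 15, 2023</time>
  <p>First paragraph.</p><p>Second paragraph.</p>
</article></body></html>
"""
print(extract_article(sample))
```

Rename one CSS class and the selectors silently break. That brittleness, multiplied across billions of pages and every search engine, is exactly the inefficiency described above.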

Because of all these limitations, we see people drifting away from Google and traditional search engines. They might move to Wikipedia and other knowledge graphs that organize knowledge in structured formats. Or, for a better user experience, they might turn to social media like Facebook, TikTok, and Pinterest, to contextual search platforms like TripAdvisor, Uber, and Booking, or to interactive applications like Siri and ChatGPT. However, none of these are perfect solutions; they carry the same concerns about information overload, privacy, and so on. That’s why Adot is standing up and hopes to lead the change.

So, what should an ideal search engine look like? And why might Web3 save search engines? Let’s talk about those questions in the next blog.

If you are interested in what we are going to do, please follow us at @Adot_web3 on Twitter and stay tuned for our future blogs.


Adot | Decentralized Search Network for AI
Search infrastructure built for the future. Meet the fastest and most comprehensive Web3 search engine for the AI era. Website 👉 https://www.adot.tech