Search engines are your portal to the internet. They take masses of information on a website, break it down, and make decisions on how well it answers a specific query. But with so much data to sift through, how do search engines actually work?
To discover, categorize, and rank the billions of websites that make up the internet, search engines employ sophisticated algorithms that make decisions on the quality and relevancy of any page. It’s a complex process involving significant amounts of data, all of which needs to be presented in a way that’s easy for end users to digest.
Search engines parse all of this information by looking at numerous ranking factors based on a user’s query. These include relevancy to the question a user typed in, quality of the content, site speed, metadata, and more. Each data point is combined to help search engines calculate the overall quality of any page. Websites are then ranked based on those calculations and presented to the user.
Understanding the behind-the-scenes processes that take place for search engines to make these decisions not only helps you gain insight into why certain pieces of content rank well but also helps you create new content with the potential to rank higher.
Let’s take a look at the general procedures on which each search engine algorithm is built, and then break down four top platforms to see how they do it.
How Do Search Engines Work?
To be effective, search engines need to understand exactly what kind of information is available and present it to users logically. The way they accomplish this is through three fundamental actions: crawling, indexing, and ranking.
Search engine process flow
Through these actions, they discover newly published content, store the information on their servers, and organize it for your consumption. Let’s break down what happens during each of these actions:
- Crawl: Search engines send out web crawlers, also known as bots or spiders, to review website content. Paying close attention to new websites and to existing content that has recently been changed, web crawlers look at data such as URLs, sitemaps, and code to discover the types of content being displayed.
- Index: Once a website has been crawled, the search engines need to decide how to organize the information. The indexing process is when they review website data for positive or negative ranking signals and store it in the correct location on their servers.
- Rank: During the indexing process, search engines start making decisions on where to display specific content on the search engine results page (SERP). Ranking is accomplished by assessing a number of different factors based on an end user’s query for quality and relevancy.
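The three actions above can be sketched as a toy pipeline. Everything here (the hardcoded page data, the inverted index, the count-based scoring) is a deliberately simplified stand-in for what production search engines do at massive scale:

```python
import re
from collections import defaultdict

# "Crawl" stage stand-in: content a crawler might have gathered, URL -> text.
PAGES = {
    "example.com/coffee": "how to brew great coffee at home",
    "example.com/tea": "a beginner guide to brewing tea",
    "example.com/beans": "choosing coffee beans for home brewing",
}

def index(pages):
    """Index stage: build an inverted index mapping each word to the URLs containing it."""
    inverted = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"\w+", text.lower()):
            inverted[word].add(url)
    return inverted

def rank(query, inverted):
    """Rank stage: score each URL by how many query words it matches, best first."""
    scores = defaultdict(int)
    for word in re.findall(r"\w+", query.lower()):
        for url in inverted.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

idx = index(PAGES)
print(rank("brewing coffee at home", idx))
```

Real engines score far more than word overlap, but the shape is the same: gather content, organize it ahead of time, then answer queries against the stored index rather than the live web.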
Decisions are made during this process to determine the value any website can potentially provide to the end user. These decisions are guided by an algorithm. Understanding how an algorithm works helps you create content that ranks better for each platform.
Whether it’s RankBrain for Google and YouTube, Space Partition Tree And Graph (SPTAG) for Bing, or a proprietary codebase for DuckDuckGo, each platform uses a unique series of ranking factors to determine where websites fall in the search results. If you keep these factors in mind as you create content for your website, it’s easier to tailor specific pages to rank well.
Breaking Down Search Engine Algorithms by Platform
Each search engine goes about surfacing search results in a different way. We’ll take a look at the top four platforms in today’s market and break down how they make decisions about content quality and relevancy.
Google Search Algorithm
Google is the most popular search engine on the planet. Their search engine routinely owns more than 90% of the market, resulting in approximately 3.5 billion individual searches on their platform every day. While notoriously tight-lipped about how their algorithm works, Google does provide some high-level context about how they prioritize websites in the results page.
New websites are created every day. Google can find these pages by following links from existing content they’ve crawled previously, or when a website owner submits their sitemap directly. Any updates to existing content can also be submitted to Google by asking them to recrawl a specific URL. This is done through Google’s Search Console.
While Google doesn’t state how often sites are crawled, any new content that is linked to from existing content will eventually be found as well.
Once the web crawlers gather enough information, they bring it back to Google for indexing.
Indexing starts by analyzing website data, including written content, images, videos, and technical site structure. Google is looking for positive and negative ranking signals such as keywords and website freshness to try and understand what any page they crawled is all about.
Google’s website index contains billions of pages and 100,000,000 gigabytes of data. To organize this information, Google uses a machine-learning algorithm called RankBrain and a knowledge base called the Knowledge Graph. This all works together to help Google provide the most relevant content possible for users. Once indexing is complete, they move on to the ranking action.
Everything that takes place up to this point is done in the background, before a user ever interacts with Google’s search functionality. Ranking is the action that occurs based on what a user is searching for. Google looks at five major factors when someone performs a search:
- Query meaning: This determines the intent of any end user’s question. Google uses this to determine exactly what someone is looking for when they perform a search. They parse each query using complex language models built on past searches and usage behavior.
- Web page relevance: Once Google has determined the intent of a user’s search query, they review the content of ranking web pages to figure out which one is the most relevant. The primary driver for this is keyword analysis. The keywords on a website have to match Google’s understanding of the question a user asked.
- Content quality: With keywords matched, Google takes it a step further and reviews the quality of the content on the requisite web pages. This helps them prioritize which results come first by looking at the authority of a given website as well as its page rank and freshness.
- Web page usability: Google gives ranking priority to websites that are easy to use. Usability covers everything from site speed to responsiveness.
- Additional context and settings: This step tailors searches to past user engagement and specific settings within the Google platform.
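As a rough illustration of how several signals might be blended into a single ranking score, here is a minimal sketch. The weights and signal names are entirely made up; Google does not publish how its signals are weighted or combined:

```python
# Hypothetical weights for illustration only -- Google does not disclose
# how its ranking signals are combined. Each signal score is assumed
# to be normalized to the range 0.0-1.0.
WEIGHTS = {
    "query_match": 0.30,      # relevance to the parsed query intent
    "content_quality": 0.25,  # authority, depth of content
    "authority": 0.20,        # inbound links, page rank
    "usability": 0.15,        # site speed, responsiveness
    "freshness": 0.10,        # how recently the content was updated
}

def blended_score(signals):
    """Combine per-page signal scores into one number for ranking."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

page_a = {"query_match": 0.9, "content_quality": 0.8, "authority": 0.4,
          "usability": 0.7, "freshness": 0.6}
page_b = {"query_match": 0.7, "content_quality": 0.6, "authority": 0.9,
          "usability": 0.9, "freshness": 0.3}
print(blended_score(page_a), blended_score(page_b))
```

The point is not the specific numbers but the structure: many independent measurements collapse into one comparable score per page, and the results page is simply that list sorted.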
Once all of this information has been processed, Google will provide results that look something like this:
Let’s break down these results:
- User query: The question a user asked Google.
- Google shopping: Google considers the intent of this query as someone searching for products to purchase. As a result, they pull products from their index that match this intent and display them first in the results.
- Featured snippet: A result of the Knowledge Graph. Google presents specific information from a SERP result to make it easier for users to review without leaving the results page.
- Top-ranking results: The first site listed in the results is the one Google thinks best matches the intent of a user’s query. The top-ranking result is the one that performs best, based on the five ranking factors we discussed earlier.
- People also ask: This box is another result of the Knowledge Graph. It gives users a quick way to move on to another search that might match their intent even better.
These results are possible only because Google has information stored on each of these pages in their index. Before a user performs a search, Google has reviewed websites to figure out what keywords and intent they match for. That process makes it easy to populate the results page quickly when a search is made and helps Google provide the most relevant content possible.
As the most popular search engine around, Google more or less built the framework for how search engines look at content. Most marketers tailor their content specifically to rank on Google, which means they’re potentially missing out on other platforms.
Bing Search Algorithm
Bing, Microsoft’s proprietary search engine, uses an open-source vector-search algorithm called Space Partition Tree And Graph (SPTAG) to surface results. This means they’re going in a totally different direction from Google’s keyword-based search.
Being open source means that anyone can look at the nuts-and-bolts code of what makes up Bing’s search results and make comments. This open model is antithetical to Google’s tight control of their algorithms. The code itself is separated into two separate modules—index builder and searcher:
- Index Builder: The code that works to categorize website information into vectors
- Searcher: The way that Bing makes connections between search queries and vectors in their index
The second big difference between Bing and Google is at the core of how information is stored and indexed. Instead of a keyword-first model like Google’s, Bing breaks information down into individual data points called vectors. A vector is a numerical representation of a concept, and this approach is the basis for Bing’s search structure.
Search queries for Bing are based on an algorithmic principle called Approximate Nearest Neighbor, which uses deep learning and natural-language models to provide faster results based on the proximity of certain vectors to one another.
Graphical representation of Bing’s Approximate Nearest Neighbor algorithm SPTAG
If we look at the yellow dot as a user query, the green dots are its closest neighbors, followed by the blue dots. Tracking the orange arrow, we can see how Bing’s algorithm decides which information is most relevant to the user’s search.
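To make the vector idea concrete, here is a brute-force sketch of distance-based matching. SPTAG builds tree and graph structures so it can find *approximate* nearest neighbors without comparing the query against every stored vector; the toy three-dimensional vectors and cosine similarity below only illustrate the underlying principle:

```python
import math

# Toy "index" of concept vectors. Real vector indexes hold millions of
# high-dimensional vectors; these 3-D values are invented for illustration.
INDEX = {
    "wireless headphones review": [0.9, 0.1, 0.3],
    "best bluetooth earbuds":     [0.8, 0.2, 0.4],
    "how to roast coffee":        [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, index, k=2):
    """Return the k entries whose vectors sit closest to the query vector."""
    return sorted(index, key=lambda key: cosine(query_vec, index[key]),
                  reverse=True)[:k]

# A query about headphones lands near the headphone/earbud vectors.
print(nearest([0.85, 0.15, 0.35], INDEX))
```

In the diagram’s terms, the query vector is the yellow dot, and `nearest` picks out the green dots; SPTAG’s contribution is doing this quickly at scale rather than by exhaustive comparison.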
While the underlying principles driving Bing’s search structure are fundamentally different, the process of building their database still follows the crawl, index, rank actions.
Bing crawls websites to find new content or updates to existing content. They then create vectors for that information to store in their index. From there, they look at specific ranking factors. The biggest difference in comparison with Google is that Bing does not include pages without ranking authority, meaning that new pages have a more difficult time ranking if they don’t have backlinks to an existing page with more authority.
For more information on how crawling and indexing occur, check out Bing’s Webmaster Guidelines. This page provides an outline on the type of information that is most important if you want to rank on their platform.
If we look at the same search performed on Bing, the results are different:
While the results look similar in structure, Bing pulls from different websites for both its Shopping and featured snippet selections. The top-ranking result is also different from our search on Google, though both match our intent quite well.
If you’re thinking about tailoring content for Bing, you should start by looking at the differences between the top ranking sites and feature snippets. Their platform prioritizes content differently from Google, and these distinctions will help you understand why.
DuckDuckGo Search Algorithm
DuckDuckGo is a bit of a maverick in the search engine market but is making headway as the go-to search engine for anyone concerned about data privacy. While they have a proprietary web crawler called DuckDuckBot to scour web-page content, much of the information DuckDuckGo shows on their results page is compiled from 400+ additional third-party sources, including Bing, Yahoo, and Wikipedia.
Unlike Google and Bing, DuckDuckGo does not capture personal information on their users, including past search history and IP address. This dedication to privacy in some ways makes their algorithm work harder to provide personalized results.
For even more privacy, DuckDuckGo can also be used for completely anonymous browsing using the Tor network or an onion service.
As a result of this focus on privacy, DuckDuckGo has the most streamlined results page so far.
Both Bing and DuckDuckGo have the same first and second results, which makes sense, considering that Bing is included in DuckDuckGo’s search algorithms.
DuckDuckGo’s 400 additional sources also include computational databases like WolframAlpha, a platform built primarily to answer complex mathematical equations and provide tools for data analysis. Other sources come in the form of Instant Answers, which pull content from relevant websites in an effort to provide on-page answers, like the featured snippets we’ve seen from Google and Bing.
The information in our example comes directly from Wikipedia.
DuckDuckGo doesn’t provide specific information on the different kinds of ranking factors that go into these results pages but alludes to the fact that linking to sites with good authority is something to consider.
Another interesting aspect of the DuckDuckGo platform is that it lets users type custom shortcuts called bangs to bypass the search results page entirely. Because it already pulls from multiple sources to display results, DuckDuckGo can act as a direct search portal for platforms like Wikipedia, Amazon, and Twitter.
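A bang lookup can be sketched as a simple query rewrite. The `!w` (Wikipedia) and `!a` (Amazon) prefixes are real DuckDuckGo bangs, but the URL templates and routing logic here are simplified assumptions, not DuckDuckGo’s actual implementation:

```python
from urllib.parse import quote_plus

# Simplified stand-ins for DuckDuckGo's bang registry. The bang
# prefixes are real; the URL templates are illustrative guesses.
BANGS = {
    "!w": "https://en.wikipedia.org/wiki/Special:Search?search={}",
    "!a": "https://www.amazon.com/s?k={}",
}

def route(query):
    """Return a redirect URL if the query starts with a known bang, else None."""
    bang, _, rest = query.partition(" ")
    if bang in BANGS and rest:
        return BANGS[bang].format(quote_plus(rest))
    return None  # no bang: fall through to the normal results page

print(route("!w search engine"))
# → https://en.wikipedia.org/wiki/Special:Search?search=search+engine
```

The key behavior is that a matching bang short-circuits the results page entirely, handing the query straight to the target site’s own search.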
Because DuckDuckGo is a privacy-conscious platform, we can assume past searches are not part of their ranking algorithm. That, combined with the informational aspects of their additional sources, makes for a platform that is less personalized than Bing or Google but is still able to provide quality, relevant content for their users. Tailoring content for Bing would work for this platform as well.
YouTube Search Algorithm
YouTube is the most popular video-hosting website. Their search engine is effectively run by rules similar to those of Google, which owns the platform, and it focuses on keywords and relevancy. The algorithm is broken down into two separate functions: ranking videos in search and surfacing relevant recommendations.
The specific reasons why certain videos rank higher than others are, as with all Google properties, not publicly documented. That said, most interpretations point to video recency and channel upload frequency as the most important factors.
In terms of recommendations, this research paper from 2016 lists the main priorities for YouTube as scale, freshness, and noise:
- Scale: There are 300 hours of video uploaded to YouTube every minute, and the platform has approximately 1.3 billion users. This makes parsing information significantly more difficult, so the algorithm’s primary focus is finding ways to sift through this amount of data on a user-by-user basis.
- Freshness: YouTube balances how they recommend videos based on how recently a video was uploaded as well as on individual user’s past behavior.
- Noise: Because the amount and variety of content users watch on YouTube varies so widely, it is difficult for any algorithm to determine which content is most relevant at any given time.
These factors result in a recommendations page that is tailored to each individual user account.
YouTube recommendations on the home page
This also shows how Subscriptions factor into the way YouTube presents results. When a user subscribes to a particular channel, that channel’s videos get a boost in search results, recommendations, and what-to-watch-next suggestions.
Other ranking factors include what a user watches, how long they engage with different videos, and what the overall popularity of a video on YouTube is.
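A hypothetical way these signals could trade off against one another is sketched below. YouTube does not publish its ranking formula, so the weights and transforms are invented purely for illustration:

```python
import math

# Illustrative only: YouTube does not disclose its ranking formula.
# The weights and decay curves below are made up to show how popularity,
# freshness, and engagement signals might be balanced against each other.
def video_score(views, days_old, avg_watch_fraction):
    popularity = math.log10(views + 1)          # dampen huge view counts
    freshness = 1.0 / (1.0 + days_old / 30.0)   # decays over months
    engagement = avg_watch_fraction             # fraction of video watched, 0.0-1.0
    return 0.5 * popularity + 0.2 * freshness + 0.3 * engagement

# An older but very popular, well-watched video can outscore a
# newer video with fewer views -- matching the behavior observed below.
older_popular = video_score(2_000_000, 365, 0.6)
newer_upload = video_score(50_000, 30, 0.5)
print(older_popular > newer_upload)
```

Under these made-up weights, the log on view counts keeps viral videos from drowning out everything else, while the freshness decay gives recent uploads a temporary edge.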
Take a look at the results page for “best wireless headphones.”
The top result is the most-viewed video of the bunch.
YouTube showed probably the most fluctuation in results depending on what I searched: "best wireless headphones," "best wireless headphones 2021," and "best wireless headphones 2020." Each variation shuffled the order of results, even though most videos showed "2020" in their titles. (In one case, a video with 2018 in the title was returned.)
From these results, we understand that popularity is likely one of the biggest ranking factors for YouTube, putting it even above a newer video with an exact keyword match.
To rank well on YouTube, you’ll need a solid profile and consistent upload cadence. Their focus on popularity and profile strength takes more investment from marketers but pays off for brands that focus their efforts on the platform.
Understanding How Search Engines Work Helps You Create Better Content
When you know how different platforms display their results, it is easier to create content with the potential to rank well. This understanding also helps you diagnose why other types of content rank better or worse than your own.
We’ve put together five tips based on this information that can help you create better content across every platform:
- Understanding user intent is important. Every platform we looked at today prioritizes content based on how relevant it is to a user’s search query.
- Matching keywords will only get you so far. Including relevant keywords in your content helps search engines discover and index it more easily, but ranking well is more about providing value to users.
- Know how your target customer searches. Matching both keywords and intent requires an in-depth understanding of your customers and how they think about your product and your market.
- New content helps boost rankings. Creating new content or refreshing your existing content helps it rank higher and boosts your credibility as a brand.
- Gaining authoritative links is helpful. The more sites that link to your page, the more valuable it appears to search engines, since each inbound link signals that your content is relevant and worth referencing.
In the end, it all comes down to understanding your customer. You can’t create content that ranks well if you don’t know what people are looking for when they search for your product.
For more information on creating content for search, check out our SEO Guide to learn more!