
How Do Search Engines Work?

A guide to how search engines work. Topics covered include the processes of search engine crawling and indexing as well as concepts such as crawl budget and PageRank.


In this guide, we’ll introduce how search engines work, covering the processes of crawling and indexing as well as concepts such as crawl budget and PageRank.

Search engines work by crawling hundreds of billions of pages using their own web crawlers. These web crawlers are commonly referred to as search engine bots or spiders. A search engine navigates the web by downloading web pages and following links on these pages to discover new pages that have been made available.
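The crawl process described above amounts to a breadth-first traversal of the link graph. As a rough sketch, a hypothetical in-memory "web" (a dict mapping URLs to the links found on each page) stands in for real HTTP fetches; production crawlers also add politeness delays, robots.txt checks, and URL canonicalization.

```python
from collections import deque

# Hypothetical in-memory "web": each URL maps to the links found on that page.
# A real crawler would download each page over HTTP and parse out its <a> tags.
TINY_WEB = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seed_url, web):
    """Breadth-first discovery: fetch a page, queue its links, repeat."""
    discovered = {seed_url}
    frontier = deque([seed_url])
    while frontier:
        url = frontier.popleft()
        for link in web.get(url, []):
            if link not in discovered:  # only queue pages not seen before
                discovered.add(link)
                frontier.append(link)
    return discovered

print(sorted(crawl("https://example.com/", TINY_WEB)))
```

Starting from the single seed URL, the crawler reaches all four pages by following links alone, which is exactly how new pages are discovered without being submitted directly.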


The search engine index

Webpages that have been discovered by the search engine are added into a data structure called an index.

The index includes all the discovered URLs along with a number of relevant key signals about the contents of each URL such as:

  • The keywords discovered within the page’s content – what topics does the page cover?
  • The type of content that is being crawled (using microdata called Schema) – what is included on the page?
  • The freshness of the page – how recently was it updated?
  • The previous user engagement of the page and/or domain – how do people interact with the page?
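The signals above can be pictured with a minimal sketch of an index, assuming a simple inverted structure (keyword → URLs) alongside per-URL signal records; real search engine indexes are vastly larger and more elaborate, but the principle is the same.

```python
from collections import defaultdict

# Hypothetical per-URL records carrying the signals described above:
# keywords, content type (schema.org), and freshness.
pages = {
    "https://example.com/recipes/banana-bread": {
        "keywords": ["banana", "bread", "recipe"],
        "schema_type": "Recipe",
        "last_updated": "2024-05-01",
    },
    "https://example.com/blog/bread-history": {
        "keywords": ["bread", "history"],
        "schema_type": "Article",
        "last_updated": "2023-11-12",
    },
}

# Inverted index: keyword -> set of URLs whose content mentions it.
# This is what lets a search engine jump straight from a query term
# to candidate pages without scanning every document.
inverted = defaultdict(set)
for url, signals in pages.items():
    for keyword in signals["keywords"]:
        inverted[keyword].add(url)

print(sorted(inverted["bread"]))  # both pages cover "bread"
```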

What is the aim of a search engine algorithm?

The aim of the search engine algorithm is to present a relevant set of high-quality search results that will fulfill the user’s query/question as quickly as possible.

The user then selects an option from the list of search results and this action, along with subsequent activity, then feeds into future learnings which can affect search engine rankings going forward.


What happens when a search is performed?

When a user enters a search query, the search engine identifies all of the pages in its index that are deemed relevant to that query, then applies an algorithm to rank those pages into an ordered set of results.

The algorithms used to rank the most relevant results differ for each search engine. For example, a page that ranks highly for a search query in Google may not rank highly for the same query in Bing.
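As an illustration only, the "identify relevant pages, then rank them" step might be sketched with a toy scoring function that combines keyword overlap with a freshness tiebreaker. The index data and weights here are invented for the example; real ranking algorithms weigh hundreds of signals and are not public.

```python
# Hypothetical index entries carrying the signals described earlier.
index = {
    "https://example.com/recipes/banana-bread": {
        "keywords": ["banana", "bread", "recipe"],
        "last_updated": "2024-05-01",
    },
    "https://example.com/blog/bread-history": {
        "keywords": ["bread", "history"],
        "last_updated": "2023-11-12",
    },
    "https://example.com/about": {
        "keywords": ["company", "team"],
        "last_updated": "2024-01-15",
    },
}

def rank(query_terms, index):
    """Return relevant URLs, best match first (toy scoring, not Google's)."""
    results = []
    for url, signals in index.items():
        overlap = len(set(query_terms) & set(signals["keywords"]))
        if overlap:  # only pages deemed relevant enter the result set
            results.append((overlap, signals["last_updated"], url))
    # Higher keyword overlap first; ties broken by more recent update date.
    results.sort(reverse=True)
    return [url for _, _, url in results]

print(rank(["bread", "recipe"], index))
```

The recipe page outranks the history article because it matches both query terms, and the irrelevant "about" page never enters the result set at all.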

In addition to the search query itself, search engines use other contextual signals to return results, including:

  • The user’s location – particularly important for local queries
  • The language of the query and the user’s language settings
  • The user’s device – mobile and desktop results can differ
  • The user’s previous search history
Why might a page not be indexed?

There are a number of circumstances in which a URL will not be indexed by a search engine. This may be due to:

  • Crawling of the URL being disallowed in the site’s robots.txt file
  • A noindex directive in the page’s meta robots tag or HTTP header
  • A canonical tag pointing search engines to a different URL
  • Low-quality, thin, or duplicate content
  • The page returning an error or being unreachable when crawled
Next Chapter: Search Engine Crawling


The Full Guide to How Search Engines Work:

  • How Search Engines Crawl Websites
  • How Does Search Engine Indexing Work?
  • What are the Differences Between Search Engines?
  • What is Crawl Budget?
  • What is Robots.txt? How is Robots.txt Used by Search Engines?
  • A Guide to Robots.txt Directives


Sam Marsden

SEO & Content Manager

Sam Marsden is Lumar's former SEO & Content Manager and currently Head of SEO at Busuu. Sam speaks regularly at marketing conferences, like SMX and BrightonSEO, and is a contributor to industry publications such as Search Engine Journal and State of Digital.

