A search engine’s principal objective is to assist its users in completing a task (and, of course, to sell advertising).
Sometimes completing that task requires acquiring sophisticated knowledge; sometimes a single answer is all the user needs.
In this chapter, you’ll learn how search engines classify a query and how they come up with an answer.
Types of Queries Search Engines Consider
This question might be the subject of a whole article or perhaps a book.
Nonetheless, we’ll do our best to condense everything into a few hundred words.
To be clear, RankBrain does not play a significant part in this situation.
So, what’s the real story here?
The first stage is to figure out what information is being sought.
That is, determining whether the query is a who, what, where, when, why, or how question.
No matter what terms are included in a query, this classification can still take place, as demonstrated by:
As a result, two things are taking place here:
- Google has determined that the user is looking for an answer on a specific topic, and that the user’s likely secondary intents are quite distinct from the primary one.
- You may be wondering how the search engines know that the user in the second example above is asking a question at all. After all, no question word appears in the query.
And in the first example, how do they determine that the user wants the weather in their own location, rather than weather in general?
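To make that first classification step concrete, here is a toy sketch in Python. It is an illustrative assumption, not Google’s actual system: real engines use learned models over far richer signals, but the shape of the problem is the same, explicit question words plus known implicit-intent terms like “weather”.

```python
import re

# Explicit question words (the Five Ws and an H).
QUESTION_WORDS = {"who", "what", "where", "when", "why", "how"}

# Hypothetical table of terms that imply a question type even when
# no question word is present ("weather" implies "what is the
# weather in my location?"). Illustrative data only.
IMPLICIT_INTENTS = {
    "weather": "what",
    "meme": "what",
}

def classify_query(query: str) -> tuple:
    """Return (question_type, reason) for a raw query string."""
    tokens = re.findall(r"[a-z]+", query.lower())
    for tok in tokens:
        if tok in QUESTION_WORDS:
            return tok, "explicit question word"
    for tok in tokens:
        if tok in IMPLICIT_INTENTS:
            return IMPLICIT_INTENTS[tok], "implicit intent term"
    return "unknown", "no recognized intent signal"

print(classify_query("what's the weather like"))  # explicit "what"
print(classify_query("weather"))                  # implicit intent
```

Both example queries land on “what” even though the second contains no question word at all, which is the behavior the screenshots above illustrate.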
Systems that link and exchange data are key to creating this environment. The following are the pillars of the system:
Asking the Right Questions in the Right Way
Querying is often thought of as a one-way street, where each request is met with a single answer. However, this is not true.
Search engines can generate canonical queries when they don’t have a known-good likely intent, or when they want to test their assumptions.
In 2016, Google was granted a patent titled “Evaluating Semantic Interpretations Of A Search Query” (the link is to my analysis, for easier reading).
Here’s a visual representation of what’s going on:
Multiple meanings in one question.
As described in the patent, every conceivable interpretation might be used to retrieve a result; in other words, a collection of results would be generated for all five queries.
The results for queries 204a, 204b, 204c, and 204d would each be compared against the results for query 202, and whichever of the 204-series queries most closely matches 202 would be taken as the intended meaning.
204c appears to have won based on current results:
This technique would presumably need to be applied twice: first to settle on movies as the topic, then to narrow down the specific result.
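As a rough sketch of the comparison step, the patent’s logic can be imagined like this, assuming result-set overlap as the similarity measure (the patent leaves the exact measure open, and the result IDs below are fabricated for illustration):

```python
# Sketch of scoring alternative semantic interpretations of a
# query by how well each one's results match the results returned
# for the original query. Fabricated data; illustrative only.

def interpretation_score(original_results, alt_results):
    """Fraction of the original query's results that the
    alternative interpretation also retrieves (simple overlap)."""
    if not original_results:
        return 0.0
    overlap = set(original_results) & set(alt_results)
    return len(overlap) / len(original_results)

def best_interpretation(original_results, alternatives):
    """Pick the alternative (e.g. 204a-204d in the patent figure)
    whose result set best matches the original query's (202)."""
    return max(alternatives,
               key=lambda name: interpretation_score(
                   original_results, alternatives[name]))

# Fabricated example: query 202 vs. interpretations 204a-204d.
results_202 = ["r1", "r2", "r3", "r4"]
alternatives = {
    "204a": ["r9", "r8"],
    "204b": ["r1", "r7"],
    "204c": ["r1", "r2", "r3"],
    "204d": ["r2", "r6"],
}
print(best_interpretation(results_202, alternatives))  # → 204c
```

In this fabricated data, 204c retrieves three of 202’s four results, so it wins, mirroring the outcome described above.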
Moreover, as stated in the patent, the more successful a result is, the fewer clicks it needs to draw from this page:
We can use other data sources like click-through data and user-specific data used to produce search results to evaluate the alternative semantic interpretations without undertaking any additional analysis.
This is not to say that CTR is a direct metric in the patent’s context. When asked about Google utilizing user metrics, John Mueller responded, “… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going correctly.”
Put another way, they don’t look at just one result; they look at how the entire SERP (including its layout) performs.
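A crude sketch of that aggregate view follows, with a fabricated “quick bounce” signal standing in for whatever satisfaction metrics Google actually uses; the point is that the comparison happens over many sampled SERPs, never a single result:

```python
# Sketch of evaluating an algorithm change in aggregate: SERP-level
# engagement is averaged across many queries and compared between
# the control and experiment variants. All data here is fabricated.

from statistics import mean

def serp_success(serp_events):
    """Crude per-SERP success signal: 1 if the session ended
    without an immediate bounce back to the results page."""
    return 0 if serp_events.get("bounced_quickly") else 1

def compare_algorithms(control_serps, experiment_serps):
    """Average the success signal over all sampled SERPs for each
    variant; a positive difference favors the experiment."""
    c = mean(serp_success(s) for s in control_serps)
    e = mean(serp_success(s) for s in experiment_serps)
    return e - c

control = [{"bounced_quickly": True}, {"bounced_quickly": False},
           {"bounced_quickly": True}]
experiment = [{"bounced_quickly": False}, {"bounced_quickly": False},
              {"bounced_quickly": True}]
print(compare_algorithms(control, experiment))
```

With these fabricated sessions, the experiment variant scores higher overall even though individual sessions in both groups bounced.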
For the most part, Google develops synonyms through neural matching, an AI-driven approach that helps it recognize synonyms at a much deeper level.
This capability lets Google return results like this:
As you can see, the system is trying to figure out why my TV has a “soap opera effect,” which makes sense.
The term “weird” doesn’t even appear on the results page.
So much for keyword density.
Their AI systems look for synonyms at a highly complex level in order to understand what information will satisfy an intent, even when it isn’t explicitly requested.
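Under the hood, this kind of meaning-based matching is typically done with embeddings: queries and documents are mapped to vectors, and closeness in that vector space stands in for closeness in meaning. A toy sketch with hand-made three-dimensional vectors (an assumption for illustration; real systems learn embeddings with hundreds of dimensions):

```python
# Toy illustration of matching by meaning rather than by keywords.
# The "embeddings" below are fabricated by hand so that the two
# phrases about TV motion smoothing sit close together in space.

import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same
    direction (similar meaning), 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

embeddings = {
    "why does my tv look weird": [0.9, 0.8, 0.1],
    "soap opera effect": [0.85, 0.75, 0.2],
    "keyword density": [0.1, 0.2, 0.95],
}

query = embeddings["why does my tv look weird"]
for doc in ("soap opera effect", "keyword density"):
    print(doc, round(cosine(query, embeddings[doc]), 3))
```

The query about a “weird”-looking TV scores far closer to “soap opera effect” than to an unrelated phrase, even though the two share no words, which is exactly what the results page above shows.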
Situational Context
Situational context applies in a wide range of scenarios, but at its heart, we need to think about how query intent changes under different circumstances.
The canonical-query patent mentioned above also includes the idea of generating a template: a reusable starting point for future similar searches.
This lets the engine apply its resources more broadly. For example, if a single word has a wide range of meanings, the template might assume the user wants a definition.
From that starting point, the engine can then look for patterns of exceptions, such as food terms.
Speaking of food, it’s a perfect example to support my claim (and, I believe, the logic) that search volumes are very likely used by the engines as well.
I believe it’s safe to assume that if more people searching a term like “pizza” are looking for restaurants than for recipes, the engines would treat that as a signal, and would also know that when a food term doesn’t follow that pattern, the template may not apply.
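Putting the template and exception ideas together, a minimal sketch might look like this; the food terms and intent shares below are fabricated for illustration, not real search-volume data:

```python
# Sketch of a query template with an exception pattern. Default
# for an ambiguous single-word query: show a definition. Exception:
# food terms whose (fabricated) search-volume mix says most users
# want restaurants get a local-results layout instead.

FOOD_TERMS = {"pizza", "sushi", "tacos", "fondue"}

# Fabricated intent mix inferred from search volumes: the fraction
# of searches for each term that are restaurant-seeking.
RESTAURANT_SHARE = {"pizza": 0.7, "sushi": 0.6, "fondue": 0.3}

def pick_template(query: str) -> str:
    tokens = query.lower().split()
    if len(tokens) != 1:
        return "standard-results"        # template only covers
                                         # single-word queries
    term = tokens[0]
    if term in FOOD_TERMS and RESTAURANT_SHARE.get(term, 0) > 0.5:
        return "local-restaurants"       # exception to the default
    return "definition"                  # default single-word case

print(pick_template("pizza"))           # → local-restaurants
print(pick_template("fondue"))          # → definition
print(pick_template("pizza near me"))   # → standard-results
```

Note how “fondue” falls back to the default even though it is a food term: its fabricated volume mix doesn’t follow the restaurant pattern, so the exception doesn’t fire.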
Seed Sets
I feel it is highly likely, if not certain, that seed sets of data are used when building templates: scenarios in which engineers generate template systems based on a real-world understanding of what users want.
Imagine an engineer Googling “pizza,” reviewing the top ten results on the web, and then working with the rest of the team on a template.
While I can’t confirm that seed sets are used in this way, it makes logical sense.
Interaction History
It is common practice for search engines to test their comprehension of user intent by displaying results in an appropriate layout and watching what happens.
To see if the query “what’s the weather like” has the intended meaning of asking for an answer, they’ll run an experiment.
At scale, that tells them whether it’s what people are looking for.
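Such an experiment could be sketched like this, with a fabricated satisfaction signal; real systems would use far richer behavioral data than a single probability per layout:

```python
# Sketch of testing an intent hypothesis by serving an answer-box
# layout to a slice of traffic and comparing a crude satisfaction
# rate against the standard result list. All numbers fabricated.

import random

def run_layout_experiment(n_sessions=200, seed=7):
    """Serve roughly half the sessions an answer-box layout and
    half the standard list, then compare satisfaction rates."""
    rng = random.Random(seed)
    satisfied = {"answer_box": 0, "standard": 0}
    served = {"answer_box": 0, "standard": 0}
    for _ in range(n_sessions):
        layout = "answer_box" if rng.random() < 0.5 else "standard"
        served[layout] += 1
        # Assumption: users who want a direct answer are almost
        # always satisfied by the answer box, but only sometimes
        # by a plain list of links.
        p_satisfied = 0.9 if layout == "answer_box" else 0.4
        if rng.random() < p_satisfied:
            satisfied[layout] += 1
    return {k: satisfied[k] / served[k] for k in served}

rates = run_layout_experiment()
print(rates)  # answer-box satisfaction should come out higher
```

If the answer-box layout consistently wins, the hypothesis that the query wants a direct answer is confirmed, and the layout sticks.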
So, what does all of this have to do with responding to questions?
That’s a good one.
As part of our effort to learn how Google answers questions, we first needed a better understanding of how Google gathers and analyzes data.
Sure, answering explicit “who,” “what,” “where,” “when,” “why,” and “how” questions is simple.
However, we must consider how they know that a search for “weather” or “meme” is a search for a specific piece of information.
There are no Ws in this Five Ws question (or an H for that matter).
All that’s left is to discover the solution using a combination of the methods described above (plus a couple I’m sure I’ve overlooked).
From the user’s single word, the engine has figured out that the query is most likely a request for a specific answer. Now it’s up to them to figure out what that answer is.
To get started, I’d recommend reading what John Mueller says about featured snippets and working your way up as applicable to your business.