How to Analyze the SERPs with NLP

In this video, we are going to learn how to uncover some powerful information from the search engines with a little help from NLP. Now, if you don't have a technical background, don't worry!  I'll walk you through the process step-by-step and have a free tool for you to make it super easy. 
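If you'd like to see the shape of the process in code before watching, here is a minimal sketch, assuming a Python environment such as Colab with trafilatura and spaCy installed. The URLs, query, and variable names are placeholders for illustration, not the notebook's actual code.

```python
# Minimal sketch: take a few top-ranking URLs, extract their main text with
# trafilatura, run spaCy NER over it, and count the most common entities.
# Assumes: pip install trafilatura spacy && python -m spacy download en_core_web_sm
from collections import Counter

import spacy
import trafilatura

nlp = spacy.load("en_core_web_sm")

# Placeholder URLs standing in for the top results of the query you're analyzing.
serp_urls = [
    "https://example.com/ranking-page-1",
    "https://example.com/ranking-page-2",
]

entity_counts = Counter()
for url in serp_urls:
    downloaded = trafilatura.fetch_url(url)       # raw HTML (None on failure)
    if not downloaded:
        continue
    text = trafilatura.extract(downloaded) or ""  # boilerplate-free main content
    doc = nlp(text)
    entity_counts.update(ent.text.lower() for ent in doc.ents)

# Most frequently mentioned entities across the ranking pages.
print(entity_counts.most_common(25))
```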

Resources:

Who is SMA Marketing?

SMA Marketing is an international search marketing agency that has been helping global and local brands grow online since 2009.

We specialize in SEO and inbound lead generation. Leveraging both data and intuition, we've helped hundreds of companies succeed online and trained thousands of entrepreneurs and marketers through our videos.

Comments

You are making great videos. I just discovered your channel and have watched so many of them.

shunmax

Great work, mate. I've wanted to look at the SERPs with NLP since 2019, and now you've shown us how to do it. You also showed that neither NLP nor Google is perfect; they don't fully know how it all really works once deployed.

amalaugustine

Please make another, more detailed video on this topic.

famifami

After extracting the words, it would be useful to check them against the Google Knowledge Graph (to see which of the topics are entities). I would also use a stop-word or common-word list to clean the final table (top 25 terms).
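A sketch of that cleanup step, assuming the top terms are in a plain Python list (the variable names and the extra common-word set are illustrative, not from the notebook). Checking terms against the Knowledge Graph Search API is shown only as a commented-out call, since it needs your own API key:

```python
# Sketch: drop stop words and generic terms before building the top-25 table.
from spacy.lang.en.stop_words import STOP_WORDS

extra_common = {"also", "get", "use", "make", "way"}   # your own domain noise

top_terms = ["the", "seo", "also", "search intent", "google"]  # placeholder data
cleaned = [
    t for t in top_terms
    if t.lower() not in STOP_WORDS and t.lower() not in extra_common
]
print(cleaned)  # -> ['seo', 'search intent', 'google']

# To see which surviving terms are known entities, each one could be checked
# against the Knowledge Graph Search API (requires an API key), e.g.:
# requests.get("https://kgsearch.googleapis.com/v1/entities:search",
#              params={"query": term, "key": API_KEY, "limit": 1})
```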

shunmax

Hello
Thank you for your very good content.
I have a question: when I run the code, I get this error:
Traceback (most recent call last)
How can I fix it?

peymanhalimi

Great video! Which languages are supported for entities besides English?
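spaCy itself ships pretrained pipelines with entity recognition for several languages besides English (German, French, Spanish, Italian, Dutch, Portuguese, and more). A sketch of swapping in one of them, assuming you've downloaded the model first; the sample sentence is just an illustration:

```python
# Sketch: run entity extraction with a non-English spaCy pipeline.
# First: python -m spacy download de_core_news_sm   (German, for example)
import spacy

nlp_de = spacy.load("de_core_news_sm")
doc = nlp_de("Die Allianz SE hat ihren Hauptsitz in München.")
print([(ent.text, ent.label_) for ent in doc.ents])
# e.g. [('Allianz SE', 'ORG'), ('München', 'LOC')]
```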

stenliseo

Do you know of a notebook that does this exact same thing but with the Google NLP API? I have API access. Thanks.
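Not the same notebook, but for anyone who already has Google Cloud access, the entity step with the Cloud Natural Language API looks roughly like this (a sketch, assuming the google-cloud-language client is installed and credentials are configured; the sample text is a placeholder):

```python
# Sketch: entity extraction with Google Cloud Natural Language instead of spaCy.
# Assumes: pip install google-cloud-language, with GOOGLE_APPLICATION_CREDENTIALS set.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

text = "Example page text scraped from a ranking URL."   # placeholder content
document = language_v1.Document(
    content=text, type_=language_v1.Document.Type.PLAIN_TEXT
)

response = client.analyze_entities(request={"document": document})
for entity in response.entities:
    # Unlike spaCy, Google's entity types include a catch-all OTHER category.
    print(entity.name, language_v1.Entity.Type(entity.type_).name, entity.salience)
```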

SpiritTracker

Quick question: how can you scrape results for only the USA or a particular country?
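This depends on how results are fetched. If the notebook builds the Google search URL itself (an assumption; a SERP API would have its own country setting), the standard `gl` (country) and `hl` (language) query parameters control it, as in this sketch:

```python
# Sketch: request country-specific Google results via the gl/hl parameters.
# Note: scraping Google directly is rate-limited and may breach its terms;
# most SERP APIs expose equivalent country/language options instead.
import requests

params = {
    "q": "best crm software",   # placeholder query
    "gl": "us",                 # country code, e.g. "us", "uk", "de"
    "hl": "en",                 # interface language
    "num": 10,                  # number of results
}
resp = requests.get(
    "https://www.google.com/search",
    params=params,
    headers={"User-Agent": "Mozilla/5.0"},
)
print(resp.status_code)  # parse result links out of resp.text from here
```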

Wanderbug

Hello, the tool isn't working for me. Getting this - ERROR: Failed building wheel for tokenizers

vinayshastri

Thank you - very interesting. Is there a way I can get UK Google results, rather than US please?

mmadog

Unfortunately, spaCy isn't extracting all entities from the text; there are hundreds of entities it's missing. It's only pulling known entities that have either a Wikipedia page or a Knowledge Graph entry. However, "other" entity types play an important role in content, and how they are used increases the salience score of the focus known entity. Is there a way to get spaCy to show ALL entity types, including "other"?
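That matches how spaCy's NER works: the label set is fixed by the trained model, and there is no catch-all OTHER type like Google's. Two workarounds, sketched below with standard spaCy attributes (the sample sentence is illustrative): list the labels the model does know, and use noun chunks to capture the unlabelled "other" concepts.

```python
# Sketch: show every entity label the loaded model can emit, then fall back to
# noun chunks for concepts the NER model has no label for.
import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp.get_pipe("ner").labels)   # e.g. ('CARDINAL', 'DATE', ..., 'PERSON', ...)

doc = nlp("Topical authority and internal linking improve rankings for Acme Corp.")
print([(ent.text, ent.label_) for ent in doc.ents])   # labelled entities only
print([chunk.text for chunk in doc.noun_chunks])      # broader "other" concepts
```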

SpiritTracker

Do we need to add our own NLP API to this? I've been using this, but lately I've been getting a lot of errors even after restarting the runtime, etc.

SpiritTracker

There is a major issue: the top 10 results do not include featured snippets or the 2nd position. Please fix it so people get accurate data. Thanks.

Rinzler.

Is this still working? I tried using the template, but it's giving me errors. Is that me messing up, or has this not been updated in a while?

WillemNout

Hi! How can we change the language of the query?

Groupe_Espace

It asks for access when I open the link mentioned in the description.

burhanuddinmetro

Looks like Trafilatura is coming up with problems

"NameError Traceback (most recent call last)
in <module>()
----> 1 pd.set_option('display.max_colwidth', None) # make sure output is not truncated (cols width)
2 pd.set_option("display.max_rows", 100) # make sure output is not truncated (rows)"
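A NameError on `pd` usually just means the cell that imports pandas never ran (often because an earlier install cell, such as the Trafilatura one, failed and the rest were skipped). Re-running the imports before the display-option cell normally clears it; a minimal version:

```python
# Sketch: make sure pandas is imported before the display-option cell runs.
import pandas as pd

pd.set_option("display.max_colwidth", None)  # don't truncate column contents
pd.set_option("display.max_rows", 100)       # show up to 100 rows
```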

jonth

As of June 2023, the
```
!pip install "transformers == 3.3.0"
```
step fails because pip can't build the wheels.
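That pinned version is the likely culprit: newer Colab runtimes can no longer build the old tokenizers dependency it pulls in. Assuming the notebook doesn't depend on the 3.3.0-specific API (worth testing), installing an unpinned release with prebuilt wheels avoids the build step entirely:
```
!pip install -U transformers
```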

smartpersonalfinance

Hi mate, spaCy did not work for me. It stops at the ### Scraping results with Trafilatura ### step.
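One way to tell whether it's the scraping step or the spaCy step that fails is to run the extraction on a single URL in its own cell; a small debugging sketch (the URL is a placeholder):

```python
# Sketch: isolate the Trafilatura step to confirm extraction works on its own.
import trafilatura

url = "https://example.com/some-ranking-page"   # placeholder
downloaded = trafilatura.fetch_url(url)
print("fetched:", downloaded is not None)

text = trafilatura.extract(downloaded) if downloaded else None
print("extracted characters:", len(text) if text else 0)
```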

redwan.affiliate

Hi.

I ran all the cells, and when I get to the visualization part, I get this output:


AssertionError                 Traceback (most recent call last)
in <module>()
      5     width_in_pixels=900,
      6     minimum_term_frequency=3,
----> 7     term_significance=
      8     open("SERP-Visualization_top3.html",
      9 display(HTML(html))

2 frames
in to_dict(self, category, category_name, not_category_name, scores, transform, title_case_names, not_categories, neutral_categories, extra_categories, background_scorer, use_offsets, **kwargs)
    274
    275     all_categories =
--> 276     assert category in all_categories
    277
    278     if not_categories is None:
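That assertion fires inside Scattertext when the `category` string passed to the explorer doesn't match any category present in the corpus. A sketch of a consistent call, assuming the scraped pages sit in a DataFrame with `position_group` and `text` columns; the column and category names here are illustrative, not the notebook's:

```python
# Sketch: the category passed to the explorer must exist in the corpus.
import pandas as pd
import spacy
import scattertext as st

nlp = spacy.load("en_core_web_sm")

df = pd.DataFrame({
    "position_group": ["top3", "top3", "4-10"],   # one category label per page
    "text": ["page one text", "page two text", "page three text"],
})

corpus = st.CorpusFromPandas(
    df, category_col="position_group", text_col="text", nlp=nlp
).build()

html = st.produce_scattertext_explorer(
    corpus,
    category="top3",                 # must be one of df["position_group"].unique()
    category_name="Top 3",
    not_category_name="Positions 4-10",
    width_in_pixels=900,
    minimum_term_frequency=1,        # raise this (e.g. to 3) with real page text
)
open("SERP-Visualization_top3.html", "wb").write(html.encode("utf-8"))
```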

nenadlatinovic