Robots Meta Tags | Lesson 9/34 | Semrush Academy

You'll gain an understanding of search crawlers and how to optimally budget for them.

0:08 Robots Meta Tag
1:04 Noindex
1:09 Nofollow
1:22 Having multiple meta tags
3:25 Notranslate
3:32 Summary

✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹
You might find it useful:
Tune up your website’s internal linking with the Site Audit tool:
Understand how Google bots interact with your website by using the Log File Analyzer:

Learn how to use SEMrush Site Audit in our free course:
✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹

A robots meta tag is a detailed, page-specific way to control how a particular URL is indexed and presented to users in search results. It usually goes into the "head" section of the page, but the same directives can also be delivered via an HTTP response header.
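As a sketch, the page-level form is a single meta element in the page's "head" (the noindex value here is just an example):

```html
<!-- Inside the <head> of the page -->
<meta name="robots" content="noindex">
```

The same directive can instead be sent as an HTTP response header, e.g. `X-Robots-Tag: noindex`, which is useful for non-HTML resources such as PDFs, where there is no "head" section to put a meta tag in.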

The robots meta tag can be applied globally, meaning you serve one directive that is valid for all crawlers – or you can take a more granular approach and specify a directive that is only valid for, say, Bingbot but not for Googlebot.
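The difference between the two approaches looks like this (the noindex value is again just an example):

```html
<!-- Global: one directive, valid for all crawlers -->
<meta name="robots" content="noindex">

<!-- Granular: valid for Bingbot only; Googlebot is not affected -->
<meta name="bingbot" content="noindex">
```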

The most commonly used directive is noindex, which essentially means: “Dear search engine, please do not display this URL in search results”.

It is also possible to combine directives, e.g. noindex and nofollow. Noindex, again, means that this URL will not show up in search results; nofollow means that search engines should not pass any link equity through the links going out from this specific URL. Keep in mind, though, that Google may still crawl those outgoing links.
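Combined directives go into a single content attribute, separated by commas:

```html
<!-- Don't index this URL, and don't pass equity through its outgoing links -->
<meta name="robots" content="noindex, nofollow">
```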

Having multiple robots meta tags on one page is also possible, so you can serve different directives to different user agents. This can be helpful if you want to control the indexation behaviour of Googlebot-News differently from Googlebot for regular web or smartphone results.
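A sketch of that Google News scenario, using two separate meta tags:

```html
<!-- Keep the page out of Google News results... -->
<meta name="googlebot-news" content="noindex">
<!-- ...while regular web and smartphone results remain indexable -->
<meta name="googlebot" content="index, follow">
```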

From a more practical standpoint, you would use noindex for URLs with minimal content, for direct duplicates, or for low-value, low-quality entry pages that cause a bad user experience: internal search results, category pages with very few items on them, or duplicated content (e.g. the print and regular versions of an article). Overall, we're talking about low-value pages that shouldn't serve as an entry point for your users from search results.

There are also some less commonly known values, e.g. noarchive, which prevents search engines from showing a cached copy of the URL, and nosnippet, which prevents a snippet for this URL from showing up in search results. These are rarely useful in practice, because for regular websites you normally do want a snippet for your URL. You can also specify notranslate, which tells Google not to offer a translation of this page in search results.
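For illustration, these less common values look like this (notranslate uses the name "google" in Google's documentation, rather than "robots"):

```html
<meta name="robots" content="nosnippet">    <!-- no text snippet in results -->
<meta name="robots" content="noarchive">    <!-- no cached copy -->
<meta name="google" content="notranslate">  <!-- no translation offer -->
```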

In summary: the two most commonly used directives are noindex and nofollow. Noindex is actually the only one you really need, because using internal nofollow often causes more problems than it solves. And if you don't want to restrict indexing at all, you don't need to include a robots meta tag: if no directives are present, Google simply treats the page as index, follow, so don't waste time and resources implementing the defaults.

To make you aware of pages with valuable content that were mistakenly blocked by a noindex directive, SEMrush Site Audit offers an appropriate check, which we recommend using.

#TechnicalSEO #TechnicalSEOcourse #MetaRobots #SEMrushAcademy