Google Webmaster Central Hangout 16th Jan 2015

Google has for a while been holding a Google Webmaster Central Hangout on a weekly basis, and you can find the previous hangout here. In this session you will find a fairly high number of short questions which provide some great information on a wide range of topics. In addition, there is a detailed discussion that reveals some interesting information on how Google deals with pages with a high number of outgoing links. Anyone can join in on these hangouts, and you can post questions in advance, with John Mueller answering the most popular upvoted ones.

If you don’t have time to watch the whole video then you are in luck. We have included all the relevant parts of the transcript below (after removing the superfluous conversation) and have expanded on the topics covered with our brief opinions and links for further reading. You will also see time references next to each question title so that you can jump to the relevant part of the video should you wish.

To help you find the questions of most interest to you, a summary of all the questions is included below (with internal links to take you to the relevant part of the page):

Transcript Commentary of the Google Webmaster Central Hangout 16th Jan 2015

Introduction – 0:02

JOHN MUELLER: Welcome, everyone, to today’s Google Webmaster Central Office Hours Hangout. My name is John Mueller.  I am a webmaster trends analyst at Google here in Switzerland.  And part of what I do is talk with webmasters like you, try to help answer any questions that are open, and make sure that any feedback that you might have is passed on back to the team here as well.

Before we get started, I thought I would mention three short things that happened recently. On the one hand, we have a bunch of new types of structured data that you can add to your website. That includes things like being able to specify your social profiles. If you’re showing a knowledge graph card on the side for your website, that might be something that you could do. So you’d have links to your Google+ page, Facebook account, Twitter, whatever you have there. On the other hand, we have more information for events. If you’re an artist or if you’re performing somewhere, then we have a bunch of information about events. That’s also listed on our blog. And we also just yesterday launched a new structured data testing tool, which has support for a lot of new features. So you might want to check that out if you’re into structured data.

And finally, one thing that we did this week is we started a Google Moderator collection of your feedback, your ideas, your things that we should be focusing on more this year. That includes things maybe for web search in general or for Webmaster Tools specifically, or maybe there’s some feature that you’re still missing that you’d like to let us know about.

All right. With that covered, let’s start off with you guys. Do any of you have any questions to start off with?
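Best Host News Commentary – The social profile markup John mentions uses schema.org’s sameAs property, added to your home page. Below is a minimal sketch based on the announced format; the organization name and profile URLs are placeholders, so substitute your own:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "http://www.example.com",
  "sameAs": [
    "https://plus.google.com/+ExampleCo",
    "https://www.facebook.com/exampleco",
    "https://twitter.com/exampleco"
  ]
}
</script>
```

You can check markup like this with the new structured data testing tool John refers to later in the hangout.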

Question 1: 1:53 – Question about Webmaster Tools Feedback Google are Gathering

What would you like to see from Google Websearch & Webmaster Tools in 2015?

MIHAI APERGHIS: Hey, John. I wanted to ask about this feedback that you’re gathering for Webmaster Tools. Can you tell us about the period of time that you’re going to gather the feedback? And when do you think you’re going to start maybe implementing some of that?

JOHN MUELLER: Well, of course we have to prioritize a little bit earlier. So a lot of things have already been prioritized and planned. The team has a lot of interesting stuff to work on already. But this type of feedback is really useful to help us confirm that we’re headed in the right direction or to think about things that maybe we should add for later this year. Or, depending on how big the things are that are listed there, maybe there are things that we could even sneak in in between as well if it’s not a lot of work or doesn’t take a lot to actually get that done. So it kind of depends.

MIHAI APERGHIS: OK, cool, because one of the most popular requests is the ability to display any path– not path, things like, I guess, what algorithms are affecting your website.


MIHAI APERGHIS: And access to the [INAUDIBLE] I was wondering about the timeline.

JOHN MUELLER: Yeah. I mean, the other thing to keep in mind is that these are essentially suggestions from our side. And just because something gets voted to the top doesn’t mean that we’ll run off and do that right away. It might even be that we have good policy reasons for not doing that at all, which might mean maybe later on during the year we’ll figure out, OK, well, we can’t do this specifically, what people were asking for, but we can do something similar that gives them the same kind of information that wouldn’t be involved by those kind of policy decisions. So I think this is really useful to help us guide the way, but it doesn’t mean that if something is highly voted that we’ll do that right away.

MIHAI APERGHIS: Yeah, that makes sense.

Best Host News Commentary – Google recently posted the question “What would you like to see from Google Websearch & Webmaster Tools in 2015?” on Google+.  You can leave feedback here.  Looking at the comments posted so far, many of the suggestions relate to having access to Webmaster Tools data for longer (i.e. 12 months instead of 3), better sampling with keywords data, clear penalty indicators and less of a delay for the time it takes for the data to update.

Question 2: 3:53 – Question about the Google “OneBox”

MALE SPEAKER: John. If you search for a stock company, like Google, on Google and the first thing you see is this big box with stock information, how would you call that, officially? What’s the name of that box?

JOHN MUELLER: It’s a OneBox but–


JOHN MUELLER: I don’t know if that would be, like, a Finance OneBox or something like that.

MALE SPEAKER: OK, OK Finance OneBox, good. I emailed that to you yesterday. You might not have seen it.

JOHN MUELLER: I saw that, yeah.

MALE SPEAKER: If there’s incorrect information, what do I do? I have no clue, because if I search on Google for “mistake on Google,” then I don’t find any way.

JOHN MUELLER: There’s no mistake, yeah. No, I passed it on. I saw your email there. And that’s something that usually gets updated automatically. I think maybe in this specific case something got stuck or something, just a bit delayed. But I sent that off to the team to have a look to see what was going on there.

MALE SPEAKER: OK, wonderful. And also if the company doesn’t want that to appear there because it doesn’t make sense– if people search for that company, it’s an online broker, they want to log in. They probably don’t want to have stock information for that. I know that your algorithm might consider that differently, but what do I do if that doesn’t look like the best information that should be there?

JOHN MUELLER: Hard to say. I mean, you can always send that to me as well. In a general case, I think using the feedback link on a search result is useful there because that also gets aggregated and sent back to the team. It’s not the case that you could just say, I want to turn this off, and tomorrow it’ll be gone.

MALE SPEAKER: Yeah, sure.

JOHN MUELLER: But that’s something. I think sending it to me helps make sure that we get it to the right people as soon as possible. If this is a more general case or if you don’t know me or whatever, then giving the normal feedback through the search results is always useful.

Best Host News Commentary – There is not really much to this question, but in case you were wondering what the Google OneBox is, we have included a brief summary below:

Google usually shows, for each result, the web page title, description and URL, as well as any Rich Snippets (e.g. review ratings, price, etc.). However, you may have seen other listings that show things like recent news articles, stock information, image results, or videos. It is these results that are referred to as being in the Google OneBox. An example of the Finance OneBox is shown below:

[Image: example of the Google Finance OneBox]

Of course, this is just one example, and there are reports that Google OneBox results are becoming increasingly detailed.

Question 3: 5:49 – Any news on the new structured data testing tool?

Written Submitted Question: Any news on the new structured data testing tool?

JOHN MUELLER: That was released yesterday, so that was good timing, I guess, with that question. I heard some people already testing it out and finding specific cases that maybe aren’t working right or are a bit confusing. So if you have any feedback about this new structured data testing tool, our new documentation there, please let us know. This is still fresh off the press, so we’re happy to make things clearer if there’s something that we can do there.

MALE SPEAKER: Can I just ask something quick now about it?


MALE SPEAKER: Just ran kind of a couple of sites through it. There’s a box on there that’s customized search result features. Is that something that you’ve got on your website or stuff that you’re kind of missing out on? It’s got things in there with regards to opening hours and stuff that didn’t fully make sense.

JOHN MUELLER: I don’t know. I didn’t actually work together with the team on that feature so I don’t know all the details there. I’d have to try that out.

Question 4: 6:58 – I submitted a site map with errors some time ago, but despite correcting it, Google still keeps visiting the error URLs. What would you recommend?

Written Submitted Question: I submitted a site map with errors some time ago, but despite correcting it, Google still keeps visiting the error URLs. What would you recommend?

JOHN MUELLER: This is something where you essentially don’t really need to do anything. If these URLs return 404, that’s perfectly fine. When we discover URLs and we see them linked within your website, we see them linked from somewhere else, we see them in your site map file, then we’ll try to crawl those URLs. And we might retry them regularly to see if there’s anything that we’re missing that we could be showing for your site as well. So we’ll retry those from time to time, but that’s not something that you need to block or that needs to worry you. This is essentially just our algorithms trying to make sure that we’re not missing anything important from your site.

Question 5: 7:50 – Discussion: Competitors are using unethical practices by buying a domain and by copying my website on it and then by adding 75,000 links in five days.

Written Submitted Question: Competitors are using unethical practices by buying a domain and by copying my website on it and then by adding 75,000 links in five days. As a result, my website is replaced with their domain in Google cache. I complained to Google, and they created a new one. What to do?

JOHN MUELLER: This sounds like a case where giving us more detailed information would probably make it easier for us to figure out what exactly is happening there and what we’d need to improve on our side. So if you could set up a forum thread about this, specifically mention your site, specifically mention the other sites, that would make it a lot easier. Sometimes when I see weird things like this, which sound like something maybe technical that’s going wrong, I also notice that the website itself has weird issues as well, which sometimes results in us seeing, maybe, two copies of the same content. And we see the original website, but we see that the original website has a bunch of issues attached to it and a copy somewhere. So it’s hard to make a choice between we think this is probably the original, but it has all of these problems, and this is a copy that doesn’t seem to have these problems, so which one should we choose? And looking at specific examples makes that a lot easier for us so we can really pass that onto the team and think about what technically might be going wrong here, what we could be improving there.

Question 6: 9:19 – Will graphics-heavy pages (at the expense of text) affect rankings?

MALE SPEAKER: Hello. Thank you. I am wondering, well, we are trying to make information for the user the most easy to understand. So we are taking, showing data for our results of products. We are on metasearch. We try to synthesize with graphics most of the cases with bars, make it clearer for the user with not a lot amount of text. And I’m wondering if this kind of design that we think is very useful for a user because it makes things very easy to understand at first sight, if it can penalize our performance in Google with competitors. We’re working very heavily to do things clearer for the user using graphics and a little amount of text explaining the graphics. But I’m not sure if this way of work can penalize us in Google.

JOHN MUELLER: It shouldn’t be the case that it would penalize your website in any way in the sense that your website would rank worse than without this content. But what can happen is similar with images, for example, that we kind of miss the context of these images or of this content. So if there’s absolutely no text on these pages, then that makes it really hard for us to figure out what is this page relevant for. An example of something similar, maybe taken to an extreme, is if you were a photographer and you put your photographs online directly and you just kept the original file name as it came out of the camera as a title, as the page name, and the page has the text– I don’t know– DSC12345.jpeg, and it has this nice photo on it. It’s something that probably users really like, but we have absolutely no context about this image, what this photo is for, who this page is for. So that’s kind of an extreme case. So adding some kind of context to your pages definitely helps. Adding something like a clear title, maybe a heading, some minimal text, at least, to those pages, that I think is something you should always be doing. It sounds like you’re doing some of that already, so maybe that part is actually fine. And it’s not the case that you need any minimum number of words on a page or a minimum amount of text on a page. If we can recognize a context based on the content you have on your page, then that’s fine. It’s not that you need to write a novel for every page.

[discussion continues, but doesn’t add much more of interest]
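Best Host News Commentary – To make John’s photographer example concrete, here is a minimal sketch of a photo page that gives Google just enough context; the title, heading, alt text and caption are all hypothetical:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- The filename DSC12345.jpg alone tells Google nothing about the image;
       the title, heading, alt attribute and caption supply the context. -->
  <title>Sunset over Lake Zurich | Example Photography</title>
</head>
<body>
  <h1>Sunset over Lake Zurich</h1>
  <img src="DSC12345.jpg" alt="Orange sunset reflected on Lake Zurich">
  <p>Taken from the western shore on an October evening.</p>
</body>
</html>
```

As John says, there is no minimum word count; a clear title, heading and a sentence or two are usually enough for Google to understand what the page is about.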

Question 7: 15:50 – Are there any updates you can tell us about mobile friendly factors as ranking factors?

Written Submitted Question: Recently a Webmaster Tools error message said that non-mobile-friendly pages will be ranked appropriately. I guess the message just talks about the possibility in the future, but are there any updates you can tell us about mobile-friendly factors as ranking factors?

JOHN MUELLER: So we briefly touched upon this in the blog post when we introduced the mobile friendly label that we’re experimenting with ways to rank this content in subtly different ways. So I think maybe at the moment this is something that we’re not doing now, but I can definitely see this as being something where over the course of the year we’ll be continuing to experiment with this. And maybe we’ll find a way to kind of really tweak that appropriately so that if you’re searching on a smartphone and we know that some pages are mobile friendly and other pages are not mobile friendly, then maybe at some point it makes sense to put the more mobile friendly pages a bit higher up in the search results, especially if they are more or less equivalent. So that’s, I think, the general goal where we’re headed.

I don’t know how far we are at the moment or how quickly we’ll be able to publicly say that this is something we’ll always be doing, but we’re definitely experimenting with this. I think that makes sense for the users. I think it’s important that you as a webmaster have a chance in the meantime to figure out how to make your websites mobile friendly. And if you have a mobile friendly site already that’s not being recognized as being mobile friendly, maybe there are things like roboted CSS, those kinds of things, that you can fix to make sure that Google also understands that. So this is something where I think in the meantime it makes sense for you guys to figure out how to make your sites mobile friendly. Lots of people are using smartphones to access the internet. And in the long term, you’ll probably see us experimenting more and more with actually using this as a ranking factor there. Obviously this makes more sense on smartphones. It’s not something where we’d say we’ll use mobile friendly as a ranking signal for desktop web search because that doesn’t necessarily mean that a site that’s mobile friendly is more relevant on desktop. It’s more useful for a smartphone user, but maybe that doesn’t translate to something that’s useful for a desktop user.
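Best Host News Commentary – The “roboted CSS” John mentions means CSS (and JavaScript) files blocked by robots.txt. If Googlebot cannot fetch them, it cannot render the page and may not recognize it as mobile friendly. A sketch of the kind of robots.txt rules to look for and remove (the paths are hypothetical):

```
User-agent: *
# Lines like these prevent Googlebot from rendering your pages;
# removing them lets the mobile-friendly layout be detected.
Disallow: /css/
Disallow: /js/
```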

Question 8: 18:18 – Discussion about New Website & Redirection of Pages

MALE SPEAKER: My question is related to that too. Sometimes what happens is a website is being redesigned, right? So the same content is being published on a different URL because of some IT changes, whatever the reason is. So we do the 301 redirection. And Google still shows both the old URL and the new URL, which creates a concern that there might be a possibility of duplication because they aren’t being crawled properly, though we have done the 301 redirection. So is it a good idea that the old URLs are submitted in a sitemap, meaning they will be crawled faster and Google will index those old URLs, and only the new URLs will be shown in the results?

JOHN MUELLER: Usually we pick up on those kind of redirects fairly quickly. If we’re not picking up on them individually, then you can let us know about that through a site map, for example, depending on how big your site is, how big those changes are there. But in general, you wouldn’t need to do that. So that’s something that will happen on our side anyway. Sometimes what will happen is you will redirect from one URL to another URL. But if you specifically search for the older URL, you’ll still find that in search. So that’s specifically visible when you move from one site to another. If you do a site query for the old domain, then maybe you’ll still find hundreds or thousands of URLs there. But if you search for that content by looking for the keywords that you’re normally ranked for, you’ll see that your new domain actually ranks for that content. So in a case like that, we know about the old URLs. We know about the new URLs. We know that if you’re specifically searching for those old URLs, then maybe that’s something we would show you because we know about that. So it’s not the case that it’s a technical problem or that you’ll be penalized for duplicate content or anything like that. It’s just we understand the old URLs are still around, and if someone explicitly searches for them, maybe we should just show them.

MALE SPEAKER: OK. So what if we use Remove URL from Google Webmaster to remove those indexed old URLs?

JOHN MUELLER: That’s something I would only do in situations where the content really needs to urgently be removed. I would not do that for normal site migrations. I wouldn’t do that for normal changes of content within a website. If you’re maintaining it, you’re adding and removing content, that’s not what I would use that tool for. I’d really only use that for situations where you urgently need to remove the actual content from search. Maybe you published some private information accidentally, maybe you– I don’t know– allowed your development server to be crawled or something like that. But I wouldn’t use it for normal site migrations.
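Best Host News Commentary – For reference, here is a minimal sketch of the 301 redirects discussed above, written as Apache mod_alias rules in .htaccess. All paths are hypothetical, and other servers (nginx, IIS) have equivalent directives:

```
# Permanently redirect a single moved page to its new URL
Redirect 301 /old-page.html /new-page.html

# Permanently redirect a whole section, preserving the rest of the path
RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1
```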

Question 9: 21:25 – Are social icons available for the local Knowledge Graph?

Written Submitted Question: Yesterday Google launched the functionality for webmasters so that they can display social icons in the Knowledge Graph. Is this available for the local Knowledge Graph?

JOHN MUELLER: I’m not really sure what you mean with local Knowledge Graph, so I can’t really answer that. Sorry. If this is about Google+ local search, then that’s something that’s essentially separate from web search. And I imagine some of the information might be available for them to look at, but in general the Knowledge Graph entry that’s shown on the side is completely separate from local search or from normal web search.

MIHAI APERGHIS: I think he’s referring to when you search for a brand name that also has a physical location that is connected to the Google Places page and everything. And you get the map with the photo and details to the address and phone number and everything, if the social icons will also be displayed in that case.

JOHN MUELLER: My understanding is that they would also be displayed there, but I’m not 100% sure. I’d have to find someone to ask about that first. My understanding is that they essentially use the same information. But let me just copy that onto the site so I can ask. And if that comes up again, maybe I will have a better answer.

Question 10: 22:48 – Discussion regarding preferred format for Schema Items

MALE SPEAKER: Can I just ask a quick question on schema? I just wondered if there’s a preferred format. I know before a lot of the stuff was in the microdata format. Looking through a lot of the documentation you’ve got on Google now, a lot of it mentions and references JSON-LD, and I didn’t know if that was moving more towards a preferred format, and whether you can actually have the schema items you would mark up in microdata all transferred into JSON-LD, so whether JSON-LD actually has more items that you can mark up as such.

JOHN MUELLER: At the moment, I think we support a little bit less with JSON-LD than with the normal other types of markup. So I could imagine that we’ll start doing more and more in JSON-LD. I can’t promise that that will become our preferred way of marking things up. So that’s thinkable, but I don’t really have any background information there, and I can’t really make that call for them ahead of time.

MALE SPEAKER: No, that’s fine. Obviously you still support both of them.

JOHN MUELLER: Yeah, yeah. I’d just make sure that if you use both of them, make sure that you’re not using the same types of markup in both of them at the same time because that can get a bit confusing, especially in some situations where we’ve seen people use the same markup in multiple formats on the same page but with different content. So that’s something where I’d just pick one of these types to work on your pages and try to put your content into that. And if JSON-LD is the one that you think works best with your content, with your CMS, with your website, then see what you can put in there and otherwise use the other formats.
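Best Host News Commentary – The situation John warns against looks something like the sketch below (the rating values are hypothetical): the same item marked up twice in different formats with conflicting content. Pick one format per item instead:

```html
<!-- Avoid this: the same review data marked up twice with different values. -->
<div itemscope itemtype="http://schema.org/AggregateRating">
  <meta itemprop="ratingValue" content="4.0">
  <meta itemprop="reviewCount" content="120">
</div>
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "AggregateRating",
  "ratingValue": "4.5",
  "reviewCount": "98"
}
</script>
```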

Question 11: 24:35 – Should I fear a downgrade of my site in Google’s eyes if there are many 404s because of a wrong sitemap?

Written Submitted Question: As a webmaster, should I fear a downgrade of my site in Google’s eyes if there are many 404s because of a wrong sitemap?

JOHN MUELLER: No, that’s absolutely fine. So if we see a lot of 404s on your website, that doesn’t affect the rest of your website. It’s not that we kind of downgrade the quality of a website because it returns 404s. A large number of 404s is absolutely fine. It’s also essentially a sign that you’re technically doing things the right way because if we can get a 404, then that’s the right response for a page that doesn’t exist. And that’s not something that you need to hide or obfuscate somehow. That’s a perfectly fine technical response to a request that’s invalid. So that’s not something you’d need to worry about.

Question 12: 25:27 – Can you disallow all robots, and then select ones to be allowed specifically?

Written Submitted Question: I came across a discussion the other day which was suggesting that in robots.txt all robots are disallowed and then selected ones are included back in. For example, “User-agent: *” with “Disallow: /”, then “User-agent: Googlebot” with “Allow:”, et cetera.

JOHN MUELLER: I imagine there’s a second part to this question. Essentially the thing with the robots.txt that you need to keep in mind is that we look at the most specific section within the robots.txt file. So if you only have a user agent asterisk, then that’s the one we’ll look at. If you only have a user agent Googlebot, then that’s the one we’ll look at. If you have both an asterisk, like a generic section, and one for Googlebot, then we’ll only look at the section for Googlebot. So if you have some disallows in your generic section and a specific section for Googlebot, then those disallows from the generic section don’t apply to the section for Googlebot. So if there are parts of your site you want blocked for all crawlers, you need to also include that in the specific sections for the individual crawlers. So in the case like you mentioned here where everything is disallowed by default and individual user agents have an allow section, then essentially those user agents would only look at the allow directive and not even notice that the rest of the site is disallowed for other crawlers. So that’s something kind of to keep in mind. If there’s a part of your site you want blocked for every one and a part that’s only accessible for Googlebot, then you need to block that part that’s blocked for everyone also in the individual Googlebot section.

MALE SPEAKER: Sorry, wouldn’t it be pointless because any other rogue bot could just name himself Googlebot and then would have access here because it’s not safe, this method, is it? Anybody could crawl the site. They just need the name that is said there, which is Googlebot.

JOHN MUELLER: Of course. I mean, the whole robots.txt directives rely on the crawler being well-behaved and kind of following these pseudo-standards. So if there’s a crawler out there that essentially calls itself whatever or just doesn’t read the robots.txt file at all, then that’s something that you can’t really control with this file. That’s something you might be able to control with a firewall or with other ACLs on your website to kind of block that behavior directly on a lower level. But that’s not something you can always guarantee with a robots.txt file. But there are a lot of well-behaved crawlers out there, and they follow the directives in the robots.txt file. And for those, this kind of setup works fairly well. But like you said, there are definitely crawlers out there that don’t care at all about what you allow, what you don’t allow. They just crawl whatever they want and do whatever they want. And those are not things you can really block in the robots.txt file.

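Best Host News Commentary – The “most specific section wins” behaviour John describes is easy to trip over. Here is a minimal robots.txt sketch of the setup from the question (the paths are hypothetical). Because Googlebot has its own section, it never reads the generic rules, so anything you want Googlebot to obey must be repeated in its own section:

```
# Generic section: applies only to crawlers WITHOUT their own section
User-agent: *
Disallow: /

# Googlebot reads ONLY this section and ignores the one above,
# so repeat here any disallows that should also apply to Googlebot.
User-agent: Googlebot
Allow: /public/
Disallow: /private/
```

And as the discussion notes, this only restrains well-behaved crawlers; a rogue bot can simply call itself Googlebot, so anything genuinely sensitive needs server-level access controls.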

Question 13: 28:43 – Is it OK to have a US web server for a local UK business?

Written Submitted Question: We have a UK domain extension for local businesses based in the UK, and the website is hosted on an IP in the USA. Any problems?

JOHN MUELLER: No, that’s perfectly fine.  For geotargeting, we look at primarily the country code top-level domain. If you have a CCTLD for the UK, then that’s something we can take into account. If you have a generic top-level domain, you can use a setting in Webmaster Tools. Where you essentially host your website is up to you. We also see a lot of websites using content delivery networks that have their hosting essentially in various places worldwide. And that’s fine, too. If you have a specific country you want to target, let us know about that geotargeting through the country code top-level domain or Webmaster Tools. But the location of the hosting isn’t really something I’d worry about here.


Question 14: 29:38 – Any update on the fate of Search Engine Roundtable?

Written Submitted Question: Any update on the fate of Search Engine Roundtable?

JOHN MUELLER: Unfortunately, I don’t have any bigger update there. In general, we also try not to go into too much detail with regards to which page and which type of content our algorithms are picking up on. So it’s possible that we won’t really have anything more specific to point at and say, this specific line of HTML needs to be changed.  For technical things, that’s a lot easier because often there is a clear technical aspect of a website that we can point at. But when we’re looking at quality things, that’s very broad and looks at the website overall.


Question 15: 30:20 – If I move our site from one server to another, how is this going to affect my ranking?

Written Submitted Question: If I move our site from one server to another, how is this going to affect my ranking? Crawling is going to hit a lot of 404s and inactive pages, but the new site will be up and active. Is it necessary to move the site with forwarding?

JOHN MUELLER: Yes, I would definitely make sure that you’re using redirects from the old version to the new version. We have a relatively new part in our help center about moving sites, about changing hosting, for example. And I’d definitely go through that so that you understand what we recommend and see how far you can actually go there.  If you’re just moving the site from one server to another and you’re keeping the same URLs, then of course we won’t see any 404s because the URL will just be active on the other server. The only thing you need to watch out for is that there is this kind of overlapping time between the time when we recognize that this URL is hosted somewhere else and the time that we still have your old IP address essentially cached. And during that time, it probably makes sense to have the content available on both of these servers so that users and our crawlers don’t get stuck there. But if you’re moving from one domain to another, definitely make sure you set up the redirects so that we can kind of pick up on that connection and that we don’t have to guess at what’s actually happening here.

Question 16: 31:41 – Long discussion about pages with High Numbers of Outgoing links

MIHAI APERGHIS: Here’s a URL for one of my clients that has a pretty big website, and I wanted to ask a technical question regarding that page. Basically, on the right he lists a lot of links to reviews from [INAUDIBLE] category, but he also lists like 10 previews for each article, like, in the main content section. And there are two sets of pagination, so the pagination of the articles and the pagination of the right sidebar with the links to the actual reviews and such. Would that be a problem for Google, that there are two sets of pagination? And also, do you think there are a bit too many links on the page? Maybe we should structure that better?

JOHN MUELLER: At first glance, it does look a little bit confusing. So I see on top you have a lot of links to individual items, which basically at first glance makes it look like this is some kind of like an HTML sitemap page. So that’s something that I think might be a bit confusing to users. From our point of view, from crawling and indexing, I think we would probably just crawl all of those links and those different pagination parts to kind of find the individual article links as well. So I think from a crawling point of view we’ll probably pick that up. I don’t know from a quality point of view if this is something where our algorithms might start worrying about there being kind of too many different kinds of links in there.

MIHAI APERGHIS: I’m not actually worried from any quality point of view. I’m more worried if structurally this makes sense because this is meant to be a top category page. Let’s just call it that. The top links in the content section are supposed to be subcategories. But we also feature, like, previews in the main content section and then links directly to the reviews in the sidebar.  So do you think maybe we should get rid of the links in the sidebar? Are there too many? Or is it fine to keep them like that?

JOHN MUELLER: It’s hard to say. So at one point, we had the guideline of maximum 100 links per page. Just browsing this page, I imagine you have something like 500 links or something like that on this page. On the one hand, this is really useful for us to kind of discover those individual pages. But what’s going to happen from a technical point of view is we’re probably not going to be able to pass page rank to all of these pages because there are just so many different links there that it’s really hard to figure out where we should be passing page rank, which parts kind of fit together here. So from that point of view, I’d probably try to reduce the number of links on a page like that because it’s just so much content that’s being linked to in one place.

MIHAI APERGHIS: So for the subcategories to get more importance, maybe we should remove the sidebar links so we make sure most of the page rank goes through those subcategories first, then to the articles?

JOHN MUELLER: That might be something to look at. I guess what you can do from your side is to look at how we’re actually crawling these pages, if we’re really, like, going through the individual pages and discovering the content that’s linked there or if we’re going to these pages and essentially not crawling further. So from a technical point of view, that might be something to look at. I don’t think this is something where there are critical issues with the amount of links there. I think maybe you can optimize crawling and maybe you can improve the way we recognize the context of these articles by grouping that a little bit differently or maybe, I don’t know, putting fewer links on a page like this. But I don’t think this is a critical issue at the moment.

JOSHUA BERG: John, so some of those pages may not receive any page rank rather than just being diluted to the point of not being worthwhile?

JOHN MUELLER: At least from a technical point of view, we kind of have to split up the page rank that these pages get among the links that are there. And if you have a really, really high number of links, then that might result in us not actually forwarding that much useful page rank to those individual links. And if these pages aren’t linked anywhere else on the website, then what might happen is we end up not crawling them as frequently. We maybe don’t crawl them at all if we kind of run out of time to actually look at pages that were discovered on the site. So that’s essentially what can happen there. If there are other ways to get to this content, then I’m sure we’ll try to find ways to do that as well. So it’s kind of like when you have an e-commerce website. You tend to set up categories that have a reasonable number of individual product links on a page or that have subcategories linked on that page. If you have hundreds and hundreds of product links on a category page on an e-commerce site, then from a usability point of view, it’s really hard to use. And from understanding the site itself, it’s really hard for us to figure out which of these individual product pages actually belong to which category and how relevant are they for these pages. Is it maybe one important page for this category, or is this one out of 2,000 different things that are on this category page?

JOSHUA BERG: OK, yeah. So a client of mine with an e-commerce site felt that they needed to have more products, or keep adding more products, even if they may not really be worthwhile for the sake of increasing the page rank of the site. And I suggested much better to go with more quality of the content and pages rather than just trying to increase and think of new products that may not be useful or clicked on by customers anyway and may be more effectively diluting the content.

JOHN MUELLER: Yeah. I mean, if you can do that I think that’s a good strategy because it really strengthens those pages and makes those pages more important within the website. Sometimes you just have a ton of content. If you’re someone like Amazon, then it’s hard to go to them and say, you should be selling fewer different types of books because I just want to have one book of every type. And they want to be able to sell that, and I think it makes sense there. But they still need to have a way to kind of categorize that in a way that there is a reasonable selection on a page when you visit it as a user but also as a crawler so that you don’t run into a situation like, oh, these are all books that start with the letter A, and there are 5,000 different ones on here. It makes it really hard for us to figure out, OK, so which of these are really relevant for this category page, or is this essentially just kind of an HTML sitemap page that doesn’t provide any additional context, it’s just a way of discovering links?

JOSHUA BERG: OK. Now, it wouldn’t be the case that by adding more pages we’re bringing down the site overall as a percentage in quality specifically, or is that technically a possibility?

JOHN MUELLER: I mean, just having more pages doesn’t mean that those pages are lower quality. If you’re taking existing content and you’re kind of diluting it by splitting it onto separate pages without adding additional quality to those pages, then that’s of course a kind of diluting things. But if you’re adding content that’s really relevant to your site, then that’s not something you should worry about. You shouldn’t say, well, I have this really great article that I’d love to put online, but I’m kind of worried that it’ll create too many pages for my website. If it’s something great that you think works well for your users, works well for your website, then by all means put that online. That’s not something where I try to hold anyone back on that. But if you’re just taking the existing content you have and say, I have 500 pages. I’ll turn that into 2,000 pages because more pages means more visibility in search, then that’s probably not going to happen. That’s more something where we’ll take those 500 pages and it’s kind of diluted among 2,000 pages, so it doesn’t make those individual pages much more relevant in search.

JOSHUA BERG: OK. And also, how that is diluted, for example the page rank, how that is diluted will have a lot to do with the navigational structure of course then too, wouldn’t it?

JOHN MUELLER: Well, yeah. I mean this, I think, kind of goes into the same thing as before in that if we kind of recognize your site and it looks like you just have lots and lots of pages but you’re not adding additional value for those pages, then that’s something where those individual pages won’t be as relevant in search anymore. And from a page rank point of view, that’s probably not something where you’d see that specifically. But you’d see that maybe in search and search query information in Webmaster Tools that these individual pages aren’t ranking as well as maybe one concentrated really high quality page might be ranking instead.

MIHAI APERGHIS: John, quick follow-up. Instead of having that top section with the subcategory that you said kind of looks like a part of an HTML sitemap– and this is actually a more general question– would it be useful to use a select box that you just select one of those subcategories and it goes right to that page? Would it not be considered a link if there’s a select box? Or how does Google [? team ?] select that?

JOHN MUELLER: Depending on how you implement it, of course. If it’s something– I mean, I’ve seen people do this with CSS, where they have essentially normal links on the page. And with CSS, it looks like a drop-down, or like this menu structure where you hover over it or you click on it and it folds out the submenus. That’s essentially equivalent for us to having those on the page. With regards to usability, that’s something you probably want to test on your side. That’s not something that would affect us. One way that probably wouldn’t work so well is if this is a drop-down and we can’t really recognize which URLs are being pointed at with this drop-down. So if this is like a drop-down that’s created with JavaScript that just has IDs somehow in the JavaScript, a user selects one of these, and then there’s some fancy JavaScript on your backend that figures out which URL is involved with that, then that’s probably not something where we’d be able to treat it like a normal link. Because we essentially have to have Googlebot click on all of the possible options to see what URL shows up. And I don’t think that’s something that we’d do.

MIHAI APERGHIS: No, I see. So if it’s just normal [INAUDIBLE] text and the CSS just transfers them into a drop-down, those are basically equivalent to actually just having the link.

JOHN MUELLER: Yeah, yeah. And to some extent, we can figure out JavaScript if you’re doing this drop-down with JavaScript. If it’s clear which URLs are involved, if the JavaScript code lists those individual URLs and we can see that directly, then that’s a lot easier. But if the JavaScript code actually has, like, IDs and on your server, you set up an ID and it returns a URL, then that’s not something that we’d be able to do.
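Best Host News Commentary – The dilution John describes earlier in this question follows from the classic PageRank model, in which a page divides the rank it can pass evenly among its outgoing links. The toy Python sketch below is a deliberate simplification (Google’s live systems are certainly more involved), but it shows why 500 links on one page pass far less per link than 100:

```python
def rank_passed_per_link(page_rank: float, outgoing_links: int,
                         damping: float = 0.85) -> float:
    """Rough share of rank each link receives under the original PageRank model."""
    return damping * page_rank / outgoing_links

print(rank_passed_per_link(1.0, 100))  # 0.0085 per link
print(rank_passed_per_link(1.0, 500))  # 0.0017 per link, five times less
```

That is why John suggests grouping links into subcategories: fewer, better-organized links per page concentrate what each linked page receives.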

Question 17: 44:49 – Why are we not being shown in the Google News OneBox?

Written Submitted Question: I’m experiencing a problem where my site has recently stopped being selected in the News OneBox. We still rank on the News tab but can’t get a single article selected for the OneBox. We seem OK technically. What could be happening?

JOHN MUELLER: I am not sure what criteria the team looks at with regards to the News OneBox there. That seems, I don’t know, like something maybe you can ask your Google News partner to see if there’s something that they see there. Usually if a site works well for Google News, then that’s something we could also show in the News section there. But that’s essentially algorithmically generated, and there might be some factors there past just the site being kind of newsy and having news-type content.

Question 18: 45:48 – Why does Timestamp in search results differ from time actually published?

Written Submitted Question: I work for a news publisher and have seen many articles being time stamped for organic search with a time before they were actually published. The News tab timestamp shows the real time published. I’ve seen this on other sites too. Is this a Google bug?

JOHN MUELLER: So what you might be seeing there is in the organic search results, we sometimes show the date in the snippet. And that date is algorithmically generated partially based on the content you have there and partially based on other factors. And that’s not something that would be affecting the ranking of the page. That’s essentially, from our point of view, a part of the snippet. It’s something we pulled out from your page or kind of inferred from your page. And we’re showing that as a part of the snippet. So it’s not something that we would say is a ranking factor or something that you need to optimize for or something that you need to tweak like that. That’s essentially just a part of the text that we show in the description there.

Question 19: 46:50 – When moving a site how long to leave up the redirects and the site change in Webmaster Tools?

Written Submitted Question: Moved a site three months ago to a new domain with 301 redirects, and Webmaster Tools changed. How long to leave up the redirects and the site change in Webmaster Tools?

JOHN MUELLER: Essentially a 301 redirect is a permanent redirect, so theoretically you’d keep that there indefinitely. Practically, we try to crawl all URLs on a site within a reasonable time. Sometimes that takes a couple of months, sometimes maybe half a year or a bit longer. So if you’re looking for the minimum time that you need to keep these redirects in place, I’d look at something maybe twice as long as the technical minimum time, and at least keep them there for a year. So in a case like that, we can definitely see those redirects. We can definitely follow them. We can cache those redirects. And if we recheck them again after a while, they’ll still be there. So I’d at least keep them there for a year. If you can keep them in place longer, that’s probably a good idea as well.

Question 20: 47:54 – Question regarding moving multiple pages from different sites to a new site

MALE SPEAKER: Hi. Hi, John. I have a question about this issue, too. We’ve moved the homepage landing page we had on its own domain into an existing website with seven other web pages, as a page, a [? more ?] page. But we can’t indicate that in Webmaster Tools because– this is correct, isn’t it?

JOHN MUELLER: Yes, that’s correct.

MALE SPEAKER: Because it’s taking a little bit longer, as you said just now, to take this new location for our landing page.

JOHN MUELLER: Yeah. So individual pages probably get picked up fairly quickly. It might be that there are lower level pages that take longer. And Webmaster Tools, the setting there helps us to confirm what you’re trying to do, but there are sometimes technical reasons why that’s not possible. For example, if you move from one domain to a section of an existing domain or if you move from a subdomain to a higher level domain, then those are the type of things that you currently can’t specify in Webmaster Tools. So that’s kind of an optional, let’s say, setting in Webmaster Tools. If you’re really moving from one domain to another, one to one, then I’d definitely use that setting in Webmaster Tools, but it’s not a requirement.

Question 21: 49:23 – Google is retiring Freebase and shifting to Wikidata. Problem is, Wikidata isn’t as comprehensive in some areas as Freebase. What are the plans there?

Written Submitted Question: Google is retiring Freebase and shifting to Wikidata. Problem is, Wikidata isn’t as comprehensive in some areas as Freebase. What are the plans there?

JOHN MUELLER: I don’t know of the specific plans. From what I have heard, a lot of this is still being worked on. So it’s not the case that everything is already decided and has happened. I imagine there is still work being done on finding the optimal solution there.

Question 22: 49:55 – Is it good advice to flatten an architecture so that equity gets to levels one, two, three? For example, 301 redirecting up a level on big sites?

Written Submitted Question: Is it good advice to flatten an architecture so that equity gets to levels one, two, three? For example, 301 redirecting up a level on big sites?

JOHN MUELLER: I don’t think that’s something that could be given as general advice for all websites. I think this is something you have to look at from a case to case basis and look at how your site is structured in general. And for some sites, if it’s a smaller site maybe it makes sense to flatten it a little bit more. Maybe that’s something that works for users better, in any case. Maybe it makes sense to kind of spread it out more into clear categories. That’s really something that’s more a decision on your side, more a decision with regards to usability, than something where I’d say from an SEO point of view you should always go this way or always go that way.

Question 23: 50:55 – Is gaining links from high quality editorial websites alone enough to get a Penguin penalty lifted?

Written Submitted Question: Is gaining links from high quality editorial websites alone enough to get a Penguin penalty lifted?

JOHN MUELLER: So I guess this refers to some of the previous discussions we’ve had around this. Penguin is a web spam algorithm from our point of view, and it looks into things that are kind of web spammy, like unnatural links, and tries to take that into account. And there are different ways of cleaning that up. And depending on your website, when we look at your site overall, it might be that we see a better picture over time. But this isn’t something where I’d say there’s a simple solution to any Penguin problem that you might have or any kind of link problem that you might have. Essentially the best recommendation is really if you see a problem, try to fix that problem. So if you see a problem with regards to unnatural links that maybe a previous SEO put together then try to find a way to clean that problem up. Use a disavow tool. Maybe have those links removed completely if that’s possible, but really try to clean that up. Just hoping this problem will go away or hoping it will go away by doing even more unnatural links, which is something that I sometimes see in these discussions, I don’t think that’s really a feasible business approach. Because if you know that this is a problem and this is something that you kind of understand or accept that it’s problematic, then just hoping it goes away is not really best advice for a normal business that relies on a website. So it’s something where I’d really recommend clean up the problem, make sure it’s resolved, make sure that you can move forward without having to hope that somehow the overall picture changes over time. And the other aspect here of gaining links from high quality editorial websites sounds a lot like you’re building links again. You’re building even more unorganic links to your site. So if you know that you have a problem with your links, you’re not going to solve it by building even more unnatural links. So that’s something where, especially if you’re a business and you see this kind of situation coming up, make sure you clean up the problem and don’t try to sweep it under the rug and hope it goes away over a couple of years.

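Best Host News Commentary – If you do use the disavow tool John mentions, the upload is a plain text file in the format sketched below (the domains are hypothetical):

```
# Comment lines start with a hash and are ignored.
# Disavow a single URL:
http://spammy-directory.example.com/links/page1.html
# Disavow every link from an entire domain:
domain:paid-links.example.com
```

As John stresses, though, the disavow file is part of cleaning up a known problem, not a substitute for it, and building fresh links on top of an unresolved link problem only adds to it.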

Question 24: 54:01 – Question regarding hosting images over multiple image servers

Written Submitted Question: You should always avoid duplicate content. But on a larger site with a lot of images, you would split the load over more image servers, for example. Would the same image served from two domains count as duplicate content?

JOHN MUELLER: So I think, first of all, the general answer is there’s no penalty for duplicate content. This is more of a technical issue from our point of view, that if you have a lot of duplicates and we have to crawl and index those duplicates to find out that they’re duplicate, we waste a lot of time kind of crawling and indexing things on your site that are unnecessary. And maybe we’ll miss things that are more important on your site because we’ve spent so much time crawling all of these duplicates. So from that point of view, it’s more of a technical thing that you should try to avoid if at all possible than anything that would get your site penalized or would cause quality or ranking problems for your site. With regards to images, splitting them across different servers I think definitely makes sense. But what I would do there is really make sure that the individual images stay on one server. So taking, maybe, a hash or something from the URL or the image name and using that as a way to determine which server this image is served from and consistently serving the image just from that server helps kind of keep things in line. One thing that’s problematic, for example, is if you have images on your site and the URL is constantly changed so it’s shifting between individual servers every time we crawl, then that makes it really hard for us to actually index those images for image search because every time we crawl your pages, we find a new host name for that image or a new URL for that image. Then we have to kind of think about, oh, this is a new image. We should pick this up for image search. It’ll take a while for us to pick it up for image search. We’ll drop the old image in the meantime, and you’re kind of stuck in this limbo state of us saying, OK, we’ll pick up the new image that we just found and we’ve dropped the old image already. So your images aren’t really being indexed that optimally. So the more you can keep this embedded content on consistent URLs and on the same URL over time, the easier we can keep that in image search as well.
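Best Host News Commentary – One simple way to implement the “hash the name to pick a server” approach John describes is sketched below in Python; the hostnames are hypothetical. The key property is that the same filename always maps to the same host, so the image URL never changes between crawls:

```python
import hashlib

# Hypothetical image CDN hostnames
IMAGE_HOSTS = ["img1.example.com", "img2.example.com", "img3.example.com"]

def image_url(filename: str) -> str:
    """Deterministically assign an image to one host by hashing its name."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    host = IMAGE_HOSTS[int(digest, 16) % len(IMAGE_HOSTS)]
    return f"http://{host}/{filename}"

# Always returns the same URL for the same file, crawl after crawl.
print(image_url("red-widget.jpg"))
```

Note that adding or removing a host reshuffles most assignments under this simple modulo scheme, so changes to the host list should be rare (consistent hashing avoids that, at the cost of some complexity).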

Question 25: 57:15 – How to separate different colors for a product and avoid duplicate content using parameters?

Written Submitted Question: How to separate different colors for a product and avoid duplicate content using parameters?

JOHN MUELLER: So if you have products in different colors, this is a general, I guess, e-commerce situation where you have one product and you have different variations of that product. I think this is something where you have to make that call on your site where you say the individual variations are relevant enough that they’re worth indexing separately. Or maybe it makes more sense to concentrate all of those variations into a single product page where you just say, this product is available in these different variations. So that’s something where you kind of have to make that decision on your side to say, these variations individually are critical enough that I want them indexed separately, or I prefer a stronger product page that just refers to these individual variations. So with colors, sometimes I could see that making sense, that people are searching for a specific color of a product and you want a specific landing page for that. Sometimes I can see it making sense that you just have a strong product page that mentions that it’s also available in these individual colors. So that’s not something where I’d focus primarily on the duplicate content point of view there but really think about how critical this variation, this specific page, is for your website or if it makes sense to fold that into a stronger product page instead.
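Best Host News Commentary – If you decide the colour variations are not worth indexing separately, the usual approach is a rel=canonical link from each variation to the main product page, as in this sketch (the URLs are hypothetical):

```html
<!-- Placed in the <head> of /refrigerator?color=red and
     /refrigerator?color=blue alike, pointing at the main page: -->
<link rel="canonical" href="http://www.example.com/refrigerator">
```

If instead you decide a specific colour is searched for often enough to deserve its own landing page, leave that variation canonical to itself so it can rank separately.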

Question 26: 58:46 – Is there a list that shows approved SSL certificate providers?

Written Submitted Question: Is there a list hiding somewhere that shows which secure certificate providers Google recognizes and validates as trusted? Some people are concerned that these newer, ‘buy cheap SSL now’ type sellers might be selling bad certificates in Google’s eyes.

JOHN MUELLER: From our point of view, we essentially look at the type of certificate instead. So if it matches the minimum requirements of a good certificate, in that it has the right minimum key length in bits, for example, then that’s something that we’d look at there. We don’t look at individual providers and say, well, this is a bad certificate provider, and this is a good one. If they have the trusted chain going up to the certificate authorities appropriately, then that’s essentially OK from our point of view. That’s not something where we’d say, well, your browser will trust this certificate, but we don’t trust it. We essentially try to mimic what the browsers will trust. There are also more and more certificate providers that are offering certificates for free, which can be just as useful because it gives you essentially all the advantages of HTTPS using TLS certificates without having to pay for them individually. So I think there’s an initiative, I think, by Mozilla and the EFF where they’re going to be giving away certificates in the summer, something like that. So if the price of a certificate has been holding you back, maybe that’s something worth waiting for and getting set up then.
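Best Host News Commentary – You can inspect the key length John mentions yourself with OpenSSL from any machine (replace example.com with your own host); 2048-bit RSA is the commonly recommended minimum at the time of writing:

```
# Fetch the certificate the server presents and print its public key size
openssl s_client -connect example.com:443 -servername example.com </dev/null \
  | openssl x509 -noout -text | grep "Public-Key"
```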

Question 27: 60:27 – Questions regarding GeoIP redirects and indexing

Written Submitted Question: I’m working for a website which has recently launched an EMEA site with a GeoIP redirect sending US traffic to the .com site. How can Google index the pages on the EMEA domain with the GeoIP redirect active? Webmaster Tools is reporting a redirect to .com.

JOHN MUELLER: So essentially for the most part, we crawl from the US. So if users in the US are being redirected to the .com site and are never able to see the site, then we wouldn’t see that either. So we wouldn’t be able to crawl and index that site separately. What we’d recommend doing there instead is either setting up URLs that you know redirect like this, so you have a generic homepage, for example, that anyone can go to and they’ll be redirected to their appropriate country version, and letting the individual country versions be crawled or used by any user.  So if someone in the US explicitly goes to the site, then fine. Let them go there. If someone in Europe goes to the US site, fine. Let them go there. If someone goes to the generic homepage, redirect them to either one or the other, for example. You can specify all of that with Hreflang markup as well to let us know about that. One thing that we recommend doing or that we suggest you might do is if you have pages that you know aren’t relevant for the users in specific countries, just show them a banner on top. So if a user in the US goes to the UK site, show them a banner on top that says, hey, we have a US-based site that has free shipping to the US or that has prices in US dollars or whatever, and recommend that they go there. But don’t force redirect them there. That way, Googlebot can still crawl both of these variations separately. And if users go to the wrong site on purpose, if they follow a link from a friend, they’ll at least know that there’s another site out there. But if they specifically want to access that other version, then they can still go there. So for example, if a user from the UK travels to the US and needs to buy something from your site and they always buy from your UK site, then maybe just let them do that. But obviously there might be legal reasons why you wouldn’t want to allow users from individual countries to access your site, and that’s also fine. But just keep in mind that you need to treat Googlebot the same way. So if there’s a legal reason why you’d want to block all US users from your website, then you’re probably going to be blocking Googlebot as well. So you might need to find some kind of a workaround around that. Maybe you have a generic page that’s accessible to users globally, and your European content that’s only available in Europe is on a separate URL. So that way we could at least crawl and index that generic content even if we can’t access your specific content.
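Best Host News Commentary – The hreflang markup John mentions looks like the sketch below, placed in the <head> of every country/language version so Google can relate them to each other without being force-redirected; the URLs are hypothetical, and x-default marks the generic page for users who match none of the listed locales:

```html
<link rel="alternate" hreflang="en-us" href="http://www.example.com/us/" />
<link rel="alternate" hreflang="en-gb" href="http://www.example.com/uk/" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
```

Combined with the suggestion banner (rather than a forced redirect), this lets Googlebot, which mostly crawls from the US, reach and index each version separately.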

Question 28: 63:42 – Discussion about optimizing title tags and how you don’t always need Keyword in title

MIHAI APERGHIS: I have one, if that’s OK. Ross Hudgens from Siege Media posted this article recently. He talks about how to, basically, optimize title tags to increase their CTR and not focus so much on keywords but focus on maybe what the user is most interested in. So instead of doing something like ‘online refrigerators’ or something of that kind, he uses maybe ‘refrigerators starting from $99’ because price is very important to the user. And since you also launched this structured data update and structured data, you said, doesn’t influence the ranking directly but does influence CTR, which might later influence the ranking, what would be your recommendation? So make the snippet as useful for the user as possible, but also make sure that it’s relevant for search engines, since essentially the title tag is one of the most important factors. Make sure you have some of the words important to that page there, but also do something that increases the CTR of the title.

JOHN MUELLER: Yeah, sure. I mean, we don’t use a description meta tag at all for ranking, so you’re free to use whatever you think is relevant there. With the title tag it’s a bit tricky because sometimes we’ll recognize things like keyword stuffing or excessively long titles and we’ll algorithmically shorten them. But in general, if you can come up with a short and useful title that works well for your users, then go for it. I think that’s always a good idea. That’s something you have a little bit of flexibility to work on, the description as well. If there’s something that you think tells users what your page is about and kind of encourages them to come into your site if they’re looking for that specific kind of content, then by all means, sure.  Go for that.

MIHAI APERGHIS: OK. So basically, it’s not that important if you lose some of the keywords relevant to that page. So instead of online refrigerators, you’d lose the online, but you’d have it somewhere on the page so Google knows about it. And instead, you’d do the title that is more relevant to the users.

JOHN MUELLER: Sure, yes.
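Best Host News Commentary – Put side by side, the two approaches discussed above look like this (the store name and price are hypothetical):

```html
<!-- Keyword-first title: -->
<title>Online Refrigerators | Example Store</title>

<!-- Benefit-first title aimed at CTR, still containing the key term: -->
<title>Refrigerators from $99 with Free Delivery | Example Store</title>
```

As John confirms, dropping a modifier like “online” from the title costs nothing as long as it still appears somewhere on the page.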

Question 29: 66:08 – Question about Your Money or Your Life sites (i.e. health money related sites)

JOSHUA BERG: Oh, yeah. Hey. So I wanted to ask about the YMYL, the Your Money or Your Life, criteria that has sometimes been talked about Google using in the past that applies to websites that may have to do with big life decisions or financial decisions, medical, health, and including e-commerce. Does this mean that sites that fall under this category according to what we have heard about the YMYL means that they’re held to a little stricter standard or a higher standard or in some way are algorithmically separated, either topically–

JOHN MUELLER: I don’t know. I think that was in one of the search raters guides, perhaps. Is that possible?

JOSHUA BERG: Yes. It was in the quality rater guides.

JOHN MUELLER: Quality rater guides.

JOSHUA BERG: The quality raters are, OK, so not directly affecting the search. But then again, it wouldn’t be useful to have it if you couldn’t convert that information algorithmically in some way. I mean, it seemed like Google was looking at these kinds of sites as a focus. So Google will be more interested in the quality of these kind of sites, will be more important than a lot of–

JOHN MUELLER: I don’t. I don’t have any useful answer there. I imagine this is primarily for these raters to give them some background information on things that they could be watching out for. And these raters, essentially, help us to kind of roughly check our algorithms to see that we’re doing the right thing there. So it’s not the case that they would be, like, guiding the way our algorithms work. And if they’re looking for this specific issue, then that means all of our algorithms are focusing on this as well. This is more the case that we’re trying to see which way of presenting the search results makes more sense. And for that sometimes they need some background information on what to look at that they don’t just, like, I don’t know, rate the search results by titles, but actually look at the pages and think about what pages might be more relevant there. But that’s something I don’t really have much experience in, and I can’t really give you a more useful answer past that.

Question 30: 69:18 – Question about the impact of SEO titles being trimmed due to being too long

MIHAI APERGHIS: A small, real small, small question regarding the titles. You said that you algorithmically shorten titles that are too long. What actual impact does it have on Google reading the titles? Or if it’s trimmed, does that mean the rest of the title is completely ignored, the actual words of the title?

JOHN MUELLER: No. No, it’s essentially just a display. So it’s like the snippet. It’s just what we show in search results. You also see this differently, for example, on mobile, where there’s just not enough room to show the long title. We might show it on desktop. So you might see a longer title on desktop or a shorter one on mobile, but that doesn’t mean it’ll rank differently or be seen differently from relevance point of view on a mobile.

MIHAI APERGHIS: So there’s no quality impact on the length of the title.

JOHN MUELLER: No, no. Yeah.


Got something to add?  Please let us have your opinions in the comments.

You can refer to the relevant question number as appropriate for quick reference. If you have helpful links, feel free to include them, but comments with links go to moderation first, so don’t worry if your comment doesn’t appear straight away.

We will be happy to see your thoughts.
