The hidden impact of the Helpful Content Update

Posted by Owen Powis on 24 Aug, 2022
The latest update from Google needed to happen; it's a direct response to purposeful manipulation of Google in a way that makes the web worse for everyone. But its effects may ripple further than a first glance suggests.


This day has been coming for some time. I’ve personally been waiting for an update like this to fix one of the largest day-to-day issues we have here at Wordtracker.

Taking a look at our form for new content submissions should give you a pretty good hint.

If you've ever looked at a sign and wondered why on earth they would need to warn you about that, it's because someone at some point has made it necessary. If you're wondering about Rule Number 7 in angry bold, well, those people know who they are!

But you can see a theme when looking at the rules we have curated over time: they are focused on making sure that content is unique and interesting, that it's good quality and actually trying to say something. There's a reason for this: of course, the majority of content that gets submitted isn't what it purports to be.

The link building problem

The Helpful Content Update begins rolling out on the 25th of August and is focused on demoting content that Google doesn’t believe is written ‘by people, for people’.

What does Google mean by this? Well, ‘by people, for people’ is really interesting, as it alludes to an ever-growing problem: content written by AI with no purpose other than serving the algorithm. Written by one computer program to exploit another.

But before we get a little too dystopian, it’s worth noting the impetus behind this is decidedly human, driven in large part by the many link building companies out there trying to help clients rank better within Google’s guidelines.

If you have a blog, you’ll likely be aware of this problem already. It’s been growing for some time, and as AI content writing programs have improved, the problem has got worse. It’s now easier than ever to spit out an extremely mediocre article on a wide range of topics. The problem is, it’s not saying anything new. It’s a rehash of existing content: it might be unique in wording, but it’s nothing new in ideas.

So with this in mind, Rule 1 seems obvious:

1. Content must be unique, high quality and specific to the Wordtracker audience.

In essence we are asking for content that abides by Google’s rules around content:

  • High quality
  • Unique
  • Relevant

Rule 2 seems a little less obvious, but it’s geared towards the exact same problem that this update is targeting.

2. No top level summary articles, for example '5 ways to improve your website' with a paragraph covering each point. We want in-depth content that covers a single topic.

This is the sort of content that gets submitted to us en masse every day. Even after going through our guest post form, where they specifically agree not to, people still submit more of this type of content than anything else.

Spun content

The article itself seems solid, but the wording is just… off. This is where a good quality, unique article is taken and ‘spun’ into many copies. This has been going on for years (spun content was a thing back when I first started in the industry 15 years ago), but with the advance of consumer-level AI it’s got a lot better and easier. Taking an article and chucking it through Google Translate in both directions a few times is a rudimentary way to achieve this.

Now here's the previous paragraph after I've done exactly that a couple of times:

"The article itself seems solid, but the wording is just... weird. Here they take something unique, good and "shoot" in many copies. It's been around for years (tools were a thing when I started in the industry 15 years ago), but it's gotten better and easier with the advancement of consumer-level AI. An easy way to do this is to take a story and send it back and forth through Google Translate."

You can see how the purpose of the content stays the same. What’s being said doesn’t change that much; it’s just how it’s said that does. Perfect if you want to, for example, avoid content matching on plagiarism checkers.
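If you’re curious how little effort that trick takes, here’s a minimal sketch of the round-trip approach. It assumes the third-party deep-translator package (any translation API would do) and German as the pivot language; both choices are arbitrary:

```python
# pip install deep-translator
from deep_translator import GoogleTranslator

def spin(text: str, pivot: str = "de", passes: int = 2) -> str:
    """Round-trip text through a pivot language to produce a 'spun' copy.

    Each pass goes English -> pivot -> English, which keeps the gist
    but shuffles the wording, as in the spun paragraph above.
    """
    for _ in range(passes):
        pivoted = GoogleTranslator(source="en", target=pivot).translate(text)
        text = GoogleTranslator(source=pivot, target="en").translate(pivoted)
    return text

print(spin("The article itself seems solid but the wording is just... off."))
```

Each extra pass, or a more distant pivot language, degrades the wording further while keeping the text nominally unique.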

Engineered content

The second type we get a lot of is articles that have been completely engineered from scratch by AI. One or more base articles are used as the starting point, but the output is completely rewritten by AI.

They tend to have titles like ‘The Top 5 Ways To Do SEO On Your Website’. The content is then 5 headings covering incredibly broad topics like ‘Link Building’, each with a couple of paragraphs broadly summarizing the topic.

Content for content's sake

These are a couple of examples of what Google refers to as ‘content for search engines first’. Google sums up the problem with this pretty neatly:

"content created primarily for search engine traffic is strongly correlated with content that searchers find unsatisfying”

Google doesn’t mention link building specifically in its guide, as you can see in our breakdown of the post here:

https://www.wordtracker.com/blog/search-news/googles-new-technologies-mum-and-lamda

Google is very much focusing on content that is made in order to appear in the search results. But it would be very unusual for Google to devalue content in the search results without also devaluing the links within it.

Which means the content this affects could be far greater than it appears at first glance, because well-written content in very competitive areas that is supported by large amounts of link building could also see a very large hit.

From my experience, both in the industry helping clients with SEO and at Wordtracker making sure our content is the best it can be, I know that a great deal of link building relies on precisely this type of content. It’s not a practice I participate in, but I am very aware of it, and surely Google must be too.

Wider impact

It’s interesting how Google has steered all the information away from this, focusing instead only on content made to capture search results. Content that is likely also supported by links created with more generated content. It could be that Google does not want to highlight a fundamental problem with how they rank content, or show their hand to the people who exploit it.

What Google has done is given us what they would consider to be 'fair warning', with an entire guide written to explain what's going to happen. The guide is here and this is our breakdown of it and some of the other information Google has released. It's unusual for Google to go to such lengths to forewarn of an update, and although they have been doing so more frequently, not with this much detail. What makes this likely to be even more significant is how they apply the signal.

Site-wide effect

"Any content — not just unhelpful content — on sites determined to have relatively high amounts of unhelpful content overall is less likely to perform well in Search, assuming there is other content elsewhere from the web that's better to display. For this reason, removing unhelpful content could help the rankings of your other content."

If a site has a lot of content that falls foul of this, then the whole site will be affected. Yes, even really strong content can lose value. So even if you've been doing good quality link building, only submitting articles you've written or legitimately paid someone to write, that value could all be wiped out. If the sites hosting those links carry lots of content that falls foul of this new signal, your good quality content is dragged down with it.

It's going to take a long time to recover - on purpose

This must be a deterrent. Google makes it clear that this is not going to be applied like a signal usually is (where removing the content removes the effect of the signal) but more like a penalty. In this case, removing the content won't ping the rankings back into place for the rest of your site.

"Sites identified by this update may find the signal applied to them over a period of months. Our classifier for this update runs continuously, allowing it to monitor newly-launched sites and existing ones. As it determines that the unhelpful content has not returned in the long-term, the classification will no longer apply."

So when they say "Sites identified by this update may find the signal applied to them over a period of months", they mean the signal will stay applied for months even after the content has been removed. Which they clarify with "As it determines that the unhelpful content has not returned in the long-term, the classification will no longer apply."

This does 3 things:

1. It forces sites to show they have really changed their ways, creating a new pattern of behaviour and finding alternatives within the guidelines.

2. It acts as a deterrent.

3. It makes testing which content falls foul of the guidelines much harder and more costly (you can't quickly add an article, see if it has an impact, then remove it and test another article).
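Google hasn't published how the classifier actually works, so purely as an illustration of the mechanic described above, here's a toy model. The threshold, the pass-by-pass cadence and the recovery period are all invented for the sketch:

```python
from dataclasses import dataclass

UNHELPFUL_RATIO_THRESHOLD = 0.3  # invented for illustration
CLEAN_PASSES_TO_RECOVER = 6      # "long-term" is undefined; an assumption

@dataclass
class Site:
    unhelpful_pages: int
    total_pages: int
    clean_passes: int = 0
    classified: bool = False

def classifier_pass(site: Site) -> None:
    """One pass of a toy site-level classifier.

    Mirrors the quoted behaviour: the label applies site-wide, and it
    only lifts after the unhelpful content stays gone for a long period.
    """
    ratio = site.unhelpful_pages / site.total_pages
    if ratio > UNHELPFUL_RATIO_THRESHOLD:
        site.classified = True       # whole site demoted, not just bad pages
        site.clean_passes = 0
    elif site.classified:
        site.clean_passes += 1       # content removed, but the label decays slowly
        if site.clean_passes >= CLEAN_PASSES_TO_RECOVER:
            site.classified = False  # classification no longer applies
```

Note the consequence for testing: the only way to learn whether a page trips the threshold is to leave it removed for many passes, which is exactly what makes point 3 above so expensive.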

It’s going to be interesting to see the effect of this update rolling out. If you do find your content falling foul of it but can’t figure out why, I would strongly advise you to look at your backlinks and see whether they were generated with content that does fall foul, or whether the backlinks themselves are supported by content that does.
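If you want a rough starting point for that audit, here's a heuristic sketch. It assumes you've exported your backlink URLs from whatever tool you use into a file called backlink_urls.txt, one per line, and it flags the thin 'Top N ways' pages described earlier; the word-count cut-off and title pattern are arbitrary:

```python
# pip install requests beautifulsoup4
import re
import requests
from bs4 import BeautifulSoup

LISTICLE_TITLE = re.compile(r"\b(top\s+)?\d+\s+(ways|tips|reasons)\b", re.I)
MIN_WORDS = 800  # arbitrary cut-off for "in-depth"; tune to taste

def looks_unhelpful(url: str) -> bool:
    """Crude heuristic: short word count plus a 'Top N ways' style title."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.string if soup.title and soup.title.string else ""
    words = len(soup.get_text(" ", strip=True).split())
    return words < MIN_WORDS and bool(LISTICLE_TITLE.search(title))

with open("backlink_urls.txt") as f:
    for url in (line.strip() for line in f if line.strip()):
        try:
            if looks_unhelpful(url):
                print("review:", url)
        except requests.RequestException:
            pass  # skip pages that time out or refuse the request
```

None of this replaces reading the pages yourself, but it narrows down where to look first.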
