Quality Score is the driving force in the PPC ranking algorithm at all major search engines; indeed, it has been since Google AdWords Select launched in 2002. You could do far worse than to model your strategy as if Quality Score were essentially unchanged from that early formulation of AdRank = CTR × Max Bid. Alterations to the formula have largely been refinements intended to make Quality Score more accurate, less clunky, and better tailored to each specific user query.
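To make that early formulation concrete, here’s a minimal Python sketch of AdRank = CTR × Max Bid. The advertisers and numbers are invented for illustration; this is the simple 2002-era model described above, not today’s algorithm.

```python
# Toy sketch of the original AdRank formulation (AdRank = CTR x Max Bid).
# Bidder names, CTRs, and bids are hypothetical.

def ad_rank(ctr, max_bid):
    """Rank score under the early formulation: CTR times max bid."""
    return ctr * max_bid

# Three hypothetical advertisers competing in one auction.
bidders = {
    "A": {"ctr": 0.05, "max_bid": 1.00},  # high relevance, modest bid
    "B": {"ctr": 0.02, "max_bid": 2.00},  # low relevance, high bid
    "C": {"ctr": 0.04, "max_bid": 1.50},  # balanced
}

ranking = sorted(bidders, key=lambda b: ad_rank(**bidders[b]), reverse=True)
print(ranking)  # C (0.06) outranks A (0.05) and B (0.04)
```

Note how advertiser B, despite bidding the most, ranks last: under this model, relevance (CTR) can beat raw spend, which is the whole point of the strategy argument that follows.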
In the United States and select other markets, Bing Ads continues to surprise with its strong market share for search clicks in both the desktop and mobile channels. With such strong market share, its own Quality Score formula isn’t a mere curiosity; it’s a big part of your online ROI. The platform’s reach hasn’t been hurt by the continued and perhaps surprising resilience of Yahoo. Distribution of both Bing and Yahoo Search has enjoyed some recent wins in the form of browser default deals and other such trends that have served to countervail Google’s stranglehold on the market for paid clicks.
Advertisers with limited knowledge of the complexities of the PPC auction game may find themselves in hot water. They don’t manage their accounts in a granular way, they don’t write compelling ads, and they don’t test. They prioritize the wrong things, desperately raising bids in concentrated parts of the account while leaving rich veins of keyword inventory untapped. To an unskilled advertiser, Quality Score may seem like voodoo, and the conversion “end zone” like an armed fortress guarded by the entire roster of the New England Patriots (including the practice squad). By contrast, skilled advertisers in their prime will find it easier to score – like Marshawn Lynch in Beast Mode.
I recently caught up with a Microsoft insider over dinner, where he shared some personal reflections on how Seattle fans are taking the Super Bowl loss, and also, of course, how Quality Score works today at both Google AdWords and Bing Ads. Although these aren’t on-the-record comments and involve some speculation, I hope they prove helpful.
There isn’t much “secret sauce” in a PPC auction environment – unlike organic search algorithms. If advertisers want to game the system, they’d mostly do so by upping their CTRs, which makes the publisher more money anyway – and they’re paying for the clicks in the first place. With some minor exceptions, both Google and Microsoft want advertisers to know the basic principles (other than bidding higher) that can help them do better in the PPC auction.
First, whenever you hear any terms related to “user engagement,” think clickthrough rate. The more you work on weeding out low-intent keywords and just plain poor keyword choices from your account, the better off your overall Quality Score will be. Even at that, the engines have taken steps to ensure that offbeat keyword choices don’t result in punitive treatment of other keywords in the account, campaign, or adgroup.
It’s well known in the industry that another great way to improve CTR is to improve the granularity of your adgroup structure (within reason). Headlines and body copy that contain keywords from the user’s query, rather than being only somewhat related to the user’s query, will garner more clicks and thus higher Quality Scores.
Consider that the search engines can accomplish this goal strictly through measuring CTRs from various angles. They wouldn’t have to compare the keyword text in ads with user query keywords; nor would they need to look at the way that keywords are deployed on landing pages. Some analysts (but mostly, vendors selling solutions that seem to be forged in the mold of good old fashioned “on-page SEO”) have speculated that there are hard-and-fast rules you should follow because the Ads Bot is “spidering everything” and has its own SEO-like algorithm. Crawling does happen, but that doesn’t mean this theory describes the actual ranking algorithm.
There’s some additional complexity to calculating appropriate CTR and assigning a score to a keyword for future ranking purposes. The engines normalize CTR by ad position, match type, and other factors that tend to provide a CTR boost not indicative of greater relevance.
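As a rough illustration of that normalization idea, here’s a hedged Python sketch. The position-CTR curve below is my own assumption for demonstration purposes, not a published Google or Bing figure; the point is simply that a keyword’s observed CTR is judged relative to what its serving position alone would predict.

```python
# Illustrative sketch only: one simple way an engine might normalize
# CTR by ad position. The baseline curve is invented, not an official
# figure from any engine.

# Assumed baseline CTR expected at each ad position (position 1 gets
# far more clicks regardless of ad quality).
EXPECTED_CTR_BY_POSITION = {1: 0.06, 2: 0.03, 3: 0.02, 4: 0.015}

def normalized_ctr(observed_ctr, position):
    """Score CTR relative to what the position alone would predict."""
    return observed_ctr / EXPECTED_CTR_BY_POSITION[position]

# A keyword earning 3% CTR in position 3 beats one earning 4% in position 1.
print(normalized_ctr(0.03, 3))  # well above expectation (~1.5)
print(normalized_ctr(0.04, 1))  # below expectation (~0.67)
```

The same shape of adjustment could be applied for match type or any other factor that inflates raw CTR without indicating greater relevance.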
So with the “it’s mostly CTR” theme solidly established, let’s turn to the intrigue surrounding Landing Page Quality. How much of the ranking algorithm is determined by the landing page?
It’s important to understand: not very much. The purpose of Landing Page Quality is to catch advertisers who are cheating or offering very poor landing page experiences. In this regard, Google and Microsoft – as with many standard elements of the PPC auction – seem to be well-aligned. An important wrinkle here is that automated approaches to catching violations of the advertising rules, protecting consumers, and discouraging extremely poor user experiences cannot work on their own. Policy teams work tirelessly behind the scenes to assess whether advertisers are running afoul of guidelines in a major way. The engines have decided, moreover, that providing much direct contact with policy specialists is a bad idea, because there needs to be a separation of church and state. Policy teams need to rule firmly and consistently, free of commercial pressure, so the engines stick to enforcing their policies. Otherwise, sales-focused reps might be motivated to “train” high-spending advertisers in exactly how to skirt the rules. Inconsistency and favoritism aren’t in anyone’s interest. Hence the invisibility of the policy teams.
The insider suggests that landing page quality plays only a minor (or close to no) role in the vast majority of cases. Consider it like an “on/off” contribution to Quality Score. In (let’s say) one in 400 cases, an advertiser is doing something very wrong on their website, in their overall communications, business model, etc. In such cases, keyword Quality Scores are badly affected.
This can be confusing and bewildering, of course. Why not send an alert notifying the advertiser of an outright ban? This is probably by design, for a number of reasons. I’d speculate that the less direct correspondence on such issues, the fewer nasty email threads get built up for posterity. Couching the process in technical language helps soften the impact, and – who knows? – might be safer from a legal perspective.
The insider suggested, though, that we could also look past the simple “good-bad” dichotomy in the Landing Page Quality arena, towards a “stair-step model.” This would be only a slight modification of the on-off model. If your policy violation or poor user experience is very bad – placing you in that “less than one percent” cohort of nefarious participants in the ad auction – the extreme penalty applies (that’s when you see a lot of Quality Scores of 1 and 2 in your account). Now add in a minor penalty for more borderline stuff – the violations and misdeeds that might put you, let’s say, in the 2nd percentile of crap-mongers. Here, a less severe penalty might apply, but it will seriously drive up your CPCs and decrease your ad impressions. This is where you see a lot of 3, 4, and 5 Quality Scores in your account where you should expect more 6s and 7s. Even with the stair-step model, the vast majority of advertisers are unaffected.
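The stair-step idea can be sketched as a toy Python function. The percentile thresholds and tier labels below are my own assumptions for illustration – the engines publish no such numbers – but the shape matches the model just described: a severe tier for the worst offenders, a minor tier for borderline cases, and no penalty for everyone else.

```python
# Toy model of the "stair-step" landing page quality penalty described
# above. Thresholds and tier labels are purely illustrative assumptions.

def landing_page_penalty(badness_percentile):
    """Map how bad an advertiser's landing pages are (as a percentile
    of all advertisers, worst first) to a penalty tier."""
    if badness_percentile <= 1:    # worst ~1%: extreme penalty
        return "severe"            # Quality Scores of 1 and 2
    if badness_percentile <= 2:    # borderline offenders
        return "minor"             # scores of 3-5, higher CPCs
    return "none"                  # the vast majority: unaffected

print(landing_page_penalty(0.5))  # severe
print(landing_page_penalty(1.5))  # minor
print(landing_page_penalty(50))   # none
```

The original on-off model is just this function with the middle step removed.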
In my view, Google (more so than Microsoft) certainly has moved well past the original “on-off” model towards this stair-step approach and even beyond, layering in a lot more data points to help provide a boost to advertisers that provide uncommonly good landing page experiences. But even at that, you could safely ignore this and do OK, if you built a well-organized, highly relevant keyword account.
I asked him about a real-world case with one or two of our accounts. It seemed like our relatively better overall landing page and website user experiences had contributed to a recent dramatic improvement in our ad positions and average CPCs, after long periods of wandering in the wilderness (presumably while the engines gradually collected enough data on user experiences on both our clients’ sites and competitor sites).
He made a powerful observation: that in the vast majority of cases – assuming we were duking it out in an auction with some competitors who had serious ethical shortcomings – this effect is likely to be the result of penalties on the competitors’ side, as opposed to a lot of small optimizations on the website side leading to a Quality Score victory and a serious change in auction dynamics. It’s easier to “demote” in cases of obvious misdeeds or user dissatisfaction than to calculate the exact bonus that should be allotted for many small potential indicators of quality or user satisfaction.
What I like about this explanation is that it compellingly accounts for what might have happened in these real-world cases. Moreover, a “mostly for punitive purposes” approach to landing page quality avoids turning the exercise into a subjective pursuit of top-down user-experience preaching. Which page or site is the ultimate user experience? Should the search engines dictate that precisely? What user session data is the ultimate proof of bliss? Which micro-behaviors deserve an economic reward in the PPC auction? Let’s hope the publishers don’t split hairs that finely.
As for the Super Bowl? Here’s my armchair coaching attempt to draw up the play the Seahawks should have had ready for the 1.5-yard touchdown plunge. Line up with Lynch and the backup running back (Turbin) in modified I formation. Fake the quick handoff to Lynch, the lead runner, who uses Beast Mode to help up front with the blocking effort for Turbin’s carry. QB Wilson either peels off with a fake carry/pass or helps with blocking. Turbin easily finds an opening and scores. If no score, at least you still have the football!
Good luck with your 2015 PPC playbook.