In October 2011, Google announced a change to encrypted search queries on Google.com. SSL encryption is now applied automatically for all users signed in to a Google account (Gmail, Google+, etc.) and for searches made directly via https://www.google.com (note the ‘s’ in the URL).
While the secure protocol was not a new feature, the latest update meant that searches via the secure server would no longer pass keyword referral data. Google’s announcement framed this as protecting users’ privacy, but the SEO community speculated that the intent might be otherwise.
What made the update particularly suspicious was that keyword data would remain available for paid search referrals, suggesting that Google were intent on encouraging paid search rather than protecting users’ privacy. Other sources, however, argued that privacy was a genuine concern, as the Google+ API had made it possible for webmasters to trace search queries back to specific individuals.
How does it affect us?
Since the update, analytics packages have reported encrypted keyword data as ‘not provided’, while other keywords appear to dip in visits. With the update implemented only on Google.com, US sites have taken the biggest hit, while UK sites have been affected on a smaller scale, until now…
On 5th March 2012, Google announced that this feature would be rolled out across its localised domains too, affecting referrals from Google UK as well as Google.com. The announcement states that this will happen “over the next few weeks”; UK sites should therefore see an impact by the end of March.
UK webmasters should expect to see a further increase in traffic filed under the keyword ‘not provided’. Google’s Matt Cutts estimated that “even at full roll-out, this would still be in the single-digit percentages”, although external research into the impact on US sites puts the average figure closer to 11%.
What to do?
With secure keyword referrals returning ‘not provided’, every website will lose a fraction of its keyword data; that much is unavoidable. However, there are ways to make use of what remains, such as segmenting visits by landing page to infer which keywords may have contributed to the unknown data. This is particularly useful for websites where individual landing pages correlate closely with specific keywords. Where more than 20% of data is being lost and this is having a significantly negative impact, iCrossing suggests taking further action to recover as much of the lost keyword insight as possible.
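As a sketch of the landing-page approach, the snippet below apportions each page’s ‘(not provided)’ visits across that page’s known keywords, in proportion to their known traffic. The row format (`landing_page`, `keyword`, `visits` fields) is a hypothetical analytics export, not any package’s standard output:

```python
from collections import defaultdict

def estimate_hidden_keywords(rows):
    """Attribute '(not provided)' visits to known keywords,
    proportionally to each keyword's share of its landing page's
    known-keyword traffic. `rows` are dicts with 'landing_page',
    'keyword' and 'visits' keys (hypothetical export format)."""
    known = defaultdict(dict)   # page -> {keyword: visits}
    hidden = defaultdict(int)   # page -> '(not provided)' visits
    for r in rows:
        page, kw, visits = r['landing_page'], r['keyword'], int(r['visits'])
        if kw == '(not provided)':
            hidden[page] += visits
        else:
            known[page][kw] = known[page].get(kw, 0) + visits

    estimates = defaultdict(float)
    for page, unknown_visits in hidden.items():
        total_known = sum(known[page].values())
        if total_known == 0:
            continue  # no known keywords to apportion against
        for kw, visits in known[page].items():
            estimates[kw] += unknown_visits * visits / total_known
    return dict(estimates)

rows = [
    {'landing_page': '/shoes', 'keyword': 'buy shoes', 'visits': '80'},
    {'landing_page': '/shoes', 'keyword': 'cheap shoes', 'visits': '20'},
    {'landing_page': '/shoes', 'keyword': '(not provided)', 'visits': '50'},
]
print(estimate_hidden_keywords(rows))  # {'buy shoes': 40.0, 'cheap shoes': 10.0}
```

This is only an estimate, of course: it assumes hidden searches follow the same keyword distribution as visible ones, which is exactly the assumption that breaks down on pages dominated by logged-in traffic.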
As Twitter has recently hit 500 million registered users and Google+ now has 90 million users, I thought it would be interesting to take a look at the demographic makeup of each of the major social networks to see if certain types of people gravitate to a particular social network.
Looking at UK traffic data for the four social networks, Facebook is unsurprisingly way out in front, with over 10x as many unique visitors as Google+. What is worth noting, however, is Twitter’s continued growth over the last 12 months: up over 60% (Jan 2011 vs Jan 2012).
Facebook may get the most traffic, but perhaps it doesn’t do as well on user engagement: maybe people spend less time on site, or come back less often? Not the case; looking at the stats above, we can see that Facebook also comes out on top for engagement.
Today’s changes to Google’s privacy policies (or, the singular ‘policy’ from March 1st) have prompted a wide debate in the media regarding how companies such as Google collect and utilise users’ data.
Google have taken great strides to reassure customers about the new privacy changes, going as far as launching a media campaign of their own, Good to Know, in partnership with the Citizens Advice Bureau. However, at a time when online privacy has been making major news in the US, and new EU cookie legislation (already technically in force) is raising awareness of web-tracking practices, it is no surprise that this unification of privacy policies is a cause for concern for some. France’s privacy watchdog CNIL has already launched a Europe-wide investigation into the legality of the move.
Although there has been a lot of speculation over the past year or so about social signals influencing rankings, it was only yesterday that one of the two major search engines shed some more light on what that means from a search engine’s technical standpoint.
Rangan Majumder, Bing’s principal group program manager, defined the new SEO formula during a session at SMX West 2012 which took place in San Jose, California as:
Rank = Authority + Quality of keyword match + Personal preference + Social preference
The first two factors, link equity and keyword relevance, have been the main ranking signals for search engines for over a decade. Rangan, however, went on to give more detail on how Bing perceives the other two factors:
- Personal preference: how much do we think a user will like your content? We look at the user’s past behaviour with your content or content similar to yours, their likes, and more.
- Social preference: how much do we think a user’s social graph will like your content? We look at a user’s friends’ past behaviour with your content or content similar to yours, their likes, and more.
Bing’s representative confirmed that they look at how much (they think) a user will like the content, based on past behaviour and likes. In other words, authority is no longer just link authority but social authority too. Google, in the meantime, has recently taken a series of actions that seem to share characteristics with Bing’s new SEO formula.
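To make the formula concrete, here is an illustrative sketch that combines the four factors as a weighted sum. The weights and the 0–1 component scores are entirely hypothetical; Bing has not disclosed how the factors are actually scored or combined:

```python
def rank_score(authority, keyword_match, personal_pref, social_pref,
               weights=(0.4, 0.4, 0.1, 0.1)):
    """Illustrative scoring in the spirit of Bing's stated formula:
    Rank = Authority + Quality of keyword match
         + Personal preference + Social preference.
    Each component is a hypothetical 0-1 score; the weights are
    made up for illustration only."""
    components = (authority, keyword_match, personal_pref, social_pref)
    return sum(w * c for w, c in zip(weights, components))

# A page with strong links and relevance but weak social signals:
print(round(rank_score(0.8, 0.9, 0.5, 0.3), 2))  # 0.76
```

The point of the sketch is simply that the last two terms vary per user: two people issuing the same query can receive different rankings because their own history and their friends’ behaviour feed into the score.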
Content often defies the law of gravity; we’ve all seen cases where something innocently placed within the confines of a website goes viral, takes on a life of its own and flies off into the ether.
Whilst these instances can be meticulously planned, more often than not they take website owners by surprise. To capitalise on the opportunities exponential traffic can bring your way and to make a guesstimate about which content might have wings, it’s important to understand the various environments within which content may exist.
- Content Troposphere – Where things remain close to the core
Content is what keeps your website alive, allowing it to breathe and allowing those looking in to understand your species. Even if visitors have come from many light-years away, this is where they will carry out their interactions and transactions with you, so the content portraying your brand needs to be solid as a rock.
Become a content cosmonaut: The most appealing websites are those which offer a different way of absorbing what can often be standard information. Give all your content a sense of style, from the privacy policies to the press releases.
- Content Stratosphere – Where you’ll experience strong jet streams
Translating company knowledge into interesting content about your products and services is a vital tactic for wide message dissemination. It isn’t about you or what you want to say, it’s about what your audience wants to hear – the key to success is delivering content at the right frequency, in the right format and through the right channels.
Become a content cosmonaut: Treat your customers as individuals and attempt to tailor their experiences as often as possible throughout your website journey with dynamic content placement.
We are always thinking of new and innovative ways to create content for our clients. Often this involves starting from a blank page, but here’s an example of where we discovered existing assets that we didn’t think were getting the attention they deserved.
While you may not think old knickers command much attention, it’s a different story when they’re from legendary lingerie innovator M&S. Using information and images buried deep in the retail giant’s fascinating Company Archive, we brought these to life through an interactive timeline that spotlights the style, technology and history of M&S lingerie from the 1920s, through the War and into the 80s.
What gems does your brand have hidden?
The history of lingerie timeline is hosted on the M&S Stories social site.
Analysing data is paramount in the day-to-day work of internet marketers, so that strategic decisions can be made on the basis of evidence. However, some commonly used terms are occasionally misunderstood and used in the wrong context. Some of the most typical include:
- Bounce rate
- Average time on page
- Average time on site
By and large, a high bounce rate is considered a negative signal, often flagging the need for some conversion optimisation and usability improvements. Similarly, a low average time on page/site is also considered a negative signal, although these are not always valid assumptions. An in-depth understanding of what these metrics represent is absolutely necessary; otherwise decisions may be made based on misconceptions.
The definitions of all major analytics metrics need to be well understood before analysing the data from the various analytics packages. For instance, the definitions of Google Analytics metrics can be found here.
Below, we will discuss the flaws of bounce rate and time on site/page, and then define a better alternative, commonly known as dwell time.
Bounce Rate Flaws
Bounce rate is the percentage of single-page visits. However, bounces do not take into account the time a user spends on the landing page. That means that when a user lands on a page from a search engine and stays for just a few seconds before exiting the site, it counts as a bounce; equally, when a user lands on a page and stays there for several minutes before exiting the site, that also counts as a bounce.
In the first example, the user did not find the landing page useful and left, whereas in the second example the landing page satisfied the user’s requirement.
So far, there are two takeaways:
- Bounce rates should not be seen in isolation but in conjunction with other metrics (e.g. average time on page).
- Bounces should not always raise concerns, as in certain cases a bounce can be a positive signal. This is when there is evidence that the user digested the content of the landing page before they happily left the site.
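These takeaways can be sketched in code. The snippet below contrasts the classic bounce rate with a dwell-time-adjusted version that only counts short single-page visits as bounces; the session tuple format and the 30-second threshold are assumptions for illustration, not how any analytics package reports the data:

```python
def bounce_metrics(sessions, dwell_threshold=30):
    """Classic vs dwell-time-adjusted bounce rate.
    `sessions` is a list of (pages_viewed, seconds_on_landing_page)
    tuples (hypothetical format). A classic bounce is any single-page
    visit; the adjusted version only counts single-page visits shorter
    than `dwell_threshold` seconds, treating long single-page visits
    as satisfied readers rather than failures."""
    total = len(sessions)
    bounces = sum(1 for pages, _ in sessions if pages == 1)
    adjusted = sum(1 for pages, secs in sessions
                   if pages == 1 and secs < dwell_threshold)
    return bounces / total, adjusted / total

# Four visits: two quick exits, one long read, one multi-page visit.
sessions = [(1, 5), (1, 240), (3, 120), (1, 10)]
print(bounce_metrics(sessions))  # (0.75, 0.5)
```

The gap between the two figures (75% vs 50% here) is exactly the set of “happy bounces”: single-page visits where the landing page appears to have satisfied the user.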
Google announced in January that it now has 90 million Google+ users, a massive jump from the 40 million reported in September 2011. However, it turns out that the 90 million figure actually included other Google properties, such as Gmail and Google Search. However misleading that figure may be, the more important question is “what is the impact of Google+ on my site?”
I took a look at the impact of Google+ on some of our clients’ keyword data. Keeping true to all things Google, I used Google Analytics to report on the number of keywords recorded as “(not provided)” from the 1st October 2011 to the 31st January 2012.
Firstly, let me explain what “(not provided)” actually means. In essence, once you are logged in to Google, any search term you enter is not passed on to the website you select from the organic search results. Google wants to protect your privacy, so whereas analysts like myself and website owners would previously have seen the terms you actually searched for, now all we see is (not provided). That’s pretty useless when you are trying to ensure your site is fully optimised for what customers are looking for, but there are a few ways around it, so don’t panic: SEO and user experience optimisation hasn’t gone back a decade just yet…
So let’s get back to the impact of (not provided) data. I looked at 15 client accounts across the following verticals: retail, travel and finance. Although SEO traffic levels varied greatly across the three verticals (finance received far less SEO traffic), the consistent result was that only 1–2% of keywords were recorded as (not provided).
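As a minimal sketch of that calculation, assuming a keyword-to-visits mapping exported from an analytics package (the export format is an assumption for illustration):

```python
def not_provided_share(keyword_visits):
    """Share of organic visits whose keyword was withheld.
    `keyword_visits` maps keyword -> visit count, as you might
    export it from an analytics report (format assumed)."""
    total = sum(keyword_visits.values())
    hidden = keyword_visits.get('(not provided)', 0)
    return hidden / total if total else 0.0

report = {'red shoes': 180, 'buy shoes': 120, '(not provided)': 6}
print(f"{not_provided_share(report):.1%}")  # 2.0%
```

Running this monthly per account is a cheap way to track whether the figure stays in the 1–2% range reported above or climbs towards the ~11% seen in the US research.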
One common gripe over Facebook page creation is that, as a page administrator, you aren’t able to set a unique URL for a page until it has at least 25 fans.
In September 2011 there was a loophole which allowed page admins to set a username for pages with fewer than 25 fans, but this soon closed up.
However, whilst setting up a page for a client yesterday, I discovered that there is indeed currently a way to set a unique username/URL for a Facebook page with fewer than 25 fans. It’s a little lengthy but ultimately does work.
However, it’s worth noting that this will only work for brand new pages – you can’t use this loophole to set a unique username for a page you’ve already created. You will also have to follow the entire process again for each individual page/username you want to set.
Expect this loophole to close quickly; here’s how to take advantage of it:
1. Go to Facebook.com and make sure you are logged out. Beneath the sign up form, click on the ‘Create a Page for a celebrity, band or business.’ link
2. Choose whichever category is most appropriate for the page you want to create and continue
3. Even if you already have a Facebook account, select the ‘I do not have a Facebook account’ option
4. Sign up using an email address that is not associated with any Facebook account; you could create one just for this purpose. Follow the Facebook account creation and confirmation steps
5. Set up your profile (you can come back to this later)
6. Click on ‘Edit page’ and then navigate to the ‘Basic Information’ tab on the left
7. Click on the ‘Create a username for this page?’ link
8. This brings up a notice asking you to verify your account before you can create a username. Follow the verification steps
9. Once Facebook confirms that the code is correct, the page doesn’t seem to auto-redirect anywhere. Don’t click continue again, but instead click on the Facebook logo in the top left to go back to your page’s home page feed. Click ‘Edit page’ to bring up the admin section again, and click on the ‘Basic Information’ tab
Last year I posted on elements which I’d like to see Google incorporate into AdWords during 2011. Google claim to have tested over 100 new ad formats in 2011, along with a bunch of new features, so unsurprisingly a couple of my predictions came to pass. There are still some omissions which I feel are no-brainers, but given the progress made this year I’m pretty optimistic about them.
How I did before…
Ad scheduling was resolved by automated rules, which opened up to all advertisers in February. Not only that, it allows for some reasonably advanced automation at most levels of the account. It would be good to see ad extensions added to this, but ultimately it’s a big step in the right direction.
Video Ads have rolled out in the same format as the US rather than the more flexible way I’d hoped. But no real surprises here.
There’s no sign of impression cookies being shared outside of conversion tracking and Google Analytics, nor of the privacy-sensitive search remarketing. Weather targeting also remains a pipe dream (and probably always will).
So on to 2012…
Google’s focus on monetising every last click seems to have intensified if anything so expect a further slew of alphas, betas and general releases. So here are some expectations, highly likelies and a couple of very long shots…