Brighton SEO 2012 – 10 Key Things To Take Away

Brighton SEO 2012 was held on the 13th of April and attracted upwards of 900 people (it should have been 1,000, but many didn’t turn up). Rather than do a huge write-up, I wanted to list just the key things that I took away from this year’s Brighton SEO. Links to the currently available slides can be found at the bottom of this post.

Ask the Engines – SEO Panel

Pierre Far, Dave Coplin, Martin McDonald, Rishi Lakhani & Tony Goldstone

1. Bing uses social signals, and the rate at which information / content is shared, as a large part of its algorithm. Having hundreds of thousands of followers and “sharing” something does not give that shared item any greater benefit.

2. When discussing reconsideration requests (not re-inclusion requests, as Pierre stated), Rishi mentioned that a record of everything should be kept, including a log of all the emails sent to site owners requesting link removals. In the event that no response is given, or the website owner refuses to remove the link, you have clear proof that you have tried and been unsuccessful; this should be included in the reconsideration request.

3. Bing is openly happy to give away a little more than Google when it comes to questions about ranking factors. Google’s representative consistently went on the defensive, whereas Bing’s representative was clear, to the point and helpful in his answers. Although it was said that SEO is not a checklist, it does help to know what is considered good or bad.

Microformats and SEO

Glen Jones

4. Microformats are currently the better choice for Rich Snippets in the search results, as they are somewhat simpler than Microdata and RDFa. That being said, there are situations which call for a specific Microdata or RDFa schema format, and when required they should be used.

NB: Not in the numbered list, but Glen mentioned that Schema mark-up is not just beneficial for rich snippets; it helps the search engines find and extract structured data to build better user experiences in the future.

How you can get BIG links from BIG media sites

Lexi Mills

5. Lexi told the crowd that following three Twitter hashtags will enable us to find potential PR opportunities. These hashtags were #prwin, #prfail and #journorequest. Having been following these for a few days now, I can see how beneficial this can be, specifically the #journorequest hashtag.

6. Lexi also mentioned that if BusinessWire is a site you use for press release distribution, don’t fill in the Site URL and Blog URL boxes; this allows you to achieve followed links within the press release itself (loving this tip!).

7. Use the phone! Emailing just doesn’t cut it in today’s big-business link building. Contact the journalists themselves, and if they can’t help you, ask for their superior’s number and call them. Being confident and happy (smile!!) can get you a long way with journalists.

Maximizing your SEO Agencies

James Owen

8. This tip seems somewhat foreign to me. I have long been a believer in being absolutely transparent with clients, and I felt that providing an average ranking position to clients is somewhat covering up the truth, but James puts a positive spin on this.

Average keyword rankings can benefit the client if the worst case happens on the last day of the month. If you were position 1 for 29 days and then position 5 on reporting day, reporting on the average or graphing the monthly ranking by day can show the client that it’s potentially a blip, and show you that something needs investigating.

Searchbots: Lost Children or Hungry Psychopaths? What Do Searchbots Actually Do?

Roland Dunn

9. Roland’s talk was by far the most interesting at this year’s Brighton SEO. Working with server log files is something that I have been, slowly, looking into doing for a while now. I wanted to see what the correlation was between what Webmaster Tools’ crawl rate said and what I could ascertain from the logs, and whether they matched up.

Roland went one step further and showed us that sometimes Googlebot gets lost and spends more time crawling unimportant, less-visited pages than it does the important “big hit with the users” pages.

After seeing this talk, I firmly believe that understanding what the search engines are doing behind the scenes of your site is somewhat more important than what information you put on it! If Googlebot, or any other search engine, is not finding the content, that is a waste of resources and a misinterpretation of the website, and it needs to be addressed!

I Believe Authors are the Future

James Carson

10. James’s presentation was part of the 20×20 sets and was somewhat pressure-filled; however, James delivered and brought up some great points. “When using the rel=author tag, only write about one topic” is by far my favorite tip. If Google recognises that you are a leader in SEO and you suddenly pop up writing about the carrot farming industry, you cannot reliably be delivered in the search results as a rich snippet for SEO.


Below I have linked to the currently available slides:

And there you have it. I will try to update this as more slides become available.


New Sitemaps Options in Google Webmaster Tools

Google have updated the Sitemaps section of the Site Configuration area of their Webmaster Tools service.

Although it’s a minor change, I really like the new style; it’s a lot easier to visualise everything. Hopefully Google will start updating more of the config options in Webmaster Tools. Bringing it up to Google’s current style guidelines would give it a well-needed revamp!



SEOs’ Favorite Office Productivity Software on Mac

I recently polled a small and friendly group of digital marketers on their favorite office software for Macs. The reason for doing this is that I have been struggling to find something that actually works for me!

The options were:

  • Microsoft Office for Mac
  • iWork
  • Google Docs
  • Open Office
  • Neo Office
  • Use a Windows based machine

I haven’t included all of the options, as it would just get a little bit silly; there is a tonne of them out there.

The results were a little surprising and also a little frustrating!

  • Microsoft Office for Mac received the most votes for a single piece of software with 16
  • iWork got 4 votes (but people also mentioned that it crashes a lot)
  • Google Docs received 1 vote
  • Open Office received 1 vote
  • Neo Office unsurprisingly got 0 votes

Use a Windows-based machine received 10 votes, which means this option came in second. Many of the voters were Mac users or individuals that use both systems.

I personally thought that Open Office would come much higher up the list. My personal experience with Microsoft Office for Mac (which got the most votes) has been very poor at best. MS Office on a Windows machine is 1,000% better, but switching is just too inconvenient.

For the record, I am a huge fan of Windows 7; I think it’s fantastic, and my personal computer is a Windows 7 machine. My work computer, however, is a Mac; it does everything I need it to do and it does it brilliantly. I do have Office 2007 on my Windows machine (which is fantastic, btw), but carrying two computers around would become a drag!

If you want to add to this poll please do so below:


YaCy Peer Search Engine – Distributed Search Engines Network

There is a new search engine on the block, well more specifically a P2P (peer to peer) search engine.

YaCy is a free search engine that anyone can use to build a search portal for their intranet or to help search the public internet. When contributing to the world-wide peer network, the scale of YaCy is limited only by the number of users running it throughout the world.

It has the potential to index billions of web pages and is fully decentralized, which means all users of the search engine network are equal; the network does not store user search requests and it is not possible for anyone to censor the content of the shared index.

YaCy state that they want to achieve freedom of information through a free, distributed web search which is powered by the world’s users.

The first port of call for people wanting to use the YaCy search engine is the YaCy website. From there you can download the YaCy client and start performing searches.

Today they had a message up on the search portal portion of the website stating that, due to press releases, the site is running incredibly slowly. Even now, four hours after my first attempt to search, I still receive a server error.

I have also downloaded the Mac version of the YaCy client; it runs, but very slowly. I’m all for freeware and I think this idea is fantastic, but it does seem to struggle at the moment. This may be down to the excessive load from the press releases that went out today, but I think various sites’ claims that this is a Google, Bing or Yahoo rival are far-fetched.

I will say that you can see some really cool stuff from the admin panel which, once you have installed the YaCy client, can be found at http://localhost:8090/ConfigBasic.html. The program runs from http://localhost:8090/. Apparently I am currently creating my own index of websites, which will allow other people to find the websites that I have stored! Cooool

I will continue to use this periodically and will be following it with interest so watch this space!


How to Create a Google News Sitemap

Google News Sitemaps are specifically designed to allow you to control what news is submitted to Google. More specifically, they allow Google to:

  • Identify exactly which pages are news articles
  • Crawl and index your news articles faster
  • Identify the article titles, as well as the publication date for each article
  • Find each article’s unique metadata to display
  • Specify article content with unique tags

Additionally, Google states that you should only include news articles that are less than two days old (48 hours); this ensures that the content is fresh. A Google News Sitemap can contain no more than 1,000 URLs; to add more, you can utilise a sitemap_index file (which I have previously described how to create: The how to guide for Sitemap Index XML Files).
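As a quick sketch (the filenames and dates below are placeholders), a sitemap_index file pointing at two news sitemaps follows the standard sitemaps.org index format:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- each <sitemap> entry points at one news sitemap of up to 1,000 URLs -->
  <sitemap>
    <loc>http://www.example.com/news-sitemap-1.xml</loc>
    <lastmod>2012-04-16</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/news-sitemap-2.xml</loc>
    <lastmod>2012-04-16</lastmod>
  </sitemap>
</sitemapindex>
```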

Google News Sitemap Structure

Included below is an example of a Google News Sitemap structure which includes some of the unique tags that can be applied:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <!-- the URL below is a placeholder; use the article's real URL -->
    <loc>http://www.example.com/how-to-create-a-google-news-sitemap</loc>
    <news:news>
      <news:publication>
        <news:name>Sam Osborne SEO</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:genres>UserGenerated, Opinion</news:genres>
      <news:publication_date>2012-04-16</news:publication_date>
      <news:title>How to Create a Google News Sitemap</news:title>
      <news:keywords>google news, sitemaps, xml</news:keywords>
      <news:stock_tickers>IAMASTOCK, SEOSTOCK</news:stock_tickers>
    </news:news>
  </url>
</urlset>

Google News Sitemap Tag Information

Each of these tags has specific requirements and not all of them are needed, for example:

<Publication> Tag

The publication tag requires a name and a language tag as children. For example, if the name appears in Google News as “Sam Osborne SEO (registration)”, you should use the name “Sam Osborne SEO”. The language tag is pretty simple: it’s the language of your publication in short format, e.g. en for English, fi for Finnish, it for Italian and so on. This tag is required.

<Access> Tag

The access tag describes the accessibility of the article. If the article is accessible to Google News readers without a registration or subscription, this tag should be left out.
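If the article does sit behind a wall, the tag takes one of two values, Subscription or Registration, for example:

```xml
<news:access>Subscription</news:access>
```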

<Genre> Tag

The genre tag is a comma-separated list of properties defining the content of the article, such as “Opinion” or “UserGenerated.” See Google News content properties for a list of possible values.

<Publication_Date> Tag

The publication_date tag displays the date the article was published. Google will accept any of the formats below:

  • Complete date – YYYY-MM-DD (e.g., 1997-07-16)
  • Complete date plus hours and minutes – YYYY-MM-DDThh:mmTZD (e.g., 1997-07-16T19:20+01:00)
  • Complete date plus hours, minutes and seconds – YYYY-MM-DDThh:mm:ssTZD (e.g., 1997-07-16T19:20:30+01:00)
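As an example, an article published at 7:20pm UK time (BST, UTC+1) on the 16th of April 2012 would use the second format:

```xml
<news:publication_date>2012-04-16T19:20+01:00</news:publication_date>
```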

<Title> Tag

The article title tag should only include the title of the article as it appears on your site; try not to duplicate any information such as the author or the date the article was published. This tag is recommended but not required.

<Geo_Locations> Tag

The geo_locations tag is added to help Google identify the geographic location of your articles. This can be great to use if you have sections of your site that cater to different locations around the world. Again this is not required but is generally recommended.

<Keyword> Tag

The keyword tag can be used to specify relevant keywords for the article. There is no limit, but it’s generally recommended to keep the number of keywords to fewer than 10 so as not to appear spammy to the Google News algorithm.

<Stock_Ticker> Tag

The last tag that can be added is the stock_ticker tag, a comma-separated list of the tickers of companies covered by the article. So if you had written an article about Admiral Car Insurance and wanted to include their stock ticker, you would include “LON:ADM”.
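Within the sitemap itself, that Admiral example would simply look like:

```xml
<news:stock_tickers>LON:ADM</news:stock_tickers>
```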

In my next installment of XML sitemap guides I will be writing about how to include images within a Google News Sitemap. That post will hopefully be shorter than this one!


