Faster than light! How geographical position influences a website’s performance

Update: April 2018

Since WordPress has become, de facto, the CMS that rules the world, optimizing it for speed has become very important.

There’s an interesting guide about it called “22 Tips To Speed Up WordPress Site Performance” by Cloud Living, so I suggest you start from there and find your own way to speed up WordPress.

And when you’ve finished optimizing it…

Go static! I’m not suggesting you use a static CMS (like Pelican or Jekyll), because they are quite nerdy and require considerable technical knowledge, but you could use a plug-in that creates an HTML version of your blog. Just upload the resulting files and your site will be quick and as strong as a fortress.

Just bear in mind that you have to give up interactions with the database (no internal comments, no internal forms).

Explore Simply Static
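If you’re curious what such a plug-in does under the hood, here’s a minimal DIY sketch of the same idea in Python. The urls.txt input file and the output folder are assumptions of this example, and a real plug-in would also rewrite internal links and copy assets (Simply Static handles all of that for you).

```python
# Minimal sketch of "going static": fetch a list of pages and save them
# as plain HTML files. "urls.txt" (one absolute URL per line) and the
# output folder are assumptions for this example.
import os
import urllib.request
from urllib.parse import urlparse

def save_static_copy(url_file="urls.txt", out_dir="static_site"):
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        path = urlparse(url).path.strip("/")
        if not path:
            path = "index.html"                # site root
        elif not os.path.splitext(path)[1]:
            path = path + "/index.html"        # pretty permalink
        target = os.path.join(out_dir, path)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with urllib.request.urlopen(url) as resp:
            html = resp.read()
        with open(target, "wb") as out:
            out.write(html)
        print(f"saved {url} -> {target}")

if __name__ == "__main__":
    save_static_copy()
```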

Update: February 2015

I’ve been contacted by Dotcom Tools: they offer a free test service to check the speed of your website from 20 different locations worldwide!

Have a look, it’s pretty neat!


Oh my goodness, it’s already July, how time passes.

Without further delay, I would like to present a little speed test I did some weeks ago; I hope it’s still interesting.

The premises

Google PageSpeed Insights is a web service by Google that grades the performance of a web page. It’s a nice tool for novices, but it’s very basic and the results it produces lack accuracy and context.

Like every tool, it must be used with discretion: don’t turn off your precious neurons just because Google says so.

Faster and faster, but is it really necessary?

Google has stressed time and again that a quick website is a fundamental requirement for better results in terms of user experience.

I will not go into the details, but a quick user experience means a “relative improvement” of your ranking: your slower competitors will probably see a decrease in visibility (and sales).

Speed is not a silver bullet for better rankings: you still need good content, properly structured navigation and, of course, links.

Think also in terms of sales; an article published on Fast Company in 2012 said:

Surprising as all this may be, the implications of this impatience are even more shocking. Amazon’s calculated that a page load slowdown of just one second could cost it $1.6 billion in sales each year. Google has calculated that by slowing its search results by just four tenths of a second they could lose 8 million searches per day–meaning they’d serve up many millions fewer online adverts.

Questions?

Server response time

One particular aspect of a page’s performance is “server response time”. I will quote directly from the Google guidelines:

Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server. There may be variance from one run to the next, but the differences should not be too large. In fact, highly variable server response time may indicate an underlying performance issue.

One aspect that had me worried is this part of the description:
“the network latency between Google and your server”

Why? Because I’m an Italian guy and I do international SEO! If my clients’ servers are based in Europe and Google PageSpeed is based in California, is it possible that network latency is creating inaccurate reports of “slow response time”?

One particular aspect I wanted to test is the “time to first byte” (TTFB): the time required for the first byte to reach the client browser.

According to a ServerFault answer, it depends on three phases (a small measurement sketch follows the list):

  • DNS lookup. Definition: find the IP address of the domain. How to improve: more numerous/distributed/responsive DNS servers.
  • Connection time. Definition: open a socket to the server and negotiate the connection. How to improve: the typical value should be around “ping” time, since a round trip is usually necessary.
  • Waiting. Definition: the initial processing required before the first byte can be sent. How to improve: this is where your effort should go; it is most significant for dynamic content.
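Curious readers can measure these three phases themselves. Below is a small Python sketch, using nothing but the standard library, that times the DNS lookup, the TCP connection and the wait for the first byte; the hostname is a placeholder.

```python
# Rough sketch: split time-to-first-byte into the three phases above.
import socket
import time

def ttfb_breakdown(host, path="/", port=80):
    t0 = time.perf_counter()
    ip = socket.gethostbyname(host)                           # 1. DNS lookup
    t1 = time.perf_counter()
    sock = socket.create_connection((ip, port), timeout=10)   # 2. TCP handshake
    t2 = time.perf_counter()
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    sock.recv(1)                                              # 3. wait for the first byte
    t3 = time.perf_counter()
    sock.close()
    return {"dns_ms": (t1 - t0) * 1000,
            "connect_ms": (t2 - t1) * 1000,
            "wait_ms": (t3 - t2) * 1000}

print(ttfb_breakdown("example.com"))
```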

Theory and experiments

Being a geek and a skeptic, I decided I had to measure the amount of delay caused by network latency.

The services used for the experiments were:
  • gtmetrix.com: GTmetrix is a very fast tool to measure page speed. It’s not my favourite, but it’s reaaaally quick and never crowded. By becoming a registered user, GTmetrix allowed me to test performance from different server locations (Dallas and London).

  • Digital Ocean: Digital Ocean is a unique server provider. It allows you to create pre-configured virtual servers, called “droplets”, in several configurations, quickly and effortlessly. I decided to use them because they have several server locations… and you can move your “websites” around the world! How? Thanks to the concept of “frozen images”.

It may sound strange to non-coders, but each droplet can be “frozen” in time, creating a perfect backup (called a “snapshot”), which can be restored in a matter of minutes.

One particular characteristic (not unique to Digital Ocean) is that it’s possible to move the backup to another data center and restore the snapshot, effectively changing country.
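For the curious, the whole “move a droplet” dance can also be driven through the DigitalOcean v2 API. The sketch below is my rough reading of that flow (snapshot, transfer the image, recreate the droplet); the token, IDs, region and size slugs are placeholders, and every action is asynchronous, so in practice you would poll each returned action until it completes.

```python
# Hedged sketch of moving a droplet via the DigitalOcean v2 API.
# Token, IDs, region and size slugs are placeholders.
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def snapshot_droplet(droplet_id, name):
    r = requests.post(f"{API}/droplets/{droplet_id}/actions",
                      headers=HEADERS, json={"type": "snapshot", "name": name})
    r.raise_for_status()
    return r.json()["action"]

def transfer_image(image_id, region):
    # Moves the snapshot image to another data center, e.g. "ams2"
    r = requests.post(f"{API}/images/{image_id}/actions",
                      headers=HEADERS, json={"type": "transfer", "region": region})
    r.raise_for_status()
    return r.json()["action"]

def create_droplet(name, region, image_id, size="512mb"):
    # Size slug is a placeholder; valid slugs change over time
    r = requests.post(f"{API}/droplets", headers=HEADERS,
                      json={"name": name, "region": region,
                            "size": size, "image": image_id})
    r.raise_for_status()
    return r.json()["droplet"]
```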

It’s easy, fast and cool

My process was the following (a scripted sketch appears right after the list):

  • create a droplet in the USA
  • measure the performance using gtmetrix from Dallas
  • measure the performance using gtmetrix from London
  • measure the performance using gtmetrix from São Paulo
  • measure the performance using gtmetrix from Sydney
  • move the droplet to Europe (Amsterdam)
  • measure the performance as above
  • move the droplet to Singapore
  • measure the performance as above
  • pour myself some tea (mandatory)
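Rather than clicking through the GTmetrix UI for every combination, the runs can be scripted against the GTmetrix REST API. In the sketch below the credentials, the location IDs and the result field names are assumptions based on my recollection of the 0.1 API; check the GTmetrix API documentation before relying on them.

```python
# Hedged sketch: submit a GTmetrix test per location and poll until done.
# Credentials, location IDs and result field names are placeholders.
import time
import requests

AUTH = ("you@example.com", "your-api-key")   # HTTP basic auth
API = "https://gtmetrix.com/api/0.1"

LOCATIONS = {"Dallas": "1", "London": "2", "São Paulo": "3", "Sydney": "4"}

def run_test(url, location_id):
    r = requests.post(f"{API}/test", auth=AUTH,
                      data={"url": url, "location": location_id})
    r.raise_for_status()
    poll_url = r.json()["poll_state_url"]
    while True:                               # poll until the test finishes
        state = requests.get(poll_url, auth=AUTH).json()
        if state["state"] in ("completed", "error"):
            return state
        time.sleep(10)

for city, location_id in LOCATIONS.items():
    report = run_test("http://203.0.113.10/", location_id)  # droplet by IP
    print(city, report.get("results", {}).get("page_load_time"))
```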

The test environment

According to the Google guidelines:

You should reduce your server response time under 200ms. There are dozens of potential factors which may slow down the response of your server: slow application logic, slow database queries, slow routing, frameworks, libraries, resource CPU starvation, or memory starvation.

In order to mitigate those factors, the test environment included:

  • Droplet: 512MB RAM / 1 CPU / 20 GB SSD disk
  • Configuration: Ubuntu Linux 14.04 x64, pre-configured
  • A single responsive (Bootstrap) PHP page
  • Some text and 3 JPG images (50 KB each)
  • The PHP page ran a single query against a SQLite database to populate a dropdown list.
  • Since Digital Ocean maps each droplet to a public IP address, I was able to call it directly, with no DNS resolution involved.
  • Each hosting/client combination was checked 5 times over the course of an hour, to smooth out temporary network problems (see the sketch after this list).
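As a concrete example of that last point, here is how the five spaced checks could be scripted; the droplet IP is a placeholder and reporting the median is my own choice.

```python
# Sketch of the repetition protocol: five timed fetches spread across
# one hour, reduced to a median so one congested moment can't skew it.
import statistics
import time
import urllib.request

def timed_fetch(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()                      # pull the full page body
    return (time.perf_counter() - start) * 1000  # milliseconds

samples = []
for i in range(5):
    samples.append(timed_fetch("http://203.0.113.10/"))
    if i < 4:
        time.sleep(15 * 60)              # wait 15 minutes between runs

print(f"median load time: {statistics.median(samples):.0f} ms")
```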

The results

You’d better stay close

Best-case scenarios per city

  • User in São Paulo -> Hosting in San Francisco
  • User in Sydney -> Hosting in Singapore
  • User in Dallas -> Hosting in San Francisco
  • User in London -> Hosting in Amsterdam

Worst-case scenarios per city

  • User in São Paulo -> Hosting in Singapore
  • User in Sydney -> Hosting in Amsterdam
  • User in Dallas -> Hosting in Singapore
  • User in London -> Hosting in Singapore

Final considerations

My study doesn’t take several aspects into consideration, like rendering time or the use of a CDN to parallelize content loading.

What I wanted to do was evaluate whether having a server close to your customers may drastically change the perceived speed of the website.

The answer is “yes”, of course, and it’s backed by real data.

So, in my opinion, you should choose a host which is not only quick but also close to your target audience!