The amount of time it takes for your website to load is one of the most important user experience factors, and because of this, in 2010 Google made page speed a factor (although a very small one) in its search ranking algorithm. Since then, it has become increasingly important.
To test the speed of your website, install the YSlow browser plug-in from Yahoo and the PageSpeed browser plug-in from Google, then run your site through those tools. You will receive a list of areas where your site can improve, with detailed information about each.
There are many things you can do to speed up the loading of your website, and in this lesson we will look at some of the most important ones. The following suggestions are drawn from Yahoo's YSlow recommendations.
When looking to speed up your website, the best results are typically achieved through custom development work addressing the points listed in the expandable sections below. However, if you are looking for a solution that is fast, effective and inexpensive, you may want to first try one or even several of these existing modules to improve the loading time of your webpages.
Note: Ideally you will test these modules on a development version of your website to ensure that they work before pushing these types of changes to your live site. You should always fully back up your website before implementing any new modules, especially ones that attempt to make these kinds of performance improvements. That way, if your website breaks after implementing any of these modules, you can simply reload the backup file.
Apache mod_pagespeed: If your website is running on an Apache server, you may want to try installing this module created by Google to deliver many of their recommended site performance improvements. See Google's Apache mod_pagespeed.
If using Drupal:
- Drupal Boost Module: This is a great option if you are only looking to optimize front end performance for anonymous visitors.
- Memcache: This is an advanced solution that does require some extra software and will not work on a shared server, but the performance boost is well worth it.
- Varnish Cache: Like Memcache, Varnish Cache requires additional software and some technical investment, however it is lightning fast and can allow your site to serve 3,000 pages per second. Not too shabby!
If using WordPress:
When a visitor first loads your webpage, their browser must load every separate component of that page: every script, every image, every referenced schema or tool. Each of these is a separate HTTP request, and it is estimated that 80% of end-user response time is spent on this front-end loading. Therefore, the single most important task in increasing page speed is minimizing HTTP requests. This can be achieved by simplifying the page design; combining multiple CSS and JS files into fewer files; using CSS sprites for background images; and inlining images. Different CMS platforms may offer different solutions for achieving this.
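The combining step above can be sketched in a few lines of Python. The file names and stylesheet contents here are hypothetical; a real build step would also minify the result:

```python
from pathlib import Path
import tempfile

def combine_assets(paths, output_path):
    """Concatenate several CSS (or JS) files into a single file so the
    browser makes one HTTP request instead of many."""
    combined = "\n".join(Path(p).read_text() for p in paths)
    Path(output_path).write_text(combined)
    return output_path

# Demo with two throwaway stylesheets (hypothetical content).
tmp = Path(tempfile.mkdtemp())
(tmp / "reset.css").write_text("body { margin: 0; }")
(tmp / "layout.css").write_text(".main { width: 960px; }")
combined = combine_assets([tmp / "reset.css", tmp / "layout.css"], tmp / "site.css")
```

Serving the single `site.css` instead of the two originals cuts one HTTP request; on pages with dozens of stylesheets and scripts the savings add up quickly.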
For smaller sites, the cost of a content delivery network can be prohibitive. However, as your user base grows, or for larger sites, using a content delivery network to serve your page components, such as images, stylesheets and scripts, from a server closer to the end user can improve page load time by more than 20%.
Going deeper into redesigning your web application to work in a distributed architecture can be a daunting task with unforeseen complications that is only worth it to the largest of companies. However, implementing a CDN for delivery of static components is a relatively easy code change that will give your site a significant performance boost.
The first time a visitor to your site loads a page, they have to load every component from scratch. However, as that visitor browses around your site, many components, such as images and scripts, remain the same. Therefore, a lot of loading time can be saved by properly caching components that can be reused from page to page.
There are two aspects to this:
- For static components, implement an Expires header set far into the future, perhaps a month out.
- For dynamic components, use an appropriate Cache-Control header to help the browser with conditional requests.
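A minimal Python sketch of these two header policies; the one-month lifetime and the exact header values are illustrative defaults, not prescriptions:

```python
from datetime import datetime, timedelta, timezone

def caching_headers(static=True, max_age_days=30):
    """Build response headers for the two cases above: long-lived caching
    for static components, revalidation for dynamic ones."""
    if static:
        expires = datetime.now(timezone.utc) + timedelta(days=max_age_days)
        return {
            "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
            "Cache-Control": f"public, max-age={max_age_days * 86400}",
        }
    # Dynamic components: cacheable, but the browser must revalidate.
    return {"Cache-Control": "no-cache, must-revalidate"}
```

In practice these headers are usually set in the web server configuration rather than in application code, but the logic is the same.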
(Taken directly from Yahoo’s YSlow recommendations) The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.
Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.
- Accept-Encoding: gzip, deflate
- Content-Encoding: gzip
Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.
Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module for configuring gzip depends on your version: Apache 1.3 uses mod_gzip, while Apache 2.x uses mod_deflate.
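You can see the effect with Python's standard gzip module. The sample markup below is made up, and highly repetitive HTML compresses better than average, so treat the result as a demonstration rather than a benchmark:

```python
import gzip

# Hypothetical, repetitive HTML response body.
html = b"<html><body>" + b"<p>Lorem ipsum dolor sit amet.</p>" * 200 + b"</body></html>"

compressed = gzip.compress(html)
savings = 1 - len(compressed) / len(html)
# `savings` is the fraction of bytes the client no longer has to download.
```

The server compresses once, sends the `Content-Encoding: gzip` response, and the browser decompresses transparently.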
Putting CSS at the top of the source code gives a webpage the appearance of loading faster because the page can render progressively within the desired layout. This provides a much better user experience, and while it may not affect the total time it takes for the entire page to load, it allows users to begin interacting with your page much sooner, which leads to better engagement metrics such as time on site and pageviews.
(Taken directly from Yahoo’s YSlow recommendations) The problem with scripts is that they hinder parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.
In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.
An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.
CSS expressions are terrible. Do not use them. They are re-evaluated whenever the page is loaded, scrolled, resized or interacted with in any way, so they constantly tax the browser.
Avoid inline scripting. CSS and JS should live in external files, and to keep requests to a minimum, those files should be minified and combined whenever possible. This can go a long way toward speeding up the loading of your page.
(Taken directly from Yahoo’s YSlow recommendations) The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.example.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to lookup the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.
DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.
Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)
When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.
Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
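A quick way to audit this is to count the unique hostnames among a page's resource URLs, since each one costs a DNS lookup on a cold cache. The URLs below are hypothetical:

```python
from urllib.parse import urlparse

def unique_hostnames(resource_urls):
    """Each unique hostname on a page costs one DNS lookup when the
    client's DNS caches are empty."""
    return {urlparse(u).hostname for u in resource_urls}

# Hypothetical resources referenced by one page.
page_resources = [
    "http://www.example.com/index.html",
    "http://static1.example.com/logo.png",
    "http://static1.example.com/app.js",
    "http://static2.example.com/style.css",
]
hosts = unique_hostnames(page_resources)
```

This page costs three DNS lookups on a cold cache, which sits inside the two-to-four-hostname guideline above.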
Ensure that your pages do not include the same JS file more than once. While this sounds like a no-brainer, it is unfortunately quite common.
(Taken directly from Yahoo’s YSlow recommendations) Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for a "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component's ETag using the ETag response header.
- HTTP/1.1 200 OK
- Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
- ETag: "10c24bc-4ab-457e1c1f"
- Content-Length: 12195
Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.
- GET /i/yahoo.gif HTTP/1.1
- Host: us.yimg.com
- If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
- If-None-Match: "10c24bc-4ab-457e1c1f"
- HTTP/1.1 304 Not Modified
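The server side of this exchange can be sketched in Python. This is a simplified model of the decision, not a real server; the ETag value reused below is the one from the example headers above:

```python
def handle_conditional_get(request_headers, current_etag, body):
    """Return (status, headers, body) for a GET, honoring If-None-Match.

    If the client's cached ETag still matches, answer 304 and send no body;
    otherwise send the full component with its current ETag.
    """
    if request_headers.get("If-None-Match") == current_etag:
        return 304, {"ETag": current_etag}, b""
    return 200, {"ETag": current_etag}, body

etag = '"10c24bc-4ab-457e1c1f"'
gif_bytes = b"GIF89a..."  # stand-in for the real image data

# Client's cached copy is still valid: nothing is re-sent.
hit = handle_conditional_get({"If-None-Match": etag}, etag, gif_bytes)

# Client has a stale (or no) ETag: full response goes out.
miss = handle_conditional_get({"If-None-Match": '"stale"'}, etag, gif_bytes)
```

Real servers perform this comparison for you; the sketch just shows why a matching ETag means the component's bytes never cross the network.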
The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.
The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.
IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.
The end result is ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.
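For illustration, here is a Python sketch of Apache's default ETag recipe (hex-encoded inode, size and timestamp joined by dashes). The inode values are invented; the point is that two servers holding byte-identical copies of the same file still disagree:

```python
def apache_style_etag(inode, size, mtime):
    """Approximate Apache's default FileETag format: hex inode, size and
    modification time, dash-separated. The inode is server-specific."""
    return '"%x-%x-%x"' % (inode, size, mtime)

# Same file content, size and timestamp on two servers, different inodes
# (hypothetical values; the first matches the example ETag above).
etag_server_a = apache_style_etag(0x10C24BC, 0x4AB, 0x457E1C1F)
etag_server_b = apache_style_etag(0x20F91D3, 0x4AB, 0x457E1C1F)
```

Because only the inode differs, every cross-server validation fails and the client re-downloads a component it already has.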
If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:
- FileETag none
To improve performance, it's important to optimize Ajax responses as well. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Some of the other rules also apply to Ajax:
- Gzip Components
- Reduce DNS Lookups
- Avoid Redirects
- Configure ETags
Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.
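The timestamped-URL technique from this example can be sketched in a couple of lines of Python. The endpoint path is hypothetical, and the timestamps reuse the example value from the text:

```python
def address_book_url(base, last_modified_ts):
    """Append the address book's last-modified timestamp to the Ajax URL.

    The URL only changes when the data does, so an unchanged address book
    is served from the browser cache with no HTTP round trip.
    """
    return f"{base}?t={last_modified_ts}"

url_first = address_book_url("/ajax/addressbook", 1190241612)
url_unchanged = address_book_url("/ajax/addressbook", 1190241612)  # cache hit
url_after_edit = address_book_url("/ajax/addressbook", 1190245000)  # new fetch
```

Any value that changes exactly when the underlying data changes (a version number, a content hash) works equally well in place of the timestamp.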
Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.
For additional information about speeding up your website, visit: