A customer is visiting your e-commerce Web site. She’s decided to do more online shopping this holiday season to save on gasoline and find the lowest prices. She’s using a Mac running Safari, but your site is optimized for Internet Explorer (IE) 7 and your development budget is mainly focused on preparing for IE 8. She selects a few products and heads for the shopping cart, but the “checkout” button isn’t available. Frustrated, she’s off to another site. You’ve lost the sale.
IT managers are now working feverishly to avoid this type of incident, which underscores a current fact of life for Web site designers, Web application developers and your entire IT department: Web pages can look and perform differently from one browser to another.
With Internet Explorer’s dominance waning and Firefox, Safari, Google Chrome and others growing in market share, plus with multiple older browser versions still in use, managing browser diversity is becoming increasingly challenging. This is not your average browser transition.
The new generation of browsers signals a major change in the way browsers operate. In the future, when the entire Internet is better optimized for these new browsers, we'll all enjoy a faster, better-looking Web experience, sites that are easier to develop, and more exciting new applications. But the next year or two will be a difficult transition. Falling behind these changes will mean lost customers and lost traffic.
In the current economic environment, your business can’t afford to lose even the smallest portion of your audience who use browsers for which your site is not optimized. In this article we’ll review the significant changes in the 2008 class of browsers and detail the optimal site measurements needed to gauge the real impact on your Web business.
The 2008 Class of Browsers
Chrome and the latest versions of IE, Firefox and Safari were all designed as platforms to handle the ever-expanding generation of Web apps, where the browser is being asked to handle more of the processing load than ever before. Let's take a look at some key differences in the way these new browsers operate to get a clearer sense of their long-term promise and near-term pitfalls:
- Parallel Connections: IE6 and IE7 were designed to make two host connections at a time (e.g., two images load at once). The new IE8, Firefox and Google’s Chrome triple the number of parallel connections per host to speed the browser experience (the maximum number of concurrent connections is limited only by the host itself). But there’s a downside.
Tests against a Web site serving content from three hosts behind a firewall were run with both IE8's connection profile (18 parallel connections) and IE6's (six parallel connections). While the 18-connection mode was faster at times, the average response time over hundreds of tests was actually slower. Why? The infrastructure can't always keep up with the dramatic increase in simultaneous connections.
Now imagine your host server handling peak traffic with triple the number of connection hits. Is your server ready for this? What about those domain name system tweaks you did to increase parallel connections under the old browsers? Those short-term workarounds will have to be reconciled with the new browser connection schemes.
- Changes at the Presentation Layer: Developers certainly remember the transition to IE7 where certain CSS (cascading style sheets) hacks ceased to work. Microsoft was very proactive about preparing the developer community and issued dire warnings about the consequences of failing to test your site in the new browser. The transition to Internet Explorer 8, with its emphasis on Web standards, carries similar risks and opportunities.
These new browsers also offer new, simple-to-use features that developers can leverage to provide a much richer experience. Animations and other rich interactions that used to require a significant amount of often brittle code will now be readily available in CSS.
The larger trend of network processing moving to the edges of the cloud empowers developers to build increasingly rich applications. But it also means a poorly written application can have a significantly negative impact on the end user's machine through CPU and memory utilization. To make matters worse, tools like Chrome's Task Manager make it easy for users to see exactly which application is putting the most strain on the browser. Your users may not be quick to forgive if your application slows their systems.
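The connection-parallelism tradeoff described above can be illustrated with a toy model. This is a sketch, not a real benchmark: the resource count, server capacity ("slots") and congestion factor are all hypothetical numbers chosen only to show why tripling parallel connections can backfire when the infrastructure can't absorb them.

```python
import math

def page_load_time(resources, k, slots, t=0.1, congestion=0.05):
    """Toy model of total load time for `resources` equal-sized objects
    fetched over `k` parallel connections, against a server that can
    sustain `slots` full-speed transfers at once. Beyond that capacity,
    bandwidth is fair-shared and connection-management overhead
    (`congestion`) erodes effective capacity further."""
    batches = math.ceil(resources / k)
    # Excess connections beyond server capacity reduce effective throughput.
    overload = max(0, k - slots)
    effective = slots / (1 + congestion * overload)
    # Per-batch time: full speed if under capacity, else fair-shared.
    per_batch = t * max(1.0, k / effective)
    return batches * per_batch

# Hypothetical page: 36 objects, server comfortable with 6 transfers at once.
for k in (6, 18):
    print(k, "connections:", round(page_load_time(36, k, slots=6), 2), "s")
```

With these made-up parameters, the 6-connection profile finishes in 0.6 s while the 18-connection profile takes 0.96 s, mirroring the test result above: more parallelism only pays off when the back end can keep up.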
Measurements to Ensure a Quality End User Web Experience
Ensuring that your Web site looks and performs well in a multiple-browser world starts with some form of measurement and testing. The overall goal is to provide a quality Web experience for the end user, and to accomplish this consistently. Cross-browser testing can be divided into four main categories:
- The Visual — Does Your Web Site Look "Right"? The easiest measure is to simply take a look. Do graphics look the same across all browsers? How is text rendering? Are Web apps showing up in the right place? Any missing function buttons? Visual verification is a straightforward process, but with five or six major browser vendors and multiple versions of each, a tedious one.
- The Functional — Do Web Site Functions Work Correctly? Do your critical business functions work successfully across all browser and OS combinations? Or will you leave someone with a full shopping cart waiting, or unable to check out?
- Performance — Overall, How Does Your Site Perform? Three benchmarks can be used to assess end user Web application performance:
- Availability: shows whether a Web page or full end-to-end transaction requested by a user executes successfully, without error,
- Response Time: shows the speed at which each end-to-end transaction, page, image or piece of third-party content downloads,
- Consistency: shows the site’s ability to achieve a quality customer experience across multiple visits, regardless of the user’s geographic location.
As we've seen, consistency scores can be especially impacted by connection parallelism. More connections might mean better page load times when the site is under light load, but much worse performance under heavy load. Some users will hesitate to return to a site if they have to wonder whether this visit will be fast or slow.
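The three benchmarks above can be computed from any series of synthetic-monitoring samples. The sketch below, with invented sample data, treats availability as the success rate, response time as the mean of successful runs, and consistency as the spread (standard deviation) of those runs; real monitoring products may define these differently.

```python
import statistics

def summarize(samples):
    """Summarize monitoring samples for one page or transaction.
    Each sample is (succeeded: bool, response_time_seconds: float).
    Returns (availability %, mean response time, spread of times)."""
    ok_times = [t for success, t in samples if success]
    availability = 100.0 * len(ok_times) / len(samples)
    mean_rt = statistics.mean(ok_times) if ok_times else float("nan")
    # Lower spread = more consistent experience across visits.
    spread = statistics.pstdev(ok_times) if len(ok_times) > 1 else 0.0
    return availability, mean_rt, spread

# Hypothetical samples: three successes, one failed transaction.
samples = [(True, 1.0), (True, 3.0), (False, 0.0), (True, 2.0)]
availability, mean_rt, spread = summarize(samples)
print(f"availability={availability}%  mean={mean_rt}s  spread={spread:.2f}s")
```

Tracking the spread alongside the mean is what surfaces the parallelism problem: a site can post a good average response time while still feeling unreliable to users who hit the slow tail.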
- Perceived Performance — What the User Actually Sees: You are always optimizing for the user's actual experience. Unlike response time, which typically measures how long it takes for a page and all its components to load, perceived performance measures how long the page takes to look loaded: how long until the page stops visibly changing, and how long until all visible "above the fold" components finish loading.
The perceived performance metric essentially captures the time it takes for your application to be available for user interaction. This is usually affected by the size of the end user's browser window and by how the page is designed. If the layout causes "below the fold" components to load before the visible ones, the total load time of the page will be essentially the same, but the perceived performance will be much worse.
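The distinction between full load and perceived load can be sketched in a few lines. Assuming each page component's finish time and vertical position are known (hypothetical inputs here; real tools derive them from browser instrumentation), perceived load ends when the last above-the-fold component finishes, while full load waits for everything:

```python
def perceived_vs_full_load(components, viewport_height):
    """components: list of (finish_time_s, top_y_px) per page component.
    Perceived load = when everything above the fold has rendered.
    Full load = when the last component, anywhere on the page, finishes."""
    above_fold = [t for t, y in components if y < viewport_height]
    perceived = max(above_fold) if above_fold else 0.0
    full = max(t for t, _ in components)
    return perceived, full

# Hypothetical page: header and hero finish quickly; a footer widget
# 2000 px down the page trails far behind.
components = [(0.8, 0), (1.2, 300), (4.5, 2000)]
print(perceived_vs_full_load(components, viewport_height=600))
```

Here the page "feels" done at 1.2 seconds even though full load takes 4.5 seconds; a layout that reversed those finish times would report the same full-load number but feel far slower.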
Using this metric offers the optimal mindset for success in Web application performance testing: understanding that a Web application isn’t what the developer builds, it is what the end user sees. So moving your testing beyond your firewall into the end user’s browser is key. You must know exactly what your end users are experiencing across multiple browsers.
Once you’ve determined your customers’ browser preferences, using tools and approaches based on the parameters outlined above will help you make the right development and optimization choices as you navigate what may be the most challenging browser transition since the birth of the Web.
Imad Mouline is the chief technology officer of Gomez, a provider of Web application experience management services.