Global Provider View

Considering using an infrastructure as a service (IaaS) or platform as a service (PaaS) offering to host your web applications? Are you concerned about performance and availability? Use the Global Provider View to compare the cloud performance of PaaS and IaaS providers as we continuously monitor a sample application running in each of the top cloud computing service providers around the world. See firsthand, in real time, how well the sample application performs over time from the various Internet backbone locations shown here.

Methodology

Overview

CloudSleuth's Global Provider View application was initially created as an internal resource to help us understand the reliability and consistency of the most popular public Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) cloud providers. Frankly, we had grown tired of the claims and counterclaims made by all the industry “experts.” We needed real data to make better decisions about how to deploy applications into the cloud. The results were so enlightening that we decided to make the tool available to a wider audience.

The Global Provider View uses the Gomez Performance Network (GPN) to measure the performance of an identical sample application running on several popular cloud service providers. One of the reasons Gomez has developed a worldwide reputation for quality and impartiality is that it clearly and unambiguously defines the methodology used for each of its benchmarks. CloudSleuth subscribes to the same open methodology in its performance visualization practices.

While it uses the same tools and techniques as Gomez’s formal benchmarks, the Global Provider View is a near real-time visualization tool rather than a benchmark. Unlike benchmarks, which are published periodically, the Global Provider View provides users with a continuously updating view of the performance of cloud service providers. Performance data is algorithmically checked and filtered to ensure accuracy and consistency, as described below, but users should be aware that, because of the real-time nature of the visualization, data that is unrepresentative of typical performance can occasionally appear.

Approach

Global Provider View's approach is conceptually simple. We deploy an identical “target” application to each cloud platform. The GPN then runs test transactions against the deployed target applications and monitors their response time and availability from various points around the globe. Hundreds of data points from each successive test run are collected and aggregated into a cloud performance database, and the Global Provider View application lets users visually interact with that data.
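To make the data flow concrete, the sketch below shows the kind of record a single test run might contribute to the cloud performance database. The Measurement record and its field names are hypothetical illustrations of the quantities described above, not CloudSleuth's actual schema; the later sketches in this methodology reuse it.

```java
import java.time.Instant;

// Hypothetical shape of one collected data point; not CloudSleuth's actual schema.
public record Measurement(
        String provider,        // e.g. "Amazon EC2"
        String backboneNode,    // e.g. "Boston, MA"
        Instant runAt,          // when the test transaction executed
        double responseTimeMs,  // end-to-end time for the full transaction
        boolean succeeded       // false on an HTTP error or timeout
) {}
```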

A global perspective is essential when evaluating service provider performance, but it is often best to start from a regional one. The Global Provider View provides both. If the ISP you use to access the Web is located in North America, South America, Europe, or Asia, your first map view will be a regionalized perspective. (Your geographic location is identified using MaxMind's GeoIP database.) Users coming from other regions start from the global perspective. To see data from another regional perspective, or from the global perspective, simply change the location with the "locations" pull-down at the top of the page.
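As an illustration of that geolocation step, the sketch below maps a visitor's IP address to a starting map view. The page only says MaxMind's GeoIP database is used; the GeoIP2 Java API, the database file name, and the continent-to-view mapping shown here are assumptions made for the sketch.

```java
import java.io.File;
import java.net.InetAddress;
import com.maxmind.geoip2.DatabaseReader;
import com.maxmind.geoip2.model.CityResponse;

public class RegionSelector {
    // Picks the visitor's starting map view from their IP address.
    // The GeoIP2 API and database file are assumptions; the methodology
    // states only that MaxMind's GeoIP database is used.
    public static String startingView(String visitorIp) throws Exception {
        DatabaseReader reader = new DatabaseReader.Builder(
                new File("GeoLite2-City.mmdb")).build();
        CityResponse response = reader.city(InetAddress.getByName(visitorIp));
        String continent = response.getContinent().getName();
        if (continent == null) return "Global";
        switch (continent) {
            case "North America":
            case "South America":
            case "Europe":
            case "Asia":
                return continent;  // start from the regional perspective
            default:
                return "Global";   // all other visitors start global
        }
    }
}
```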

Performance calculations are based on the region's backbone nodes. For example, the response time averages displayed in the "Provider Response Time" list for North America are based on data retrieved from the 17 backbone nodes that make up the region. European averages are based on the 9 nodes in that region. The global perspective includes data from all 38 backbone nodes.
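In code, the regional roll-up could look like the sketch below: a pooled mean of all successful measurements taken from the region's backbone nodes. It reuses the hypothetical Measurement record from the Approach section, and the pooled (rather than per-node-weighted) mean is an assumption.

```java
import java.util.List;
import java.util.Set;

public class RegionalAverage {
    // Mean response time for one provider across a region's backbone nodes.
    // Pools all successful tests; the exact weighting is an assumption.
    public static double regionalResponseTimeMs(List<Measurement> data,
                                                Set<String> regionNodes,
                                                String provider) {
        return data.stream()
                .filter(m -> m.provider().equals(provider))
                .filter(m -> regionNodes.contains(m.backboneNode()))
                .filter(Measurement::succeeded)
                .mapToDouble(Measurement::responseTimeMs)
                .average()
                .orElse(Double.NaN); // no successful tests in the window
    }
}
```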

Global Provider View “Target” Application

In creating a target application for the Global Provider View, we wanted to ensure the test application could be deployed to each provider without modification. It also needed to be a representative proxy for a type of application very commonly deployed to cloud service providers. Finally, the test application had to be relatively small, yet still give us sufficient feedback to make monitoring practical.

We decided to begin by instantiating a very simple simulated retail shopping site as the target application. The “site” consists of two pages. The first page is a list of 40 item descriptions and associated images. Each image is a small (approximately 4K) JPEG file. The second page contains a single large (1.75MByte) JPEG image. The test script directly navigates between the two pages of the site, rendering each page in full. The test is intended to simulate a user browsing a product catalog and viewing a single product image in detail.
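The sketch below shows, in schematic form, what such a two-step test transaction measures, using Java's built-in HttpClient. The URLs are placeholders, and a real GPN test also fetches and renders every referenced image, script, and stylesheet, which this simplified sketch omits.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TwoPageTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder URLs; every provider hosts an identical copy of the site.
        String[] pages = {
            "http://example-provider.test/catalog", // page 1: 40 items, ~4K JPEGs
            "http://example-provider.test/detail"   // page 2: one 1.75 MB JPEG
        };
        long start = System.nanoTime();
        for (String page : pages) {
            HttpResponse<byte[]> resp = client.send(
                    HttpRequest.newBuilder(URI.create(page)).GET().build(),
                    HttpResponse.BodyHandlers.ofByteArray());
            if (resp.statusCode() != 200) {
                System.out.println("Failed: " + page + " -> " + resp.statusCode());
                return; // a non-200 response fails the whole transaction
            }
            // A real test would now fetch the page's referenced objects
            // (images, JavaScript, CSS) before moving to the next step.
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("End-to-end response time: " + elapsedMs + " ms");
    }
}
```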

The choice of a web site as the initial target application should be seen as a first step to understanding the availability, responsiveness and consistency of cloud service providers. While admittedly monochromatic (especially in light of the richness of services provided by cloud providers), the choice reflects the observation that the majority of modern applications rely on the Internet protocols as their transport mechanism. It enables us to create a relatively small and simple application that still gives us great insight into the core performance of cloud service providers. Just as importantly, it can be easily implemented on both PaaS and IaaS cloud providers.

Our approach to implementing the target application stressed parity. Where absolute parity was not possible because of the inherent differences between IaaS and PaaS service providers, we chose the implementation practice recommended by the service provider's publicly available documentation. The content of the web site is identical for all implementations. The table in the Provider Configurations section below summarizes the infrastructure components used.

(Note: As our test is focused on page delivery times, not compute performance, server configuration has little impact on the results of our test - the reference application is composed of static pages that require only file retrieval. In most cases, we select the default machine image available from the provider – most often their smallest.)

Reported Metrics

Global Provider View depicts two basic user experience metrics – response time and availability – as measured by the Gomez Performance Network.

Response Time

Response time is the total time elapsed while downloading both web pages in the multi-step test transaction. Each page’s end-to-end response time includes the page’s root object as well as all referenced image objects, JavaScript, Cascading Style Sheets, and any other related content. For aggregate reporting results, the response time is the average response time of all successfully completed tests over the period.

Availability

Availability measures the percentage of test transactions that completed successfully out of the set of transactions attempted. An unsuccessful test transaction is one that returns a status code other than "200", encounters some other critical error, or fails to download a page within the maximum allowable time frame (currently 60 seconds). If a measurement period contained 100 total tests – 99 successful and 1 failed – the provider's availability for that period would be 99 percent.
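The calculation itself is straightforward, as in this sketch (again reusing the hypothetical Measurement record from the Approach section):

```java
import java.util.List;

public class Availability {
    // Percentage of attempted test transactions that completed successfully.
    public static double availabilityPercent(List<Measurement> samples) {
        if (samples.isEmpty()) return Double.NaN; // nothing attempted
        long ok = samples.stream().filter(Measurement::succeeded).count();
        return 100.0 * ok / samples.size();       // 99 of 100 -> 99.0
    }
}
```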

Range Value

The range value is calculated from the aggregate values of the backbone nodes that have been selected or, by default, from the first five checked providers in the cloud provider list. It summarizes the spread of values across providers at a given backbone node for the given geographical location, as sketched below.
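The description above leaves the exact formula open. One plausible reading, sketched under that assumption, is the spread between the lowest and highest aggregate response times among the selected providers; treat this as an illustration of the concept rather than CloudSleuth's actual computation.

```java
import java.util.Collection;
import java.util.DoubleSummaryStatistics;

public class RangeValue {
    // Assumed interpretation: the min-to-max spread of the selected providers'
    // aggregate response times at a backbone node. The methodology does not
    // publish the exact formula, so this is illustrative only.
    public static double rangeMs(Collection<Double> providerAveragesMs) {
        if (providerAveragesMs.isEmpty()) return Double.NaN;
        DoubleSummaryStatistics s = providerAveragesMs.stream()
                .mapToDouble(Double::doubleValue)
                .summaryStatistics();
        return s.getMax() - s.getMin();
    }
}
```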

Reporting Windows

The Global Provider View reports its metrics as moving averages over a user-selectable reporting window. Currently, four reporting windows are available: 6 hours, 24 hours, 7 days, and 30 days.

Moving averages are calculated from a periodic sample of the most recently available measurements. Test transactions are executed continuously from selected backbone nodes in the Gomez Performance Network, and the results are stored in the Gomez Performance Data Warehouse. The Global Provider View's application server samples the data warehouse periodically (currently, once every 5 minutes) and tabulates new results. The most recent results form the sample set.

The size of a sample set varies with the availability of the cloud providers and the periodic sample rate, so the average is calculated over however many measurements the sample set actually contains.
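A minimal sketch of the windowing logic follows: every five minutes, keep only the measurements inside the selected reporting window and recompute the average. Only the 5-minute sample period and the four window lengths come from the text above; the scheduling code and the warehouse accessor are placeholders.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WindowedSampler {
    // The four user-selectable reporting windows.
    static final Duration[] WINDOWS = {
        Duration.ofHours(6), Duration.ofHours(24),
        Duration.ofDays(7), Duration.ofDays(30)
    };

    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Re-sample the data warehouse every 5 minutes, per the methodology.
        timer.scheduleAtFixedRate(WindowedSampler::tabulate, 0, 5, TimeUnit.MINUTES);
    }

    static void tabulate() {
        List<Measurement> all = fetchRecentMeasurements();
        Instant now = Instant.now();
        for (Duration window : WINDOWS) {
            Instant cutoff = now.minus(window);
            double avg = all.stream()
                    .filter(m -> m.runAt().isAfter(cutoff)) // inside the window
                    .filter(Measurement::succeeded)
                    .mapToDouble(Measurement::responseTimeMs)
                    .average()
                    .orElse(Double.NaN); // sample-set size varies, as noted above
            System.out.println(window + ": " + avg + " ms");
        }
    }

    // Placeholder: the real access path to the Gomez Performance Data
    // Warehouse is not public.
    static List<Measurement> fetchRecentMeasurements() {
        return List.of();
    }
}
```

For a sense of scale: since each of the 38 backbone nodes runs four tests per hour against each target instance (see The Gomez Performance Network below), a fully available instance would contribute roughly 38 × 4 × 6 = 912 measurements to a 6-hour global window.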

The Gomez Performance Network

Test transactions are continuously run against test targets using the Gomez Performance Network (GPN). CloudSleuth's Global Provider View uses Gomez Active Backbone nodes.

Gomez Active Backbone Nodes

Gomez Active Backbone Nodes are enterprise-class servers located in data centers with high-bandwidth, direct connections to the Internet backbone. Because these nodes are resource-managed and use high-bandwidth connections, they generate highly accurate and consistent test loads with little network-induced variability.

The Global Provider View runs test transactions from 38 backbone nodes located around the world: 17 nodes in the U.S. and 21 outside of it. The distribution of the nodes within the U.S. is designed to be representative of the six geographic regions described at http://www.fcc.gov/oet/info/maps/areas/. Specifically, these zones are defined as:

  • Zone 1: Northeast
  • Zone 2: Mid-Atlantic
  • Zone 3: Southeast
  • Zone 4: Great Lakes
  • Zone 5: Central/Mountain
  • Zone 6: Pacific

US Backbone Node Locations by Zone

 

Zone 1 (Northeast): Boston, MA; New York, NY
Zone 2 (Mid-Atlantic): Philadelphia, PA; Reston, VA
Zone 3 (Southeast): Atlanta, GA
Zone 4 (Great Lakes): St. Louis, MO; Chicago, IL
Zone 5 (Central/Mountain): Kansas City, MO; Mesa, AZ; Dallas, TX; Houston, TX
Zone 6 (Pacific): San Jose, CA; Denver, CO; Los Angeles, CA; San Diego, CA; Seattle, WA

Global Backbone Node Locations

Argentina: Buenos Aires
Australia: Sydney
Brazil: Sao Paulo
Canada: Toronto
China: Beijing
China: Chengdu
China: Shanghai
Denmark: Copenhagen
Finland: Helsinki
France: Paris
Germany: Berlin
Germany: Frankfurt
Hong Kong: Quarry Bay
India: Bangalore
India: Mumbai
Japan: Tokyo
Norway: Oslo
Switzerland: Bern
Turkey: Istanbul
United Kingdom: London

Each backbone node runs four test transactions per hour against each target application instance. The Gomez Universal Transaction Agent (UTA) is used for all transactions.

Provider Configurations

The following table summarizes the infrastructure components used:

Provider | Locations | Data Centers | Configuration
Amazon EC2 | US East - Virginia; US West - California; EU - Ireland; Asia - Singapore; Asia - Japan | Ashburn (Zone 1a); Palo Alto (Zone 1a); Dublin (Zone eu-west-1); Singapore (Zone ap-southeast-1); Tokyo (Zone ap-northeast-1) | Tomcat 6.0.24; Default Configuration (Small Instance); Content stored in AMI
BitRefinery | US Central - Colorado | Denver | Tomcat 6.0.24; Default Configuration; Content stored in machine image
BlueLock | US Central - Indiana | Indianapolis | Tomcat 6.0.24; Default Configuration; Content stored in machine image
BT Global Services | EU - France; EU - Italy; EU - England; EU - Netherlands; EU - Spain | Paris; Milan; London; Nieuwegein; Madrid | Tomcat 6.0.24; Default Configuration; Content stored in machine image
City Cloud | EU - Sweden | Karlskrona | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Claris Networks | US South - Tennessee | Knoxville | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Cloud Sigma | EU - Switzerland | Zurich | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Codero | US South - Arizona | Phoenix | Tomcat 6.0.24; Default Configuration; Content stored in machine image
eApps | US East - Georgia | Atlanta | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Elastic Hosts | US South - Texas | San Antonio | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Elastic Hosts BlueSquare | EU - England | London | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Elastic Hosts Peer | EU - England | London | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Flexiscale | EU - Scotland | Livingston | Tomcat 6.0.24; Default Configuration; Content stored in machine image
GoGrid East | US East - Virginia | Ashburn | Tomcat 6.0.24; Default Configuration; Content stored in machine image
GoGrid West | US West - California | San Francisco | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Google App Engine | - | - | Platform-specific
Green House Data | US West - Wyoming | Cheyenne | Tomcat 6.0.24; Default Configuration; Content stored in machine image
IIJ GIO | Asia - Japan | Tokyo | Tomcat 6.0.26
iland Cloud | US South - Texas | Dallas | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Joyent | US West - California | San Francisco | Tomcat 6.0.24; Default Configuration; Content stored in machine image
OpSource | US East - Virginia; US West - California | Ashburn; San Jose | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Qube | EU - England; EU - Switzerland; US East - New York | London; Zurich; New York | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Rackspace | EU - England; US South - Texas | London; Dallas | Tomcat 6.0.24; Default Configuration; Content stored in machine image
ReliaCloud | US Central - Minnesota | St. Paul | Tomcat 6.0.24; Default Configuration; Content stored in machine image
SoftLayer | US East - Virginia; US South - Texas; US West - California; US West - Washington | Washington D.C.; Dallas; San Jose; Seattle | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Teklinks | US South - Alabama | Birmingham | Tomcat 6.0.24; Default Configuration; Content stored in machine image
TekLinks2 | US South - Alabama | Birmingham | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Terremark | US East - Florida | Miami | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Tier3 | US West - Washington | Seattle | Tomcat 6.0.24; Default Configuration; Content stored in machine image
VPS.net | Asia - Japan | Tokyo | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Voxel | Asia - Singapore; EU - Netherlands; US East - New York; US West - California | Singapore; Amsterdam; New York; San Jose | Tomcat 6.0.24; Default Configuration; Content stored in machine image
Windows Azure | EU - Ireland; Asia - Singapore; US Central - Illinois | Dublin; Singapore; Chicago | Platform-specific