
VS2010 Load Testing for Distributed and Heterogeneous Applications

Microsoft added new interfaces for performance management solutions

Visual Studio 2010 is almost here – Microsoft just released the first Release Candidate, which looks solid. Microsoft added new interfaces for performance management solutions like dynaTrace to extend the Web- and Load-Testing capabilities (check out Ed Glas’s blog on what’s in VSTS Load Testing), going beyond .NET environments and deeper than what the Load Testing Reports tell you about the performance of the tested application.

But before we go into what can be done by extending Visual Studio – let’s have a look at what we get out of the box:

Standard Load Testing Reports from Visual Studio 2010
While running a load test, Visual Studio 2010 collects all sorts of information: the response times of the executed requests, performance counters of the tested application infrastructure (CPU, memory, I/O, …) and the health of your load-testing infrastructure (load controller and agents). In my scenario I run a 4-tier (2 JVMs, 2 CLRs) web application. The 4 tiers communicate via SOAP Web Services (Axis->ASMX). The frontend web application is implemented using Java Servlets. I run a 15-minute load test with increasing load. The test is structured into multiple different transactions, e.g.: Home Page, Search, Login, BuyDirect, … While running my test I also monitor all relevant performance counters from the application server and the load-testing infrastructure. Visual Studio 2010 allows me to monitor the current state of the load test via configurable graphs as shown here:

Visual Studio Load Testing Graphs

The graphs show that response times of some (not all) of my transactions increase with increasing user load. It also highlights that CPU usage on my application server became a problem (exceeds 80% with ~20 concurrent users). At the end of the load test a summary report highlights what load was executed against the application – which errors happened and which pages performed slowest:
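The “increasing load” behind these graphs is a stepped user ramp. A minimal sketch of how the number of concurrent virtual users at a given second can be computed for a step pattern – the start value, step size, step length and cap below are made up, not the actual run configuration of my test:

```java
// Sketch of a stepped virtual-user ramp like the one a load test drives.
// Parameters (start users, step size, step length, cap) are illustrative.
public class StepLoad {
    public static int usersAt(int second, int start, int step, int stepSeconds, int max) {
        int users = start + (second / stepSeconds) * step; // one step added per interval
        return Math.min(users, max);                       // never exceed the cap
    }

    public static void main(String[] args) {
        // e.g. start with 5 users, add 5 every 60 seconds, cap at 50
        for (int t = 0; t <= 300; t += 60) {
            System.out.println("t=" + t + "s users=" + usersAt(t, 5, 5, 60, 50));
        }
    }
}
```

With a pattern like this, hitting a CPU ceiling at roughly 20 concurrent users pins the problem to a specific step of the ramp.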

Load Testing Summary Report

Switching to the Tables view gives a detailed breakdown into individual result dimensions, e.g. Transactions, Pages, Errors, …:

Visual Studio Load Testing Summary Tables

From the table view we can make the following observations:

  • 553 page requests exceeded my rule of 200ms per page
  • The 553 slow pages were menu.do, netpay.do and userlogin.do (you can see this when you look at the individual error requests)
  • The LastMinute transaction was by far the slowest with 1.41s average response time and a max of 5.64s
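The figures in these observations – average, maximum and rule violations – are simple aggregations over the individual response-time samples. A minimal sketch of that bookkeeping; the transaction names and timings in the example are made up, not the real test data:

```java
import java.util.*;

// Sketch: aggregate per-transaction response-time samples into the
// avg/max figures and threshold-rule violation counts a load-test
// summary table shows. Sample data below is illustrative only.
public class LoadTestSummary {
    private final Map<String, List<Long>> samples = new HashMap<>();

    public void record(String transaction, long millis) {
        samples.computeIfAbsent(transaction, k -> new ArrayList<>()).add(millis);
    }

    public double avgMs(String transaction) {
        return samples.getOrDefault(transaction, Collections.emptyList())
                .stream().mapToLong(Long::longValue).average().orElse(0.0);
    }

    public long maxMs(String transaction) {
        return samples.getOrDefault(transaction, Collections.emptyList())
                .stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    // number of individual samples that break a response-time rule
    public long violations(long thresholdMs) {
        return samples.values().stream().flatMap(List::stream)
                .filter(ms -> ms > thresholdMs).count();
    }

    public static void main(String[] args) {
        LoadTestSummary s = new LoadTestSummary();
        s.record("LastMinute", 900);
        s.record("LastMinute", 5640); // one slow outlier drives up the max
        s.record("Search", 120);
        System.out.println("LastMinute avg=" + s.avgMs("LastMinute")
                + "ms max=" + s.maxMs("LastMinute")
                + "ms violations(200ms)=" + s.violations(200));
    }
}
```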

What we don’t know is WHY THESE TRANSACTIONS ARE SLOW: The performance counters indicate that CPU is a potential problem, but they don’t give us an indication of what caused the CPU overhead and whether this is something that can be fixed or whether we are simply running into our system’s performance boundaries.

Performance Reports by dynaTrace captured during Visual Studio 2010 Load Test
dynaTrace customers can download the Visual Studio 2010 plugin on the dynaTrace Community Portal. The package includes a Visual Studio Add-In and a Visual Studio Testing Plugin Library that extends its Web- and Load-Testing capabilities. We also offer the Automatic Session Analysis plugin that helps in analyzing data captured during longer load tests.

I used dynaTrace Test Center Edition on my 4-tier application while running the load test. The Visual Studio 2010 plugin made sure that dynaTrace automatically captured all server-side transactions (PurePaths) in a dynaTrace Session. It also made sure that the same transaction names used in the Web Test script were passed on to dynaTrace.

While running the load test, the Load Testing Performance Dashboard that I’ve created for my application allows me to watch the incoming requests and the memory consumption on each of my JVMs and CLRs. I can also see which layers of my application contribute to the performance – with layers being ADO.NET, ASP.NET, SharePoint, Servlets, JDBC, Web Services, RMI, .NET Remoting, … dynaTrace automatically detects these layers, which helps me understand which components/layers of my app actually consume most of the execution time and how increasing load affects these components individually. Besides that I also watch the number of SQL statements executed (whether via Java or .NET) and the number of Exceptions that happen:

dynaTrace Load Testing Performance Dashboard

On the top left I see the individual transaction response times and the accumulated transaction counts underneath. These are the numbers of incoming requests, where it is easy to see how VS2010 increased the load during my test.
On the top right I see the memory usage of my two JVMs and underneath the memory usage of my two CLRs (it seems I have a nice memory leak in my 2nd JVM and one very “quiet” CLR).

The bottom left chart (titled Layer Breakdown) now shows me what’s going on within my application with increasing load. I can see that my application scales well up to a certain user load – but then the Web Service layer (dark gray color) starts performing much worse than all other involved application layers.
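Conceptually, a layer breakdown attributes each timed call to a layer and sums execution time per layer. A minimal sketch of that idea – the layer names and timings are illustrative, and real layer detection (as dynaTrace does it automatically) is of course far more involved:

```java
import java.util.*;

// Sketch of a layer breakdown: sum execution time per application layer
// (Servlets, Web Services, JDBC, ...). Layer names and the millisecond
// values in main() are hypothetical examples.
public class LayerBreakdown {
    private final Map<String, Long> timePerLayer = new TreeMap<>();

    public void add(String layer, long millis) {
        timePerLayer.merge(layer, millis, Long::sum); // accumulate per layer
    }

    public long totalFor(String layer) {
        return timePerLayer.getOrDefault(layer, 0L);
    }

    public Map<String, Long> breakdown() {
        return Collections.unmodifiableMap(timePerLayer);
    }

    public static void main(String[] args) {
        LayerBreakdown lb = new LayerBreakdown();
        lb.add("Servlets", 40);
        lb.add("Web Services", 300); // the layer that degrades under load
        lb.add("JDBC", 80);
        lb.add("Web Services", 250);
        System.out.println(lb.breakdown());
    }
}
```

Charting these per-layer totals over time is what makes the one degrading layer stand out against the others.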

On the bottom right, the number of database statements and the number of exceptions show me that these counters increase linearly with increasing load – but it seems we have quite a lot of database queries (up to 350/second) and also quite a lot of exceptions that we should investigate.

After the load test is finished, the first report I pull up shows me the slowest web transactions grouped by the transaction names used in Visual Studio:

dynaTrace Performance Report per Web Transaction

I can see that LastMinute is indeed the slowest transaction with a max of 5.6 seconds. The great thing about this report is that we get a detailed breakdown of these top transactions into application layers, database calls and method calls. We can immediately see that Java Web Services are the biggest performance contributor to the LastMinute transaction. We also see that we have several thousand database queries for the 448 requests to this transaction, and we see which Java & .NET methods contributed to the execution time. A click on Slowest Page opens the PurePath Dashlet showing every individual transaction that got executed. Sorting it by duration shows the big variance between the execution times. The PurePath Hot Spot View makes it easy to spot the most contributing methods in the slowest transaction:

Individual PurePaths of slowest running Transactions showing a big variance
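The idea behind a hot-spot view is to rank methods by their execution-time contribution within a transaction. A small sketch of that ranking – the method names and timings are hypothetical, not taken from the real application:

```java
import java.util.*;

// Sketch of a hot-spot ranking: given per-method execution times within
// one transaction, return the top-n contributors. Method names and
// timings in main() are made up for illustration.
public class HotSpots {
    public static List<String> top(Map<String, Long> methodTimesMs, int n) {
        return methodTimesMs.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(java.util.stream.Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Long> times = new HashMap<>();
        times.put("SpecialOfferService.call", 1800L); // dominates the transaction
        times.put("OrderDao.query", 450L);
        times.put("LoginServlet.doGet", 120L);
        System.out.println(top(times, 2));
    }
}
```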

With the PurePath Comparison feature I go one step further to find out what the difference is between two transactions that show a big execution time difference:

Comparing two transactions and identifying the difference

Visually in the chart, as well as in the PurePath Comparison Tree, we see that getting the SpecialOffers and all calls in that context (creating the web service and calling it) make up most of the time difference. The difference table on the bottom lists all timing and structural differences between these two PurePaths, giving even more insight into where else we have differences.
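The core of such a comparison can be sketched as a per-call time diff between a slow and a fast run of the same transaction; the call whose delta dominates is the one worth investigating. Call names and timings below are made up:

```java
import java.util.*;

// Sketch of a trace comparison: diff per-call times between a slow and
// a fast execution of the same transaction, then pick the call that
// contributes the largest share of the slowdown.
public class PathComparison {
    public static Map<String, Long> diff(Map<String, Long> slowMs, Map<String, Long> fastMs) {
        Map<String, Long> delta = new TreeMap<>();
        for (Map.Entry<String, Long> e : slowMs.entrySet()) {
            delta.put(e.getKey(), e.getValue() - fastMs.getOrDefault(e.getKey(), 0L));
        }
        return delta;
    }

    public static String biggestContributor(Map<String, Long> delta) {
        return Collections.max(delta.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        Map<String, Long> slow = new HashMap<>(), fast = new HashMap<>();
        slow.put("getSpecialOffers", 1500L); fast.put("getSpecialOffers", 200L);
        slow.put("login", 120L);             fast.put("login", 110L);
        Map<String, Long> d = diff(slow, fast);
        System.out.println(d + " -> biggest contributor: " + biggestContributor(d));
    }
}
```

A real PurePath comparison also diffs the call-tree structure, not just flat timings, but the flat diff already surfaces the dominant delta.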

Show me the PurePath to individual failed Web Requests
In the VS2010 Run Configuration for your load test you can specify that detailed response results are stored in a SQL database. This allows you to look up individual failed transactions, including the actual HTTP traffic and all associated timings, after the load test is finished. In my case I had another slow transaction type called BuyDirect. Via the VS2010 Load Testing Report I open individual failed transactions and analyze the individual requests that were slow:

Problematic Request from the Load Test with linkage to the dynaTrace PurePath

The result view shows me that the request took 1.988s. The dynaTrace VS2010 Plugin adds a new tab in the Results Viewer, allowing me to open the captured PurePath for that particular slow request. Clicking on the PurePath link opens the PurePath in the dynaTrace Client:

Long running Heterogeneous Transaction opened from Visual Studio

We can easily spot where the time is spent in this transaction – it is the web service call from the 2nd JVM (GoSpaceBackend) to the CLR that hosts the Web Service (DotNetPayFrontend). One of the problems also seems to be related to the exceptions that happen when calling the web service. These are exceptions that didn’t make it up to our own logging framework, as they were handled internally by Axis, but they are caused by a configuration issue (we can look at the full exception stack trace here to find that out). With one further click I look at the Sequence Diagram of this transaction. This diagram provides a better overview of the interactions between my 4 different servers:

dynaTrace Sequence Diagram showing interactions between servers for a single transaction

The sequence diagram goes on beyond what’s in the screenshot – but I guess you get the idea that we have a very chatty transaction here.
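One way to quantify “chatty” from such a sequence is to count the cross-tier hops within a single transaction – every switch from one server to another is a remote call with its own latency. A sketch with a hypothetical tier sequence:

```java
import java.util.*;

// Sketch of a chattiness check: count how often a transaction crosses a
// tier boundary. The tier sequence in main() is a made-up example, not
// the actual call sequence from my test.
public class Chattiness {
    public static int remoteHops(List<String> tierSequence) {
        int hops = 0;
        for (int i = 1; i < tierSequence.size(); i++) {
            // a change of tier from one step to the next is one remote hop
            if (!tierSequence.get(i).equals(tierSequence.get(i - 1))) hops++;
        }
        return hops;
    }

    public static void main(String[] args) {
        List<String> seq = Arrays.asList("Frontend", "GoSpaceBackend",
                "DotNetPayFrontend", "GoSpaceBackend", "DotNetPayFrontend");
        System.out.println(remoteHops(seq) + " cross-tier hops in one transaction");
    }
}
```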

The dynaTrace VS2010 Plugin allows me to drill down to the problematic methods in a distributed, heterogeneous transaction within a matter of seconds, saving me a lot of time compared to analyzing the problem based on the load testing report alone.

Share results with Developers and look up problems in Source Code
Now we have all this great information and have already found several hotspots that our developers should look into. Instead of giving my developers access to my test environment, I simply export the captured data to a dynaTrace Session file and attach it to a JIRA issue (or whatever bug tracking tool you use) that I assign to my developer. I can either export all captured data (PurePaths and performance counters) or be more specific and only export those PurePaths that have been identified as problematic.

Development picks up the dynaTrace Session file, imports it into their local dynaTrace Client and analyzes the same granular data that we analyzed in our test environment. Having the dynaTrace Visual Studio 2010 Plugin installed allows the developer to look up individual methods in Visual Studio, starting from the PurePath or Methods Dashlet in the dynaTrace Client:

Lookup source code of problematic method

The dynaTrace Plugin in Visual Studio – where you have to have your solution file open – searches for the selected method, opens the source code file and sets the cursor to that method:

Problematic source code method in Visual Studio 2010 Editor

The data is easily shareable with anybody who needs to look at it. Within a matter of seconds the developer ends up at the source code line within Visual Studio 2010 that represents a problematic method in terms of performance. The developer also has all the contextual information on hand that shows why individual executions of the same transaction were faster than others, as the PurePaths include information like method arguments, HTTP parameters, SQL statements with bind variables, exception stack traces, … -> this is all information that developers will love you for :-)

Identify Regressions across Test Runs
When running continuous load tests against different builds we expect performance to get better and better. But what if that is not the case? What has changed from the last build to the current? Which components don’t perform as well as they did the build before? Has the way we access the database changed? Is it an algorithm in custom code that takes too much time or is it a new 3rd party library that was introduced with this build that slows everything down?

The Automatic Session Analysis plugin also analyzes data across two load testing sessions generating a report that highlights the differences between these two sessions. The following screenshot shows the result of a load testing regression analysis:

Regression Analysis by comparing two load testing sessions

It shows us which transactions were actually executed in the latest (top left) and previous (top right) build. In the middle we get an overview of which layers/components contributed to the performance in each of the two sessions, together with a side-by-side comparison (center) where the bars tell us which components performed faster or slower. It seems we had a serious performance decrease in most of our components. On the bottom we additionally see a comparison of executed database statements and methods. Similar to what I showed in the previous sections, we would drill down from this report to analyze the details.
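The core of such a regression analysis can be sketched as comparing per-layer times between two sessions and flagging everything that got slower beyond a tolerance. The layer names, timings and the 10% tolerance below are illustrative, not the report's actual thresholds:

```java
import java.util.*;

// Sketch of a build-over-build regression check: flag every layer whose
// time grew by more than tolerancePct compared to the previous session.
// Data in main() is made up for illustration.
public class RegressionCheck {
    public static List<String> regressions(Map<String, Double> previousMs,
                                           Map<String, Double> currentMs,
                                           double tolerancePct) {
        List<String> worse = new ArrayList<>();
        for (Map.Entry<String, Double> e : currentMs.entrySet()) {
            Double before = previousMs.get(e.getKey());
            // only layers present in both sessions can be compared
            if (before != null && e.getValue() > before * (1 + tolerancePct / 100.0)) {
                worse.add(e.getKey());
            }
        }
        Collections.sort(worse);
        return worse;
    }

    public static void main(String[] args) {
        Map<String, Double> prev = new HashMap<>(), curr = new HashMap<>();
        prev.put("Web Services", 100.0); curr.put("Web Services", 180.0); // regressed
        prev.put("JDBC", 50.0);          curr.put("JDBC", 52.0);          // within tolerance
        System.out.println("Regressed layers: " + regressions(prev, curr, 10.0));
    }
}
```

Running a check like this after every build is what turns the visual side-by-side comparison into an automatable pass/fail signal.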

In Summary
Visual Studio 2010 is a good tool for performing load tests against .NET or Java web applications. The Load Testing Reports have been improved in this version and allow you to get a better understanding of the performance of your application. For multi-tier or heterogeneous applications like the one I used in my scenario, it is now easy to go beyond the standard load testing reports by using an Application Performance Management solution like dynaTrace. The combination of a load testing solution and an APM solution not only tells you that you have a performance problem, it also allows you to identify the problem faster and therefore reduce test cycles and time spent in the testing phase.

There is more to read if you are interested in these topics: a White Paper on how to Automate Load Testing and Problem Analysis, webinars with Novell and Zappos that use a combination of a Load Testing Solution and dynaTrace to speed up their testing process, as well as additional blog posts called 101 on Load-Testing.

Feedback is always welcome and appreciated – thanks for reading all the way to the end :-)

Related reading:

  1. Getting ready for TechReady8: Load- and Web-Testing with VSTS and dynaTrace I’ve been invited by Microsoft to show dynaTrace’s integration into...
  2. Visual Studio Team System for Unit-, Web- and Load-Testing with dynaTrace Last week I was given the opportunity to meet the...
  3. Performance Analysis: Identify GC bottlenecks in distributed heterogeneous environments Garbage Collection can have a major impact on application performance....
  4. Boston .NET User Group: Load and Performance Testing: How to do Transactional Root-Cause Analysis with Visual Studio Team System for Testers I am going to present at the next Boston .NET...
  5. How to extend Visual Studio 2010 Web- and Load-Testing with Transactional Tracing Microsoft recently published the first official beta build of Visual...

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
