When to Leverage Commercial Load Testing Services, and When to Go It Alone

How and where you execute load and performance testing depends on a number of factors within your organization, and even within the application development team.

It is not a clear-cut decision that can be made based solely on the type of application or the number of users. It must be made in light of organizational preferences, development cadence, timeline and, of course, the nature of the application itself and the technical expertise currently on staff.

In this post we will provide some context around the key decision points that companies of all sizes should consider when putting together load and performance testing plans.

This discussion is really an amalgamation of several debates: on-premise versus SaaS, and open source versus commercial services.

In the load testing space there are commercial vendors offering both SaaS and on-premise solutions, as well as many SaaS-only options for generating user load.

From an open source perspective, JMeter is the obvious choice (there are other, less popular options such as FunkLoad, Gatling, The Grinder and SoapUI). Having said that, let’s look at the advantages and challenges of the open source solution, JMeter, and contrast it with a cloud-based commercial offering.

Key JMeter Advantages:

  1. It is a 100% Java application, so it can run on any platform (Windows, OS X, Linux) that can run Java.
  2. It can test a variety of server types – not just front-end HTTP servers. LDAP, JMS, JDBC, SOAP and FTP are some of the more popular services that JMeter can load test out of the box.
  3. It has an extensible plug-in architecture. The open source community is very active in developing JMeter plugins, and many additional capabilities exist to extend reporting, graphing, server resource monitoring and other feature sets. Users can also write their own plugins if desired. Depending on how much time and effort is spent, there is little that JMeter can’t be made to do.
  4. Other than the time to learn the platform, there is of course no software cost, since it is open source. This may be of particular value to development teams with limited budgets, or whose management prefers to spend on in-house expertise rather than commercial tools.
  5. It can be easy to point the testing platform at a development server without having to engage the network or server team to provide external access for test traffic. It’s worth noting that while this is easier, it is also less realistic in terms of real-world results.

Key JMeter Disadvantages:

  1. Because it is open source, you do not have an industry vendor to rely upon for support, development or expertise. This doesn’t mean that JMeter isn’t well developed or that the community isn’t robust – quite the opposite. But depending on the scope of the project and the visibility of the application, it can be very helpful to have industry expertise available and obligated to assist. Putting myself in a project manager’s shoes: if a major scale issue were discovered in production, would I be comfortable telling upper management, “we thoroughly tested the application with an open source tool, with assistance from forums and mailing lists”?
  2. It’s very easy to end up with test results that aren’t valid. The results may be highly reliable – but reliably measuring bottlenecks that have nothing to do with the application infrastructure isn’t terribly useful. Since JMeter can be run right from a desktop workstation, you can quickly run into network and CPU bottlenecks on the testing platform itself – ultimately giving you unrealistic results.
  3. Large-scale tests are not in JMeter’s wheelhouse. Right in the documentation (section 16.2 of the best practices) is a warning about limiting the number of threads. If a truly large-scale test is required, you can build a farm of test servers orchestrated by a central controller, but this gets complicated, requires dedicated hardware and network resources, and still isn’t a realistic real-world scenario anyway.
  4. The biggest disadvantage is inherent in all on-premise tools in this category: it is not cloud based. Unless you are developing an in-house application whose users are all on the LAN, it does not make a ton of sense to rely (entirely) on test results from inside your network. I’m not suggesting they aren’t useful, but if users are geographically distributed, then testing in that mode should be considered.
  5. Your time: doing everything yourself is a trap many smart folks fall into, often at the expense of project deadlines and focus. Your time is valuable, and in most cases it could be better spent elsewhere.

This discussion really boils down to whether you like to do things yourself, or whether the project’s scope and criticality dictate using commercial tools and expertise.

For the purposes of general testing, getting familiar with how load testing works, and rough order-of-magnitude sizing, you can certainly use open source tools on your own, with the caveats mentioned. If the application is likely to scale significantly or serve geographically distributed users, then I do think a cloud-based service is a much more realistic way to test.

In addition to the decision of open source versus commercial tools is the question of whether professional consulting services should be engaged. Testing should be an integral part of the development process, and many teams do not have the expertise (or time) to develop a comprehensive test plan, script and configure the test, analyze the data, and finally sort out remediation strategies on their own.

This is where engaging experts who are 100% focused on testing can provide real tangible value and ensure that your application scales and performs exactly as planned.

A strategy I have personally seen work quite well with a variety of complex technologies is to engage professional services and training at the onset of a project to develop internal capabilities and expertise, allowing the organization to extract maximum value from the commercial product of choice.

I have always recommended that my customers budget for training and services up front with any product purchase, instead of trying to shoehorn them in later. This ensures the new capabilities promised by the commercial product are realized, and that management is satisfied with the product’s value and the vendor relationship.

——

This post was written by Peter Cannell. Peter has been a sales and engineering professional in the IT industry for over 15 years. His experience spans multiple disciplines including networking, security, virtualization and applications. He enjoys writing about technology and offering a practical perspective on new technologies and how they can be deployed. Follow Peter on his blog or connect with him on LinkedIn.

Performance Testing Versus Performance Tuning

Performance testing is often mistaken for performance tuning. The two are related, but they are certainly not the same thing. To see what these differences are, let’s look at a quick analogy.

Most governments mandate that you bring your vehicle to the workshop for an inspection once a year. This ensures your car meets the minimum safety standards set for road use. A website performance test can be likened to a yearly inspection: it confirms that your website isn’t performing terribly and should perform reasonably well under most circumstances.

When the inspection shows that the vehicle isn’t performing up to par, we run through a series of checks to see how to solve the problem and pass the inspection. This is similar to performance tuning, where we shift our focus to discovering what is necessary to make the application perform acceptably.

Looking in depth at the performance test results helps you narrow down the problematic spots so you can identify your bottlenecks more quickly. This in turn helps make your optimization adjustments cost- and time-efficient.
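As a concrete illustration of narrowing things down, comparing latency percentiles per endpoint quickly shows where the slow tail lives. The endpoints and sample timings below are entirely hypothetical; the percentile function uses the simple nearest-rank method.

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    # nearest rank = ceil(pct/100 * n); ceiling via negated floor division
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[rank - 1]

# Hypothetical per-endpoint response times (ms) from one test run
results = {
    "/home":     [42, 45, 44, 48, 51, 43, 46, 44, 47, 45],
    "/search":   [120, 135, 128, 950, 131, 122, 900, 127, 133, 125],
    "/checkout": [210, 215, 208, 222, 219, 211, 217, 214, 220, 213],
}

for endpoint, samples in results.items():
    print("%-10s p50=%4dms  p95=%4dms" % (
        endpoint, percentile(samples, 50), percentile(samples, 95)))
```

In this made-up data, `/search` has a p50 of 128 ms but a p95 of 950 ms, so the slow tail, and the first bottleneck worth investigating, is clearly there rather than in the endpoints whose percentiles sit close together.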

Then we have the car enthusiasts. This group constantly works toward tuning their vehicles for great performance. Their vehicles have met the minimum performance criteria, but their goal now is perhaps to make the car more fuel-efficient, or to run faster. Performance tuning goals are just like that: you might be aiming to reduce the amount of resources consumed to decrease the volume of hardware needed, and/or to get your website to load resources more quickly.

Next, we will talk about the importance of establishing a baseline when doing performance tuning.

Tuning your website for consistent performance

Now that we know the difference between performance testing and performance tuning, let’s talk about why you will need a controlled environment and an established baseline prior to tuning web applications.

The importance of a baseline: Tuning your web application is an iterative process. There might be several factors contributing to poor website performance, and it is recommended to make optimization adjustments in small steps in a controlled test environment. A baseline helps you determine whether an adjustment to your build or version improves or degrades performance. If the conditions of your environment are constantly changing, or too many large changes are made at once, it will be difficult to see where the impact of your optimization efforts comes from.

To establish a baseline, try tracking specific criteria such as page load times, bandwidth, requests per second, and memory and CPU usage. Load Impact’s server metrics feature helps combine all these areas in a single graph from the time you run your first performance test. Take note of how these numbers improve or degrade when you make optimization changes (e.g. after a hardware upgrade).
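A baseline need not be elaborate: a recorded set of numbers plus an agreed tolerance is enough to tell an improvement from a regression. Here is a minimal sketch of that comparison; the metric names, values and 10% tolerance are illustrative assumptions, not anything prescribed by a particular tool.

```python
# Hypothetical baseline captured from an earlier, known-good test run
baseline = {
    "page_load_ms": 1800,
    "requests_per_sec": 450,
    "cpu_percent": 62,
}

# Metrics where a higher value is better; everything else is lower-is-better
HIGHER_IS_BETTER = {"requests_per_sec"}

def compare_to_baseline(current, baseline, tolerance=0.10):
    """Return {metric: 'improved' | 'ok' | 'regressed'} per metric."""
    verdicts = {}
    for metric, base in baseline.items():
        delta = (current[metric] - base) / base
        if metric not in HIGHER_IS_BETTER:
            delta = -delta  # flip so a positive delta always means "better"
        if delta < -tolerance:
            verdicts[metric] = "regressed"
        elif delta > tolerance:
            verdicts[metric] = "improved"
        else:
            verdicts[metric] = "ok"
    return verdicts

# A later run after some (hypothetical) optimization work
current = {"page_load_ms": 2100, "requests_per_sec": 460, "cpu_percent": 60}
print(compare_to_baseline(current, baseline))
```

Run after every small adjustment, a check like this makes it obvious which change moved which number, which is the whole point of tuning in small steps against a stable baseline.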

Remember that baselines can evolve over time, and might need to be redefined if changes have been made to the system since the baseline was initially recorded. If your web application is constantly undergoing changes and development work, you might want to consider running small but frequent tests prior to, for instance, integrating a new fix or launching a new version.

As your product development lifecycle changes, so will your baseline. Hence, doing consistent testing prior to a release helps save plenty of time and money by catching performance degradation issues early.

There is an increasing number of companies adopting a practice known as Continuous Integration. This practice helps to identify integration difficulties and errors through a series of automated checks, to ensure that code deployment is as smooth and rapid as possible.

If this is something your company already practices, then integrating performance testing into your product delivery pipeline might be as simple as using Load Impact’s Continuous Delivery Jenkins plugin. A plugin like this lets you integrate Jenkins with our API for automated testing in a few simple clicks.
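Plugin specifics aside, the underlying idea is simple enough to sketch: a CI step runs the load test, pulls a couple of result metrics, and fails the build when a metric breaches an agreed threshold. The threshold names and limits below are made up for illustration.

```python
# Hypothetical thresholds agreed with the team; breach either and the build fails
MAX_P95_MS = 500
MIN_REQUESTS_PER_SEC = 300

def gate(p95_ms, requests_per_sec):
    """Return a list of failure messages; an empty list means the gate passes."""
    failures = []
    if p95_ms > MAX_P95_MS:
        failures.append("p95 latency %dms exceeds %dms" % (p95_ms, MAX_P95_MS))
    if requests_per_sec < MIN_REQUESTS_PER_SEC:
        failures.append("throughput %d req/s below %d req/s"
                        % (requests_per_sec, MIN_REQUESTS_PER_SEC))
    return failures

if __name__ == "__main__":
    # In a real pipeline these numbers would come from the test run's results
    failures = gate(p95_ms=430, requests_per_sec=350)
    for msg in failures:
        print("FAIL:", msg)
    # In CI you would exit non-zero on failure, e.g. sys.exit(1 if failures else 0),
    # so the job is marked red and the regression is caught before release.
```

Because the gate runs on every build, a performance regression shows up attached to the exact commit that introduced it, which is far cheaper to fix than the same regression discovered in production.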

By Chris Chong (@chrishweehwee)

About Load Impact

Load Impact is the leading cloud-based load testing software trusted by over 123,000 website, mobile app and API developers worldwide.

Companies like JWT, NASDAQ, The European Space Agency and ServiceNow have used Load Impact to detect, predict, and analyze performance problems.
 
Load Impact requires no download or installation, is completely free to try, and users can start a test with just one click.
 
Test your website, app or API at loadimpact.com
