Post-2.0 updates and plans for 2012
We released Load Impact 2.0 at the end of October, and the reception has been really, really good. We have seen an increase in user registrations and user activity, and what is especially fun is seeing that people are starting to use Load Impact for really complex load testing of a wide range of platforms and applications. All the new functionality in Load Impact 2.0 seems to have been very well received, and people are starting to realize its potential, which is great to see for those of us who worked so hard in 2011 to get it out the door.
Right after release we had a number of issues with the payment system, as well as numerous small bugs that only manifested themselves in production, but overall it was a successful release without any major malfunctions. There are still small annoyances left to iron out, but we are making the service better by the day, and also adding new functionality. Here is a list of some things we have done post-release:
- Support for a new South America load zone (São Paulo, Brazil) and a new US West load zone (Portland, Oregon).
- New chart/graph component implemented, providing even more advanced graphing capabilities (e.g. instant zoom)
- Improved help/documentation – customer case studies, load scripting tutorial and example scripts
- Data export functionality added (export to CSV)
- Several problems related to the payment system have been fixed. AMEX support was added.
- Several problems related to test startup have been fixed.
- Credit refund logic for aborted or failed tests has been improved. You now get a partial or full refund when a test fails for some reason.
- Anonymous tests are run from random load zones.
- Numerous small UI bugfixes/improvements.
- HTTP Basic Auth is now supported for automatic load script generation.
- Improved data parameterization support
Looking ahead to 2012, these are some of the bigger features we plan to implement:
- Data stores
We will implement "data stores" that allow people to upload large data sets, which are then made available to them in load scripts. This functionality will make life a lot simpler for people who, for example, have a large list of usernames and passwords that they want the simulated clients in a load test to use (a rough sketch of the idea follows after this list).
- User-defined metrics
You will be able to create your own reporting metrics and have your load script store result values for those metrics during a test. You can then plot graphs for these metrics alongside the standard metrics in the reporting (test result) interface. An obvious use for this functionality is to report load times for individual web pages when a user scenario accesses multiple pages, which is fairly common (see the second sketch below).
- Server metrics
This is also a power-user feature that lets you import performance data from the web server(s) you are testing and plot graphs of, for example, the CPU usage on your web frontend machine, overlaid with a graph of the average response time for an HTTP transaction. Being able to import server metrics from the machines being stressed in the load test makes it much simpler to correlate information and find out where the performance bottlenecks are (the third sketch below illustrates the idea). Of course, we will also support importing data from database servers and other systems your site/application might depend on.
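To make the data store idea a bit more concrete, here is a minimal Python sketch of the underlying technique, data parameterization from an uploaded data set. This is not Load Impact's actual load script API; the inline CSV, its column names, and the helper functions are assumptions purely for illustration.

```python
import csv
import io

# Stand-in for an uploaded "data store"; in a real test this would be a
# large CSV file uploaded to the service (file layout is an assumption).
UPLOADED_DATA = """username,password
alice,secret1
bob,secret2
carol,secret3
"""

def load_credentials(data):
    """Parse username/password rows from the uploaded data set."""
    return [(row["username"], row["password"]) for row in csv.DictReader(io.StringIO(data))]

def credentials_for_client(credentials, client_id):
    """Give each simulated client its own row, wrapping around if needed."""
    return credentials[client_id % len(credentials)]

if __name__ == "__main__":
    credentials = load_credentials(UPLOADED_DATA)
    for client_id in range(5):
        user, _password = credentials_for_client(credentials, client_id)
        print(f"simulated client {client_id} logs in as {user}")
```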
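The user-defined metrics idea can be sketched in the same generic way: during the test the script records named values (per-page load times in this example), and the reporting side aggregates them for plotting next to the standard metrics. The MetricStore class and its methods below are hypothetical, not the actual Load Impact interface.

```python
import time
from collections import defaultdict
from statistics import mean

class MetricStore:
    """Illustrative container for user-defined metric samples."""
    def __init__(self):
        self.samples = defaultdict(list)

    def report(self, name, value):
        self.samples[name].append(value)

    def summary(self):
        return {name: mean(values) for name, values in self.samples.items()}

def timed_page_load(metrics, page_name, fetch):
    """Time an arbitrary page fetch and store it under its own metric name."""
    start = time.monotonic()
    fetch()
    metrics.report(f"load_time:{page_name}", time.monotonic() - start)

if __name__ == "__main__":
    metrics = MetricStore()
    # Stand-ins for real HTTP requests in a user scenario that hits several pages.
    timed_page_load(metrics, "front_page", lambda: time.sleep(0.05))
    timed_page_load(metrics, "product_page", lambda: time.sleep(0.12))
    print(metrics.summary())
```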
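Finally, a rough sketch of the server metrics idea: sample resource usage on the machine under test and put it on the same timeline as the client-side response times, so the two series can be overlaid in one graph. The psutil sampling below is just one possible data source (an assumption; any agent or monitoring feed could supply the values), and the bucketing helpers are illustrative rather than part of any real API.

```python
import time
import psutil  # third-party library; one possible source of server-side CPU data

def sample_cpu(duration_s, interval_s=1.0):
    """Collect (timestamp, cpu_percent) samples on the machine under test."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append((time.time(), psutil.cpu_percent(interval=interval_s)))
    return samples

def bucket_by_second(series, value_name):
    """Bucket (timestamp, value) pairs by whole seconds for overlaying."""
    buckets = {}
    for ts, value in series:
        buckets.setdefault(int(ts), {})[value_name] = value
    return buckets

def overlay(cpu_series, response_times):
    """Merge both series onto a shared per-second timeline."""
    merged = bucket_by_second(cpu_series, "cpu_percent")
    for ts, values in bucket_by_second(response_times, "response_time_s").items():
        merged.setdefault(ts, {}).update(values)
    return dict(sorted(merged.items()))

if __name__ == "__main__":
    cpu_series = sample_cpu(duration_s=3)
    # Stand-in response times; a real test records one per HTTP transaction.
    now = time.time()
    response_times = [(now - i, 0.20 + 0.05 * i) for i in range(3)]
    for second, values in overlay(cpu_series, response_times).items():
        print(second, values)
```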