Know your Node.js

As part of a follow-up to last month’s column about PHP vs Node.js, I hit some problems with Node under load. As with all technologies, Node.js has some limitations that may or may not be a problem for your specific use case. If the last column comparing PHP and Node.js had a deeper message, it was that if you want to scale, you have to know your stack. To be completely clear, when I say stack I mean the layers of technology used to serve HTTP requests. One of the most common stacks out there is simply called LAMP: (L)inux, (A)pache2, (M)ySQL, (P)HP (or Perl). You now also see a lot of references to LNMP, where Apache2 is replaced with Nginx. When building Node.js applications, things can vary a lot, since Node.js comes with its own HTTP server. In my previous column, I used Node.js together with MySQL on a Linux box, so I guess we can dub that the LNM stack if we absolutely need a name for it. And when I say know your stack, I mean that if you want to produce better-than-average performance numbers, you have to be better than average at understanding how the different parts of your stack work together. There are hundreds of little things that most of us never knew mattered that suddenly become important when things come under load. As it happens, watching your application work under load is a great way to force yourself to know your stack a little better.

Background

When testing Apache/PHP against Node.js, I found that the raw performance of Node.js, as well as its ability to handle many concurrent clients, was excellent – faster and more scalable than Apache2/PHP. One reader pointed out that the test wasn’t very realistic, since there was just one single resource being queried and no static content involved. Apache2/PHP could very well do relatively better if some of the content was static. So I set up a test to check this, and while running it, Node.js crashed. As in stopped working. As in would not serve any more HTTP requests without manual intervention. So to keep it short, Apache2/PHP won that round. But in the spirit of ‘know your stack’, we need to understand why Node.js crashed. The error message I got was this:

Unhandled 'error' event "events.js:71"

First of all, it took a fair amount of googling to figure out what the error message was really about. Or rather, the error message was saying that something happened and there was no error handler for it. So good luck.

Fixing it

The first indication I got via Google and Stack Overflow was that this might be an issue with Node.js before 0.8.22, and sure enough, I was running 0.8.19. So the first thing I did was upgrade to 0.8.22. That did not fix the problem at all (although a later and greater version is of course a nice side effect). With almost all other software involved being up to date, this actually required some structured problem solving.

Back to the drawing board

I eventually managed to trace the error message down to a ‘too many open files’ problem, which is interesting, as it answers the crucial question: what went wrong? This happened at roughly 250 concurrent users, with a test that was accessing 6 different static files. This is what it looks like in Load Impact:

[Graph: the failed Node.js test in Load Impact]

So, depending a little on timing and exactly when each request comes in, this would roughly indicate that some 1,500 (6 files times 250 users) files can be open at the same time. Give or take. Most Linux systems are, by default, configured to allow a relatively small number of open files per process, e.g. 1024. The Linux command to check this is ulimit:

$ ulimit -n
1024

1024 is the default on a lot of distros, including the Ubuntu 12.10 I was running the tests on. So my machine had 1024 as the limit, but it appears that I had 1,500 files open at the same time. Does this make any sense? Well, sort of. There are at least 3 factors involved here that affect the results:

  1. Load Impact simulates real browsers (Virtual Users). A VU only opens 4 concurrent connections to the same server, even if the script tells it to download 6 resources. The other 2 resources are simply queued.
  2. Each open TCP socket counts as an open file, so each concurrent TCP connection is an open file. Knowing that our limit is 1024, that would indicate that Node.js could handle up to 256 concurrent users if each user uses the maximum of 4 open connections.
  3. In our sample, each request for a static resource also opens a file and thereby occupies another file handle. This file is open for less time than the actual connection, but still, for a short while a single request can consume 2 open file handles.

So in theory, the limit for concurrent simulated browser users should be 256 or less. In reality, I saw the number of concurrent users go all the way up to 270 before the Node.js process died on me. The explanation for that is most likely just timing: not all VUs will hit the server at exactly the same time. In the end, hitting problems at around 250 concurrent users squares well with the open file limit being the culprit. Luckily, the limit on the number of open files per process is easy to change:

$ ulimit -n 2048
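
Worth noting, as general Linux housekeeping rather than something exercised in this particular test: ulimit -n only raises the limit for the current shell session and the processes started from it. To make a higher limit stick across logins on Ubuntu, the usual place is /etc/security/limits.conf (it takes effect the next time you log in):

$ sudo sh -c 'echo "* soft nofile 2048" >> /etc/security/limits.conf'
$ sudo sh -c 'echo "* hard nofile 2048" >> /etc/security/limits.conf'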

The next test shows real progress. Here’s the graph:

[Graph: the re-run test after raising the open file limit]

Problem solved (at least within the limits of this test).

Summary

Understanding what you build upon is important. If you choose to rely on Node.js, you probably want to be aware of how that increases your dependency on various per-process limitations in the operating system in general, and the maximum number of open files in particular. You are more affected by these limitations since everything you do takes place inside a single process. And yes, I know, there are numerous more or less fantastic ways to work around this particular limitation, just as there are plenty of ways to work around limitations in any other web development stack. The key thing to remember is that when you select your stack, framework, language or server, you also select all the limitations that come with it. There’s (still) no silver bullet, even if some bullets are better out of the box than others. Having spent countless hours with other web development languages, I think I’m in a good position to compare, and yes indeed, Node.js delivers some amazing performance. But at present, it comes with a bigger responsibility to ‘know your stack’ than a lot of the others.

Load generation from Australia

A few days ago Amazon announced the availability of its new AWS data center in Sydney, Australia. We here at Load Impact are now happy to announce that we have implemented support for load tests from the Sydney data center.

This means that we are now able to generate simulated user traffic from 8 different locations around the globe:

  • Palo Alto, California, USA
  • Portland, Oregon, USA
  • Ashburn, Virginia, USA
  • São Paulo, Brazil
  • Dublin, Ireland
  • Singapore, Singapore
  • Tokyo, Japan
  • Sydney, Australia

You are of course also able to use multiple traffic-generation locations in a single load test. This is something unique to Load Impact that other services don’t offer.

Some other new things we have released recently include:

  • Parallel deployment of test configurations for large tests – This will drastically reduce the time it takes to start larger load tests. Where previously we configured all involved load generator cloud instances in a load test sequentially, we now configure them in parallel.
  • Graceful test shutdown – Previously, when a test was completed, the load generator instances would be killed abruptly. This meant that tests usually ended at the exact moment they were supposed to, but it also meant that some transactions that were “in transit” would never be recorded by Load Impact, even though they might appear in the logs on the web server. To make this less likely to happen, we have now started shutting down tests more gracefully than before, waiting a little bit for all simulated clients to be done with their current transactions. This means that while earlier, all clients in a test would just disappear when the test ended, you will now see a short ramp-down period where the number of clients ramps down quickly from whatever level you were at, to zero.
  • Various bug fixes and improvements – We have implemented lots of smaller bug fixes and improvements, for example in script validation, script conversion, user session handling, etc.

New updates

The world’s best load testing service gets even better!

Yesterday, on Aug 2, we pushed a small update that included the following features and changes:

  • Ramp-up restrictions removed
    You can now ramp up or down the number of simulated users in a test as quickly as you like (well, almost – a ramp operation can be done in as little as 1 minute now).
  • Ramp steps can be any length
    Previously, a single ramp-up/down step could have a duration of max 120 minutes. This limit has been removed. There is now just a single limit for the whole test schedule, which can be max 24 hours (1440 minutes).
  • Changed default values for tests
    All test configurations now default to ramping up to 50 VU during 10 minutes (previously, an automatically created test configuration would ramp up to 50 VU during 15 minutes, while a new test configuration created by a user would by default ramp up to 25 VU during 10 minutes).
  • New load script API functionality: auto_cookie_handling
    There is now a new boolean option that can be set with http.set_option() – “auto_cookie_handling”. It is set to true by default, but if set to false it will turn off the automatic handling of cookies between requests, allowing the script programmer to design their own cookie management (see the short sketch after this list).
  • Load generator bug fix
    Fixed an intermittent bug caused by issuing sleep statements with sleep time set to zero.
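
To illustrate the new auto_cookie_handling option mentioned above, here is a minimal load script sketch. The option name and http.set_option() come straight from the release notes; the exact call pattern, the http.get() request and the URL are illustrative assumptions rather than verbatim documentation.

-- Minimal sketch (Lua load script): turn off automatic cookie handling
-- for this simulated client. From here on, cookies from earlier responses
-- are no longer attached to requests automatically, so the script has to
-- manage them itself. (Placeholder URL.)
http.set_option("auto_cookie_handling", false)
http.get("http://example.com/login")
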
On June 20, we pushed another small update that included these changes:
  • Several proxy recorder bug fixes:
    • Fixed problem with injected toolbar appearing in the wrong place
    • Fixed problem with extra HTML added
    • Fixed problem with proxy sometimes generating extra CRLF’s to requests
  • New page on site: The state of web readiness 2012
    http://loadimpact.com/readiness
  • Company address on invoices
    You now get your company’s address on your receipts/invoices, viewable online at loadimpact.com
On June 8, we released Load Impact 2.4 that contained the following fixes and improvements:
  • New load script API functionality
    In this release we introduced a range of new API functions (a short usage sketch follows at the end of the list below):

    • client.get_user_scenario_name() – returns name of currently executing user scenario
    • client.get_time_since_start() – returns elapsed time since the simulated user started executing
    • client.get_source_ip() – returns the source IP address seen in network traffic generated by this user
    • test.get_name() – returns the name of the currently running load test
    • test.get_time_since_start() – returns elapsed time since start of test execution
    • util.unique() – returns a string guaranteed to be unique across simulated clients in a test
  • Extra IP addresses
    You can now configure your load test to use more source IP addresses when generating traffic. This comes at an extra cost as it requires more infrastructure (cloud) resources, but can be very useful for e.g. spreading traffic evenly if you have a load balancer.
  • Small UI changes
    Several minor UI tweaks & fixes:

    • Changed “Test title” to “Test name”, for consistency
    • Fixed inconsistent naming of load zones. Load zones are now named as: CITY, COUNTRY CODE (PROVIDER)
      E.g. “Ashburn, US (Amazon)”
  • Bugfixes
    • Fixed broken resend email activation link
    • Fixed bug allowing tests to be scheduled up to 1 hour in the past
    • Fixed pagination bug in URL table on test results page
    • Fixed deployment bug affecting graphical editor user scenarios containing Unicode characters
    • Fixed bug causing screen to gray out in certain cases when selecting script editor for new user scenario
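
Finally, to give a feel for the new script API calls from the 2.4 release listed above, here is a minimal sketch. The function names are taken from the release notes; how they are combined, and the placeholder URL, are illustrative assumptions:

-- Minimal sketch (Lua load script) using a few of the new 2.4 API calls.
local scenario = client.get_user_scenario_name()   -- name of the executing user scenario
local elapsed  = test.get_time_since_start()       -- elapsed time since the test started

-- util.unique() is handy for e.g. unique usernames per simulated client.
local username = "user_" .. util.unique()
http.get("http://example.com/signup?name=" .. username)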

The State of Web Readiness 2012

Right now we are attending the O’Reilly Velocity Conference in Santa Clara, where we have launched our brand new report, “The State of Web Readiness 2012”. In short, the report looks at how robust websites are, based on 8,522 load tests executed in 132 countries. We found that the average site was load tested at up to 3.4 times its actual capacity. What does that mean? The short summary is that a large part of the websites in the world might not stand up to what their owners expect of them.

This is actual data from actual load tests conducted with our own cloud-based online load testing tool, and frankly, we were a bit concerned by the findings of our study. Not that we are surprised that websites go down when we need them the most. Even though websites have been mainstream for over 15 years, we don’t raise an eyebrow when the Apple Store crashes as a new iPhone model is released. And if even the largest company in the world isn’t able to provide a premium sales channel that performs reliably, then who is, right? It almost seems unavoidable that websites go down. Like a natural disaster you can’t prepare for.

Our analysis indicates something else. After going through 8,522 actual tests, we believe that you can be prepared, given the right knowledge. The analysis shows that an important factor behind the unreliable web is simply overconfidence about how many visitors websites can really handle. If you haven’t done the tests and you still think your website will continue to work unaffected during a hot product launch, a seasonal peak in interest, or when you are lucky enough to be “slashdotted”, think again!

Have a look at our report here. We’re looking forward to hearing what you think about the state of web readiness.

New partner in Benelux

Today we’re proud to announce that we have signed a premium partnership agreement with Systemation in Benelux, one of the leading distributors in integration testing and quality assurance in northern Europe.

As you might already know, we provide a full-scale partner program in priority markets in the U.S., Europe and Asia. Together with partners like Systemation we can reach clients in need of efficient load testing much faster. As a Premium Partner, Systemation will actively represent, sell and implement customer projects for the Benelux market.

Systemation acts as distributor in the Benelux countries for several leading international software companies like Load Impact. Alongside the software solutions, Systemation provides its clients with a comprehensive suite of professional services, including implementation, local technical support and software maintenance.

Jaap Franse, Managing Director of Systemation, states the following concerning our partnership:

“We are proud to add the world-class Load Impact solution to our portfolio of services. This marks a significant step in supporting our customers when executing performance tests for their business-critical web applications.”

So if you’re active in the Benelux market and in need of performance testing, contact Systemation directly.

Load Impact 2.3 released!

We’re happy to introduce Load Impact 2.3!

Load Impact 2.3 contains a new and improved proxy recorder that automatically detects pages and creates page load time result metrics for each of your web pages. The recorder also allows you to insert code comments in the generated user scenario, which can be useful in order to find where in your user scenario code a certain page is being loaded.

Behind the scenes, Load Impact 2.3 also includes a lot of optimizations that result in a much faster reporting interface. Especially for large tests that generate a lot of results data, these optimizations will make a huge difference to how snappy the “view test” page feels. And for live tests, the reporting page will also be a lot smoother. In fact, Load Impact 2.3 is a major rewrite of the underlying storage subsystem and of how data is accessed by the user interface code. More things are loaded on demand now (i.e. as/when needed), which results in a page that is much lighter on the client computer. You should now be able to view even the largest tests on the flimsiest of laptops.

Other improvements you will find in 2.3 include:

 

  • Graphical editor support for data stores, custom metrics and other new API functionality
  • Several API updates – http.page API functions, named parameters, etc.
  • You can now plot graphs of load generator CPU and memory usage during the test!
  • The URL list on the report page now displays bytes received and compression ratio
  • Content type classification now uses the Content-Type header
  • Click the pie charts to highlight different objects in the URL list on the test report page
  • Many bug fixes…

 

Parameterized data, and more

We are happy to introduce two new major features in Load Impact that many users have asked for: parameterized data (“data stores”) and custom metrics.

Parameterized data means being able to provide data in bulk using some common format – often a CSV (comma-separated) file that you upload and can then access from your load script. The typical example is when you have e.g. 10,000 login names and passwords that you want to use in your load test. Entering these by hand into your load script code is prohibitively time-consuming, but with parameterized data you just upload a text file with all the usernames and passwords and are then able to access them from inside your load script.

Custom metrics is a feature that allows you to store arbitrary result metrics during a load test. A typical use-case would be to store the load time for a certain page on your site (as opposed to just storing the load time for individual URLs/resources on the page). A more advanced use-case would be to fetch server monitoring data (via HTTP) from the web servers that are being tested, and log e.g. their CPU load along with the standard response time data collected by Load Impact. Any metrics stored with our custom metric feature will be visible in the test results interface, and can be plotted as graphs for easy correlation with the standard metrics.

Parameterized data in Load Impact

Parameterized data in Load Impact is implemented using something we call data stores. A data store is basically a two-dimensional array (a “table”) with data that can be shared by multiple clients in a load test. The usage is simple: you create a new data store in the user scenario configuration interface, assign a name to it, then upload a text file with the data you want to insert into it. The data file should be in CSV format (comma-separated values) and form a two-dimensional table, but it can contain any number of rows and columns. Once you have a data store assigned to a user scenario, your load test clients can use the data store API functions to access the data store and retrieve data from it.
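
As a rough illustration of what this looks like from the script side, here is a sketch. The data store is assumed to be named “users”, and the function names used below (datastore.open() and get_random()) are illustrative stand-ins for the actual data store API functions, which are documented in the FAQ linked below:

-- Rough sketch (Lua load script); function names are illustrative stand-ins.
local users = datastore.open("users")      -- the data store created in the UI
local row = users:get_random()             -- one row from the uploaded CSV file

-- Assuming the CSV columns were: username, password
local username, password = row[1], row[2]
http.get("http://example.com/login?user=" .. username .. "&pass=" .. password)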

Further reading: FAQ: How do I use parameterized data?

Custom metrics

Custom metrics allow you to create your own arbitrary result metrics and store sample points for them, which you can then plot in graphs just like any other measurements. Custom metrics are really simple to use – in your load script you just call the special function result.custom_metric() and supply it with one parameter defining the name of the metric – e.g. “page 1 load time” – and one parameter defining the current measurement value for that metric (a numeric value). Custom metrics can be used to plot all sorts of interesting measurement data, such as page load times, bandwidth usage for a single URL/resource, time to first byte for new TCP connections, and a multitude of other things.
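
Here is a minimal sketch of what this looks like in a load script. The call to result.custom_metric() and its two parameters are as described above; the timing helper used to produce the value (util.time() below) and the URL are assumptions for illustration:

-- Minimal sketch (Lua load script). result.custom_metric(name, value) is the
-- call described above; util.time() is assumed here as a timing helper.
local started = util.time()
http.get("http://example.com/page1")
local elapsed = util.time() - started

-- Stored samples show up in the test results and can be plotted like any other metric.
result.custom_metric("page 1 load time", elapsed)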

After 2.0 – what is next?

Post-2.0 updates and plans for 2012

We released Load Impact 2.0 at the end of October, and the reception has been really, really good. We see an increased number of user registrations and more user activity, and what is especially fun is to see that people are starting to use Load Impact for really complex load testing of a wide range of different platforms and applications. It seems all the new functionality of Load Impact 2.0 has been very well received and that people are starting to realize its potential, which is great for those of us who worked so hard in 2011 to get it out the door.

Right after release we had a number of issues with the payment system, as well as numerous small bugs that only manifested themselves in production, but overall it was a successful release without any major malfunctions. There are still small annoyances left to iron out, but we are making the service better by the day, and also adding new functionality. Here is a list of some things we have done post-release:

 

  • Support for a new load zone in South America – São Paulo, Brazil – and a new US West load zone – Portland, Oregon.
  • New chart/graph component implemented, providing even more advanced graphing capabilities (e.g. instant zoom)
  • Improved help/documentation – customer case studies, load scripting tutorial and example scripts
  • Data export functionality added (export to CSV)
  • Several problems related to the payment system have been fixed. AMEX support was added.
  • Several problems related to test startup have been fixed.
  • Credit refund logic for aborted or failed tests has been improved. You now get a partial or full refund when a test fails for some reason.
  • Anonymous tests are run from random load zones.
  • Numerous small UI bugfixes/improvements.
  • HTTP Basic Auth is now supported for automatic load script generation.
This list is by no means exhaustive. We usually update the service every week with many minor fixes and improvements, sometimes adding new features as well. For 2012 we have some much-asked-for features on the road map, such as:
  • Improved data parameterization support
    We will implement “data stores” that allow people to upload large sets of data, which will then be made available to them in load scripts. This functionality will make it a lot simpler for people who e.g. have a large list with usernames and passwords that they want the simulated clients in a load test to make use of.
  • User-defined metrics
    You will be able to create your own reporting metrics and have your load script store results values for those metrics during a test. Then you can plot graphs for these metrics along with the standard metrics in the reporting (test result) interface. An obvious use for this functionality can be to report load times for individual web pages, in case a user scenario accesses multiple pages (which is fairly common).
  • Server metrics
    This is also a power-user type of feature that allows you to import performance data from the web server(s) you are testing and plot graphs of e.g. the CPU usage on your web frontend machine, overlaid with a graph of the average response time for an HTTP transaction. Being able to import server metrics from the machines that are being stressed in the load test provides a much simpler way of correlating information in order to find out where performance bottlenecks are. Of course, we will also support importing data from database servers and other systems your site/application might depend on.
If you have any other features you think we should rather be focusing on, don’t hesitate to tell us about it!  We love feedback.
A merry Christmas and a happy new year to you all!

 

2.0 Highlights

Load Impact 2.0 was released at the end of October (the 27th). The first few days after release were pretty chaotic, with lots of minor issues and some major ones, but having been involved in many big releases during my career, I have to say that this one actually went pretty well. The system was up and functional most of the time during the first few days post-release, and that isn’t bad at all 🙂

Still, there were some difficulties, of course. We had problems first with AMEX payments due to contractual reasons (AMEX payments have been removed for now, until we manage to get through the AMEX bureaucracy) and then with VISA/MC payments. Then there were occasional problems with internal queueing systems that caused some load tests to either fail, “freeze” (get stuck in some state), or never get started. All these issues should be resolved by now, but there are likely smaller things that will pop up from time to time, so we urge everyone to get in touch with us if you see anything strange happening on the site. Don’t hesitate to get in touch even if you’re unsure whether something is a problem on our side or not – we want to know about every situation where someone has any kind of problem using our service. No issue is too small.

In general, the system is starting to get very stable now, however, and we see more activity than before the release, with more user registrations and more tests being executed. We also see more advanced usage of our service – more people are writing advanced load scripts and running both larger and more complex load tests than ever before. It is all very encouraging and tells us that we are moving in the right direction!

So what is so great about 2.0 then?

Some people may see Load Impact 2.0 as simply an upgrade, but it’s more like the launch of a whole new service. It is that much different from 1.0. We have kept some 1.0 key elements that we (and hopefully everyone else) liked such as the ability to run small, simple tests anonymously from our front page, the ability to watch other such anonymous tests that are being run, and the scripting language and scripting API, but behind the scenes most of the code base is new and 2.0 includes a lot of new functionality that didn’t exist in 1.0. Here is a small list:

  • Large-scale load tests
    As we are now using public cloud infrastructure (Amazon) to generate load test traffic, we have the ability to scale up a load test to a very large size at any of the geographic locations where there are cloud servers available (currently California, Oregon and Virginia in the US, plus Ireland, Japan and Singapore outside the US).
  • Multiple user scenarios in a single test
    In 2.0 we introduce “user scenarios”. A user scenario defines a certain simulated user category and what that category should be doing on your site. An example can be an e-shopping site that has two types of visitors – one type that just browses the site without buying anything, and another type that registers a user account on the site and then goes on to actually buy products on the site. In Load Impact 1.0 you could not easily combine these two different user categories in a single load test, but with Load Impact 2.0 it is easy – you just create two different user scenarios, that run different load scripts, then you configure your load test to use these two scenarios.
  • Multiple geographical traffic sources
    With Load Impact 2.0 you can now choose to have your traffic originate from more than one physical place, if you want. You can specify any number of combinations of user scenarios (described above) and geographical locations where that particular user scenario should be executed, and create very complex load test configurations where you define that e.g. 10% of the total number of simulated users during the load test should run user scenario X from geographical location Y.
  • More performance metrics
    We now collect more performance metrics than in 1.0, such as “requests per second”, and we collect many more sampling points that are all time-based rather than client level-based. This results in more performance data available at higher resolutions than before.
  • Much more advanced chart/graph capabilities
    We provide a very dynamic test report page where you can create your own charts and graphs, plotting a wide range of parameters and correlating data with a certain user scenario or with test results from a certain geographical region.
  • Text-based script editor

    For expert users, a text-based scripting editor is usually the best choice, and in Load Impact 2.0 we provide the option to choose between our graphical script editor (LILE) and a text editor that allows easy copy-and-paste and faster code entry for the experienced programmer. Load script programmers now have much more choice in how they create their load scripts.
  • Continuous tests
    Load tests are now executed continuously, which means that a simulated client thread is never shut down as long as the load level is meant to increase. Old simulated clients will just continue execution, reiterating their load script again and again, while more clients are being added. The result is a smoother and more time-efficient ramp-up process than was offered in Load Impact 1.0.
  • Credit based pricing model
    Load Impact 2.0 introduces the credit-based model, which means there is no difference between one user and the next as regards being a “premium” user or not. All users are the same, they just have different amounts of credits, and the ones that have more credits can run larger and longer tests than those who have few credits. This provides several advantages – first of all, it allows us to skip all the old limits on how many tests you can run per 24 hours, etc. Now, every test you run consumes credits, and only the number of credits you have affects the number of tests you can run. Secondly, it means we don’t have to restrict access to some functionality to premium users – everyone can do everything on the system, so it is easy to “try before you buy”. Thirdly, it makes our product much simpler in general, as we only sell one single thing now – credits – whereas earlier we sold access to different premium levels for different amounts of time, making everything a lot more complex. The drawback, however, is that it can be difficult for people to understand exactly how many credits they need to do the testing they want to do. All in all, though, we think the upsides of the credit model are much bigger than the downsides.
You can watch a video introduction to Load Impact 2.0 on Youtube: http://www.youtube.com/watch?v=CkGuBONAXLE
There are many exciting new features on our road map for the end of the year, and for 2012, and we really appreciate your feedback on exactly what things you would like to see in future versions of Load Impact. If there is something you think is missing that would really make a difference to you, please tell us about it!
We will continue to work hard on making Load Impact the best load testing solution in the world. We are slowly becoming the de-facto standard for online load testing, and it’s all thanks to you, our users, so we would like to extend a big thank you for your support ever since we launched back in 2009!  So keep load testing and don’t forget to try out all our new features!
  /Ragnar & the Load Impact team

Load Impact 2.0!

We’re excited to announce Load Impact 2.0!

Early spring 2011, we were sitting on a ton of ideas about how to improve Load Impact. We had lots of things on our TODO list for the next few major releases of the service, and were discussing what to focus on first and what our general development road map should look like for the rest of 2011.

We came to the conclusion that the incremental updates we had been doing so far were not the best course of action. Some of the changes we wanted to make to the service were dependent on other changes we also wanted to make, and some were hard to achieve on top of the current legacy system. Some parts of the old Load Impact we had long been wanting to remake from the ground up, and we realized that this was the time to do it: to break with the old codebase and start a new one, transferring everything we liked from the old code base but not hesitating to throw out anything we did not like.

So we embarked on that long and hard, but also fun, journey. Initially, we aimed to continue updating the old platform regularly, rolling out new features and updates to the live site while developing Load Impact 2.0 in parallel. We soon realized that this was overly ambitious, however, and decided that advanced scripting and the menu-based scripting editor that we released in April would be the last major update to the old Load Impact code base.

Then we spent most of the summer and autumn frantically developing Load Impact 2.0. Since August we have been in crunch mode, working 10-hour days, 6 days a week (which is quite a lot to us lazy and decadent Europeans) and our efforts are starting to pay off now, with the 2.0 platform getting closer and closer to being release ready. At the time of writing we are running a closed beta test, and we expect that to continue for another week or two, then we will take 1-2 weeks to finish off everything, and finally release in the second half of October.

So, what’s in it for me?  How will Load Impact 2.0 affect me?

First of all, Load Impact 2.0 is a huge upgrade from the old system. We don’t want to spoil the surprise, but it will mean a big step up functionality-wise. We expect our competitors to tear their hair out when they see it, at the very least. Introducing a lot of new features often means that you also introduce complexity, but we think we have done a pretty good job of hiding complex functionality until the user asks for it. Load Impact 2.0 should be as easy to use as (or easier than) the current system.

 

Introducing Load Impact credits

One big change that we want to announce beforehand, however, is the new pricing model we will adopt in 2.0. So far, we have been selling subscriptions to premium users, letting them buy premium access for a certain amount of time (a day, a week or a month), but we have realized there are several drawbacks to this scheme. For example, people cannot try out all the Load Impact features until they buy a premium subscription. How do they know that they will be able to do what they want to do, if they can’t try before they buy? Also, we have to have limits in place on how many tests you can run, how much data you can transfer, etc., during your subscription period; otherwise we could be hit hard if someone bought e.g. premium access for a month and then ran one test after another, continuously throughout the whole month. So we set limits, and when a user runs one test too many they are told they can’t run any more tests. Many people miss these limits, and are upset when they suddenly get denied trying to start a test.

To avoid these problems, and to get a simpler premium product, we have decided to scrap the old time-based subscriptions and instead sell Load Impact Credits. The credits are used whenever you run a load test, with a small test costing less than a large test. Just by having a registered account you will automatically receive a small amount of credits for free every month. You can use these credits to run several smaller load tests, or perhaps one medium-sized test. Per month. If your needs are more frequent or you need to run larger tests, you have to buy extra credits.

We think this system is fair and that it will allow all our amateur load testing users to continue running really small-scale load tests for free, with access to all our functionality, while the professional testers will have to pay for their testing as they often need to run more large-scale tests and sometimes more frequently also.

 

What will happen with the old system?  Will I be able to access my old test results?

When Load Impact 2.0 is released, we will transfer all users from the old system to the new. We will then also migrate all old test results, configurations etc. The new system will be backwards compatible with the old so you will not lose any data. In fact, there are some test result metrics that we collect today, but which you are not able to see in the user interface (such as how many transactions returned error codes). These metrics will be available in 2.0, even for your old test results.

As Load Impact 2.0 will contain all the functionality (and more) of the current system, we have no plans to keep the old system running in parallel with the new. When we release, you will not be able to log on to the old system anymore. The web address will still be the same as always – http://loadimpact.com – but the look and feel, and the functionality, will be different.

 

What if I have an active subscription at the time you upgrade the site – what happens to my subscription?

Existing subscribers will be given a generous supply of credits, so they will not feel they lost anything by buying a premium account just before the upgrade.

 

When is the exact date of the release?

We have to get back to you on that!  When the exact date is set, we will email all our users about it.

 

If you have any more thoughts or questions, don’t hesitate to contact us!

About Load Impact

Load Impact is the leading cloud-based load testing software trusted by over 123,000 website, mobile app and API developers worldwide.

Companies like JWT, NASDAQ, The European Space Agency and ServiceNow have used Load Impact to detect, predict, and analyze performance problems.
 
Load Impact requires no download or installation, is completely free to try, and users can start a test with just one click.
 
Test your website, app or API at loadimpact.com
