Performance Testing Tips

Why Performance Testing
  • Availability: The amount of time an application is available to the end user. Lack of availability is significant because many applications will have a substantial business cost for even a small outage. In performance testing terms, this would mean the complete inability of an end user to make effective use of the application.
  • Response Time: The amount of time it takes for the application to respond to a user request. For performance testing, one normally measures system response time, which is the time between the user’s requesting a response from the application and a complete reply arriving at the user’s workstation.
  • Throughput: The rate at which application-oriented events occur. A good example would be the number of hits on a web page within a given period of time.
  • Utilization: The percentage of the theoretical capacity of a resource that is being used. Examples include how much network bandwidth is being consumed by application traffic and the amount of memory used on a server when a thousand visitors are active.
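As a minimal sketch of how response time and throughput can be measured in practice, the snippet below times a batch of concurrent requests and derives both metrics. The `fake_request` function is a hypothetical stand-in for a real network call; in a real test you would replace it with your HTTP client code.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Hypothetical stand-in for a real HTTP call; replace with client code."""
    time.sleep(0.01)  # simulate 10 ms of server work

def measure(n_requests=50, workers=5):
    """Run n_requests concurrently and report response time and throughput."""
    start = time.perf_counter()
    latencies = []

    def timed_call():
        t0 = time.perf_counter()
        fake_request()
        latencies.append(time.perf_counter() - t0)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(n_requests):
            pool.submit(timed_call)
        # pool shutdown on exit waits for all submitted calls to finish

    elapsed = time.perf_counter() - start
    return {
        "avg_response_s": statistics.mean(latencies),   # response time
        "throughput_rps": n_requests / elapsed,         # requests per second
    }

print(measure())
```

Utilization is measured on the server side instead (e.g. via OS counters such as CPU, memory, and network bandwidth) while such a load script runs.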

Performance Testing Key Objectives
  • Test the application’s response time under user load.
  • Determine the combination of user load and system configuration at which the application gives the best performance.
  • Test reliability or stability of the system under heavy work load.
  • Analyze the point at which the performance degrades and the cause.
  • Test system stability in the production environment.
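Objectives like the first one are often expressed as a pass/fail check against a response-time target. The sketch below checks a set of measured response times against a hypothetical service-level target using a 95th-percentile rule; both the samples and the 0.5 s target are assumed values for illustration.

```python
import statistics

def p95(samples):
    """Return the approximate 95th-percentile value of a list of samples."""
    s = sorted(samples)
    return s[int(0.95 * (len(s) - 1))]

SLA_SECONDS = 0.5  # hypothetical response-time target

# Hypothetical measured response times (seconds) from a load-test run
samples = [0.12, 0.18, 0.25, 0.22, 0.31, 0.45, 0.19, 0.28, 0.35, 0.20]

print("p95:", p95(samples), "pass:", p95(samples) <= SLA_SECONDS)
```

Using a percentile rather than the mean keeps a handful of slow outliers from being hidden by many fast responses, which is the usual convention when stating response-time objectives.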

Types of Performance Testing
  • Load Testing: This is the classic performance test, where the application is loaded up to the target number of virtual users but usually no further. The aim is to meet performance targets for availability, concurrency or throughput, and response time. Load testing is the closest approximation of real application use, and it normally includes a simulation of the effects of user interaction with the application client. These include the delays and pauses experienced during data entry as well as (human) responses to information returned from the application servers.
  • Stress Testing: This has quite a different aim from a load test. A stress test causes the application or some part of the supporting infrastructure to fail. The purpose is to determine the upper limits or sizing of the infrastructure. Thus, a stress test continues until something breaks: no more users can log in, response time exceeds the value you defined as acceptable, or the application becomes unavailable. The rationale for stress testing is that if our target concurrency is 1,000 users but the infrastructure fails at only 1,005 users, then this is worth knowing because it clearly demonstrates that there is very little extra capacity available. The results of stress testing measure capacity as much as performance. It’s important to know your upper limits, particularly if future growth of application traffic is hard to predict. For example, the scenario just described would be disastrous for something like an airport air traffic control system, where downtime is not an option.
  • Volume Testing: This tests the stability of the system while it processes a very large volume of data.
  • Soak Testing: The soak test is intended to identify problems that may appear only after an extended period of time. A classic example would be a slowly developing memory leak or some unforeseen limitation in the number of times that a transaction can be executed. This sort of test cannot be carried out effectively unless appropriate server monitoring is in place. Problems of this sort will typically manifest themselves either as a gradual slowdown in response time or as a sudden loss of availability of the application. Correlation of data from the users and servers at the point of failure or perceived slowdown is vital to ensure accurate diagnosis.
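The stress-test idea above, ramping load until something breaks, can be sketched as a simple step-load loop. This is a toy model under stated assumptions: `fake_request` is hypothetical and simulates response time growing linearly with load, and the 0.1 s limit is an arbitrary acceptability threshold; a real stress test would drive an actual application with a load-testing tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(load):
    """Hypothetical request whose response time grows with concurrent load."""
    delay = 0.001 * load  # 1 ms per concurrent user, for illustration
    time.sleep(delay)
    return delay

def find_breaking_point(max_users=200, step=25, limit_s=0.1):
    """Ramp virtual users upward until average response time exceeds limit_s.

    Returns the first user count that breaches the limit, or None if the
    system stays within the limit up to max_users.
    """
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(fake_request, [users] * users))
        avg = sum(times) / len(times)
        if avg > limit_s:
            return users
    return None

print(find_breaking_point())  # → 125 under this toy model
```

The same loop structure, run at a fixed load for hours instead of an increasing load for minutes, is essentially a soak test; the difference lies in what you monitor (memory growth and gradual slowdown rather than a hard failure point).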

Performance Testing Tools
  • ANTS (Advanced .NET Testing System)
  • LoadRunner
  • SilkPerformer
  • JMeter