QA Performance Testing Basics

Performance Testing: A Simple Guide for QA Engineers


A few years ago, I worked at a company called nTrust, a FinTech business in the cryptocurrency space. They hired me to review their overall software quality processes and ensure they were delivering the best possible product. nTrust offered online wallets for customers, similar to a bank account, where users could buy, sell, and store cryptocurrency. Everything was set for a big release. But before we moved forward, I decided to run some performance testing, so I picked up an open-source tool named JMeter. What happened next really opened my eyes to the importance of performance testing under real-world conditions.

During the performance tests, we uncovered a blocker bug no one had anticipated. The system was handling individual user sessions well, but when multiple users tried to access the site simultaneously and perform transactions, the system failed to maintain proper sessions and security states. This was a critical issue for an online wallet system, one that could’ve impacted thousands of users.

This experience taught me a valuable lesson: performance testing is crucial, especially when multiple users will be accessing and interacting with the system at the same time. It’s not enough to test features in isolation; we need to test the scalability and reliability of the system under load and high concurrency to ensure it can handle real-world usage.

Software QA engineers typically focus on ensuring that individual features work as expected for users. They test tasks like logging in, searching for data, submitting forms, or completing transactions. This is usually straightforward when only a few users are interacting with the system, as the application can easily manage sessions, allocate resources, and respond quickly. However, web applications use HTTP, which is a stateless protocol. This means the server doesn’t automatically remember users between requests and relies on sessions, cookies, or tokens to maintain user identity and continuity.

The challenge arises when many users, sometimes thousands or millions, access the site at the same time. Each user generates multiple requests, and the system needs to manage individual sessions or tokens while also sharing limited resources like CPU, memory, database connections, and network bandwidth. At this scale, we must ask key questions:

  • Can the system maintain sessions for all users?
  • Are resources allocated efficiently?
  • Does response time remain acceptable?
  • Does the system break under heavy load?

So the real question is not just “Does this feature work?”

Instead, it is “How does the application behave when many users use it at the same time?”

This is where performance testing plays a critical role.


What is Performance Testing?

Performance Testing is a type of testing that checks:

  • How fast an application is
  • How stable it is under load
  • How many users it can handle at the same time

In simple words, performance testing helps us understand how the application behaves under pressure. It simulates many users on the system, gradually increases their number, and watches how the application reacts.


Why is Performance Testing Important?

Imagine this scenario:

  • A shopping website works fine with 5 users
  • But during a sale, 5,000 users log in
  • The website becomes slow or crashes

This leads to:

  • Poor user experience
  • Loss of customers
  • Loss of business

Performance testing helps prevent such problems before they reach production.


What Do We Measure in Performance Testing?

Here are the most common things we check:

Response Time

How long does the application take to respond to a request?

Example: Login should happen within 2–3 seconds

Throughput

How many requests can the system handle in a given time?

Example: 1,000 requests per minute

CPU and Memory Usage

How heavily are system resources used as the number of users increases?

Error Rate

Does the application start failing when load increases?


Types of Performance Testing

Load Testing

Tests how the system behaves with expected user traffic

Example: Testing whether a banking website can handle 10,000 users checking balances during salary day.

Stress Testing

Pushes the system beyond its limits

Example: 5,000 users when the expected limit is 2,000, deliberately increasing load beyond expected traffic

Spike Testing

Tests how the system reacts to a sudden surge in users

Example: Users jump from 100 to 2,000 suddenly when a flash sale starts

Endurance Testing

Tests if the system is stable over a long time.

Example: Running a banking transaction load continuously for 8–10 hours to check for memory leaks or slowdowns.


When Should QA Do Performance Testing?

Performance testing should be done:

  • When performance issues are reported
  • Before major releases
  • After major code changes
  • Before marketing campaigns or sales

Role of a QA Engineer in Performance Testing

As a QA Engineer, your role includes:

  • Understanding performance requirements
  • Creating realistic test scenarios
  • Running performance tests
  • Analyzing results
  • Reporting bottlenecks clearly

Common Performance Testing Concepts

Performance testing tools use many technical words. Let’s break down the most common terms you’ll hear, in plain, conversational language.

Virtual Users (VUs)

Meaning:
A virtual user is a simulated user created by a performance testing tool to behave like a real user.

Example:
Instead of asking 1,000 real people to use the website, the tool pretends to be 1,000 users. Each virtual user behaves like a real person:

  • It can log in
  • It can search for a product
  • It can click buttons
  • It can place an order
  • It can send requests
  • It can wait for responses

Ramp-Up Time

Meaning:
Users are added gradually, not all at once. This simulates real traffic growth and helps identify when performance starts to degrade.

Example:
On a sale day, an e-commerce website might see:

100 users at 9:00 AM

1,000 users at 9:05 AM

5,000 users by 9:15 AM

This gradual increase is called Ramp-Up Time. In a test, the ramp-up time controls how fast new users join the system instead of sending them all at once.

A slow ramp-up = users arrive calmly
A fast ramp-up = users rush in together
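The arithmetic behind ramp-up is simple: spread the user start times evenly across the ramp-up period. A small illustrative Python sketch (the function name is mine, not from any particular tool):

```python
def ramp_up_schedule(num_users, ramp_up_seconds):
    """Return the start time (seconds from test start) for each virtual
    user, spread evenly across the ramp-up period."""
    interval = ramp_up_seconds / num_users
    return [round(i * interval, 3) for i in range(num_users)]

# 10 users over 5 seconds -> one new user every 0.5 seconds
print(ramp_up_schedule(10, 5))
```

A longer ramp-up period means a larger interval between arrivals (a calm trickle); a shorter one packs arrivals together (a rush).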

Load

Meaning:
Number of users using the system. Basically, the amount of traffic on the system.

Example:
Load simply means how many users are using the app right now, for example, 2,000 users checking their bank balance at the same time. For your own site, you might define it like this:

100 users = low load
5,000 users = high load

Response Time

Meaning:
Time taken to get a response from the system after an action

Example:
Response time is how long you wait after clicking a button, such as how long the “Account Summary” page takes to load after clicking it.

For example:

After clicking the login button, if login takes 1 second, that is probably good.

If login takes 10 seconds, that is probably bad.
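Measuring response time is just timing the action from request sent to response received. A rough Python sketch with a stubbed-out login (a real measurement would wrap an actual HTTP call; `fake_login` is invented for the demo):

```python
import time

def fake_login():
    """Stand-in for a real login request."""
    time.sleep(0.05)
    return "OK"

def measure_response_time(action):
    """Time a single user action and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    return result, elapsed

result, seconds = measure_response_time(fake_login)
print(f"Login returned {result} in {seconds:.2f}s")
```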

Throughput

Meaning:
Number of requests handled per unit time

Example:
Throughput answers the question: how many requests can the system handle in a given time? For example, a payment system processing 500 transactions per minute during peak hours.

Other examples:

500 searches per minute

2,000 logins per second

Higher throughput = better capacity
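The calculation itself is just completed requests divided by elapsed time. A tiny illustrative Python helper:

```python
def throughput(requests_completed, duration_seconds):
    """Requests handled per second; multiply by 60 for per-minute."""
    return requests_completed / duration_seconds

per_second = throughput(500, 60)  # 500 transactions completed in one minute
print(f"{per_second:.2f} requests/second, {per_second * 60:.0f} per minute")
```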

Latency

Meaning:
Delay before the response starts.

Example:
Latency is the waiting time before the system even begins to respond. The pause between clicking “Pay Now” and the system starting to process the payment.

Think of calling customer care:

You dial, the phone rings, then someone answers.

That ringing time = latency
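One way to see the difference: latency is measured up to the first byte of the response, while response time runs to the last byte. A small Python sketch with made-up timestamps:

```python
def split_timings(first_byte_at, completed_at, sent_at=0.0):
    """Given timestamps in seconds for when the request was sent, when the
    first byte of the response arrived, and when the response completed,
    return (latency, response_time)."""
    latency = first_byte_at - sent_at          # the "ringing" time
    response_time = completed_at - sent_at     # the full wait
    return latency, response_time

# Request sent at t=0, first byte at t=0.3, last byte at t=1.2
latency, response_time = split_timings(0.3, 1.2)
print(f"latency={latency}s, response_time={response_time}s")
```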

Think Time

Meaning:
Pause between user actions.

Example:
Real users don’t click nonstop. They read, think, then click.

Think time adds small pauses so virtual users behave more like humans. You don’t add a product to the cart instantly; you browse it first. A customer reads product details for a few seconds before clicking “Add to Cart”.
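In a script, think time is usually just a randomized pause between actions. An illustrative Python sketch (load tools provide timers for this; the tiny values in the demo only keep it fast):

```python
import random
import time

def think(min_seconds, max_seconds):
    """Pause a virtual user for a random, human-like interval between
    actions and return how long it waited."""
    pause = random.uniform(min_seconds, max_seconds)
    time.sleep(pause)
    return pause

# Between "view product" and "add to cart":
waited = think(0.01, 0.02)  # tiny bounds just for the demo
print(f"user thought for {waited:.3f}s")
```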

Error Rate

Meaning:
Percentage of failed requests.

Example:
Error rate tells us: How many user actions failed under load? Out of 1,000 payment attempts, 30 fail due to timeouts – this results in a 3% error rate.

Another example:

1,000 requests sent, 50 failed

Error rate = 5%
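The calculation is one line. A minimal Python helper for illustration:

```python
def error_rate(total_requests, failed_requests):
    """Percentage of requests that failed."""
    return failed_requests / total_requests * 100

print(error_rate(1000, 50))  # 5.0
```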

Bottleneck

Meaning:
The slowest part of the system limiting performance

Example:
One weak part slowing everything down. A slow database query causes the “Order History” page to load slowly even though the UI and APIs are fast.

Common examples:

  • Slow database
  • Slow API
  • Limited server memory

Even if everything else is fast, one bottleneck can affect the whole system.

Users → Web Server → API Layer → Slow Database

The database limits the entire system’s performance.

Baseline

Meaning:
Normal performance reference.

Example:
How the system behaves on a normal day.

Later, we compare:

  • Before vs after changes
  • Low load vs high load

Peak Load

Meaning:
Maximum expected traffic.

Example:
Busiest time of usage:

  • Sale day
  • Product launch
  • Festival traffic

Performance testing ensures the system survives peak load.

Test Duration

Meaning:
How long the test runs.

Example:
Some problems appear only after time passes.

Machine Example:
A fan works fine for 5 minutes but overheats after 5 hours.

Warm-Up Period

Meaning:
System needs time to get ready before real testing.

Example:
Systems take time to settle, just as a car engine warms up before smooth driving.

Warm-up allows:

  • Caches to load
  • Connections to stabilize

We don’t judge performance during warm-up.

Scalability

Meaning:
Ability to handle growth.

Example:
If the number of users increases, can the system keep up? A scalable system grows smoothly without breaking.

Saturation Point

Meaning:
Maximum limit of the system.

Example:
The saturation point is where adding more users only makes things worse. This is the system’s breaking point:

  • Adding more users doesn’t increase performance
  • Errors and delays increase

Distributed Performance Testing

Meaning:
Running performance tests from different geographical locations.

Example:

In real life, users do not come from just one place.

Some users may access the application from: India, USA, Europe, Australia

Distributed performance testing means:
Virtual users are created from different regions of the world, not just one location.

Distributed testing helps answer:

  • Is the app slow for users in other countries?
  • Do some regions face more errors?
  • Is a CDN (Content Delivery Network) needed?

Why We Need Performance Testing Tools

In an ideal world, we would invite thousands or even millions of real users to use the application at the same time and observe how the system behaves. However, this is neither practical nor repeatable. Coordinating real people, ensuring they perform consistent actions, and repeating the same test multiple times is almost impossible. This is where performance testing tools, such as JMeter, come into the picture. Instead of real humans, these tools create virtual users that simulate real human behavior by:

  • Sending requests to the application
  • Maintaining sessions or authentication tokens
  • Following defined user actions such as login, search, or payment

Using a tool, QA engineers can simulate hundreds, thousands, or even millions of concurrent users in a controlled and repeatable way. This allows teams to understand how the system behaves under load, identify breaking points, and fix performance issues before the application reaches real users in production.


Tools Commonly Used for Performance Testing

Some popular tools QA engineers use:

  • JMeter – Open-source, widely used
  • LoadRunner – Enterprise-level tool
  • Gatling – Developer-friendly tool
  • k6 – Modern and script-based

For beginners, JMeter is a great starting point.


What Is JMeter?

JMeter is a performance testing tool that pretends to be many users using your application at the same time.

That’s it.

  • Creates virtual users
  • Sends requests like a real browser
  • Measures how fast or slow the system responds

After test execution, analyze results carefully.


Basic JMeter Building Blocks

Let’s break JMeter into easy pieces.

Test Plan

What it is:
The main container.

Example:
A project file that holds everything related to your test.

Thread Group

What it is:
Controls users and timing.

Example: It tells JMeter:

  • How many users to create
  • How fast users start
  • How long the test runs

Think of it as: A crowd entering a shopping mall.

Number of Threads (Users)

What it is:
How many virtual users you want.

Example: 100 threads = 100 users

Ramp-Up Period

What it is: How gradually users enter the system.

Example:

  • 100 users in 100 seconds
  • One new user every second

This makes traffic realistic.

Sampler

What it is:
The action users perform.

Example:
A sampler is what the user does.

  • Open login page
  • Search product
  • Submit order

HTTP Request Sampler (Most Common)

What it is: This sends a request to your application.

Example:

  • Login API
  • Search API

Listeners

What they are:
Result viewers.

Example: Listeners show:

  • Response time
  • Errors
  • Graphs
  • Reports

Listeners do not send traffic. They only show results.


Do QA Engineers Need Scripting?

Short answer: No (for basics).

Most beginner tests require:

  • Copy-paste URLs
  • Fill request parameters
  • Set users and ramp-up

Scripting helps in advanced cases, but the basics are enough to get started.


A Basic JMeter Test

This section shows how a non-technical QA can create a basic JMeter test.

Step 1: Open JMeter

Just launch JMeter. You’ll see a blank screen with Test Plan.

Step 2: Add a Thread Group

Right-click Test Plan, click Add, choose Threads (Users), then Thread Group.

This decides how many users will test your app.

Set:

  • Number of Threads (Users): 10
  • Ramp-Up Period: 5
  • Loop Count: 1
    Meaning: 10 users will start gradually, two new users per second, and each user performs the actions once.

Step 3: Add HTTP Request

Right-click Thread Group, click Add, choose Sampler, then HTTP Request.

This is where you tell JMeter: Which page or API should users hit?

Fill in:

  • Server Name: your app URL, for example, www.google.com
  • Method: POST / GET
  • Path: /

This represents a user action, such as clicking the Login button.

Step 4: Add a Listener

Right-click Thread Group, click Add, choose Listener, then View Results Tree.

You may also choose Summary Report.

This will show:

  • Response time
  • Errors
  • Success rate

Step 5: Run the Test

Start the test by clicking the green Play button.

JMeter will:

  • Create users
  • Send requests
  • Collect results

You just ran your first performance test!


How to Know If Test Passed or Failed?

Check the run results in View Results Tree:

  • Are response times acceptable?
  • Are there errors?
  • Did the system slow down?

That’s it. No complexity.
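If you are curious what a Summary Report computes from the raw samples, here is a rough Python sketch using made-up sample data (each sample is a response time plus a pass/fail flag):

```python
def summary_report(samples):
    """Aggregate raw samples the way a summary listener does.
    Each sample is a tuple of (elapsed_seconds, success_boolean)."""
    times = [t for t, _ in samples]
    failures = sum(1 for _, ok in samples if not ok)
    return {
        "samples": len(samples),
        "average": sum(times) / len(times),
        "min": min(times),
        "max": max(times),
        "error_pct": failures / len(samples) * 100,
    }

samples = [(0.8, True), (1.2, True), (2.5, True), (4.0, False)]
print(summary_report(samples))
```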

Understanding Results

A good performance report clearly explains:

  1. What was tested
  2. How many users were simulated
  3. What happened under load
  4. Risks identified
  5. Recommendations

Example: When concurrent users exceeded 1,500, checkout response time increased significantly and errors were observed. This may impact sales during peak traffic.

Analyze Results

  • Login response time increases after 6,000 users
  • Transfer API starts failing at 7,500 users
  • Database connection pool becomes a bottleneck

Key Metrics From Listeners

  • Response Time – How long each request took
    Example: Login page took 2.5 seconds on average.
  • Throughput – Requests processed per second or minute
    Example: The payment API handled 500 transactions per minute.
  • Error Rate – Percentage of failed requests
    Example: 3% of fund transfer requests failed during peak load.
  • Latency – Delay before a response starts
    Example: Users experienced 0.5-second delay before the transaction started processing.
  • Apdex / Satisfaction Score – Optional measure of user experience
    Example: 90% of requests completed within acceptable time thresholds.
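For the curious, the standard Apdex formula counts "satisfied" responses (at or under a threshold T) at full weight, "tolerating" responses (under 4T) at half weight, and "frustrated" responses not at all. A small Python sketch with invented timings:

```python
def apdex(response_times, threshold):
    """Apdex score in [0, 1]: satisfied (<= T) count fully,
    tolerating (<= 4T) count half, frustrated count zero."""
    satisfied = sum(1 for t in response_times if t <= threshold)
    tolerating = sum(1 for t in response_times
                     if threshold < t <= 4 * threshold)
    return (satisfied + tolerating / 2) / len(response_times)

times = [0.8, 1.2, 2.5, 6.0, 9.0]   # seconds, made up for the demo
print(apdex(times, threshold=2.0))  # 0.6
```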

How to Interpret the Results

  1. Look for trends, not single numbers:
    • Are response times increasing as users increase?
    • Is the error rate spiking at a certain load?
  2. Identify bottlenecks:
    • Database slow?
    • API failure?
    • Web server overloaded?
  3. Compare against expectations:
    • SLA (Service Level Agreement) – e.g., 95% of users should get response <2 seconds
    • Business requirements – peak sales hours or banking transaction limits
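Checking an SLA like "95% of responses under 2 seconds" usually means computing a percentile over the measured response times. A rough nearest-rank sketch in Python, using invented sample data:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    pct percent of all samples are less than or equal to it."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

response_times = [0.9, 1.1, 1.3, 1.4, 1.6, 1.8, 2.1, 2.4, 3.0, 4.5]
p95 = percentile(response_times, 95)
print(f"p95 = {p95}s; SLA of <2s is {'met' if p95 < 2 else 'missed'}")
```

Percentiles matter here because an average can look healthy while the slowest 5% of users have a terrible experience.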

Reporting to Product Team or Management

  • Keep it simple: Use plain language, not just technical metrics.
    Example: When concurrent users exceeded 6,500, fund transfers slowed down and errors appeared. This may affect peak-hour banking transactions.
  • Highlight impact: Focus on user experience and business risks.
    Example: Checkout delays could reduce sales during promotions.
  • Provide actionable recommendations:
    • Increase DB connection pool
    • Optimize slow APIs
    • Retest after fixes
  • Visuals help: Include graphs of response time, throughput, and error rate for clarity.

Without interpreting and reporting test results, performance testing loses its value. Tools generate numbers, but QA engineers translate those numbers into actionable insights for product and management teams.


Scope Note

The concepts discussed in this blog are primarily focused on web application performance testing, where communication happens over HTTP/HTTPS and user continuity is maintained using sessions, cookies, or authentication tokens.

While many principles also apply to mobile apps or WebView-based applications, performance testing for those platforms involves additional considerations such as device constraints and OS behavior, which are outside the scope of this discussion.


About the Author

The author is a Software Engineer focused on product quality, with experience in accessibility, performance, security, and functional testing of web and mobile applications, both manual and automated. Passionate about performance testing and quality best practices, he helps teams ensure applications are stable, scalable, and user-friendly, and writes technical guides to mentor engineers in building high-quality software.