
Open-source Load Testing
Introduction
Load testing is crucial for ensuring your applications can handle expected load volumes. In this guide, we'll set up a complete load testing environment using k6 for testing, Prometheus for metrics collection, and Grafana for visualization, all orchestrated with Docker.
Although there are paid versions of these products, this guide will focus exclusively on a basic setup with their open-source Docker images.
Prerequisites
- Docker and Docker Compose installed
- Basic understanding of load testing concepts
- Familiarity with Docker
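You can confirm the tooling is in place before continuing (the second command depends on whether you use the standalone docker-compose binary or the newer Compose plugin):
docker --version
docker-compose --version   # or: docker compose version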
Architecture Overview
Our setup consists of four main components:
- k6: An open-source load testing tool that enables you to write test scripts in JavaScript to simulate real user traffic, measure application performance, and export detailed metrics for analysis.
- Sample API: A simple Node.js API that serves as the target of our load test
- Prometheus: An open-source monitoring and alerting toolkit that collects, stores, and queries time-series metrics from k6 and other sources, making them available for analysis and visualization.
- Grafana: An open-source analytics and visualization platform that lets you create interactive dashboards and graphs from a wide variety of data sources—including Prometheus, InfluxDB, Elasticsearch, MySQL, PostgreSQL, and many others.
These components run as four Docker containers. Here's how they interact:
Data flow:
- Load generation: our k6 script sends HTTP requests to the sample API to simulate user traffic
- Metrics export: as the test runs, k6 exports its performance metrics to Prometheus via remote write
- Data query: Grafana uses PromQL to query Prometheus for those metrics (an example query appears below)
All components run within the same Docker network, enabling seamless communication between services.
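As a concrete example of that last step, a Grafana panel might run PromQL expressions such as the following. The k6_ metric names assume the default naming of k6's Prometheus remote-write output; you can confirm the exact names in the Prometheus UI once the stack is running:
rate(k6_http_reqs_total[1m])   # request throughput generated by k6
k6_http_req_duration_p95       # 95th-percentile request duration (exported as a trend stat)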
Project Structure
k6-prometheus-grafana/
├── docker-compose.yml
├── prometheus/
│   └── prometheus.yml
├── grafana/
│   └── dashboards/
│       └── k6-dashboard.json
├── k6/
│   └── script.js
└── sample-api/
    ├── Dockerfile
    └── server.js
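To scaffold this layout up front, you can create the directories in one go (assuming a bash-compatible shell):
mkdir -p k6-prometheus-grafana/{prometheus,grafana/dashboards,k6,sample-api}
cd k6-prometheus-grafana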
Step 1: Create the Sample API
First, let's create a simple Node.js API to test against:
sample-api/server.js
const express = require('express');
const app = express();

app.get('/health', (req, res) => {
  res.json({ status: 'healthy', timestamp: new Date().toISOString() });
});

app.get('/api/users/:id', (req, res) => {
  const { id } = req.params;
  // Simulate some processing delay
  setTimeout(() => {
    res.json({ id, name: `User ${id}`, timestamp: new Date().toISOString() });
  }, Math.random() * 100);
});

app.post('/api/users', (req, res) => {
  // Simulate user creation
  setTimeout(() => {
    res.status(201).json({
      id: Math.floor(Math.random() * 1000),
      message: 'User created successfully'
    });
  }, Math.random() * 200);
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
And add a Dockerfile that builds and starts the app:
sample-api/Dockerfile
FROM node:16-alpine
WORKDIR /app
RUN npm init -y && npm install express
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Step 2: Create k6 Test Script
This JavaScript test script defines how k6 will interact with our sample API during the load test.
k6/script.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Counter, Trend } from 'k6/metrics';

// Custom metrics - these allow us to track specific aspects of our test
export const errorRate = new Rate('errors'); // Tracks percentage of errors
export const myCounter = new Counter('my_counter'); // Simple incrementing counter
export const responseTime = new Trend('response_time'); // Tracks response time distribution

export const options = {
  stages: [
    { duration: '30s', target: 5 },  // Ramp up to 5 virtual users over 30 seconds
    { duration: '90s', target: 20 }, // Ramp from 5 to 20 virtual users over 90 seconds
    { duration: '3m', target: 20 },  // Stay at 20 virtual users for 3 minutes
    { duration: '30s', target: 0 },  // Gradually ramp down to 0 over 30 seconds
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must complete in under 500ms for the test to pass
    http_req_failed: ['rate<0.1'],    // Test fails if more than 10% of requests fail
  },
};

export default function () {
  const baseUrl = 'http://sample-api:3000';

  // Test GET endpoint - fetches a random user
  let getResponse = http.get(`${baseUrl}/api/users/${Math.floor(Math.random() * 100)}`);
  check(getResponse, {
    'GET status is 200': (r) => r.status === 200,
    'GET response time < 500ms': (r) => r.timings.duration < 500,
  });

  // Track custom metrics for this request
  errorRate.add(getResponse.status !== 200);
  responseTime.add(getResponse.timings.duration);
  myCounter.add(1);

  sleep(1); // Pause for 1 second between requests

  // Test POST endpoint - creates a new user
  let postResponse = http.post(`${baseUrl}/api/users`, JSON.stringify({
    name: `TestUser_${Date.now()}`,
    email: `test_${Date.now()}@example.com`
  }), {
    headers: { 'Content-Type': 'application/json' },
  });

  check(postResponse, {
    'POST status is 201': (r) => r.status === 201,
    'POST response time < 1000ms': (r) => r.timings.duration < 1000,
  });

  errorRate.add(postResponse.status !== 201);
  myCounter.add(1);

  sleep(1);
}
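The thresholds above gate the test on k6's built-in metrics. If you also want the custom metrics to fail the test, k6 thresholds can reference them by name; an optional variation of the options block might look like this (the limits are illustrative):
export const options = {
  stages: [ /* same stages as above */ ],
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.1'],
    errors: ['rate<0.1'],          // custom Rate metric defined above
    response_time: ['p(95)<400'],  // custom Trend metric defined above
  },
};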
Step 3: Configure Prometheus
Prometheus is an open-source monitoring and alerting toolkit that collects and stores time-series metrics. The configuration below sets Prometheus up to scrape its own metrics and defines a scrape job pointed at the k6 container; keep in mind that in this setup the k6 test results themselves arrive via Prometheus remote write, which we enable in the docker-compose file below.
prometheus/prometheus.yml
global:
  scrape_interval: 15s # How frequently to scrape targets by default
  evaluation_interval: 15s # How frequently to evaluate rules

scrape_configs:
  - job_name: 'prometheus' # Self-monitoring configuration
    static_configs:
      - targets: ['localhost:9090'] # Prometheus's own metrics endpoint

  - job_name: 'k6' # Configuration to scrape k6 metrics
    static_configs:
      - targets: ['k6:6565'] # k6's metrics endpoint (using Docker service name)
    scrape_interval: 5s # More frequent scraping for k6 during tests
    metrics_path: /metrics # Path where metrics are exposed
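Before starting the stack, you can optionally validate this file with promtool, which ships inside the prom/prometheus image. A quick sketch, assuming you run it from the project root:
docker run --rm \
  -v "$(pwd)/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml" \
  --entrypoint promtool \
  prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml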
Once Prometheus is collecting metrics, we'll be able to query this data directly or visualize it through Grafana in the next steps.
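For example, while a test is running you can query Prometheus from the host through its HTTP API (the k6_http_reqs_total series name assumes the default naming of k6's Prometheus remote-write output; check http://localhost:9090/graph for the exact names in your setup):
curl 'http://localhost:9090/api/v1/query?query=k6_http_reqs_total'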
Step 4: Grafana Dashboard Configuration
Create a dashboard provisioning file for automatic setup:
grafana/dashboards/dashboard.yml
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    editable: true
    options:
      path: /etc/grafana/provisioning/dashboards
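Grafana can provision data sources the same way. This is optional (Step 7 below adds the Prometheus data source manually through the UI), but if you prefer full automation, a sketch of a provisioning file would look like the following; the grafana/datasources/datasource.yml path is hypothetical and would also need to be mounted to /etc/grafana/provisioning/datasources in docker-compose:
grafana/datasources/datasource.yml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true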
Step 5: Docker Compose Configuration
docker-compose.yml
services:
  # Sample API service to be load tested by k6
  sample-api:
    build: ./sample-api
    ports:
      - "3000:3000" # Exposes API on localhost:3000
    networks:
      - k6-net

  # Prometheus for metrics collection
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090" # Prometheus UI available at localhost:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml # Custom config
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle' # Allows config reloads without restart
      - '--web.enable-remote-write-receiver' # Enables remote write endpoint for k6
    networks:
      - k6-net

  # Grafana for dashboarding and visualization
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000" # Grafana UI available at localhost:3001
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin # Default admin password
    volumes:
      - grafana-storage:/var/lib/grafana # Persistent storage for Grafana data
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards # Pre-provisioned dashboards
    networks:
      - k6-net
    depends_on:
      - prometheus # Waits for Prometheus to be ready

  # k6 load testing tool with Prometheus remote write output
  k6:
    image: grafana/k6:latest
    container_name: k6
    ports:
      - "6565:6565"
    environment:
      - K6_PROMETHEUS_RW_SERVER_URL=http://prometheus:9090/api/v1/write # Prometheus remote write endpoint
      - K6_PROMETHEUS_RW_TREND_STATS=p(95),p(99),min,max # Custom trend stats
    volumes:
      - ./k6:/scripts # Mounts local k6 scripts
    command: run --out experimental-prometheus-rw /scripts/script.js # Runs the main k6 script
    networks:
      - k6-net
    depends_on:
      - sample-api
      - prometheus

volumes:
  grafana-storage: # Named volume for Grafana data

networks:
  k6-net:
    driver: bridge # Isolated network for all services
Step 6: Start the stack
- Start all services:
docker-compose up -d
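Before moving on, it's worth a quick sanity check that the containers are up and reachable from the host (assuming curl is available on your machine):
docker-compose ps
curl http://localhost:3000/health   # Sample API health check
curl http://localhost:9090/-/ready  # Prometheus readiness endpoint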
Step 7: Set up a pre-built k6 dashboard
- Access Grafana: Navigate to http://localhost:3001
- Login: Use admin/admin (you'll be prompted to change the password)
- Add Prometheus Data Source First:
- Go to Configuration → Data Sources
- Click "Add data source"
- Select "Prometheus"
- Set URL to:
http://prometheus:9090
- Click "Save & Test"
- Import K6 Dashboard:
- Click the "+" icon in the left sidebar
- Select "Import"
- Use one of these dashboard IDs for Prometheus:
- 19665 - K6 Prometheus (recommended)
- 10660 - K6 Load Testing Results (Prometheus)
- 19634 - K6 Performance Test Dashboard
- Click "Load"
- Select your Prometheus data source
- Click "Import"
Step 8: Run the load test
- Run the k6 test:
docker-compose run --rm k6 run --out experimental-prometheus-rw /scripts/script.js
As the test runs, k6 will send API requests to the sample API, and metrics will be collected and sent to Prometheus. You can monitor the test progress in the terminal.
Step 9: Monitor your test run in Grafana
The Grafana UI is available at http://localhost:3001. Select your dashboard from the left-hand navigation and you can monitor your test in real time as it runs.
Cleanup
Stop and remove all containers and volumes:
docker-compose down -v
Conclusion
The point of this post was simply to raise awareness of the open-source options available to you as you consider k6 for load testing. I skimmed over a lot of detail about k6, Prometheus, and Grafana, and I will likely fill in more of it in future posts. Until then, this setup provides a complete observability stack for k6 load testing.
The Docker-based approach ensures consistency across environments and makes it easy to integrate into CI/CD pipelines. And FYI, you can find all the code from this blog post here.
Thanks for reading and let me know if you have any questions or suggestions for future posts!