
Commit 996362e

Add benchmarking folder with common config set ups

1 parent 831a919 commit 996362e

File tree

5 files changed: +808 −15 lines changed

benchmarking/README.md

Lines changed: 102 additions & 0 deletions

# Benchmarking Helm Chart

This Helm chart deploys the `inference-perf` benchmarking tool. This guide walks you through deploying a basic benchmarking job. The default configuration uses the `shareGPT` dataset.

## Prerequisites

Before you begin, ensure you have the following:

* **Helm 3+**: [Installation Guide](https://helm.sh/docs/intro/install/)
* **Kubernetes Cluster**: Access to a Kubernetes cluster.
* **Gateway Deployed**: Your inference server/gateway must be deployed and accessible within the cluster.

**Hugging Face Token Secret**

The benchmark requires a Hugging Face token to pull tokenizers. Create a Kubernetes Secret named `hf-token` (or a custom name you provide) in your target namespace, containing your Hugging Face token.

To create this secret:

```bash
export _HF_TOKEN='<YOUR_HF_TOKEN>'
kubectl create secret generic hf-token --from-literal=token=$_HF_TOKEN
```
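
If you want to sanity-check the Secret before running the benchmark, a quick look with standard `kubectl` (this only inspects the Secret; nothing here is chart-specific):

```bash
# Confirm the Secret exists and the `token` key decodes to your HF token
kubectl get secret hf-token -o jsonpath='{.data.token}' | base64 --decode
```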

## Deployment

To deploy the benchmarking chart:

```bash
export IP='<YOUR_IP>'
export PORT='<YOUR_PORT>'
helm install benchmark -f benchmark-values.yaml \
    --set hfTokenSecret.name=hf-token \
    --set hfTokenSecret.key=token \
    --set "config.server.base_url=http://${IP}:${PORT}" \
    oci://quay.io/inference-perf/charts/inference-perf:latest
```
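
Once installed, you can confirm the release and follow the benchmark run with generic Helm/kubectl commands; the exact Job name depends on the chart's templates, so treat `<job-name>` as a placeholder:

```bash
helm status benchmark            # confirm the release deployed
kubectl get jobs,pods            # find the benchmark Job and its Pod
kubectl logs job/<job-name> -f   # follow benchmark output
```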

**Parameters to customize:**

* `benchmark`: A unique release name for this deployment.
* `hfTokenSecret.name`: The name of the Kubernetes Secret containing your Hugging Face token (default: `hf-token`).
* `hfTokenSecret.key`: The key in that Secret that holds the token (default: `token`).
* `config.server.base_url`: The base URL (IP and port) of your inference server.

For the full set of configuration options, refer to the inference-perf [configuration guide](https://github.com/kubernetes-sigs/inference-perf/blob/main/docs/config.md). Instead of repeated `--set` flags, overrides can also live in a values file, as sketched below.
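
A minimal override file (the file name `my-values.yaml` and the IP are placeholders; the keys mirror `benchmark-values.yaml` in this folder):

```yaml
# my-values.yaml -- merged on top of the chart defaults
hfTokenSecret:
  name: hf-token
  key: token
config:
  server:
    base_url: "http://10.0.0.1:8000"  # your gateway IP and port
```

Apply it with `helm install benchmark -f benchmark-values.yaml -f my-values.yaml oci://quay.io/inference-perf/charts/inference-perf:latest`; later `-f` files take precedence over earlier ones.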

### Storage Parameters

#### 1. Local Storage (Default)

By default, reports are saved to the Pod's local filesystem and are **lost when the Pod terminates** (see the copy-out sketch below).

```yaml
storage:
  local_storage:
    path: "reports-{timestamp}"   # Local directory path
    report_file_prefix: null      # Optional filename prefix
```
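
If you need to keep a local report, one option is to copy it out while the Pod is still running; a minimal sketch, assuming a Pod named `benchmark-xxxxx` and the default `reports-{timestamp}` path above (both placeholders):

```bash
# Copy the report directory out of the running benchmark Pod
kubectl cp benchmark-xxxxx:reports-<timestamp> ./reports
```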

#### 2. Google Cloud Storage (GCS)

Use the `google_cloud_storage` block to save reports to a GCS bucket.

```yaml
storage:
  google_cloud_storage:               # Optional GCS configuration
    bucket_name: "your-bucket-name"   # Required GCS bucket
    path: "reports-{timestamp}"       # Optional path prefix
    report_file_prefix: null          # Optional filename prefix
```

##### 🚨 GCS Permissions Checklist (Required for Write Access)

1. **IAM Role (Service Account):** Bound to the target bucket (see the `gcloud` sketch after this list).
   * **Minimum:** **Storage Object Creator** (`roles/storage.objectCreator`)
   * **Full:** **Storage Object Admin** (`roles/storage.objectAdmin`)
2. **Node Access Scope (GKE Node Pool):** Set during node pool creation.
   * **Required Scope:** `devstorage.read_write` or `cloud-platform`
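
For the IAM binding in step 1, a sketch with `gcloud` (bucket name and service-account email are placeholders; pick the role from the checklist above):

```bash
# Grant the service account write access to the results bucket
gcloud storage buckets add-iam-policy-binding gs://your-bucket-name \
  --member="serviceAccount:YOUR_SA@YOUR_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectCreator"
```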

#### 3. Simple Storage Service (S3)

Use the `simple_storage_service` block for S3-compatible storage. This requires appropriate AWS credentials configured in the runtime environment (one option is sketched below).

```yaml
storage:
  simple_storage_service:
    bucket_name: "your-bucket-name"   # Required S3 bucket
    path: "reports-{timestamp}"       # Optional path prefix
    report_file_prefix: null          # Optional filename prefix
```
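
One common pattern is to keep those credentials in a Secret and expose them as environment variables; whether this chart wires such a Secret into the Job is an assumption to verify against its templates:

```bash
# Hypothetical: store AWS credentials as a Secret in the benchmark namespace
kubectl create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID='<YOUR_KEY_ID>' \
  --from-literal=AWS_SECRET_ACCESS_KEY='<YOUR_SECRET>'
```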

## Uninstalling the Chart

To uninstall the deployed chart (using the `benchmark` release name from the install step):

```bash
helm uninstall benchmark
```

benchmarking/benchmark-values.yaml

Lines changed: 50 additions & 0 deletions

```yaml
# High-Cache Configuration
job:
  image: "quay.io/inference-perf/inference-perf:latest"
  memory: "8G"

logLevel: INFO

hfTokenSecret:
  name: hf-token
  key: token

config:
  load:
    type: constant
    interval: 15
    stages:
      - rate: 10
        duration: 20
      - rate: 20
        duration: 20
      - rate: 30
        duration: 20
  api:
    type: completion
    streaming: true
  server:
    type: vllm
    model_name: meta-llama/Llama-3.1-8B-Instruct
    base_url: http://0.0.0.0:8000
    ignore_eos: true
  tokenizer:
    pretrained_model_name_or_path: meta-llama/Llama-3.1-8B-Instruct
  data:
    type: shareGPT
  storage:
    google_cloud_storage:
      bucket_name: "inference-perf-results"
      report_file_prefix: benchmark
  metrics:
    type: prometheus
    prometheus:
      google_managed: true
  report:
    request_lifecycle:
      summary: true
      per_stage: true
      per_request: true
    prometheus:
      summary: true
      per_stage: true
```
