Conversation

royyhuang (Collaborator) commented May 8, 2025

This PR creates two CRDs, VLLMRuntime and VLLMRouter, along with their corresponding controllers to dynamically manage the configuration for the production stack. VLLMRuntime manages all configuration related to the vLLM engine as well as LMCache. VLLMRouter is responsible for managing router-related configuration.

Note: This PR is an initial step to support CRDs for the production stack and currently does not support a remote LMCache server (i.e., it only supports CPU offloading at this time). I plan to add another CRD, LMCacheServer, to manage the configuration for a remote cache server required by LMCache.

Attached is an example manifest for each of the CRDs.

apiVersion: serving.vllm.ai/v1alpha1
kind: VLLMRouter
metadata:
  labels:
    app.kubernetes.io/name: production-stack
    app.kubernetes.io/managed-by: kustomize
  name: vllmrouter-sample
spec:
  # Enable the router deployment
  enableRouter: true

  # Number of router replicas
  replicas: 1

  # Service discovery method (k8s or static)
  serviceDiscovery: k8s

  # Routing strategy (roundrobin or session)
  routingLogic: roundrobin

  # Engine statistics collection interval
  engineScrapeInterval: "30"

  # Request statistics window
  requestStatsWindow: "60"

  # Container port for the router service
  port: 80

  # Service account name
  serviceAccountName: vllmrouter-sa

  # Image configuration
  image:
    registry: docker.io
    name: lmcache/lmstack-router
    pullPolicy: IfNotPresent

  # Resource requirements
  resources:
    cpu: "2"
    memory: "8Gi"

  # Environment variables
  env:
    - name: LOG_LEVEL
      value: "info"
    - name: METRICS_ENABLED
      value: "true"

  # Node selector for pod scheduling
  nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/os
          operator: In
          values:
            - linux
---
apiVersion: serving.vllm.ai/v1alpha1
kind: VLLMRuntime
metadata:
  labels:
    app.kubernetes.io/name: production-stack
    app.kubernetes.io/managed-by: kustomize
  name: vllmruntime-sample
spec:

  # vLLM specific configurations
  enableChunkedPrefill: false
  enablePrefixCaching: false
  tensorParallelSize: 1
  gpuMemoryUtilization: "0.8"
  maxLoras: 4
  extraArgs: ["--disable-log-requests"]
  v1: false

  # LM Cache configuration
  lmCacheConfig:
    enabled: true
    cpuOffloadingBufferSize: "4Gi"
    diskOffloadingBufferSize: "8Gi"
    remoteUrl: ""
    remoteSerde: ""

  # Model configuration
  model:
    modelURL: "meta-llama/Llama-3.1-8B"
    enableLoRA: false
    enableTool: false
    toolCallParser: ""
    maxModelLen: 4096
    dtype: "bfloat16"
    maxNumSeqs: 32

  # Environment variables
  env:
    - name: HF_HOME
      value: "/data"

  # Resource requirements
  resources:
    cpu: "10"
    memory: "32Gi"
    gpu: "1"

  # Image configuration
  image:
    registry: "docker.io"
    name: "lmcache/vllm-openai:2025-04-18"
    pullPolicy: "IfNotPresent"
    pullSecretName: ""

  # HuggingFace token secret (optional)
  hfTokenSecret:
    name: "huggingface-token"

  # Number of replicas
  replicas: 1

  # Deployment strategy
  deploymentStrategy: "Recreate"

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when doing git commit
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please classify PRs so that the type of change is easy to understand. Prefix the PR title appropriately to indicate the type of change, using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation; a sample invocation is sketched below.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.
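
As a quick reference for the pre-commit step above, a typical invocation looks like this (a minimal sketch; see README.md for the project's exact setup):

pip install pre-commit      # install the tool; README.md may recommend a different method
pre-commit install          # register the git hook so checks run on every commit
pre-commit run --all-files  # run all configured checks across the repository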

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.
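
For example, a signed-off commit for this change could look like the following (the commit message is illustrative only):

git commit -s -m "[Feat] Add VLLMRuntime and VLLMRouter CRDs"
# -s appends a Signed-off-by trailer built from your git user.name and user.email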

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

royyhuang marked this pull request as draft May 8, 2025 06:21
royyhuang marked this pull request as ready for review May 13, 2025 21:56
YuhanLiu11 (Collaborator) left a comment

LGTM! Let's have more tests for CRD soon.

YuhanLiu11 merged commit 5b7af2b into vllm-project:main May 16, 2025
6 checks passed
JustinDuy pushed a commit to JustinDuy/production-stack-1 that referenced this pull request Jun 13, 2025
* add CRD support for production stack

Signed-off-by: royyhuang <[email protected]>

* move opertor to a secondary dir instead of in root dir

Signed-off-by: royyhuang <[email protected]>

* rename api group from serving.vllm.ai to production-stack.vllm.ai

Signed-off-by: royyhuang <[email protected]>

* enable lmcache cpu offloading

Signed-off-by: royyhuang <[email protected]>

* enable lmcache remote cache server offloading

Signed-off-by: royyhuang <[email protected]>

* fix service discorvery issue by adding readiness probe to vllm pod

Signed-off-by: royyhuang <[email protected]>

* fix readiness probe

Signed-off-by: royyhuang <[email protected]>

---------

Signed-off-by: royyhuang <[email protected]>
allytotheson pushed a commit to allytotheson/production-stack that referenced this pull request Jun 30, 2025
* add CRD support for production stack

Signed-off-by: royyhuang <[email protected]>

* move opertor to a secondary dir instead of in root dir

Signed-off-by: royyhuang <[email protected]>

* rename api group from serving.vllm.ai to production-stack.vllm.ai

Signed-off-by: royyhuang <[email protected]>

* enable lmcache cpu offloading

Signed-off-by: royyhuang <[email protected]>

* enable lmcache remote cache server offloading

Signed-off-by: royyhuang <[email protected]>

* fix service discorvery issue by adding readiness probe to vllm pod

Signed-off-by: royyhuang <[email protected]>

* fix readiness probe

Signed-off-by: royyhuang <[email protected]>

---------

Signed-off-by: royyhuang <[email protected]>
Signed-off-by: allytotheson <[email protected]>