
[Usage]: vLLM Whisper model response_format verbose_json not working #14818

@deepakkumar07-debug

Description

My current environment

I'm building with Dockerfile.cpu and added just these installation steps at line 44, since I'm using a Whisper model:

# install optional dependencies like librosa
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install librosa && \
    pip install vllm[audio,video]==0.7.3
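
For reference, the image I run below as vllm-cpu-inference is built from this modified Dockerfile.cpu, roughly like this (exact build options may vary):

docker build -f Dockerfile.cpu -t vllm-cpu-inference .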

and I'm serving vLLM with the docker command below:

docker run -d --restart=unless-stopped --name vllm-whisper-api  \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<MY_TOKEN>" \
    -p 4001:8000 \
    --ipc=host \
    vllm-cpu-inference \
    --model openai/whisper-small \
    --task transcription \
    --host 0.0.0.0 --port 8000

I'm testing the Whisper model with an audio file. With response_format text it works, and json gives the same output, but with verbose_json I get an error:

import requests

# post the audio file as multipart/form-data to the OpenAI-compatible transcriptions endpoint
with open("audio-samples/audio.wav", "rb") as audio_file:
    response = requests.post("http://localhost:4001/v1/audio/transcriptions",
                             files={"file": audio_file},
                             data={"model": "openai/whisper-small",
                                   "language": "en",
                                   # "response_format": "json",
                                   # "response_format": "text",
                                   # "stream": True
                                   "response_format": "verbose_json",
                                   "timestamp_granularities[]": ["word", "segment"]
                                   # "timestamp_granularities[]": ["segment"]
                                   }
                             )
print("Transcription:", response.text)

Output

Transcription: {"object":"error","message":"Currently only support response_format `text` or `json`","type":"BadRequestError","param":null,"code":400}
