
feat(mock): add vLLM-Omni endpoint support to mock app #2036

Merged
Jeffwan merged 1 commit into vllm-project:main from Jeffwan:jiaxin/development-app-multi-modality
Mar 21, 2026

Conversation


@Jeffwan Jeffwan commented Mar 21, 2026

Pull Request Description

  • /v1/chat/completions: image gen/edit responses, modalities param
  • /v1/audio/speech: vLLM-Omni TTS params and voices
  • /v1/audio/voices: list available TTS voices
  • /v1/videos: synchronous video gen with I2V support
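As a sketch of how the new /v1/audio/voices mock could respond (the `{"voices": [...]}` payload shape is an assumption; the voice names are taken from the validation list added in this PR):

```python
# Hypothetical sketch of the /v1/audio/voices mock handler body.
# The {"voices": [...]} response shape is an assumption, not the
# confirmed wire format of the mock app.
VLLM_OMNI_VOICES = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]

def audio_voices():
    """Return the mock list of available vLLM-Omni TTS voices."""
    return {"voices": VLLM_OMNI_VOICES}
```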

Related Issues

Resolves: #1951

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

Commit message:

  • /v1/chat/completions: image gen/edit responses, modalities param
  • /v1/audio/speech: vLLM-Omni TTS params and voices
  • /v1/audio/voices: list available TTS voices
  • /v1/videos: synchronous video gen with I2V support

Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the mock application's capabilities by integrating comprehensive support for vLLM-Omni endpoints. It introduces mock responses for multimodal interactions, including image generation and editing via chat completions, advanced text-to-speech features with vLLM-Omni specific parameters and voices, and synchronous video generation. These additions enable broader testing and development against a mock environment that mimics vLLM-Omni's advanced functionalities.

Highlights

  • vLLM-Omni Chat Completions: Added support for image generation and editing responses to the /v1/chat/completions endpoint when specific model keywords or diffusion parameters are present. Also enabled mock audio responses when the modalities parameter includes "audio".
  • vLLM-Omni Text-to-Speech (TTS): Integrated vLLM-Omni specific parameters (language, instructions, task_type, ref_audio, ref_text) into the /v1/audio/speech endpoint and expanded voice validation to include vLLM-Omni voices.
  • vLLM-Omni TTS Voice Listing: Introduced a new /v1/audio/voices endpoint to list available vLLM-Omni Text-to-Speech voices.
  • vLLM-Omni Video Generation: Implemented a new /v1/videos endpoint for synchronous vLLM-Omni video generation, supporting multipart/form-data and Image-to-Video (I2V) functionality.
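The Image-to-Video behaviour in the highlights can be sketched as a small dispatch helper (hypothetical names; the real handler reads Flask's request.form / request.files, which are stood in for by plain dicts here):

```python
def video_mode(form, files):
    """Pick image-to-video when a reference image was uploaded in the
    multipart form, otherwise fall back to text-to-video.

    Mock-only heuristic: presence of the "input_reference" file part
    is assumed to be the I2V trigger, per this PR's description.
    """
    return "i2v" if files.get("input_reference") else "t2v"
```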


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@Jeffwan Jeffwan force-pushed the jiaxin/development-app-multi-modality branch from 5943c1f to f7629ea on March 21, 2026 22:29
@Jeffwan Jeffwan (Collaborator, Author) commented Mar 21, 2026

This is for the mock app, not production code; there are no owners yet, so I will merge this one directly.

@Jeffwan Jeffwan merged commit 317cfb3 into vllm-project:main Mar 21, 2026
3 checks passed
@Jeffwan Jeffwan deleted the jiaxin/development-app-multi-modality branch March 21, 2026 22:30
@gemini-code-assist (bot) left a comment:


Code Review

This pull request introduces mock support for several vLLM-Omni endpoints, enhancing the mock application's capabilities for image/video generation, multimodal chat, and text-to-speech. The implementation is well-structured and correctly simulates the new functionalities. I've identified a few minor opportunities for improvement concerning code duplication and unused variables, which would enhance the code's maintainability.

# --- vLLM-Omni: image gen/edit via /v1/chat/completions ---
if _is_image_request(request.json):
    time.sleep(0.2)  # simulate diffusion time
    is_edit = _has_input_images(messages)

Severity: medium

The variable is_edit is assigned a value but is never used. This unused variable should be removed to improve code clarity and maintainability.
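One way to act on this review, instead of simply deleting the line, is to actually use is_edit to pick the mock response type. A self-contained sketch, with a hypothetical stub for _has_input_images (the real helper lives in the mock app and is not shown in this diff):

```python
def _has_input_images(messages):
    # Hypothetical stub: scan message content parts for image_url
    # entries, mirroring the OpenAI multimodal message format.
    return any(
        isinstance(part, dict) and part.get("type") == "image_url"
        for m in messages
        for part in (m.get("content") or [])
    )

def image_action(messages):
    # Use is_edit rather than leaving it dangling, per the review:
    # edits have input images, plain generations do not.
    is_edit = _has_input_images(messages)
    return "image_edit" if is_edit else "image_generation"
```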

Comment on lines +801 to +806
openai_voices = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
vllm_omni_voices = [
"aiden", "dylan", "eric", "one_anna", "ryan",
"serena", "sohee", "uncle_fu", "vivian",
]
valid_voices = openai_voices + vllm_omni_voices
Severity: medium

The list vllm_omni_voices is hardcoded here and also in the /v1/audio/voices endpoint (line 876). This duplication can lead to inconsistencies if the list of voices changes. To improve maintainability, you should define this list as a constant at the module level (e.g., VLLM_OMNI_VOICES) and reuse it in both places. The same could be done for openai_voices.

For example, you could add this at the top of the file:

OPENAI_VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
VLLM_OMNI_VOICES = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]

Then you can use these constants in audio_speech() and audio_voices().
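A minimal sketch of the suggested refactor, with a hypothetical is_valid_voice helper that both audio_speech() and audio_voices() could share:

```python
# Module-level constants, per the review suggestion, so the voice
# lists live in exactly one place. is_valid_voice is a hypothetical
# shared helper, not code from this diff.
OPENAI_VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
VLLM_OMNI_VOICES = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]
VALID_VOICES = OPENAI_VOICES + VLLM_OMNI_VOICES

def is_valid_voice(voice):
    """Accept both OpenAI and vLLM-Omni voice names."""
    return voice in VALID_VOICES
```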

Comment on lines +1371 to +1377
width = request.form.get("width", "832")
height = request.form.get("height", "480")
num_frames = request.form.get("num_frames", "33")
fps = request.form.get("fps", "16")
seed = request.form.get("seed")
negative_prompt = request.form.get("negative_prompt")
input_reference = request.files.get("input_reference")
Severity: medium

The variables width, height, num_frames, fps, seed, negative_prompt, and input_reference are assigned values from the request but are never used. This makes the code less readable and maintainable. If these parameters are not used in the mock implementation, they should be removed.
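One possible fix is to echo the parsed fields back as response metadata so none are left unused. A sketch, with field names and defaults taken from the snippet above (the response shape is an assumption; a plain dict stands in for Flask's request.form):

```python
def parse_video_form(form):
    """Parse the video-generation knobs from the multipart form and
    return them as metadata for the mock response, so every parsed
    field is actually used (addressing the unused-variable review)."""
    return {
        "width": int(form.get("width", "832")),
        "height": int(form.get("height", "480")),
        "num_frames": int(form.get("num_frames", "33")),
        "fps": int(form.get("fps", "16")),
        "seed": form.get("seed"),
        "negative_prompt": form.get("negative_prompt"),
    }
```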

