feat(mock): add vLLM-Omni endpoint support to mock app #2036

Jeffwan merged 1 commit into vllm-project:main
Conversation
- /v1/chat/completions: image gen/edit responses, modalities param
- /v1/audio/speech: vLLM-Omni TTS params and voices
- /v1/audio/voices: list available TTS voices
- /v1/videos: synchronous video gen with I2V support

Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>
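For reviewers who want to exercise the new mock endpoints, here is a minimal sketch of a client call. It assumes the mock app is running locally (the base URL, port, model name, and HTTP method for the voices listing are illustrative assumptions, not taken from this PR):

```python
import requests

BASE = "http://localhost:8000"  # assumed local address of the mock app

# Text-to-speech with one of the vLLM-Omni voices added in this PR.
# The model name below is illustrative.
speech = requests.post(
    f"{BASE}/v1/audio/speech",
    json={
        "model": "mock-omni",
        "input": "Hello from the mock server",
        "voice": "vivian",
    },
)
print(speech.status_code, len(speech.content))

# List the available TTS voices (GET is an assumption here).
voices = requests.get(f"{BASE}/v1/audio/voices")
print(voices.json())
```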
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the mock application's capabilities by integrating support for vLLM-Omni endpoints. It introduces mock responses for multimodal interactions, including image generation and editing via chat completions, text-to-speech with vLLM-Omni-specific parameters and voices, and synchronous video generation. These additions enable broader testing and development against a mock environment that mimics vLLM-Omni's advanced functionalities.
5943c1f to f7629ea

This is for the mocked app, not production code; there are no owners yet, so I will merge this one directly.
Code Review
This pull request introduces mock support for several vLLM-Omni endpoints, enhancing the mock application's capabilities for image/video generation, multimodal chat, and text-to-speech. The implementation is well-structured and correctly simulates the new functionalities. I've identified a few minor opportunities for improvement concerning code duplication and unused variables, which would enhance the code's maintainability.
```python
# --- vLLM-Omni: image gen/edit via /v1/chat/completions ---
if _is_image_request(request.json):
    time.sleep(0.2)  # simulate diffusion time
    is_edit = _has_input_images(messages)
```
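The helpers _is_image_request and _has_input_images are not shown in this snippet. A rough sketch of what such predicates could look like follows; the field names ("modalities", "image_url" content parts) are assumptions for illustration, not the actual PR code:

```python
def _is_image_request(payload):
    # Hypothetical: treat the request as image generation when the client
    # asks for an "image" modality (field name assumed, not verified).
    modalities = (payload or {}).get("modalities") or []
    return "image" in modalities


def _has_input_images(messages):
    # Hypothetical: an edit request is one whose messages already carry
    # image content parts (e.g. "image_url" items).
    for message in messages or []:
        content = message.get("content")
        if isinstance(content, list):
            if any(
                isinstance(part, dict) and part.get("type") == "image_url"
                for part in content
            ):
                return True
    return False
```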
```python
openai_voices = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
vllm_omni_voices = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]
valid_voices = openai_voices + vllm_omni_voices
```
The list vllm_omni_voices is hardcoded here and also in the /v1/audio/voices endpoint (line 876). This duplication can lead to inconsistencies if the list of voices changes. To improve maintainability, you should define this list as a constant at the module level (e.g., VLLM_OMNI_VOICES) and reuse it in both places. The same could be done for openai_voices.
For example, you could add this at the top of the file:
```python
OPENAI_VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
VLLM_OMNI_VOICES = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]
```

Then you can use these constants in audio_speech() and audio_voices().
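To illustrate the reviewer's suggestion end to end, here is a rough sketch of both handlers sharing the module-level constants. The route names follow this PR, but the handler bodies are simplified placeholders, not the actual mock app code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

OPENAI_VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
VLLM_OMNI_VOICES = [
    "aiden", "dylan", "eric", "one_anna", "ryan",
    "serena", "sohee", "uncle_fu", "vivian",
]
VALID_VOICES = OPENAI_VOICES + VLLM_OMNI_VOICES


@app.route("/v1/audio/speech", methods=["POST"])
def audio_speech():
    voice = (request.json or {}).get("voice", "alloy")
    if voice not in VALID_VOICES:
        return jsonify({"error": f"unknown voice: {voice}"}), 400
    # ... produce mock audio bytes here ...
    return b"", 200


@app.route("/v1/audio/voices", methods=["GET"])
def audio_voices():
    # Both endpoints now read from the single shared list.
    return jsonify({"voices": VALID_VOICES})
```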
```python
width = request.form.get("width", "832")
height = request.form.get("height", "480")
num_frames = request.form.get("num_frames", "33")
fps = request.form.get("fps", "16")
seed = request.form.get("seed")
negative_prompt = request.form.get("negative_prompt")
input_reference = request.files.get("input_reference")
```
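These form fields feed the synchronous /v1/videos mock. As a sketch only, a response could be assembled from them roughly as below; the response schema and helper name are assumptions for illustration, not the shape actually returned by the mock app in this PR:

```python
import base64
import time
import uuid


def build_mock_video_response(width, height, num_frames, fps, seed, input_reference):
    # Hypothetical sketch of a synchronous mock response for /v1/videos.
    fake_video_bytes = b"\x00" * 1024  # placeholder payload, no real encoding
    return {
        "id": f"video-{uuid.uuid4().hex[:8]}",
        "object": "video",
        "created": int(time.time()),
        "width": int(width),
        "height": int(height),
        "num_frames": int(num_frames),
        "fps": int(fps),
        "seed": int(seed) if seed is not None else None,
        # I2V: record whether an input reference image was supplied.
        "has_input_reference": input_reference is not None,
        "data": [{"b64_json": base64.b64encode(fake_video_bytes).decode()}],
    }
```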
Pull Request Description
Related Issues
Resolves: #1951
Important: Before submitting, please complete the description above and review the checklist below.
Contribution Guidelines (Expand for Details)
We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:
Pull Request Title Format
Your PR title should start with one of these prefixes to indicate the nature of the change:
- [Bug]: Corrections to existing functionality
- [CI]: Changes to build process or CI pipeline
- [Docs]: Updates or additions to documentation
- [API]: Modifications to aibrix's API or interface
- [CLI]: Changes or additions to the Command Line Interface
- [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.
Submission Checklist
By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.