
Conversation

@codingl2k1 (Contributor) commented Jan 24, 2024

  • Currently, only the whisper model is supported.
  • Provides the /v1/audio/transcriptions and /v1/audio/translations endpoints.
  • Compatible with the OpenAI API.

Example:

import openai

# endpoint, model_uid and zh_cn_audio_path are placeholders for a running
# Xinference server, a launched whisper model uid, and a Chinese audio file.
client = openai.Client(api_key="not empty", base_url=f"{endpoint}/v1")
with open(zh_cn_audio_path, "rb") as f:
    # Transcription returns text in the source language (Chinese here).
    completion = client.audio.transcriptions.create(model=model_uid, file=f)
    assert "列表" in completion.text
    assert "香港" in completion.text
    assert "航空" in completion.text

    # Rewind the handle before reusing it for the second request.
    f.seek(0)
    # Translation returns English text regardless of the source language.
    completion = client.audio.translations.create(model=model_uid, file=f)
    translation = completion.text.lower()
    assert "list" in translation
    assert "airlines" in translation
    assert "hong kong" in translation
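Since the routes are OpenAI-compatible, the transcription endpoint can also be exercised without the openai client, as a plain multipart POST. A minimal sketch using requests, where the base URL and model uid are placeholders for a running Xinference server; the request is only prepared, not sent:

```python
# Hedged sketch: the audio routes follow the OpenAI multipart form shape,
# with a "file" part and a "model" form field. Nothing is sent over the
# network here; we only prepare the request to show its structure.
import requests

def build_transcription_request(endpoint: str, model_uid: str, audio_bytes: bytes):
    """Prepare a POST to /v1/audio/transcriptions without sending it."""
    req = requests.Request(
        "POST",
        f"{endpoint}/v1/audio/transcriptions",
        files={"file": ("audio.wav", audio_bytes)},  # multipart file part
        data={"model": model_uid},                   # form field, as the openai client sends
    )
    return req.prepare()

prepared = build_transcription_request("http://127.0.0.1:9997", "my-whisper-uid", b"...")
print(prepared.method, prepared.url)
```

Sending the prepared request with requests.Session().send(prepared) against a live server would return the same JSON body ({"text": ...}) that the openai client parses into completion.text.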

@XprobeBot XprobeBot added this to the v0.8.2 milestone Jan 24, 2024
@codingl2k1 codingl2k1 marked this pull request as ready for review January 25, 2024 08:36
@aresnow1 aresnow1 merged commit 8069552 into xorbitsai:main Jan 25, 2024