
Feature Request: count tokens before calling '/v1/chat/completions' #98

Open
GPTLocalhost opened this issue Oct 31, 2024 · 1 comment

@GPTLocalhost

Recently, we integrated Microsoft Word with LM Studio through a local Word Add-in. You can view a demo here. We're planning to add a feature that counts tokens before calling '/v1/chat/completions', allowing users to see how many tokens remain available for inference. Our question is: can LM Studio count the tokens of the prompt before '/v1/chat/completions' is called? Thank you for any advice.

@ryan-the-crayon
Contributor

Currently, this functionality is only available in lmstudio.js (https://github.com/lmstudio-ai/lmstudio.js/blob/main/packages/lms-client/src/llm/LLMDynamicHandle.ts#L432).

In the future, we will expand it to our RESTful API as well.
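Once the prompt's token count is available (e.g. via the tokenization method in the linked `LLMDynamicHandle` source), the "remaining tokens" number the add-in wants to display is simple arithmetic against the model's context length. A minimal sketch, with the SDK call shown only as a comment since the exact lmstudio.js method names are an assumption here, and the `reserve` parameter is a hypothetical allowance for chat-template overhead:

```typescript
// Hypothetical helper: given the model's context length and the number of
// tokens already consumed by the prompt, report how many tokens remain
// for the completion. `reserve` is an assumed safety margin for
// chat-template overhead.
function remainingTokens(
  contextLength: number,
  promptTokenCount: number,
  reserve: number = 16,
): number {
  // Never report a negative remainder if the prompt overflows the context.
  return Math.max(0, contextLength - promptTokenCount - reserve);
}

// With lmstudio.js, the prompt token count could be obtained along these
// lines (method names are assumptions based on the linked source, not a
// confirmed API):
//
//   const client = new LMStudioClient();
//   const model = await client.llm.get({});
//   const promptTokenCount = (await model.tokenize(prompt)).length;

// Example: a 4096-token context with a 1000-token prompt.
console.log(remainingTokens(4096, 1000)); // → 3080
```

The add-in could call this after each edit to update the remaining-token display before ever hitting '/v1/chat/completions'.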
