
[Bug]: Use GPTCache with HuggingFacePipeline #653

Open
ste3v0 opened this issue Sep 20, 2024 · 1 comment
ste3v0 commented Sep 20, 2024

Current Behavior

When I implement GPTCache according to the documentation, it does not work.

I am using the GPTCache adapter for LangChain and the LangChain adapter for my embedding.

In the end I call:
set_llm_cache(GPTCache(init_gptcache))

The error I am receiving is:

adapter.py-adapter:278 - WARNING: failed to save the data to cache, error: get_models..EmbeddingType.validate() takes 2 positional arguments but 3 were given
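For context, a minimal sketch of the kind of setup being described, following the pattern in the LangChain documentation for the GPTCache integration (the `init_similar_cache` call and the per-LLM cache directory naming are assumptions about the reporter's code, not taken from this issue):

```python
# Hedged sketch of a GPTCache + LangChain setup; assumes gptcache and
# langchain are installed. Names below follow the documented integration
# pattern, not the reporter's actual (unshown) code.
import hashlib

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
from langchain.globals import set_llm_cache


def get_hashed_name(name: str) -> str:
    # Hash the LLM identifier so each model gets its own cache directory.
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str) -> None:
    # Initialize a similarity-based cache, keyed per LLM.
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")


# Register GPTCache as LangChain's global LLM cache.
set_llm_cache(GPTCache(init_gptcache))
```

The reported `EmbeddingType.validate() takes 2 positional arguments but 3 were given` warning is raised from GPTCache's adapter when saving to the cache, which suggests a version mismatch between GPTCache's internal model validation and the installed pydantic/LangChain versions rather than a problem in the snippet above.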

Can you please just tell me whether this functionality is not yet implemented for HuggingFacePipeline using a local Llama 3.1?

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

SimFG (Collaborator) commented Sep 25, 2024

Can you show your demo code? Maybe I can check it.
