Llama 3.1 Code Interpreter file reference #610
Comments
Hey @jonatananselmo, thanks for the feedback and the question! I will be updating the repo with some example references on how to apply function calling, along with tutorials. I will let you know once they are updated.
Great @init27, an example using a file reference would be helpful!
Commenting to get notifications.
@init27 Great! May I ask when it will be updated? |
@init27 Thanks, but I still want to know how to operate on files when using the code interpreter.
I have searched the “Model Cards and Prompt Formats” documentation and the GitHub repositories for the correct way to reference a file in the prompt so that the code interpreter can use it, but I have not found an example.
The paper only shows a few examples where it is done like this:
file_path = "path/to/file"
But I'm not sure this is the right way, since it sometimes doesn't work, and results get worse when the prompt is not in English.
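For concreteness, this is the kind of raw prompt I mean. It is a minimal sketch: the special tokens follow the published Llama 3.1 prompt format, but the file path and the "You have access to..." wording are assumptions for illustration, not a documented convention:

```python
# Minimal sketch of a raw Llama 3.1 prompt that enables the code interpreter.
# Special tokens follow the published Llama 3.1 prompt format; the file path
# and the wording of the user turn are hypothetical, since no official
# file-reference convention seems to be documented.
PROMPT = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Environment: ipython<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "You have access to the file /data/sales.csv. "
    "Load it with pandas and report the number of rows.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
# If the model chooses to call the code interpreter, its completion should
# begin with <|python_tag|> followed by Python code, ending with <|eom_id|>.
```

Note that the model only emits code; the surrounding runtime would still be responsible for actually making /data/sales.csv available wherever that code executes.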
The “llama-agentic-system” repository has some examples of working with files, but the code has too many layers of abstraction and I couldn't trace the actual prompt that is sent to the model.
I double-checked that the code interpreter tool was enabled via the line “Environment: ipython” in the system prompt.
I also tried enabling the brave_search and wolfram_alpha tools, but the result was the same.
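For reference, the Llama 3.1 model card enables the built-in tools through extra lines in the system header. A sketch of that header (the date values are illustrative):

```python
# System header enabling the built-in tools, following the format documented
# in the Llama 3.1 model card. The two date lines appear in the official
# examples; the values here are illustrative.
SYSTEM_HEADER = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Environment: ipython\n"
    "Tools: brave_search, wolfram_alpha\n"
    "Cutting Knowledge Date: December 2023\n"
    "Today Date: 23 July 2024<|eot_id|>"
)
```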
Here is the prompt I'm trying:
When I play with the temperature parameter, the model sometimes responds with the <|python_tag|> token and sometimes does not.
I am using Llama 3.1 70B fp8 by NeuralMagic.
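Temperature directly affects whether the tool-call path is taken: at temperature 0 the choice to emit <|python_tag|> as the first token is deterministic for a given prompt, while higher temperatures can sample past it. A hedged sketch, assuming the fp8 checkpoint is served with vLLM's OpenAI-compatible server (the URL and the model name are assumptions):

```python
import requests

# Assumed setup: a vLLM OpenAI-compatible server on localhost:8000 serving
# the NeuralMagic fp8 checkpoint. Both the URL and the model name here are
# assumptions for illustration.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
        "prompt": PROMPT,     # the raw prompt string sketched earlier
        "temperature": 0.0,   # greedy decoding: the <|python_tag|> decision
                              # no longer varies from run to run
        "max_tokens": 256,
        "stop": ["<|eom_id|>", "<|eot_id|>"],
    },
)
print(resp.json()["choices"][0]["text"])
```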