
upgrade typing_extensions version #645

Open

lxning opened this issue Aug 26, 2024 · 1 comment

lxning commented Aug 26, 2024

System Info

PyTorch: 2.3
CUDA: 12.1

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I got an error when I ran the command generated from python prepare_meta_eval.py --config_path ./eval_config.yaml. The root cause is that typing-extensions==4.8.0 is installed, but vllm requires typing_extensions >= 4.10.
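
As a quick way to confirm the mismatch (a minimal sketch, assuming a standard pip environment), the installed version and the missing symbol can be checked with:

pip show typing_extensions
# TypeIs was added in typing_extensions 4.10.0, so this import fails on 4.8.0:
python -c "from typing_extensions import TypeIs"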

Error logs

lm_eval --model vllm   --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=4,max_model_len=8192,add_bos_token=True,seed=42 --tasks meta_instruct --batch_size auto --output_path eval_results --include_path /home/ubuntu/llama-recipes/tools/benchmarks/llm_eval_harness/meta_eval_reproduce/work_dir --seed 42  --log_samples

cannot import name 'TypeIs' from 'typing_extensions'

Expected behavior

The following command should run successfully:

lm_eval --model vllm   --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=4,max_model_len=8192,add_bos_token=True,seed=42 --tasks meta_instruct --batch_size auto --output_path eval_results --include_path /home/ubuntu/llama-recipes/tools/benchmarks/llm_eval_harness/meta_eval_reproduce/work_dir --seed 42  --log_samples
lxning changed the title from upgrade typing_extensions to upgrade typing_extensions version on Aug 26, 2024

wukaixingxp (Contributor) commented Aug 26, 2024

Hi! Thank you for the bug report. I think we should install llama-recipes first and then install vllm, which will override the typing_extensions version. Can you help me verify whether this modification works?

# Install llama-recipes from source first
git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
pip install -U pip setuptools
pip install -e .
# Installing lm-eval with the vllm extra afterwards pulls in a newer typing_extensions
pip install lm-eval[math,ifeval,sentencepiece,vllm]==0.4.3
cd tools/benchmarks/llm_eval_harness/meta_eval_reproduce
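
If the install order resolves the conflict, the resulting environment can be sanity-checked like this (a minimal check added for illustration, not part of the suggested steps; it assumes vllm's typing_extensions >= 4.10 requirement):

pip show typing_extensions          # expect Version: 4.10.0 or newer
python -c "from typing_extensions import TypeIs; print('TypeIs import OK')"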

wukaixingxp self-assigned this on Aug 27, 2024