Warning
🚨 Evaluating LLM-generated code on a dataset with just _3_ test-cases is NOT enough! 🚨
To address this, we started the EvalPlus project -- a rigorous evaluation framework for LLM4Code that:
- ✨ improves programming benchmarks by patching in up to thousands of new tests!
  - `EvalPlus(HumanEval)` => `HumanEval+` (81x new tests!)
- ✨ crafts a set of utility tools to sanitize, visualize and inspect LLM-generated code and evaluation results!
- ✨ accelerates LLM4Code research by open-sourcing LLM-generated samples for 14+ models -- no need to re-run the expensive benchmarks!
Read our paper for more detailed findings!
To get started, please first set up the environment:
git clone https://github.com/evalplus/evalplus.git
cd evalplus
pip install -r requirements.txt
export PYTHONPATH=$PYTHONPATH:$(pwd)

Then load the EvalPlus-extended dataset from Python:

from evalplus.data import get_human_eval_plus
problems = get_human_eval_plus() # -> a list of dictionaries (each is a programming problem)
# "task_id" is the identifier string for the task
# "entry_point": name of the function
# "prompt" is the function signature with docstring
# + "canonical_solution" is the ground-truth implementation (re-implemented to fix bugs in HumanEval)
# + "base_input" is the test inputs in original HumanEval
# + "plus_input" is the test inputs brought by EvalPlus
# and others...
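For example, a minimal pass over the extended benchmark using the fields listed above (treating `base_input`/`plus_input` as list-like collections of test inputs, which is an assumption here) could look like:

```python
from evalplus.data import get_human_eval_plus

# Iterate over every HumanEval+ problem and peek at its fields.
for problem in get_human_eval_plus():
    print(problem["task_id"], "->", problem["entry_point"])
    print(problem["prompt"][:60].replace("\n", " "), "...")  # start of the prompt (signature + docstring)
    # Assuming the inputs are list-like, compare original vs. EvalPlus-added test inputs:
    print(len(problem["base_input"]), "base inputs |", len(problem["plus_input"]), "plus inputs")
```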
Check the LLM-produced code and answer the following questions:

- Is the generation entirely done for all samples / all problems in the dataset?
- Is the LLM-generated code compilable? (if not, something could be wrong and you should double-check; see the sketch below)
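As a quick, tool-independent spot check, you can try compiling every generated sample with Python's built-in `compile()`. The folder layout below (one `.py` file per sample) is only an assumption for illustration:

```python
import pathlib

# Hypothetical layout: one generated .py file per sample under the result folder.
sample_dir = pathlib.Path("/path/to/[model]-[??]b_temp_[??]")

for path in sorted(sample_dir.rglob("*.py")):
    try:
        compile(path.read_text(), str(path), "exec")  # syntax check only; the code is not executed
    except SyntaxError as err:
        print(f"[not compilable] {path}: {err}")
```

The repository's checker automates this kind of inspection over a whole result folder: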
python tools/checker.py --folder /path/to/[model]-[??]b_temp_[??] --dataset humaneval

LLM-generated code may contain syntax errors, but some of them are easy to fix with simple post-processing. This tool makes the LLM-generated code cleaner and more likely to compile by applying such post-processing, e.g., trimming at additional "magic" EOF markers and removing garbage non-code tokens.
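As a rough sketch of what this kind of trimming can look like (the marker list is illustrative, not the sanitizer's actual rule set):

```python
# Illustrative stop markers; the real sanitizer uses its own, more complete set.
STOP_MARKERS = ["\nif __name__", "\ndef main(", "\nprint(", "\n```"]

def trim_generation(code: str) -> str:
    """Cut a raw LLM completion at the earliest stop marker, if any is present."""
    cut = len(code)
    for marker in STOP_MARKERS:
        idx = code.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return code[:cut]
```

The bundled sanitizer covers more cases; run it as follows: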
python tools/sanitize.py --eof --folder /path/to/vicuna-[??]b_temp_[??]
# Sanitized code will be produced to `/path/to/vicuna-[??]b_temp_[??]-sanitized`

To visualize and inspect the evaluation results:

python tools/render.py --type /path/to/[model]-[??]b # NOTE: no `_temp_[??]`

Before you start:
pip install -r requirements.txt
pre-commit install
export PYTHONPATH=$PYTHONPATH:$(pwd)

`evalplus` is the package name. `${DATASET}_plus` is the name of a dataset after applying EvalPlus (e.g., HumanEval becomes HumanEval+).
@article{evalplus,
title={Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation},
author={Jiawei Liu and Chunqiu Steven Xia and Yuyao Wang and Lingming Zhang},
journal={arXiv preprint arXiv:2305.01210},
year={2023},
}