This repository contains the code for our senior research project, "Browser-based Locally Hosted Arabic LLM Optimization".
It includes our implementation of LLM evaluation and model compression methods such as quantization.
```
├── evaluation
│   ├── bench.py             # benchmarking script
│   ├── evaluate.py          # task evaluation script
│   └── tasks
│       └── tasks.txt        # chosen evaluation tasks
├── models                   # chosen models
├── requirements.txt         # pip dependencies
└── llmini.py                # helper script to perform evaluation
```

- Fork or clone the repository into your local environment, then move inside it.
```shell
git clone https://github.com/S-Y-A-N/ar-llm-browser.git && cd ar-llm-browser
```

- Create and activate a Python virtual environment, then install the required pip dependencies.
```shell
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt -e .
```

- If you want to run an evaluation, simply use the helper script `llmini.py`:
```shell
python llmini.py <path/to/model> <path/to/tasks> --options
```

or run `chmod +x llmini.py` to make it executable and run:

```shell
./llmini.py <path/to/model> <path/to/tasks> --options
```

To find out about the available options, simply run `llmini.py` with `-h` or `--help`.
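For illustration, a concrete invocation might look like the following. The model path here is a hypothetical placeholder (only the `models` directory and `evaluation/tasks/tasks.txt` appear in the repository tree); substitute the actual model you want to evaluate, and add any options reported by `--help`:

```shell
# Hypothetical example invocation: "models/my-model" is a placeholder,
# not a model shipped with the repository.
./llmini.py models/my-model evaluation/tasks/tasks.txt
```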
Before committing your changes, make sure to run pre-commit for formatting consistency:

```shell
pre-commit install   # for the first time only
pre-commit run -a    # run formatting on all files
```