Tags: andimarafioti/mlx-vlm
Fix Idefics-2 Mask & Trainer Crash (Blaizzy#91)
* fix mask
* fix trainer crash
* fix version
Qwen2-VL fix vision tower bug (Blaizzy#62)
* add qwen2-vl
* fix model loading
* language model w/o multimodal rope
* vision model w multimodal rope (torch)
* fix image features
* formatting
* fixed output coherence
* convert get_rope_index to mlx
* fix patch embed and convert np/torch to mx ops
* fix quants
* remove lm sanitize
* support qwen2_vl 7B
* convert to mlx
* formatting
* remove torch
* fix vision tower bug
* fix language model sanitization
* bump dependencies
* bump version
* formatting
Fix gradio app generation (Blaizzy#54)
* fix gradio app generation
* remove unused
* pin gradio
* bump version
Add Dolphin-vision and bunny (Blaizzy#50)
* add Dolphin-vision
* add bunny
* bump version
Add support for phi-3-vision-128k-instruct (Blaizzy#36)
* add phi3_v
* Update test_models.py
* rebase branch
* remove debug print
* add prompt format
* add condition to fix quantisation
* bump version

---------

Co-authored-by: Prince Canuma <prince.gdt@gmail.com>
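The entries above mostly track new model support (Qwen2-VL, Dolphin-vision, bunny, phi-3-vision-128k-instruct). As a point of reference, below is a minimal sketch of loading and prompting one of these checkpoints through mlx-vlm's Python API; the checkpoint path is hypothetical, and the generate() keyword names and defaults are assumptions, since they have shifted between the versions listed here.

```python
# Minimal sketch, not taken from the repository itself: the exact argument
# order and keyword names of generate() have varied across mlx-vlm releases.
from mlx_vlm import load, generate

# Hypothetical quantized checkpoint path; substitute any supported model.
model, processor = load("mlx-community/Phi-3-vision-128k-instruct-4bit")

output = generate(
    model,
    processor,
    prompt="Describe this image.",      # text prompt
    image="path/or/url/to/image.jpg",   # image input (path or URL)
    max_tokens=100,                     # assumed kwarg; defaults differ by version
    verbose=False,
)
print(output)
```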