💼 Dassault Systèmes / 🏫 ESIEE Paris
- Paris
- in/lou-kail
💬 LLM
Build high-quality LLM apps - from prototyping and testing to production deployment and monitoring.
jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Official inference repo for FLUX.1 models
🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN