Hugging Face

Software Development

The AI community building the future.

About us

Website
https://huggingface.co
Industry
Software Development
Company size
51-200 employees
Type
Privately Held
Founded
2016
Specialties
machine learning, natural language processing, and deep learning

Updates

  • Hugging Face reposted this

    Ai2

    Introducing Molmo 2 🎥: State-of-the-art video understanding, pointing, and tracking

    Last year, Molmo helped push image understanding forward with pointing—grounded answers you can verify. Now, Molmo 2 brings those capabilities to video—so the model doesn’t just answer questions, it can show you where & when something is happening.

    On major industry benchmarks, Molmo 2 surpasses most open multimodal models & even rivals closed peers like Gemini 3 Pro and Claude Sonnet 4.5.

    Molmo 2 returns pixel coordinates + timestamps over videos & coordinates over images, enabling:
    ◘ Video + image QA
    ◘ Counting-by-pointing
    ◘ Dense captioning
    ◘ Artifact detection
    ◘ Subtitle-aware analysis
    …and more!

    Three variants depending on your needs:
    🔹 Molmo 2 (8B): Qwen 3 backbone, best overall performance
    🔹 Molmo 2 (4B): Qwen 3 backbone, fast + efficient
    🔹 Molmo 2-O (7B): Olmo backbone, fully open model flow

    Demos:
    🎯 Counting objects & actions (“How many times does the ball hit the ground?”)—returns the count plus space–time pointers for each event: https://lnkd.in/eAg8nNWP
    ❓ Ask-it-anything long-video QA (“Why does the player change strategy here?”)—points to the moments supporting the answer: https://lnkd.in/eXSf5dYb
    📍 Object tracking (“Follow the red race car.”)—tracks it across frames with coordinates over time: https://lnkd.in/ezDy38cR

    We’ve also significantly upgraded the Ai2 Playground 🛠️ You can now upload a video or multiple images to try summarization, tracking, and counting—while seeing exactly where the model is looking.

    Try it and learn more:
    ▶️ Playground: https://lnkd.in/gk3Q49a5
    ⬇️ Models: https://lnkd.in/eJXDvZ_m
    📝 Blog: https://lnkd.in/emNwBqrH
    📑 Report: https://lnkd.in/enZG-Z5Q
    💻 API coming soon

  • Hugging Face reposted this

    NVIDIA released Nemotron 3 Nano, and now you can fine-tune it with agents, in Colab, locally, or on the Hub. It’s a 30B hybrid reasoning model with a 1M context window and best-in-class performance for code and reasoning.

    To fine-tune, pick one of these tools:
    - Combine HF skills with a coding agent like Claude Code.
    - Use this Colab notebook.
    - Train it on HF Jobs using the Hugging Face Hub.
    - If you can, run this script on your own setup with uv.

    This should get anyone started with fine-tuning, and this is the perfect model to start with.

  • Hugging Face reposted this

    Gradio

    🏆 Announcing the First Winners of MCP’s 1st Birthday Hackathon

    Over 7,200 developers from around the world came together to build agentic applications using Model Context Protocol (MCP) and Gradio 6. We’re excited to begin announcing sponsor-selected winners from this global, online hackathon—recognizing projects that stood out in innovation, execution, and real-world impact.

    🎉 Here are the first sponsor-selected winners ↓

    🤖 Anthropic Prize Winners (Building MCP Track)
    🥇 Best Overall → Cite Before Act MCP: https://lnkd.in/gBVxgdpi
    🥈 Enterprise MCP Server → MCEPTION: https://lnkd.in/gDPBMG6B
    🥈 Consumer MCP Server → Finance Portfolio Intelligence Platform: https://lnkd.in/gnezCqys
    🥈 Creative MCP Server → Game Context Protocol: https://lnkd.in/g2C6N5dS

    Modal Innovation Award
    🏆 Legacy Code Modernizer: https://lnkd.in/gUZz_JhX

    🦙 LlamaIndex Category Award
    🏆 DungeonMaster AI: https://lnkd.in/gf5Pnnh4

    🧠 OpenAI Category Awards
    🏆 Best API Integration → Brand Identity Generator: https://lnkd.in/grtinY_9
    🏆 Best ChatGPT App → Dental ChatGPT App: https://lnkd.in/g98G8hfT

    🚀 Blaxel (YC X25) Choice Award
    🏆 BlaxML Agent: https://lnkd.in/gYDjiKRF

    Each of these projects showcases what’s possible when MCP meets a strong developer ecosystem. 👏 Congratulations to all the teams, and thank you to our partners for supporting the global MCP developer movement: Anthropic, OpenAI, Modal, LlamaIndex, Blaxel (YC X25), Nebius Token Factory, Hyperbolic, SambaNova, and ElevenLabs.

    More winners—including Track 2 and additional sponsor awards—will be announced soon. Built with ❤️ on Gradio × Hugging Face

  • Hugging Face reposted this

    IBM dropped CUGA, an open-source enterprise agent to automate boring tasks 🔥
    > given workspace files, it writes and executes code to accomplish any task 🤯
    > comes with a ton of tools built for enterprise tasks, supports MCPs
    > plug in your favorite LLM 👏

    here's a small demo where it retrieves info from a file, calculates revenue by writing code, and drafts an e-mail 🤯

    they release code with an Apache-2.0 license, a blog, and a Space on Hugging Face 🙌🏻 you can run this locally

    release blog with the demo, code and more! https://lnkd.in/dG7_idBY

  • Hugging Face reposted this

    NVIDIA

    Today we announced the NVIDIA Nemotron™ 3 family of open models, data, and libraries, offering a transparent and efficient foundation for building specialized agentic AI across industries. Nemotron 3 features a hybrid mixture-of-experts (MoE) architecture and new open Nemotron pretraining and post-training datasets, paired with NeMo™ Gym, an open-source reinforcement learning library that enables scalable, verifiable agent training. Read more: https://nvda.ws/3YkDl54

  • Hugging Face reposted this

    Just got my 2025 Hugging Face Wrapped and it made me smile. This year was about building and sharing: five models, two datasets, and nine Spaces shipped through my Hugging Face profile. A lot of experimentation, iteration, and learning along the way.

    Apparently I am tagged as a reinforcement learning rookie, which feels fair. Still learning, still testing, still stopping agents a bit too early. Most of the work revolved around the transformers library, turning ideas into small but usable artifacts.

    Nothing huge yet, but steady progress and public work matter. Open tools, open experiments, and learning in the open. On to the next iteration.

  • Hugging Face reposted this

    2025’s vibe check says: “Label Legend” ✅ (and yes, I proudly lived in text classification this year.)

    Quick stats:
    - 42 artifacts shipped (some public, some private — good work doesn’t always need a spotlight 😉)
    - 26 models trained/fine-tuned
    - 16 datasets built & improved

    What I worked on in 2025:
    - Text classification (making messy text make sense)
    - Noise-robust ASR (speech recognition that holds up in real environments)
    - Urdu + English TTS (bringing better voices to both languages)
    - Fine-tuning LLMs (turning strong base models into task-focused performers)

    Grateful for the open-source community and everyone sharing learnings, code, and feedback. Excited to build even more in 2026 🚀 If you’re working in ASR / TTS / NLP / LLM fine-tuning, let’s connect!

    #HuggingFace #MachineLearning #NLP #ASR #TTS #UrduNLP #LLM #Transformers #AI #OpenSource

  • Hugging Face reposted this

    Hugging Face 2025 Wrapped 🚀

    Keeping the momentum from 2024, with a visual reasoning instruction dataset among the most liked of the year 👏. In 2025, my Hugging Face account got 461 likes, Spaces reached ~108,000 visits, and my Vietnamese VLM models passed 1,000,000 downloads in 12 months 🚀. Another productive year for open source 🔥

  • Hugging Face reposted this

    Hugging Face just dropped 2025 Wrapped, and honestly… this is one of the coolest “year in review” formats I’ve seen. 👉 https://lnkd.in/d5TCRRGY

    Huge props to Hugging Face for turning open-source contribution data into something this fun and reflective.

    My 2025 Wrapped highlights:
    🤖 47 Models
    📊 347 Datasets
    🚀 7 Spaces
    🧠 401 artifacts shipped (apparently I built a small LLM instead of a portfolio 😅)
    ❤️ Transformers was my most-used library — zero regrets

    The “LLM Logorrhea” badge made me laugh, but it’s also kinda accurate. This year was pure output mode.

    Beyond the stats:
    - Participated in 2 Hackathons
    - Shipped full ecosystems, not just demos
    - Got merchandise for both hackathons (small thing, big morale boost)
    - Learned (again) that building in public compounds faster than anything else

    What I appreciate most is how this Wrapped doesn’t just count likes — it captures momentum. Experiments, iterations, late nights, and shipping anyway. If you’re on the fence about contributing more openly in 2026: do it. The feedback loop is worth it. Onward 🚀

    #HuggingFace #HFWrapped #OpenSource #GenAI #LLMs #Hackathons #Builders #Shipping


Funding

Hugging Face: 8 total rounds
Last round: Series unknown
See more info on Crunchbase