All HF Hub posts

danielhanchen 
posted an update 1 day ago
We’re excited to announce that Unsloth has joined the PyTorch Ecosystem! 🔥🦥

Unsloth is an open-source project that makes training & running models more accurate and faster with less compute. Our mission is to make local AI accessible to everyone. Thanks to all of you for making this possible! 💕

Blog: https://unsloth.ai/blog/pytorch
GitHub: https://github.com/unslothai/unsloth
spillai 
posted an update 2 days ago
mm-ctx – fast, multimodal context for agents.

LLM-based agents handle text incredibly well, but images, videos, and PDFs with visual content are hard for them to interpret. mm-ctx gives your CLI agent multimodal skills.

Try it interactively in Spaces: vlm-run/mm-ctx

Readme: https://vlm-run.github.io/mm/
PyPI: https://pypi.org/project/mm-ctx
SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-skill/SKILL.md

mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.
- `mm grep "invoice #1234" ~/Downloads` searches across PDFs and returns line-numbered matches
- `mm cat <document>.pdf` returns a metadata description of the file
- `mm cat <photo>.jpg` returns a caption of the photo
- `mm cat <video>.mp4` returns a caption of the video

A few things we obsessed over:
⚡ Speed: Rust core for the hot paths
🏠 Local-first, BYO model: uses any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LMStudio) with any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V).
🔗 Composable: stdin + structured outputs
🤖 Drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw.

We’d love to hear your feedback, especially on the CLI and on what file types and workflows you’d like to see next.
blanchon 
posted an update 1 day ago
I'm releasing OpenCS2, an 11TB dataset of around 5,000 hours of Counter-Strike gameplay recordings.
- HD resolution: 1280×720 · 32 fps
- Per-frame keyboard and mouse inputs + world state (player position, velocity, weapon ...)
- HD stereo audio
- All 10 players' perspectives

https://huggingface.co/collections/blanchon/opencs2
Imosu 
posted an update 1 day ago
# ZeroGPU Hardware Mismatch: Why Am I Getting RTX PRO 6000 Blackwell MIG Instead of the Documented H200?

I recently ran into a surprising issue while debugging a Hugging Face ZeroGPU Space.

According to the Hugging Face ZeroGPU documentation, ZeroGPU is described as using NVIDIA H200-based resources, with configurations such as “large” and “xlarge” offering H200-class memory. However, when I printed the actual GPU information inside my Space, I got something different:

```txt
GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition MIG 2g.48gb
Capability: (12, 0)
Torch: 2.8.0+cu128
CUDA: 12.8
```

This is not an H200. It appears to be a MIG slice of an RTX PRO 6000 Blackwell Server Edition GPU, with 48GB VRAM.

This difference matters. It is not just a cosmetic hardware-name issue.

In my case, the Space was running Qwen3-TTS and failed with:

```txt
CUDA error: no kernel image is available for execution on the device
```

The issue appears related to GPU architecture compatibility. The app was using kernels-community/flash-attn3, which is generally aligned with Hopper-class GPUs such as H100/H200, but the actual device exposed to the Space was Blackwell with compute capability 12.0. As a result, CUDA kernels that might work on the expected H200 environment failed on the actual assigned GPU.
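
One way to avoid this class of failure is to gate the kernel choice on the compute capability the runtime actually reports, rather than on the documented hardware. A minimal sketch (the function name and fallback policy are my own illustration, not part of the Space):

```python
def pick_attention_backend(capability: tuple[int, int]) -> str:
    """Pick an attention implementation from the GPU's compute capability.

    Illustrative policy only: flash-attn3 kernels target Hopper (sm_90),
    so any other architecture falls back to PyTorch's built-in SDPA.
    """
    if capability == (9, 0):  # H100/H200 (Hopper)
        return "flash-attn3"
    return "sdpa"  # safe fallback, e.g. on (12, 0) Blackwell

# At runtime one would pass in torch.cuda.get_device_capability();
# with the capability this Space actually saw:
print(pick_attention_backend((12, 0)))
```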

To be clear, I am not saying the RTX PRO 6000 Blackwell is a bad GPU. It is a newer architecture and may be powerful in many workloads. But it is not the same as H200, and the software ecosystem compatibility is different. For ML workloads, especially those relying on custom CUDA kernels, the exact GPU architecture matters a lot.

This raises a few questions:

Is Hugging Face ZeroGPU now assigning RTX PRO 6000 Blackwell MIG instances instead of H200 instances?
If yes, why is this not clearly documented?
cesear64 
posted an update about 13 hours ago
Just published: how we built production Sango (Central African Republic) translation without fine-tuning, parallel corpus, or training compute.

The method — vocabulary-augmented prompting with a 581-entry native-speaker-verified lexicon — generalizes to any of the ~2,000 African languages at the same data-poverty level. Recipe, dataset, and code template all included.
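
As a rough illustration of the idea (not the authors' code; the function and the toy lexicon entry are hypothetical stand-ins), vocabulary-augmented prompting just inlines matching lexicon entries ahead of the translation instruction:

```python
def augment_prompt(sentence: str, lexicon: dict[str, str],
                   instruction: str = "Translate to Sango:") -> str:
    """Inline lexicon entries for words found in the source sentence.

    Hypothetical sketch of vocabulary-augmented prompting; the real
    581-entry lexicon is native-speaker-verified, this one is a stand-in.
    """
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    hits = {w: lexicon[w] for w in words if w in lexicon}
    glossary = "\n".join(f"{src} = {tgt}" for src, tgt in hits.items())
    return f"Vocabulary:\n{glossary}\n\n{instruction} {sentence}"

toy_lexicon = {"water": "ngu"}  # stand-in entry, not from the real dataset
prompt = augment_prompt("Bring water.", toy_lexicon)
```

The model then translates with the verified vocabulary in context, which is what lets the recipe work without fine-tuning or a parallel corpus.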

📄 Blog: https://huggingface.co/blog/MEYNG/sangoai
📦 Dataset: MEYNG/sango-vocabulary

Would especially value feedback from anyone working on other low-resource African languages — Ewondo, Lingala, Wolof next on our roadmap.
rajkumarrawal 
posted an update 2 days ago
LLMs aren’t just answering questions anymore; they’re learning to evolve. Self-evolving AI is the true endgame.

AI has shifted from short tasks to long missions. The breakthrough isn’t just automation, it’s machines learning human methods and applying them at machine speed. From cybersecurity to finance, from OPCs to NPCs, the wave is irreversible.

Read the full article: Self Evolving is the Endgame or final destiny

https://huggingface.co/blog/rajkumarrawal/self-evolving-is-the-endgame-or-final-destiny

What’s your definition of true AGI? Comment below.
unmodeled-tyler 
posted an update 1 day ago
The UFO/UAP Dataset is complete!

unmodeled-tyler/DoW-UFO-UAP-1

The most recent release from the Department of War is up in full and ready for analysis!

The dataset ships with a Hermes Agent Skill so you can quickly and easily start parsing through the data immediately.

Go chase some anomalies! 🚀

kanaria007 
posted an update 1 day ago
✅ Article highlight: *Determinism Profiles, Scheduler Consistency, and Replay Honesty* (art-60-234, v0.1)

TL;DR:
This article argues that determinism is not a binary badge.

A serious system should not just say “this run was deterministic.” It should say *what kind* of determinism claim is being made: exact reproducibility, epsilon-bounded replay, scheduler-stable replay, or a degraded posture due to platform drift. In other words, replay honesty needs profiles, not slogans.

Read:
kanaria007/agi-structural-intelligence-protocols

Why it matters:
• turns “deterministic enough” into an explicit, auditable claim
• separates exact replay, epsilon-bounded replay, and scheduler stability instead of blurring them
• makes platform drift and topology changes visible instead of silently laundering weaker replay results
• prevents teams from confusing bundle validity with strong DET validity

What’s inside:
• a practical determinism ladder: *EXACT_REPRODUCIBLE*, *EPSILON_BOUNDED*, *SCHEDULER_STABLE*, *PLATFORM_DRIFT_DEGRADED*
• *determinism profiles* that define what replay truth is being claimed
• *epsilon-bound policies* for declared approximate replay
• *scheduler consistency reports* for ordering and partial-order stability
• *DET run comparisons* with explicit replay honesty statements about what matched exactly, approximately, or not at all
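
The ladder above can be made concrete with a toy classifier (the comparison logic is my own illustrative assumption; only the profile names come from the article):

```python
def classify_replay(run_a: list[float], run_b: list[float],
                    epsilon: float) -> str:
    """Classify a replayed run against the original under a declared
    epsilon policy. Toy logic; profile names follow the article's ladder."""
    if run_a == run_b:
        return "EXACT_REPRODUCIBLE"
    if len(run_a) == len(run_b) and \
            all(abs(a - b) <= epsilon for a, b in zip(run_a, run_b)):
        return "EPSILON_BOUNDED"
    if sorted(run_a) == sorted(run_b):
        return "SCHEDULER_STABLE"  # same values, different interleaving
    return "PLATFORM_DRIFT_DEGRADED"
```

An audit record would then state the resulting profile alongside the epsilon policy, instead of the bare claim "deterministic".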

Key idea:
Do not ask only:

*“was it deterministic?”*

Ask:

*“under what determinism profile, under what epsilon policy, under what scheduler consistency report, and with what replay honesty statement did this scope remain exact, approximate, scheduler-stable, or degraded?”*