# Streaming LLM Training with Unsloth

Train on massive datasets without downloading anything - data streams directly from the Hub.

πŸ¦₯ Latin LLM Example

Teaches Qwen Latin using 1.47M texts from FineWeb-2, streamed directly from the Hub.

Blog post: Train on Massive Datasets Without Downloading

### Quick Start

```bash
# Run on HF Jobs (recommended - 2x faster streaming)
hf jobs uv run latin-llm-streaming.py \
  --flavor a100-large \
  --timeout 2h \
  --secrets HF_TOKEN \
  -- \
  --max-steps 500 \
  --output-repo your-username/qwen-latin

# Run locally
uv run latin-llm-streaming.py \
  --max-steps 100 \
  --output-repo your-username/qwen-latin-test
```

### Why Streaming?

- **No disk space needed** - train on TB-scale datasets without downloading
- **Works everywhere** - Colab, Kaggle, HF Jobs
- **Any language** - FineWeb-2 has 90+ languages available
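
Under the hood, this is 🤗 Datasets' streaming mode. A minimal sketch of the pattern (the `lat_Latn` config name for FineWeb-2's Latin subset is an assumption; check the dataset card for the exact name):

```python
from datasets import load_dataset

# Streaming mode: samples are fetched lazily from the Hub, nothing is
# written to disk. "lat_Latn" is assumed to be FineWeb-2's Latin config.
dataset = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="lat_Latn",
    split="train",
    streaming=True,
)

# Peek at one sample without materializing the dataset.
print(next(iter(dataset))["text"][:200])
```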

### Options

| Argument | Default | Description |
|---|---|---|
| `--base-model` | `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` | Base model |
| `--max-steps` | `500` | Training steps |
| `--batch-size` | `4` | Per-device batch size |
| `--gradient-accumulation` | `4` | Gradient accumulation steps |
| `--learning-rate` | `2e-4` | Learning rate |
| `--output-repo` | Required | Where to push the trained model |
| `--wandb-project` | None | Wandb project for logging |

### Performance

| Environment | Speed | Why |
|---|---|---|
| Colab A100 | ~0.36 it/s | Network latency |
| HF Jobs A100 | ~0.74 it/s | Co-located compute |

Streaming is ~2x faster on HF Jobs because compute is co-located with the data.


## 🎨 VLM Streaming Fine-tuning (Qwen3-VL)

Fine-tune Vision Language Models with streaming datasets - ideal for large image-text datasets.

**Script:** `vlm-streaming-sft-unsloth-qwen.py`
**Default model:** `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit`
**Example dataset:** `davanstrien/iconclass-vlm-sft`

**Note:** This script uses pinned dependencies (`transformers==4.57.1`, `trl==0.22.2`) matching the official Unsloth Qwen3-VL notebook for maximum compatibility.
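
With uv scripts, pins like these live in PEP 723 inline metadata at the top of the file. A sketch of what that header might look like (the script's actual list may pin more packages):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "unsloth",
#     "transformers==4.57.1",
#     "trl==0.22.2",
# ]
# ///
```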

### Quick Start

```bash
# Run on HF Jobs (recommended)
hf jobs uv run \
  --flavor a100-large \
  --secrets HF_TOKEN \
  -- \
  https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
  --max-steps 500 \
  --output-repo your-username/vlm-finetuned

# With Trackio monitoring dashboard
hf jobs uv run \
  --flavor a100-large \
  --secrets HF_TOKEN \
  -- \
  https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
  --max-steps 500 \
  --output-repo your-username/vlm-finetuned \
  --trackio-space your-username/trackio
```

### Why Streaming for VLMs?

- **No disk space needed** - images stream directly from the Hub
- **Works with massive datasets** - train on datasets larger than your storage
- **Memory efficient** - Unsloth uses ~60% less VRAM
- **2x faster** - Unsloth optimizations for Qwen3-VL
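
The same `streaming=True` pattern applies to image datasets; a minimal sketch that inspects one sample of the example dataset without downloading anything:

```python
from datasets import load_dataset

# Images are decoded lazily as the stream is consumed; nothing hits disk.
dataset = load_dataset(
    "davanstrien/iconclass-vlm-sft",
    split="train",
    streaming=True,
)

sample = next(iter(dataset))
print(sample.keys())             # expect 'images' and 'messages'
print(sample["images"][0].size)  # a PIL image, fetched on demand
```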

### Verified Performance

Tested on HF Jobs with an A100-80GB:

| Setting | Value |
|---|---|
| Model | Qwen3-VL-8B (4-bit) |
| Dataset | iconclass-vlm-sft |
| Speed | ~3s/step |
| 50 steps | ~3 minutes |
| Starting loss | 4.3 |
| Final loss | ~0.85 |

### Options

| Argument | Default | Description |
|---|---|---|
| `--base-model` | `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit` | Base VLM model |
| `--dataset` | `davanstrien/iconclass-vlm-sft` | Dataset with `images` + `messages` columns |
| `--max-steps` | `500` | Training steps (required for streaming) |
| `--batch-size` | `2` | Per-device batch size |
| `--gradient-accumulation` | `4` | Gradient accumulation steps |
| `--learning-rate` | `2e-4` | Learning rate |
| `--lora-r` | `16` | LoRA rank |
| `--lora-alpha` | `16` | LoRA alpha (same as `r`, per the Unsloth notebook) |
| `--output-repo` | Required | Where to push the trained model |
| `--trackio-space` | None | HF Space for the Trackio dashboard |

### Dataset Format

The script works with any dataset that has `images` and `messages` columns in the standard VLM conversation format:

```python
{
    "images": [<PIL.Image>],  # single image or list of images
    "messages": [
        {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image"}]},
        {"role": "assistant", "content": [{"type": "text", "text": "The image shows..."}]}
    ]
}
```

Compatible datasets: any Hub dataset following this format, such as `davanstrien/iconclass-vlm-sft` (the default example).

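If your data has plain image and caption columns instead, a small `map` can produce this structure on the fly. A hypothetical sketch (the dataset id and the `image`/`caption` column names are assumptions; adjust to your dataset):

```python
from datasets import load_dataset

# Hypothetical source dataset with plain image + caption columns.
raw = load_dataset("your-username/your-captions", split="train", streaming=True)

def to_vlm_format(example):
    """Wrap one captioned image in the images + messages format above."""
    return {
        "images": [example["image"]],
        "messages": [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": "Describe this image"},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": example["caption"]},
            ]},
        ],
    }

dataset = raw.map(to_vlm_format, remove_columns=["image", "caption"])
```
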
### Calculating Steps from Dataset Size

Since streaming datasets don't expose their length, use this formula:

```
steps = dataset_size / (batch_size * gradient_accumulation)
```

For example, with 10,000 samples, `batch_size=2`, and `gradient_accumulation=4`:

```
steps = 10000 / (2 * 4) = 1250 steps for 1 epoch
```
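
The same arithmetic as a small helper (the `epochs` parameter is a convenience the script itself doesn't expose):

```python
def steps_for(dataset_size: int, batch_size: int,
              gradient_accumulation: int, epochs: int = 1) -> int:
    """Steps needed to see `dataset_size` samples `epochs` times."""
    samples_per_step = batch_size * gradient_accumulation
    return (dataset_size * epochs) // samples_per_step

print(steps_for(10_000, batch_size=2, gradient_accumulation=4))  # -> 1250
```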

πŸš€ Running on HF Jobs

# Basic usage
hf jobs uv run latin-llm-streaming.py --flavor a100-large --secrets HF_TOKEN

# With timeout for long training
hf jobs uv run latin-llm-streaming.py --flavor a100-large --timeout 2h --secrets HF_TOKEN

# Pass script arguments after --
hf jobs uv run latin-llm-streaming.py --flavor a100-large -- --max-steps 1000 --batch-size 8

### Available Flavors

- `a100-large` - 80GB VRAM (recommended)
- `a10g-large` - 24GB VRAM
- `t4-small` - 16GB VRAM

πŸ”— Resources


Made with πŸ¦₯ Unsloth
