
Qwen3-VL-8B-Abliterated-Caption-it-FP8

Qwen3-VL-8B-Abliterated-Caption-it-FP8 is an FP8-compressed variant of prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it. This edition stores the model in mixed BF16 · FP8 (F8_E4M3) precision, significantly reducing memory usage and improving inference throughput while preserving the dense captioning strength and abliterated behavioral characteristics of the original 8B architecture.

The base Qwen3-VL-8B-Abliterated-Caption-it model is a fine-tuned version of Qwen3-VL-8B-Instruct, tailored for abliterated captioning and uncensored image description. It is designed to generate highly detailed, descriptive captions across a broad range of visual categories, including complex, sensitive, or nuanced content, while supporting varying aspect ratios and resolutions.

The checkpoint applies FP8 (8-bit floating point) quantization to both weights and activations (W8A8), produced with an FP8-dynamic recipe and hardware-accelerated on supported GPUs; a sketch of such a recipe follows.
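
For reference, W8A8 FP8-dynamic checkpoints are typically produced with a one-shot quantization pass. Below is a minimal sketch using the llm-compressor library; the tooling, the exact recipe, and the ignore list are assumptions, as the pipeline used for this checkpoint is not documented here.

from transformers import Qwen3VLForConditionalGeneration
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load the full-precision base checkpoint (assumed starting point)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it",
    torch_dtype="auto",
    device_map="auto",
)

# FP8_DYNAMIC: per-channel FP8 weights, dynamic per-token FP8 activations (W8A8).
# The lm_head and vision tower are commonly left unquantized (assumed here).
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:visual.*"],
)

# Dynamic activation scales need no calibration data, so no dataset is passed
oneshot(model=model, recipe=recipe)

model.save_pretrained("Qwen3-VL-8B-Abliterated-Caption-it-FP8", save_compressed=True)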

Key Highlights

  • BF16 · FP8 (F8_E4M3) Compression: Transformer Engine-based FP8 quantization reduces VRAM footprint and improves generation speed while maintaining caption quality.
  • Abliterated Caption Fine-Tuning: Optimized for highly descriptive, dense caption generation with minimized refusal behavior.
  • 8B Vision-Language Architecture: Balanced performance and deployment efficiency compared to larger parameter scales.
  • High-Density Descriptions: Produces richly detailed captions suitable for dataset generation, metadata enrichment, archival systems, and accessibility pipelines.
  • Dynamic Resolution Support: Handles diverse image sizes and aspect ratios effectively.
  • Optimized Deployment: FP8 compression enables smoother deployment on Hopper and other compatible GPU architectures (see the capability check below).
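
Since FP8 execution depends on the GPU generation, a quick probe can confirm native support before deployment. A minimal check with PyTorch (the 8.9 threshold reflects NVIDIA Ada; Hopper reports 9.0):

import torch

# FP8 (F8_E4M3) tensor-core kernels require NVIDIA Ada (SM 8.9) or
# Hopper (SM 9.0) and newer; older GPUs fall back to slower paths.
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")
print("Native FP8 support:", (major, minor) >= (8, 9))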

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the 8B Abliterated Caption FP8 model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it-FP8",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it-FP8"
)

# Build a chat-style request pairing the image with a captioning instruction
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Generate a highly detailed caption for this image."},
        ],
    }
]

# Render the chat template into a generation-ready prompt string
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Collect the image (and any video) inputs referenced in the messages
image_inputs, video_inputs = process_vision_info(messages)

# Pack the prompt and vision inputs into model-ready tensors
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate the caption
generated_ids = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens so only the newly generated text is decoded
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
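
Beyond Transformers, FP8 W8A8 checkpoints in this format are often served with vLLM. A minimal offline-inference sketch, assuming your installed vLLM version supports the Qwen3-VL architecture and this FP8 format (both assumptions):

from vllm import LLM, SamplingParams

# vLLM reads the FP8 quantization config directly from the checkpoint
llm = LLM(model="prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it-FP8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"}},
            {"type": "text", "text": "Generate a highly detailed caption for this image."},
        ],
    }
]

# llm.chat applies the model's chat template before generating
outputs = llm.chat(messages, SamplingParams(max_tokens=512))
print(outputs[0].outputs[0].text)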

Intended Use

  • Dataset Caption Generation: Creating high-density captions for training and archival datasets (a batching sketch follows this list).
  • Metadata Enrichment: Enhancing searchability and indexing for large image collections.
  • Visual Documentation Research: Studying descriptive robustness across complex or sensitive imagery.
  • Creative and Narrative Projects: Producing rich descriptive text for storytelling and world-building.
  • Behavioral Analysis Research: Evaluating the impact of abliterated fine-tuning on captioning behavior.
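
For the dataset caption generation use case, the quick-start code extends naturally into a loop over local files. A minimal sketch that reuses the model and processor loaded above; the prompt, helper name, and file-path handling are illustrative choices, not a documented pipeline.

# Reuses `model`, `processor`, and `process_vision_info` from the quick start
def caption_images(image_paths, prompt="Generate a highly detailed caption for this image."):
    captions = {}
    for path in image_paths:
        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": path},
                    {"type": "text", "text": prompt},
                ],
            }
        ]
        text = processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        image_inputs, video_inputs = process_vision_info(messages)
        inputs = processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt",
        ).to("cuda")
        generated_ids = model.generate(**inputs, max_new_tokens=512)
        # Decode only the newly generated tokens for this image
        trimmed = generated_ids[0][inputs.input_ids.shape[1]:]
        captions[path] = processor.decode(trimmed, skip_special_tokens=True)
    return captions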

Limitations & Risks

Critical Note: This model minimizes built-in refusal behaviors.

  • Sensitive Content Exposure: The model may generate explicit or controversial descriptions depending on the input image.
  • User Responsibility: Outputs must be handled responsibly and used within ethical and legal boundaries.
  • Hardware Requirements: FP8 requires compatible GPU hardware support for optimal performance and efficiency.

Model Details

  • Format: Safetensors
  • Model size: 9B params
  • Tensor types: BF16 · F8_E4M3