InternVLA-A1-FineTuned Collection • The InternVLA-A1 models (3B and 2B variants) fine-tuned on downstream tasks • 0 items • Updated 8 days ago • 1
Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation Paper • 2409.09016 • Published Sep 13, 2024
Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation Paper • 2410.08001 • Published Oct 10, 2024 • 4
Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation Paper • 2412.15109 • Published Dec 19, 2024
HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit Paper • 2502.13013 • Published Feb 18, 2025
Novel Demonstration Generation with Gaussian Splatting Enables Robust One-Shot Manipulation Paper • 2504.13175 • Published Apr 17, 2025
F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions Paper • 2509.06951 • Published Sep 8, 2025 • 32
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning Paper • 2509.09674 • Published Sep 11, 2025 • 80
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy Paper • 2510.13778 • Published Oct 15, 2025 • 17
X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model Paper • 2510.10274 • Published Oct 11, 2025 • 16