We are a research group focused on vision-language-action models (VLAs), and we aim to share our research insights with the community.
OpenHelix-Team introduces a family of fully open-source VLAs that achieve state-of-the-art performance at substantially lower cost.
- ReconVLA (AAAI 2026 Best Paper Award): Reconstructive Vision-Language-Action Model as Effective Robot Perceiver
- Spatial Forcing (ICLR 2026): Implicit Spatial Representation Alignment for Vision-Language-Action Model
- OpenTrajBooster (ICRA 2026): Official implementation of TrajBooster
- Unified Diffusion VLA (ICLR 2026): The first open-source diffusion Vision-Language-Action Model
- HiF-VLA: An efficient Vision-Language-Action Model with bidirectional spatiotemporal expansion
- frappe: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning
- VLA-Adapter (AAAI 2026 (Oral)): An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
- LLaVA-VLA (ICRA 2026): A Simple Yet Powerful Vision-Language-Action Model
- CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding
- OpenHelix: An Open-Source Dual-System Vision-Language-Action Model for Robotic Manipulation
- VLA-2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
- LongVLA (CoRL 2025): Unleashing Long-Horizon Capability of Vision-Language-Action Models for Robot Manipulation
- Awesome-Force-Tactile-VLA: A paper list of multimodal VLAs
- Awesome-VLA-RL: A taxonomy and summary of recent advances in VLA + RL
This initiative is jointly established and developed with the following research institutions:
- Westlake University
- The Hong Kong University of Science and Technology (Guangzhou)
- Zhejiang University
- Tsinghua University
- Beijing Academy of Artificial Intelligence (BAAI)
- Xi’an Jiaotong University
- Beijing University of Posts and Telecommunications
If you are interested in discussing our work or joining us, please send an email to songwenxuan0115@gmail.com.