
OpenHelix Robotics: Building Next-generation Embodied Intelligence

We are a research group focused on vision-language-action models (VLAs). Through our research, we aim to share insights with the community.


Introduction

OpenHelix-Team develops a family of fully open-source Vision-Language-Action Models (VLAs) that achieve state-of-the-art performance at substantially lower cost.

Visual Feature Alignment for VLAs

  • ReconVLA (AAAI 2026 Best Paper Award): Reconstructive Vision-Language-Action Model as Effective Robot Perceiver
  • Spatial Forcing (ICLR 2026): Implicit Spatial Representation Alignment for Vision-Language-Action Model

Humanoid VLAs

World-modeling VLAs

  • Unified Diffusion VLA (ICLR 2026): The first open-source diffusion Vision-Language-Action model
  • HiF-VLA: An efficient, bidirectional spatiotemporal expansion Vision-Language-Action Model
  • frappe: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment
  • VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning

General Foundation Models

  • VLA-Adapter (AAAI 2026 Oral): An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
  • LLaVA-VLA (ICRA 2026): A Simple Yet Powerful Vision-Language-Action Model

Efficient VLAs

  • CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding
  • OpenHelix: An Open-Source Dual-System Vision-Language-Action Model for Robotic Manipulation

Visually Enhanced Frameworks

  • VLA-2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
  • LongVLA (CoRL 2025): Unleashing Long-Horizon Capability of Vision-Language-Action Models for Robot Manipulation

Awesome VLAs

Collaborating Institutions

This initiative is jointly established and developed with the following research institutions:

  • Westlake University
  • The Hong Kong University of Science and Technology (Guangzhou)
  • Zhejiang University
  • Tsinghua University
  • Beijing Academy of Artificial Intelligence (BAAI)
  • Xi’an Jiaotong University
  • Beijing University of Posts and Telecommunications

Contact

If you are interested in discussion or joining us, please send an email to songwenxuan0115@gmail.com.

Pinned

  • Awesome-Force-Tactile-VLA: A paper list of multimodal VLAs
