- 📚 I study generalization of LLM post-training and scaling reasoning compute.
- See my personal website.
- Brown University
- https://yongzx.github.io
- @yong_zhengxin
- in/yongzx
Pinned
- BatsResearch/self-jailbreaking: [ICLR'26] Official code for "Self-Jailbreaking: Language Models Can Reason Themselves Out of Safety Alignment After Benign Reasoning Training" (Python, 10 stars)





