Meituan releases efficient reasoning model LongCat-Flash-Thinking
2025-09-22 14:12:31

On September 22nd, Meituan released LongCat-Flash-Thinking, a highly efficient reasoning model. According to Meituan, on the AIME25 benchmark the model's agentic tool invocation saved 64.5% of tokens compared with solving the same problems without tool invocation, while maintaining 90% accuracy. LongCat-Flash-Thinking is now open source on Hugging Face and GitHub, and can be tried on its official website.
ASIA TECH WIRE
