ring-1t by openrouter - AI Model Details, Pricing, and Performance Metrics
ring-1t
Ring-1T has undergone continued scaling with large-scale reinforcement learning from verifiable rewards (RLVR), further unlocking the natural-language reasoning capabilities of the trillion-parameter foundation model. RLHF training has also refined the model's general abilities, making this release of Ring-1T more balanced across a wide range of tasks. Ring-1T adopts the Ling 2.0 architecture and is trained from the Ling-1T-base foundation model, which has 1 trillion total parameters with 50 billion activated per token and supports a context window of up to 128K tokens.
Access ring-1t through LangDB AI Gateway
Integrate with inclusionai's ring-1t and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
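Because the gateway exposes the model behind a unified, OpenAI-compatible API, a standard OpenAI SDK client can simply be pointed at it. The sketch below is illustrative only: the base URL, the `LANGDB_API_KEY` environment variable, and the `openrouter/inclusionai/ring-1t` model identifier are assumptions, to be replaced with the values from your LangDB project.

```python
# Minimal sketch of calling ring-1t through an OpenAI-compatible gateway.
# The base URL, environment-variable name, and model identifier below are
# placeholders, not confirmed LangDB values; take the real ones from your
# LangDB project settings.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://<your-langdb-gateway-host>/v1",  # placeholder endpoint
    api_key=os.environ["LANGDB_API_KEY"],              # hypothetical env var name
)

response = client.chat.completions.create(
    model="openrouter/inclusionai/ring-1t",  # assumed model slug; verify in the dashboard
    messages=[
        {"role": "user", "content": "Summarize the idea behind RLVR in two sentences."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```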
Category Scores

| Category | Score |
|---|---|
| AA Coding Index | 35.8 |
| AA Intelligence Index (AAII) | 41.8 |
| AA Math Index | 89.3 |

Benchmark Tests

| Benchmark | Score |
|---|---|
| GPQA | 59.5 |
| HLE | 10.2 |
| LiveCodeBench | 64.3 |
| MMLU-Pro | 80.6 |
| SciCode | 36.7 |
Compare with Similar Models
Code Examples
Integration samples and API usage
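A minimal integration sketch follows, assuming access through OpenRouter's OpenAI-compatible chat completions endpoint. The model slug `inclusionai/ring-1t` is inferred from this page and should be verified against the provider's model list; the prompt and sampling parameters are illustrative.

```python
# Sketch of a chat completion request to ring-1t via OpenRouter's
# OpenAI-compatible HTTP API. The model slug is inferred from this page;
# verify it before use.
import os

import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "inclusionai/ring-1t",  # assumed slug
        "messages": [
            {"role": "system", "content": "You are a careful, step-by-step reasoner."},
            {"role": "user", "content": "What is the derivative of x^x for x > 0?"},
        ],
        "temperature": 0.7,   # illustrative sampling settings
        "max_tokens": 1024,   # well within the 128K context window
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same request works through any OpenAI-compatible SDK by setting its base URL to the chosen gateway and keeping the model identifier unchanged.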
Related Models
Similar models from openrouter