glm-4.5 by zai - AI Model Details, Pricing, and Performance Metrics
glm-4.5
GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options: a "thinking mode" designed for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses.
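As a rough illustration, the snippet below shows how the two modes might be selected through an OpenAI-compatible client. The base URL and the shape of the `thinking` parameter are assumptions made for this sketch; consult the provider's API reference for the exact endpoint and parameter names.

```python
# Sketch: switching GLM-4.5 between "thinking" and "non-thinking" mode
# via an OpenAI-compatible client. The endpoint URL and the "thinking"
# parameter shape are assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR_PROVIDER_HOST/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                    # placeholder credential
)

# "Thinking mode": the model reasons step by step before answering,
# intended for complex reasoning and tool use.
reasoned = client.chat.completions.create(
    model="glm-4.5",
    messages=[{"role": "user", "content": "Plan a 3-step refactor of a legacy module."}],
    extra_body={"thinking": {"type": "enabled"}},  # assumed parameter name
)

# "Non-thinking mode": skip the reasoning phase for low-latency answers.
instant = client.chat.completions.create(
    model="glm-4.5",
    messages=[{"role": "user", "content": "Summarize this sentence in five words."}],
    extra_body={"thinking": {"type": "disabled"}},  # assumed parameter name
)

print(reasoned.choices[0].message.content)
print(instant.choices[0].message.content)
```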
Access glm-4.5 through LangDB AI Gateway
Integrate with zai's glm-4.5 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
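A minimal sketch of a chat completion routed through the gateway, assuming it exposes an OpenAI-compatible endpoint. The base URL and API key below are placeholders, not the gateway's documented values; substitute the ones from your own project settings.

```python
# Minimal sketch of calling glm-4.5 through an OpenAI-compatible gateway
# endpoint. Base URL and credentials are placeholders for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR_GATEWAY_HOST/v1",  # placeholder gateway endpoint
    api_key="YOUR_LANGDB_API_KEY",            # placeholder credential
)

response = client.chat.completions.create(
    model="glm-4.5",  # model identifier as listed on this page
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a one-line Python list comprehension that squares even numbers."},
    ],
)

print(response.choices[0].message.content)
```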
Statistics
Category Scores
Benchmark Tests
| Benchmark | Score |
|---|---|
| AIME | 87.3 |
| AA Coding Index | 54.3 |
| AAII | 49.4 |
| AA Math Index | 92.6 |
| GPQA | 78.2 |
| HLE | 12.2 |
| LiveCodeBench | 73.8 |
| MATH-500 | 97.9 |
| MMLU-Pro | 83.5 |
| SciCode | 34.8 |
Code Examples
Integration samples and API usage
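Since GLM-4.5 is geared toward agent workloads, a typical integration involves tool (function) calling. The sketch below uses the OpenAI-style `tools` schema over a compatible endpoint; the `get_weather` tool and the endpoint URL are hypothetical, and whether the gateway forwards this schema unchanged should be verified against its documentation.

```python
# Sketch of an agent-style tool call with glm-4.5 over an OpenAI-compatible
# API. The tool definition and endpoint below are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR_GATEWAY_HOST/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                   # placeholder credential
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.5",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect the structured arguments.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```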
Related Models
Similar models from zai