llama-3.1-8b-instruct Performance Analytics - Real-time Metrics, Token Usage & Cost Analysis

deepinfra • llama-3.1-8b-instruct • Performance Analytics

Core Performance Metrics

Total Requests: 14 (55.6%)
Error Rate: 0.00% (0.0%)
Total Input Tokens: 30,310 (46.9%)
Total Output Tokens: 1,089 (112.3%)

Access llama-3.1-8b-instruct through LangDB AI Gateway


Integrate with Meta's llama-3.1-8b-instruct and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
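Unified-API gateways typically expose an OpenAI-compatible chat-completions endpoint. The sketch below shows roughly what such a request looks like; the URL, header names, and exact model identifier are placeholders for illustration, not confirmed LangDB values — check the dashboard for your project's real endpoint and key.

```python
# Sketch: calling llama-3.1-8b-instruct through an OpenAI-compatible
# gateway endpoint. GATEWAY_URL and API_KEY are placeholders.
import json
import urllib.request

GATEWAY_URL = "https://api.langdb.example/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

payload = {
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment with real credentials
```

Because the request shape is OpenAI-compatible, switching among the gateway's other models is just a change of the `model` field.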


Performance Percentiles

Response Time: 1.94s (30.3%)
TTFT: 1.94s (30.4%)
TPS (Tokens/Second): 1154.9 (36.9%)
TPOT (Time/Output Token): 0.020 ms

Performance Trends

Oct 2 - Oct 9, 2025

Request Volume (daily API requests): 14
Performance (TPS): 1154.93 tokens/s
Response Time (average latency): 1941.90 ms
TTFT (Time to First Token): 1941.90 ms

Token Analytics

Token usage distribution and efficiency metrics

Token Distribution (input vs. output token usage):
Input Tokens: 30,310
Output Tokens: 1,089
Total Tokens: 31,399

Token Usage Timeline: daily token consumption trends
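The distribution figures above can be sanity-checked with simple arithmetic: the total is the sum of input and output tokens, and the output share shows how prompt-heavy the workload is. A minimal sketch using the dashboard values:

```python
# Token accounting check: total = input + output.
input_tokens = 30_310
output_tokens = 1_089
total_tokens = input_tokens + output_tokens   # 31,399, matching the dashboard

# Output share of total traffic -- low values mean most tokens
# are spent on prompts rather than generation.
output_share = output_tokens / total_tokens
print(f"Total: {total_tokens:,}  Output share: {output_share:.1%}")
```

For this workload the output share is only about 3.5%, so input-token pricing dominates the cost of these requests.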