llama-3.1-8b-instruct Performance Analytics - Real-time Metrics, Token Usage & Cost Analysis
llama-3.1-8b-instruct (deepinfra) • Performance Analytics
Core Performance Metrics

  Metric                 Value     Change
  Total Requests         110       214.3%
  Error Rate             0.00%     0.0%
  Total Input Tokens     43,523    126.8%
  Total Output Tokens    3,697     412.8%
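The change column and the error rate are simple ratios over the raw request counts. A minimal sketch, assuming the change is period-over-period and using a hypothetical previous-period count of 35 requests:

```python
# Sketch: deriving the headline metrics from raw counts.
# The previous-period value (35) is hypothetical, chosen to reproduce 214.3%.

def pct_change(current: float, previous: float) -> float:
    """Period-over-period change, as shown next to each metric."""
    if previous == 0:
        return float("inf")
    return (current - previous) / previous * 100

def error_rate(failed: int, total: int) -> float:
    """Share of failed requests, in percent."""
    return 0.0 if total == 0 else failed / total * 100

# 110 requests this period vs. a hypothetical 35 last period
print(f"{pct_change(110, 35):.1f}%")   # 214.3%
print(f"{error_rate(0, 110):.2f}%")    # 0.00%
```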
Performance Percentiles

  Metric                    Value        Change
  Response Time             0.78 s       25.9%
  TTFT                      0.78 s       25.8%
  TPS (Tokens/Second)       551.0 TPS    1.8%
  TPOT (Time/Output Token)  0.020 ms     80.0%
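TTFT, TPS, and TPOT are all derived from per-request timestamps. A minimal sketch, assuming hypothetical timing fields (the record layout and the sample values are illustrative, not the gateway's actual schema):

```python
# Sketch: how the latency metrics relate, using a hypothetical
# per-request timing record.

from dataclasses import dataclass

@dataclass
class RequestTiming:
    start: float          # request sent, seconds
    first_token: float    # first output token received, seconds
    end: float            # last output token received, seconds
    output_tokens: int

def ttft(r: RequestTiming) -> float:
    """Time to first token, in seconds."""
    return r.first_token - r.start

def tps(r: RequestTiming) -> float:
    """Output tokens per second over the whole response."""
    return r.output_tokens / (r.end - r.start)

def tpot(r: RequestTiming) -> float:
    """Time per output token: generation time after the first token."""
    if r.output_tokens <= 1:
        return 0.0
    return (r.end - r.first_token) / (r.output_tokens - 1)

# Illustrative request: TTFT 0.78 s, 34 tokens in 1.0 s total.
r = RequestTiming(start=0.0, first_token=0.78, end=1.0, output_tokens=34)
```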
Performance Trends (Nov 11 - Nov 18, 2025)

  Charts:
  Request Volume: daily API requests
  Performance (TPS): tokens per second
  Response Time: average response latency (ms)
  TTFT: time to first token (ms)
Token Analytics
Token usage distribution and efficiency metrics

  Token Distribution (input vs. output token usage)
  Input Tokens:   43,523
  Output Tokens:  3,697
  Total Tokens:   47,220
  Token Usage Timeline (chart): daily token consumption trends
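Usage cost follows directly from the token counters above. A minimal sketch, with placeholder per-million-token prices (not deepinfra's actual rates):

```python
# Sketch: rolling up the token counters and estimating spend.
# The per-million-token prices below are hypothetical placeholders.

input_tokens = 43_523
output_tokens = 3_697
total_tokens = input_tokens + output_tokens   # matches the 47,220 shown above

PRICE_IN_PER_M = 0.05    # USD per 1M input tokens (assumed)
PRICE_OUT_PER_M = 0.08   # USD per 1M output tokens (assumed)

cost = (input_tokens * PRICE_IN_PER_M
        + output_tokens * PRICE_OUT_PER_M) / 1_000_000
print(total_tokens, f"${cost:.4f}")
```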