llama3-2-3b-instruct-v1.0 Performance Analytics - Real-time Metrics, Token Usage & Cost Analysis


Core Performance Metrics

Total Requests: 2
Error Rate: 50.00%
Total Input Tokens: 86,434
Total Output Tokens: 542
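The aggregates above can be reproduced from per-request logs. A minimal sketch, assuming a hypothetical log schema (the per-request token splits below are invented for illustration; only the totals match this page):

```python
# Aggregate core metrics from per-request log records.
# Field names and per-request token splits are illustrative,
# not LangDB's actual schema; only the totals match the page.
requests = [
    {"ok": True,  "input_tokens": 86_200, "output_tokens": 450},
    {"ok": False, "input_tokens": 234,    "output_tokens": 92},
]

total_requests = len(requests)
error_rate = sum(not r["ok"] for r in requests) / total_requests * 100
total_input = sum(r["input_tokens"] for r in requests)
total_output = sum(r["output_tokens"] for r in requests)

print(f"Total Requests: {total_requests}")       # 2
print(f"Error Rate: {error_rate:.2f}%")          # 50.00%
print(f"Total Input Tokens: {total_input:,}")    # 86,434
print(f"Total Output Tokens: {total_output:,}")  # 542
```

With 2 requests, a single failure yields the 50.00% error rate shown.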


Performance Percentiles

Response Time: 9.67 s
TTFT (Time to First Token): 4.79 s
TPS (Tokens/Second): 4,499.1
TPOT (Time per Output Token): 0.040 ms
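These latency metrics are typically derived from streaming timestamps. A minimal sketch using the averaged figures from the trends below (how the dashboard itself computes TPS is an assumption; the total-token formula merely appears to reproduce the reported 4,499 figure):

```python
# Per-request averages taken from the trends section of this page.
input_tokens = 86_434 / 2   # average input tokens per request
output_tokens = 542 / 2     # average output tokens per request
ttft_s = 4.7919             # time to first token (4791.90 ms)
response_s = 9.666          # total response latency (9666.00 ms)

# Output-only decode rate: tokens emitted after the first token arrives.
decode_s = response_s - ttft_s
out_tps = output_tokens / decode_s

# Throughput over all tokens (input + output) per request; this
# appears to match the dashboard's 4499 TPS figure, suggesting its
# TPS counts input tokens as well (an inference, not documented).
total_tps = (input_tokens + output_tokens) / response_s

print(f"output-only TPS: {out_tps:.1f}")   # 55.6
print(f"total-token TPS: {total_tps:.1f}") # 4499.1
```

The gap between the two figures is why throughput numbers for long-prompt workloads (86k input tokens vs. 542 output tokens here) should always state which tokens they count.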

Performance Trends (Sep 12 - Sep 19, 2025)

Request Volume (daily API requests): 2
Performance (tokens per second): 4,499.08 tokens/s
Response Time (average response latency): 9,666.00 ms
TTFT (Time to First Token): 4,791.90 ms

Token Analytics

Token usage distribution and efficiency metrics.

Token Distribution (input vs. output usage):
Input Tokens: 86,434
Output Tokens: 542
Total Tokens: 86,976

Token Usage Timeline: daily token consumption trends (chart not reproduced).
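The distribution figures are internally consistent: the total is the sum of input and output, and the output share is small. A quick check:

```python
# Token totals taken directly from the Token Distribution figures.
input_tokens, output_tokens = 86_434, 542
total_tokens = input_tokens + output_tokens
output_share = output_tokens / total_tokens * 100

print(f"Total Tokens: {total_tokens:,}")     # 86,976
print(f"Output share: {output_share:.2f}%")  # 0.62%
```

Output tokens make up well under 1% of usage, so for this workload billing and latency are dominated by the input (prompt) side.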