qwq-32b-arliai-rpr-v1 by openrouter - AI Model Details, Pricing, and Performance Metrics
qwq-32b-arliai-rpr-v1
QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined using the base model itself. The model was trained using RS-QLORA+ on 8K sequence lengths and supports up to 128K context windows (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
Access qwq-32b-arliai-rpr-v1 through LangDB AI Gateway
Integrate with ArliAI's qwq-32b-arliai-rpr-v1 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Statistics
Code Examples
Integration samples and API usage
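As a minimal sketch, the model can typically be called through an OpenAI-compatible chat completions endpoint exposed by the gateway. The base URL, the `GATEWAY_API_KEY` environment variable, and the bearer-token header below are illustrative assumptions, not confirmed by this page; consult the gateway documentation for the actual endpoint and authentication scheme.

```python
import json
import os
import urllib.request

# Hypothetical gateway endpoint; replace with the real URL from the docs.
API_URL = "https://gateway.example.com/v1/chat/completions"


def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the model."""
    payload = {
        "model": "qwq-32b-arliai-rpr-v1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    headers = {
        "Content-Type": "application/json",
        # Assumed bearer-token auth; the env var name is illustrative.
        "Authorization": f"Bearer {os.environ.get('GATEWAY_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )


req = build_request("Introduce yourself in character as a ship's navigator.")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a JSON response whose generated text is conventionally found under `choices[0].message.content` in OpenAI-compatible APIs.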
Related Models
Similar models from openrouter