Qwen-72B

Maintained By: Qwen

  • Parameter Count: 72.3B
  • Context Length: 32,768 tokens
  • Architecture: Transformer with 80 layers, 64 heads
  • License: Tongyi Qianwen License Agreement
  • Paper: arXiv:2309.16609

What is Qwen-72B?

Qwen-72B is a large language model developed by Alibaba Cloud. Trained on over 3 trillion tokens, it features a roughly 150K-token vocabulary and delivers strong performance across multiple languages and tasks.

Implementation Details

The architecture combines RoPE relative position encoding, SwiGLU activation functions, and RMSNorm. Inference is resource-intensive: BF16/FP16 inference requires a minimum of roughly 144 GB of GPU memory (see the loading sketch after the list below).

  • 80 transformer layers with 64 attention heads
  • 8,192-dimensional hidden size
  • 151,851-token vocabulary
  • 32K-token sequence length support
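
The sketch below shows one way to load the model for BF16 inference with Hugging Face transformers. The repository id Qwen/Qwen-72B, the trust_remote_code flag, and the sample prompt are assumptions based on the usual Qwen Hub layout rather than official quickstart code; the memory arithmetic in the comments simply restates the figure above (72.3B parameters × 2 bytes per BF16 weight ≈ 145 GB of weights alone, before the KV cache).

```python
# Minimal sketch (assumed repo id and flags): load Qwen-72B in BF16 and
# generate a short completion. Requires ~144 GB of total GPU memory,
# since 72.3e9 params * 2 bytes (BF16) ~= 144.6 GB of weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-72B"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16: 2 bytes per parameter
    device_map="auto",           # shard weights across all visible GPUs
    trust_remote_code=True,
)

prompt = "Qwen-72B is a large language model that"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With device_map="auto", the weights are sharded across every visible GPU, which is why multiple high-memory accelerators are typically needed to reach the ~144 GB total.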

Core Capabilities

  • Superior performance on benchmarks like MMLU (77.4%), C-Eval (83.3%), and GSM8K (78.9%)
  • Strong multilingual support with optimized tokenization for various languages
  • Advanced code generation and mathematical reasoning capabilities
  • Efficient long-context processing with 32K token support
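
Because the 32K window is a hard limit, it can help to budget prompt length before calling the model. The following sketch does that with only the tokenizer (no GPU needed); the repo id, trust_remote_code flag, placeholder document, and headroom value are illustrative assumptions.

```python
# Minimal sketch: check whether a long prompt fits in Qwen-72B's 32K window.
from transformers import AutoTokenizer

MAX_CONTEXT = 32_768           # Qwen-72B context length in tokens
RESERVED_FOR_OUTPUT = 1_024    # illustrative headroom for generated tokens

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)

long_document = "your long input text here " * 2_000  # placeholder document
n_tokens = len(tokenizer.encode(long_document))
budget = MAX_CONTEXT - RESERVED_FOR_OUTPUT

print(f"prompt tokens: {n_tokens}, budget: {budget}")
if n_tokens > budget:
    print("Too long for a single call: truncate, chunk, or summarize first.")
```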

Frequently Asked Questions

Q: What makes this model unique?

Qwen-72B stands out for its comprehensive multilingual support, extensive training data (3T+ tokens), and state-of-the-art performance across various benchmarks. Its optimized tokenizer and extended context length make it particularly versatile for complex applications.
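
As a rough illustration of the tokenizer claim, the sketch below encodes parallel English and Chinese sentences and compares token counts. The repo id, trust_remote_code flag, and sample sentences are assumptions for demonstration; actual counts depend on the text.

```python
# Illustrative sketch: compare token counts across languages with the
# Qwen tokenizer (large ~151K-entry vocabulary keeps non-English text compact).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)

samples = {
    "English": "Large language models can reason across many languages.",
    "Chinese": "大型语言模型可以跨多种语言进行推理。",
}
for lang, text in samples.items():
    ids = tokenizer.encode(text)
    print(f"{lang}: {len(text)} characters -> {len(ids)} tokens")
```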

Q: What are the recommended use cases?

The model excels in multiple applications including complex reasoning, code generation, mathematical problem-solving, and multilingual text processing. It's particularly suitable for scenarios requiring long context understanding and cross-lingual capabilities.
