by Mistral
Mistral Large is Mistral AI's flagship language model — a 675B parameter Mixture-of-Experts model (41B active) with a 256K token context window. Fully open-source under Apache 2.0, it delivers frontier-level performance in general knowledge, multilingual conversation, coding, and multimodal understanding while being one of the best $/token values in the LLM market.
Mistral Large follows structured prompts well:
Analyze this Python codebase for security vulnerabilities.
For each issue found, provide:
1. File and line number
2. Vulnerability type (OWASP category)
3. Severity (Critical/High/Medium/Low)
4. Recommended fix with code example
Set behavior and constraints in the system prompt:
System: You are a senior DevOps engineer. Always suggest
infrastructure-as-code solutions. Prefer Terraform over manual
configuration. Flag any security concerns proactively.
With 256K tokens, provide extensive background documents:
Here are our last 6 months of incident reports. Identify
recurring patterns, root causes, and recommend systemic fixes.
| Parameter | Description |
|---|---|
| temperature | Randomness 0-2 |
| max_tokens | Maximum response length |
| top_p | Nucleus sampling threshold |
| system | System prompt for role/behavior |
Quick tips from the community about what works with Mistral Large right now.
Sign in to share a tip.
No tips yet. Add a tip for this model.