Explore AI models with community-written guides. Learn what each model excels at and how to craft the best prompts for it.
ByteDance / Peking University
14B autoregressive diffusion model generating 60-second videos at real-time speed on a single GPU.
Kuaishou
Chinese video model with impressive motion quality, though generation times are longer.
Kuaishou
Major upgrade with improved motion quality, longer clips, and better facial expressions. Cost-effective for social media content.
Lightricks
Open-source 22B diffusion transformer. Native 4K video with synchronized audio generation in a single pass.
Luma
Minimax
Chinese video model known for consistent character identity and long-form generation.
Pika
Fast video generation with good creative control and image-to-video capabilities.
Pika
Enhanced creative effects, improved scene generation, and new 'Scenes' feature for multi-shot storytelling.
Runway
Professional video generation with precise motion control and camera direction.
Runway
Runway's latest with improved temporal consistency, camera control, and up to 20-second coherent clips.
ByteDance
ByteDance's video generation model with high motion quality and dance/movement specialization.
OpenAI
OpenAI's video generation model producing cinematic-quality clips from text prompts.
Google
Google's video generation model producing high-fidelity clips with cinematic camera control.