Discover and improve AI prompts across all domains and models.
7 prompts
Build an AI agent that reasons and acts using tools. ReAct loop with function calling, error recovery, and conversation memory.
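The ReAct loop named above can be sketched in a few lines. This is a minimal illustration, not the prompt's implementation: `call_model` is a scripted placeholder standing in for a real LLM API, and `calculator` is a toy tool. It shows the three pieces the blurb lists: the Thought/Action/Observation loop, error recovery (tool failures are fed back as observations), and conversation memory (the growing `history` list).

```python
def calculator(expr: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def call_model(history: list[str]) -> str:
    """Placeholder for an LLM call; replies are scripted for the demo."""
    if not any(line.startswith("Observation") for line in history):
        return "Action: calculator[2 + 3]"
    return "Final Answer: 5"

def react(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]          # conversation memory
    for _ in range(max_steps):
        reply = call_model(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        if reply.startswith("Action:"):
            name, _, arg = reply.removeprefix("Action:").strip().partition("[")
            try:
                obs = TOOLS[name](arg.rstrip("]"))   # run the requested tool
            except Exception as e:                   # error recovery: feed the
                obs = f"error: {e}"                  # failure back to the model
            history += [reply, f"Observation: {obs}"]
    return "gave up"

print(react("What is 2 + 3?"))  # -> 5
```

A real agent would replace `call_model` with a function-calling API and cap `max_steps` to bound cost.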
Reliably extract structured JSON from LLM responses. Schema validation, retry on malformed output, and fallback parsing strategies.
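A minimal sketch of the extraction strategy described above, under stated assumptions: schema validation is reduced to a required-keys check, and the fallback parsers (fenced-block and brace-matching regexes) plus the retry wrapper are illustrative, not the prompt's actual code. `call_model` is any zero-argument function returning model text.

```python
import json
import re

def extract_json(text: str, schema_keys: set[str]) -> dict:
    """Strict parse first, then fenced-block and brace-matching fallbacks."""
    candidates = [text]
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.S)
    if fenced:
        candidates.append(fenced.group(1))
    brace = re.search(r"\{.*\}", text, re.S)
    if brace:
        candidates.append(brace.group(0))
    for cand in candidates:
        try:
            obj = json.loads(cand)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and schema_keys <= obj.keys():  # schema check
            return obj
    raise ValueError("no valid JSON matching schema")

def extract_with_retry(call_model, schema_keys: set[str], attempts: int = 3) -> dict:
    """Retry on malformed output; in practice, re-prompt with the error message."""
    last_err = None
    for _ in range(attempts):
        try:
            return extract_json(call_model(), schema_keys)
        except ValueError as e:
            last_err = e
    raise last_err
```

The key idea is ordering the fallbacks from strictest to loosest, so a clean response never pays the regex cost.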
Build a prompt template engine that supports variables, conditionals, loops, and includes. Like Handlebars but for LLM prompts.
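The Handlebars-like behavior above can be sketched with three regex passes — this toy renderer covers variables, conditionals, and loops (includes are omitted for brevity) and is an assumption-laden illustration, not the prompt's engine:

```python
import re

def render(template: str, ctx: dict) -> str:
    """Tiny Handlebars-style renderer: {{var}}, {{#if x}}...{{/if}},
    {{#each xs}}...{{/each}} with `this` bound inside the loop body."""
    # Loops: repeat the body once per item.
    def each(m):
        items = ctx.get(m.group(1), [])
        return "".join(render(m.group(2), {**ctx, "this": item}) for item in items)
    template = re.sub(r"\{\{#each (\w+)\}\}(.*?)\{\{/each\}\}", each, template, flags=re.S)
    # Conditionals: keep the body only if the variable is truthy.
    def iff(m):
        return render(m.group(2), ctx) if ctx.get(m.group(1)) else ""
    template = re.sub(r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}", iff, template, flags=re.S)
    # Variables: missing keys render as empty strings.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(ctx.get(m.group(1), "")), template)

out = render(
    "Hi {{name}}!{{#if vip}} (vip){{/if}} {{#each tags}}[{{this}}]{{/each}}",
    {"name": "Ada", "vip": True, "tags": ["a", "b"]},
)
# out == "Hi Ada! (vip) [a][b]"
```

A production engine would use a real parser rather than regexes so blocks can nest arbitrarily.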
Track LLM API costs across multiple providers. Token counting, cost calculation, budget alerts, and per-feature usage attribution.
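The tracking described above boils down to a small accumulator. A minimal sketch: the model names and per-million-token prices in `PRICES` are placeholders (real provider prices change), and the "alert" is just a print.

```python
from collections import defaultdict

# Illustrative (input, output) USD prices per 1M tokens -- placeholders only.
PRICES = {
    "model-a": (2.50, 10.00),
    "model-b": (3.00, 15.00),
}

class CostTracker:
    def __init__(self, budget_usd: float):
        self.budget = budget_usd
        self.by_feature = defaultdict(float)  # per-feature usage attribution

    def record(self, model: str, feature: str, in_tokens: int, out_tokens: int) -> float:
        in_price, out_price = PRICES[model]
        cost = (in_tokens * in_price + out_tokens * out_price) / 1_000_000
        self.by_feature[feature] += cost
        if self.total() > self.budget:  # budget alert
            print(f"budget alert: ${self.total():.4f} > ${self.budget:.2f}")
        return cost

    def total(self) -> float:
        return sum(self.by_feature.values())

tracker = CostTracker(budget_usd=1.0)
cost = tracker.record("model-a", "chat", in_tokens=1000, out_tokens=500)
# cost == 0.0075 (1000 * $2.50/1M input + 500 * $10.00/1M output)
```

Attributing cost by feature rather than by API key is what makes the numbers actionable later.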
Implement streaming LLM responses with Server-Sent Events. Token-by-token output from API to browser with proper error handling and abort.
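The server side of the streaming above can be sketched as a generator that wraps tokens in SSE frames. This is an illustration under assumptions: `fake_token_stream` stands in for a streaming LLM client, and the generator would be plugged into an HTTP response body (the browser consumes it with `EventSource`). Errors become an `error` event, and the `finally` clause guarantees a terminal `done` event even when the client aborts and closes the generator.

```python
from typing import Iterator

def fake_token_stream() -> Iterator[str]:
    """Stand-in for a streaming LLM API (placeholder, not a real client)."""
    yield from ["Hello", ", ", "world", "!"]

def sse_events(tokens: Iterator[str]) -> Iterator[str]:
    """Format each token as a Server-Sent-Events frame."""
    try:
        for tok in tokens:
            yield f"data: {tok}\n\n"          # one SSE frame per token
    except Exception as e:
        yield f"event: error\ndata: {e}\n\n"  # surface errors to the client
    finally:
        yield "event: done\ndata: \n\n"       # lets EventSource close cleanly

frames = list(sse_events(fake_token_stream()))
# frames[0] == "data: Hello\n\n"; frames[-1] == "event: done\ndata: \n\n"
```

The blank line after each `data:` field is what delimits SSE messages, so it must never be omitted.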
Build an LLM prompt chain with this structured code prompt. Adjustable variables let you fine-tune the output.
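The chaining idea can be shown in miniature. A hypothetical sketch, not the prompt's code: `call_model` is a placeholder that uppercases its input, and `{input}` is the adjustable variable each template exposes; each step's output becomes the next step's input.

```python
def call_model(prompt: str) -> str:
    """Placeholder LLM call (not a real API); echoes the prompt uppercased."""
    return prompt.upper()

def chain(steps: list[str], user_input: str) -> str:
    """Run prompt templates in sequence, piping output to the next step."""
    out = user_input
    for template in steps:
        out = call_model(template.format(input=out))  # {input} is the variable
    return out

result = chain(["summarize: {input}", "translate: {input}"], "hi")
# result == "TRANSLATE: SUMMARIZE: HI"
```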
Build a RAG pipeline with this structured code prompt. Adjustable variables let you fine-tune the output.
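The retrieve-then-augment shape of a RAG pipeline can be sketched with a toy retriever. This is illustrative only: ranking by word overlap stands in for the embedding search and vector store a real pipeline would use, and `build_prompt` simply stuffs the retrieved context above the question.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query
    (a real pipeline would use embeddings and a vector store)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Augment: stuff the top-k documents into the prompt as context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The final step, not shown, is sending `build_prompt`'s output to the model for generation.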