AI coding assistants are changing how we build software.
But they still need context about your API.
The problem
When you ask Claude Code or Cursor to build something, you need to explain:
- Your API endpoints
- Authentication headers
- Request/response shapes
- Your data model
- Error handling patterns
That's a lot of context to type out every time.
Prompts that know your project
We built LLM Prompts to solve this.
Each prompt template is pre-filled with your actual project data:
- Your API keys (manage + public)
- Your collection names and schemas
- The correct base URL
- Auth flow examples with your keys
- Error handling patterns
Select your project, pick a prompt, copy it, paste it into your AI tool.
Three categories
Build
Full-stack React apps, authentication flows, admin dashboards, and simple CRUD starters. These prompts give the AI everything it needs to generate working code against your ReqRes backend.
The Full-stack React app prompt is the most popular. It generates a complete Vite + React app with:
- Magic link auth
- User-scoped data
- CRUD operations
- Session token handling
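As a rough sketch of the session-token handling in the generated app (the helper name and shapes here are illustrative assumptions, not the actual prompt output):

```typescript
// Hypothetical sketch: attach the session token from the magic-link
// verify step to every user-scoped request the React app makes.
type RequestInitLike = { method: string; headers: Record<string, string>; body?: string };

function buildAuthedRequest(
  method: string,
  sessionToken: string,
  payload?: unknown
): RequestInitLike {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    // Session token obtained after the magic link is verified
    Authorization: `Bearer ${sessionToken}`,
  };
  return payload !== undefined
    ? { method, headers, body: JSON.stringify(payload) }
    : { method, headers };
}

// Example: creating a user-scoped record
const req = buildAuthedRequest("POST", "sess_abc123", { title: "Ship it", completed: false });
```

The generated app would pass an object like this straight to fetch, so every CRUD call is scoped to the signed-in user.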
Test
Playwright test suites, Postman collections, and CI pipelines. Get real integration tests running against your live API.
The Playwright test suite prompt generates tests for:
- CRUD operations with assertions
- Auth flows (login → verify → session)
- Error cases (401, 404, 429)
- Cleanup helpers
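The real suite uses Playwright's expect syntax; as a framework-free sketch of the shape of its error-case assertions (expectStatus and FakeResponse are hypothetical stand-ins):

```typescript
// Hypothetical helper mirroring the generated error-case assertions.
// In the actual Playwright suite this is roughly expect(response.status()).toBe(...).
interface FakeResponse {
  status: number;
  body: { error?: string };
}

function expectStatus(res: FakeResponse, expected: number): void {
  if (res.status !== expected) {
    throw new Error(`expected ${expected}, got ${res.status}`);
  }
}

// Error cases the generated suite covers:
expectStatus({ status: 401, body: { error: "Missing API key" } }, 401); // no auth header
expectStatus({ status: 404, body: { error: "Not found" } }, 404);       // unknown record id
expectStatus({ status: 429, body: { error: "Rate limited" } }, 429);    // too many requests
```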
Learn
Explore your API with guided prompts that walk through your endpoints and suggest what you could build.
Compatible tools
These prompts work with:
- Claude Code (terminal)
- Cursor
- Windsurf
- ChatGPT
- Claude (web)
Any tool that accepts long-form prompts with markdown and code blocks.
How it works
Each prompt uses placeholders like {{API_KEY}} and {{COLLECTIONS}} that get replaced with your actual data when you copy.
For example, if you have a "tasks" collection with a schema, the prompt includes:
### Collection: tasks
Endpoint: https://reqres.in/api/collections/tasks/records
Schema:
- title: string
- completed: boolean
The AI sees your real structure, not a generic example.
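Concretely, generated code can then be typed against that schema. A minimal sketch, assuming the "tasks" collection above (the Task interface and recordsUrl helper are illustrative, not part of the prompt output):

```typescript
// Record shape derived from the "tasks" schema shown above
interface Task {
  title: string;
  completed: boolean;
}

// Build the records endpoint for a collection (base URL from the prompt)
function recordsUrl(collection: string): string {
  return `https://reqres.in/api/collections/${collection}/records`;
}

const url = recordsUrl("tasks");
// → "https://reqres.in/api/collections/tasks/records"
```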
The API key distinction
We include both keys with clear guidance:
- Manage key (pro_*) - Server-side only. CRUD operations, CI/CD.
- Public key (pub_*) - Browser-safe. Reads and magic link auth.
The prompts explain when to use each, so the AI generates secure code by default.
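One way generated code can keep that distinction is by choosing a key per execution context. A sketch under assumed names (apiKeyFor and the example key values are hypothetical):

```typescript
// Hypothetical helper: pick the right key for the execution context.
// pro_* keys must never reach the browser; pub_* keys are safe to ship.
function apiKeyFor(
  context: "server" | "browser",
  keys: { manage: string; public: string }
): string {
  if (context === "browser") {
    return keys.public; // pub_* — reads and magic link auth only
  }
  return keys.manage;   // pro_* — full CRUD, CI/CD
}

const keys = { manage: "pro_example", public: "pub_example" };
apiKeyFor("browser", keys); // → "pub_example"
```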
Try it
Head to the LLM Prompts page in your dashboard:
👉 https://app.reqres.in/prompts
Select a project, pick a prompt, and paste it into your AI tool.
You'll have working code in minutes, not hours.