ReqRes Blog

How to use ChatGPT, Claude, or Cursor to build an app against ReqRes

Apr 2, 2026 · 4 min read
guide ai llm developer-tools

You can build a working app against ReqRes in minutes if you give your AI tool the right context. Without it, the LLM will guess at endpoints, invent auth headers, and generate code that looks right but fails on every request.

This guide covers what to include in your prompt, what mistakes LLMs make, and how to get from zero to a working app fast.

The one thing LLMs get wrong

Every LLM knows about reqres.in. It's in their training data. But they know the old ReqRes: the free demo API with /api/users and fixture data that doesn't persist.

The new ReqRes has persistent collections, real auth, and project-scoped API keys. When you say "build an app against ReqRes," the LLM defaults to the demo endpoints. Your app will fetch fixture users and think it's working, but nothing will actually save.

Fix: Tell the LLM which API surface you want. There are two:

  1. Demo API (/api/users, /api/login) - Read-only fixtures, no auth needed, nothing persists
  2. Project API (/api/collections/{slug}/records) - Your data, your schema, full CRUD, persistent
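The difference shows up immediately in the URLs. A quick sketch of both surfaces (the project ID and helper names below are placeholders, not part of any ReqRes SDK):

```javascript
// Sketch of the two API surfaces. The project_id value is a placeholder —
// use your own from the dashboard.
const BASE = "https://reqres.in";

// Demo API: read-only fixtures, no auth, nothing persists.
const demoUsersUrl = `${BASE}/api/users`;

// Project API: your collection's records, scoped to your project.
function recordsUrl(slug, projectId, recordId) {
  const path = recordId ? `/records/${recordId}` : "/records";
  return `${BASE}/api/collections/${slug}${path}?project_id=${projectId}`;
}

// recordsUrl("products", "123")
// → "https://reqres.in/api/collections/products/records?project_id=123"
```

If the code your LLM generates hits `/api/users`, it picked the wrong surface.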

What to include in your prompt

Here's the minimum context an LLM needs to generate working ReqRes code:

I'm building against ReqRes (reqres.in).

My API base URL: https://reqres.in
My project ID: [from your dashboard]
My manage key: pro_[your key] (server-side only, full CRUD)
My public key: pub_[your key] (browser-safe, read-only)

I have a "products" collection with this schema:
  - name: string
  - price: number
  - category: string
  - in_stock: boolean

Endpoints:
  GET    /api/collections/products/records?project_id=MY_ID
  POST   /api/collections/products/records?project_id=MY_ID
  PUT    /api/collections/products/records/{id}?project_id=MY_ID
  DELETE /api/collections/products/records/{id}?project_id=MY_ID

Auth: include x-api-key header on all requests.
Environment: include X-Reqres-Env: prod header.
POST/PUT body must be wrapped: {"data": {"name": "...", "price": 9.99}}

Full API reference: https://reqres.in/llm.txt

That's it. With this context, Claude Code, Cursor, or ChatGPT will usually generate correct requests on the first try.
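As a sanity check, here's what that context boils down to in code: a sketch of a create request with the headers and body wrapper the template spells out. The helper name and placeholder values are mine, not part of the API.

```javascript
// Sketch of a create request per the prompt template: x-api-key auth,
// the X-Reqres-Env header, and the {data: ...} body wrapper.
// projectId and manageKey are placeholders from your dashboard.
function buildCreateRequest(projectId, manageKey, fields) {
  return {
    url: `https://reqres.in/api/collections/products/records?project_id=${projectId}`,
    options: {
      method: "POST",
      headers: {
        "x-api-key": manageKey,   // pro_* key — server-side only
        "X-Reqres-Env": "prod",   // environment header
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ data: fields }), // wrapper the API expects
    },
  };
}

// Usage (Node 18+ has a global fetch):
// const { url, options } = buildCreateRequest("123", process.env.REQRES_KEY,
//   { name: "Widget", price: 9.99, category: "tools", in_stock: true });
// const record = await fetch(url, options).then((r) => r.json());
```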

Common LLM mistakes and how to prevent them

1. Missing the data wrapper

LLMs will generate:

{"name": "Product", "price": 9.99}

The API expects:

{"data": {"name": "Product", "price": 9.99}}

Fix: Include "POST/PUT body must be wrapped in {data: {...}}" in your prompt. Or point the LLM to https://reqres.in/llm.txt which documents this.
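One way to make the wrapper hard to forget is to funnel every write through a tiny helper, so the shape lives in one place instead of at every call site. A hypothetical one-liner:

```javascript
// Hypothetical helper: serialize every POST/PUT body through one function
// so the {data: ...} wrapper can't be forgotten at individual call sites.
const toRecordBody = (fields) => JSON.stringify({ data: fields });

// toRecordBody({ name: "Product", price: 9.99 })
// → '{"data":{"name":"Product","price":9.99}}'
```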

2. Using the demo API instead of collections

The LLM will default to POST /api/users (demo, doesn't persist) instead of POST /api/collections/products/records (project, persists).

Fix: Be explicit: "Use the project collections API, not the demo /api/users endpoints."

3. Confusing the two key types

LLMs will put your manage key (pro_*) in browser-side JavaScript. That key has write/delete permissions and should never be exposed to clients.

Fix: Include both keys in your prompt with clear labels:

  • pro_* = server-side, curl, CI/CD, Node.js scripts
  • pub_* = browser, React, client-side reads

For browser writes, use App Users with session tokens instead.
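A cheap guard in client code catches the leaked-key mistake early. This is a hypothetical check of my own, relying only on the documented pro_/pub_ prefixes:

```javascript
// Hypothetical guard: refuse to send a browser request unless the key
// carries the browser-safe pub_ prefix. pro_* keys stay on the server.
function assertBrowserSafe(apiKey) {
  if (!apiKey.startsWith("pub_")) {
    throw new Error("Only pub_* keys may be used client-side; keep pro_* keys server-side");
  }
  return apiKey;
}
```

Call it once where your client builds its headers; if an LLM pastes a `pro_*` key into browser code, you fail fast instead of shipping write access to every visitor.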

4. Forgetting the environment header

Requests without X-Reqres-Env: prod may hit the wrong environment or fail.

Fix: Include "Add X-Reqres-Env: prod header" in your prompt.

5. Inventing emails for demo login

The demo /api/login only accepts fixture emails ([email protected], [email protected], etc.). LLMs will generate [email protected] which returns 400.

Fix: Include a few fixture emails in your prompt, or say "Use [email protected] for demo login."
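A sketch of the demo login request, with the email kept as an explicit placeholder so it has to come from the fixture list rather than the LLM's imagination:

```javascript
// Sketch of a demo login request. fixtureEmail must be one of the demo's
// fixture addresses — an invented address returns 400.
function buildDemoLogin(fixtureEmail, password) {
  return {
    url: "https://reqres.in/api/login",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email: fixtureEmail, password }),
    },
  };
}

// const { url, options } = buildDemoLogin(FIXTURE_EMAIL, "any-password");
// const res = await fetch(url, options); // 200 with a token, or 400 if the email isn't a fixture
```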

Tool-specific tips

Claude Code (terminal)

Claude Code can read files and fetch URLs. Point it at the LLM guide:

Read https://reqres.in/llm.txt for the full API reference.
Build a React app that lists products from my ReqRes collection.
My API key is pro_xxx, project ID is 123.

Claude Code will fetch the guide, understand the API contract, and generate working code.

Cursor / Windsurf

Paste the full prompt with your keys and schema into the chat. These tools work best with a single, complete prompt rather than a conversation.

You can also add https://reqres.in/llm.txt as a doc reference in Cursor's settings so it's always available.

ChatGPT

ChatGPT can't fetch URLs in all modes. Paste the relevant sections of https://reqres.in/llm.txt directly into your prompt, or use the pre-built prompts in your dashboard at https://app.reqres.in/prompts.

Pre-built prompts

We built a prompt library that injects your actual project data (keys, collections, schemas) into ready-to-paste prompts. Available at:

https://app.reqres.in/prompts

Pick a template (full-stack app, Playwright tests, admin dashboard, etc.), select your project, and copy. The prompt includes everything the LLM needs.

The fastest path: zero to working app

  1. Sign up at https://app.reqres.in (free, no credit card)
  2. You get a project with a "products" collection and 3 sample records
  3. Go to https://app.reqres.in/prompts
  4. Select "Full-stack React app" and your project
  5. Copy the prompt, paste into Claude Code or Cursor
  6. You have a working app with CRUD and auth in under 5 minutes

Machine-readable API reference

If you're building tooling or want to give an LLM the complete API contract, point it at https://reqres.in/llm.txt.

The JSON and text guides include every endpoint, auth pattern, error shape, and example curl command. They're updated with every release.

Ready to ship? Continue in the app.