This example shows how to implement a rate limit on a Next.js API route that
uses the OpenAI chat API. It uses the openai-chat-tokens library to track the
number of tokens used by a gpt-3.5-turbo AI chatbot.
There are two example routes:

- `/api/chat` is the default route, which tracks the user by IP address. It
  applies a limit of 2,000 tokens per hour with a maximum of 5,000 tokens in
  the bucket. This allows for a reasonable conversation length without
  consuming too many tokens.
- `/api/chat_userid` tracks the user by a unique identifier. You could use
  this to track a quota per authenticated user.
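The limits above describe a token bucket: the bucket holds at most 5,000
tokens, refills at 2,000 tokens per hour, and each request withdraws its
estimated token count. The example itself delegates this to Arcjet's rate
limit rule, but the underlying idea can be sketched in a few lines (the class
and parameter names here are illustrative, not part of the SDK):

```typescript
// Minimal token bucket sketch. Illustrative only; the example routes use
// Arcjet's rate limit rule rather than this class.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number, // max tokens the bucket can hold
    private refillRate: number, // tokens added per interval
    private intervalMs: number, // refill interval in milliseconds
    now: number = Date.now(),
  ) {
    this.tokens = capacity; // the bucket starts full
    this.lastRefill = now;
  }

  // Withdraw `requested` tokens; returns true if the request is allowed.
  take(requested: number, now: number = Date.now()): boolean {
    // Refill proportionally to the time elapsed since the last call,
    // without exceeding the bucket's capacity.
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (elapsed / this.intervalMs) * this.refillRate,
    );
    this.lastRefill = now;
    if (requested > this.tokens) return false;
    this.tokens -= requested;
    return true;
  }
}

// One bucket per client; /api/chat keys this by IP address.
const bucket = new TokenBucket(5000, 2000, 60 * 60 * 1000);
```

With these numbers a client can spend up to 5,000 tokens in a burst, then
sustain roughly 2,000 tokens per hour once the bucket has drained.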
- From the root of the project, install the SDK dependencies: `npm ci`
- Enter this directory (`cd examples/nextjs-openai`) and install the example's
  dependencies: `npm ci`
- Rename `.env.local.example` to `.env.local` and add your Arcjet & OpenAI
  keys.
- Start the dev server: `npm run dev`
- Visit `http://localhost:3000`.
- Refresh the page to trigger the rate limit.