AI is everywhere now — from content creation tools to customer support bots. If you’re building a React app, it’s surprisingly easy to integrate OpenAI or Gemini (Google’s AI) using just fetch() or any HTTP client.
In this blog, I’ll show you how to connect either API to your frontend securely and efficiently.
🧠 Use Cases for AI in a React App
Here are a few things you can do with OpenAI or Gemini:
- 🧾 Summarize or generate content
- 💬 Build a chatbot UI
- 🧠 Implement grammar/spell correction
- 📊 Analyze user input or extract meaning
⚙️ What You’ll Need
- A React app (Next.js or Vite — doesn't matter)
- An API key from either OpenAI or Gemini
- A backend route or proxy (recommended to avoid exposing the key)
✅ Option 1: Using OpenAI API
1. Get an API Key
Go to: https://platform.openai.com/account/api-keys
⚠️ Never expose this key in your frontend!
2. Set Up a Backend Proxy Route (e.g. in Next.js)
```ts
// app/api/openai/route.ts (App Router)
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await response.json();
  return new Response(JSON.stringify(data), { status: 200 });
}
```
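In production you would likely validate the prompt before forwarding it to OpenAI. Here is a minimal sketch — `validatePrompt` is a hypothetical helper (not part of the route above), and the 4,000-character cap is an arbitrary choice, not an API limit:

```typescript
// Sketch: basic prompt validation before forwarding to the AI provider.
// Rejects empty input and caps length so a single request can't grow
// unbounded (the 4000-character limit here is arbitrary).
function validatePrompt(prompt: unknown): string {
  if (typeof prompt !== "string" || prompt.trim().length === 0) {
    throw new Error("Prompt must be a non-empty string");
  }
  if (prompt.length > 4000) {
    throw new Error("Prompt too long");
  }
  return prompt.trim();
}

console.log(validatePrompt("  Summarize this article  ")); // "Summarize this article"
```

You could call this at the top of the route handler and return a 400 response when it throws, so bad input never reaches the paid API.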
3. Frontend Usage in React
"use client";
import { useState } from "react";
export default function AIForm() {
const [input, setInput] = useState("");
const [response, setResponse] = useState("");
const handleSubmit = async () => {
const res = await fetch("/api/openai", {
method: "POST",
body: JSON.stringify({ prompt: input }),
});
const data = await res.json();
setResponse(data.choices[0].message.content);
};
return (
<div className="space-y-4">
<textarea
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Ask something..."
className="w-full p-3 border rounded"
/>
<button onClick={handleSubmit} className="bg-blue-600 text-white px-4 py-2 rounded">
Submit
</button>
{response && <p className="mt-4 text-gray-700">{response}</p>}
</div>
);
}
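The `handleSubmit` above assumes the proxy always succeeds. A more defensive sketch is below — `askAI` and its `fetchFn` parameter are hypothetical names I'm introducing here; injecting `fetchFn` (defaulting to the global `fetch`) makes the helper easy to test without a network:

```typescript
// Sketch: calling the proxy route with basic error handling.
// fetchFn defaults to the global fetch; passing a stub in tests
// avoids any real network calls.
async function askAI(
  prompt: string,
  fetchFn: typeof fetch = fetch
): Promise<string> {
  const res = await fetchFn("/api/openai", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) {
    throw new Error(`Proxy request failed: ${res.status}`);
  }
  const data = await res.json();
  // Optional chaining guards against error payloads with no choices.
  return data.choices?.[0]?.message?.content ?? "";
}
```

In the component you would wrap the call in `try/catch`, show the error to the user, and toggle a loading flag while the request is in flight.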
🤖 Option 2: Using Google Gemini API
1. Enable the Gemini API and get your key from Google AI Studio
2. Set up a proxy route
```ts
// app/api/gemini/route.ts (App Router)
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const response = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ parts: [{ text: prompt }] }],
      }),
    }
  );

  const data = await response.json();
  return new Response(JSON.stringify(data), { status: 200 });
}
```
Google returns responses in a different structure — the text lives under `candidates` rather than `choices` — so you'll need to adjust how you access the output on the frontend.
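A small sketch of that frontend adjustment — the `GeminiResponse` type and `extractGeminiText` helper are hand-written illustrations, not SDK exports; the field names follow the v1beta REST response shape (`candidates` → `content` → `parts` → `text`):

```typescript
// Sketch: pulling the generated text out of a Gemini REST response.
// Hand-written type covering only the fields we read.
type GeminiResponse = {
  candidates?: {
    content?: { parts?: { text?: string }[] };
  }[];
};

function extractGeminiText(data: GeminiResponse): string {
  // Optional chaining returns "" when any level is missing,
  // e.g. on safety blocks or error payloads.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}

// Example payload shaped like a Gemini reply:
const sample: GeminiResponse = {
  candidates: [{ content: { parts: [{ text: "Hello from Gemini!" }] } }],
};

console.log(extractGeminiText(sample)); // "Hello from Gemini!"
```

Keeping this difference in one helper means the rest of your UI code doesn't care which provider answered.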
🔐 Securing API Keys

Always use a `.env` file and access environment variables only on the server. In Next.js, variables without the `NEXT_PUBLIC_` prefix are never bundled into browser code:

```
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_gemini_key
```
🧩 Tailwind Styling Tips

- Wrap long responses with `overflow-auto`
- Add loading states for better UX
- Use `dark:bg-gray-800` and `dark:text-white` if supporting dark mode
🏁 Final Thoughts

AI integration doesn't have to be complicated.

- For simple queries: use OpenAI
- For experimentation: try Gemini
- Always use a backend proxy for security
- Tailwind makes styling the frontend easy
💬 Want a full AI chatbot layout or editor tool using these APIs? Let me know and I’ll build a full UI post next.
