I want to use the ChatGPT (gpt-3.5-turbo) API directly in React Native (Expo) with a word-by-word stream. Here is a working example without streaming:
```javascript
fetch('https://api.openai.com/v1/chat/completions', {
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'hello' }],
    temperature: 0.3,
    max_tokens: 2000,
  }),
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    Authorization: 'Bearer ' + API_KEY,
  },
}).then((response) => {
  console.log(response); // If you want to check the full response
  if (response.ok) {
    response.json().then((json) => {
      console.log(json); // If you want to check the response as JSON
      console.log(json.choices[0].message.content); // HERE'S THE CHATBOT'S RESPONSE
    });
  }
});
```
What can I change to stream the data word by word?
2 Answers
OpenAI APIs rely on SSE (Server-Sent Events) to stream the response back to you. If you set the stream parameter to true in your API request, you will receive chunks of data as OpenAI generates them.
This creates the illusion of a real-time response that mimics someone typing.
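With stream: true, each SSE line has the form data: {json} and carries a small delta of the message, and the stream ends with data: [DONE]. Here is a minimal sketch of a parser for one such line; the sample payloads are illustrative, shaped like the streaming API's chunks:

```javascript
// Parse a single SSE line from an OpenAI streaming response.
// Returns the text delta, or null when the line carries no content
// (e.g. the final "data: [DONE]" sentinel or a role-only first chunk).
function extractDeltaContent(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length);
  if (payload === '[DONE]') return null;
  const parsed = JSON.parse(payload);
  return parsed.choices?.[0]?.delta?.content ?? null;
}

// Illustrative chunks, shaped like the streaming API's output:
const sample = [
  'data: {"choices":[{"delta":{"role":"assistant"}}]}',
  'data: {"choices":[{"delta":{"content":"Hello"}}]}',
  'data: {"choices":[{"delta":{"content":" there"}}]}',
  'data: [DONE]',
];
const text = sample.map(extractDeltaContent).filter(Boolean).join('');
console.log(text); // "Hello there"
```

Appending each delta to state as it arrives is what produces the typing effect.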
The hardest part to figure out might be how to connect your frontend with your backend: every time the backend receives a new chunk, you want to display it in the frontend.
I created a simple Next.js project on Replit that demonstrates just that. Live demo
You will need to install the better-sse package:
npm install better-sse
Server side
In an API route file:
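The original answer's route code was not included, so here is a sketch of what such a route can look like using better-sse's createSession. The file name pages/api/chat.js, the OPENAI_API_KEY env variable, and passing the prompt as a query parameter are all assumptions; error handling is omitted:

```javascript
// pages/api/chat.js (file name is an assumption) — proxy OpenAI's stream to the client.

// Build the OpenAI request body; stream: true is the key change from the question.
function buildStreamBody(messages) {
  return JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages,
    temperature: 0.3,
    max_tokens: 2000,
    stream: true,
  });
}

async function handler(req, res) {
  // better-sse sets the SSE headers and gives us a push() helper.
  const { createSession } = await import('better-sse');
  const session = await createSession(req, res);

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      Authorization: 'Bearer ' + process.env.OPENAI_API_KEY,
    },
    body: buildStreamBody([{ role: 'user', content: req.query.prompt }]),
  });

  // Read OpenAI's SSE stream and forward each text delta to our client.
  // For simplicity this assumes each chunk contains whole lines; production
  // code should buffer partial lines across reads.
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value).split('\n')) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice('data: '.length);
      if (payload === '[DONE]') {
        session.push('[DONE]'); // tell our client the stream is over
        return;
      }
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) session.push(delta);
    }
  }
}

// In Next.js, export the handler as the route's default export:
// export default handler;
```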
On your front end you can now call this API route:
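In the browser, the built-in EventSource API can subscribe to that route (React Native has no built-in EventSource, so there you would need a polyfill library; which one is your choice). A sketch, assuming the server route above that pushes plain text deltas and a "[DONE]" sentinel:

```javascript
// Fold one SSE message into the accumulated text; returns { text, done }
// so the caller knows when to close the connection.
function reduceMessage(text, data) {
  if (data === '[DONE]') return { text, done: true };
  return { text: text + data, done: false };
}

// Subscribe to the API route and report the growing text on every chunk.
// In React, onUpdate would typically be a setState call.
function subscribeToChat(prompt, onUpdate) {
  const source = new EventSource('/api/chat?prompt=' + encodeURIComponent(prompt));
  let text = '';
  source.onmessage = (event) => {
    // better-sse JSON-serializes pushed data, so decode it first.
    const next = reduceMessage(text, JSON.parse(event.data));
    text = next.text;
    onUpdate(text);
    if (next.done) source.close();
  };
  return source;
}
```

Each onUpdate call re-renders with a slightly longer string, which is the word-by-word effect the question asks for.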
Hope this makes sense and helps others with a similar issue.
Visit this GitHub repository: https://github.com/jonrhall/openai-streaming-hooks. I recommend exploring this library, as it offers React hooks that run entirely on the client side, requiring no server support.