
I want to use the ChatGPT (gpt-3.5-turbo) API directly in React Native (Expo) with word-by-word streaming. Here is a working example without streaming:

fetch(`https://api.openai.com/v1/chat/completions`, {
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'hello' }],
    temperature: 0.3,
    max_tokens: 2000,
  }),
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    Authorization: 'Bearer ' + API_KEY,
  },
}).then((response) => {
  console.log(response); //If you want to check the full response
  if (response.ok) {
    response.json().then((json) => {
      console.log(json); //If you want to check the response as JSON
      console.log(json.choices[0].message.content); //HERE'S THE CHATBOT'S RESPONSE
    });
  }
});

What can I change to stream the data word by word?

2 Answers


  1. OpenAI APIs rely on SSE (Server-Sent Events) to stream the response back to you. If you pass the `stream` parameter in your API request, you will receive chunks of data as they are generated by OpenAI.
    This creates the illusion of a real-time response that mimics someone typing.

    The hardest part to figure out might be how to connect your frontend to your backend: every time the backend receives a new chunk, you want to display it on the frontend.

    I created a simple Next.js project on Replit that demonstrates just that. Live demo

    You will need to install the better-sse package:

    npm install better-sse

    Server side
    In an API route file

    import {createSession} from "better-sse";
    
    const session = await createSession(req, res);
    if (!session.isConnected) throw new Error('Not connected');
    
    const { data } = await openai.createCompletion({
      model: 'text-davinci-003',
      n: 1,
      max_tokens: 2048,
      temperature: 0.3,
      stream: true,
      prompt: `CHANGE TO YOUR OWN PROMPTS`
    }, {
      timeout: 1000 * 60 * 2,
      responseType: 'stream'
    });
    
    // what to do when receiving data from the API
    data.on('data', (text) => {
      const lines = text.toString().split('\n').filter((line) => line.trim() !== '');
      for (const line of lines) {
        const message = line.replace(/^data: /, '');
        if (message === '[DONE]') { //OpenAI sends [DONE] to say it's over
          session.push('DONE', 'error');
          return;
        }
        try {
          const { choices } = JSON.parse(message);
          session.push({text:choices[0].text});
        } catch (err) {
          console.log(err);
        }
      }
    });
    
    // connection is closed
    data.on('close', () => {
      console.log("close")
      res.end();
    });
    
    data.on('error', (err) => {
      console.error(err);
    });
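
    The chunk handler above can be factored into a pure function, which makes the parsing easy to test in isolation. This is only a sketch of the same logic; the sample payload shape mirrors the completions stream format used above:

```javascript
// Parse one raw SSE chunk from the completions stream into text tokens.
// Returns the tokens plus a flag indicating whether [DONE] was seen.
function parseCompletionChunk(chunk) {
  const tokens = [];
  let done = false;
  const lines = chunk.toString().split('\n').filter((line) => line.trim() !== '');
  for (const line of lines) {
    const message = line.replace(/^data: /, '');
    if (message === '[DONE]') { // OpenAI sends [DONE] to signal the end
      done = true;
      break;
    }
    try {
      const { choices } = JSON.parse(message);
      tokens.push(choices[0].text);
    } catch (err) {
      // Ignore unparseable lines; a production version should buffer a
      // trailing incomplete line and prepend it to the next chunk.
    }
  }
  return { tokens, done };
}
```

    The `data.on('data', ...)` handler would then just call this and `session.push` each token.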
    

    On your frontend, you can now call this API route:

    let [result, setResult] = useState("");
    
    //create the sse connection
    const sse = new EventSource(`/api/completion?prompt=${inputPrompt}`);
    
    //listen to incoming messages
    sse.addEventListener("message", ({ data }) => {
      let msgObj = JSON.parse(data)
      setResult((r) => r + msgObj.text)
    });
    

    Hope this makes sense and helps others with a similar issue.
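
    Two caveats for the original question: the server code above uses the completions endpoint (`text-davinci-003`), whose streamed chunks carry `choices[0].text`, while the chat endpoint (`gpt-3.5-turbo`) from the question streams `choices[0].delta.content` instead, and the first and last chunks may omit it. Also, React Native (Expo) has no built-in `EventSource`, so you would need a polyfill package such as `react-native-sse` (an assumption on my part, not something tested here). A sketch of extracting and accumulating chat deltas:

```javascript
// Extract the incremental text from one parsed chat-completions stream event.
// Chat chunks look like {"choices":[{"delta":{"content":"Hi"}}]}; the first
// chunk usually carries only {"delta":{"role":"assistant"}} and the last an
// empty delta, so default to the empty string.
function chatDelta(event) {
  const delta = (event.choices && event.choices[0] && event.choices[0].delta) || {};
  return delta.content || '';
}

// Fold a sequence of events into the displayed message, mirroring the
// setResult((r) => r + msgObj.text) pattern on the frontend.
function accumulate(events) {
  return events.reduce((text, ev) => text + chatDelta(ev), '');
}
```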

  2. Visit this GitHub repository: https://github.com/jonrhall/openai-streaming-hooks. I recommend exploring this library, as it offers React hooks that function solely on the client side, requiring no server support.
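
    Whichever hooks library you use, a purely client-side approach ultimately calls the OpenAI endpoint with `stream: true` via `fetch` and reads the response body incrementally. A framework-free sketch of that reading loop (the helper name is mine, not the library's API; in real use you would pass `response.body` from the `fetch` call):

```javascript
// Read an SSE byte stream (e.g. the body of a fetch() call made with
// stream: true against the chat completions endpoint) and invoke onToken
// for each chat delta.
async function readSseStream(stream, onToken) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Keep the last (possibly incomplete) line in the buffer.
    const lines = buffer.split('\n');
    buffer = lines.pop();
    for (const line of lines) {
      const message = line.replace(/^data: /, '').trim();
      if (!message || message === '[DONE]') continue;
      const choice = JSON.parse(message).choices[0];
      const token = choice.delta && choice.delta.content;
      if (token) onToken(token);
    }
  }
}
```

    Note the trade-off: calling OpenAI directly from the client means shipping your API key with the app, which is the usual argument for routing through a backend as in the first answer.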
