
I have created an OpenAI API integration in Python that responds to any type of prompt.

I want to make the API respond only to requests related to generating an ad from a product description, and to greetings. If the user sends a request that is not related to this task, the API should send a message like "I'm not suitable for tasks like this."


import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
  model="text-davinci-003",
  prompt="Write a creative ad for the following product to run on Facebook aimed at parents:nnProduct: Learning Room is a virtual environment to help students from kindergarten to high school excel in school.",
  temperature=0.5,
  max_tokens=100,
  top_p=1.0,
  frequency_penalty=0.0,
  presence_penalty=0.0
)

I want to update the code to generate a chat like this: make the bot understand ad-generation and greeting requests and ignore everything else.

EX:-

user:- Hello

api:- Hello, How can I assist you today with your brand?

user:- Write a social media post for the following product to run on Facebook aimed at parents:\n\nProduct: Learning Room is a virtual environment to help students from kindergarten to high school excel in school.

api:- Are you looking for a way to give your child a head start in school? Look no further than Learning Room! Our virtual environment is designed to help students from kindergarten to high school excel in their studies. Our unique platform offers personalized learning plans, interactive activities, and real-time feedback to ensure your child is getting the most out of their education. Give your child the best chance to succeed in school with Learning Room!

user:- where is the united states located?

api:- I’m not suitable for this type of tasks.

So, how can I update my code?

2 Answers


  1. A naive implementation would be to check each user prompt before you send it to GPT-3 and make sure it includes words related to advertisements, by comparing every word in the prompt against a hash table of related words.

    If the prompt contains enough related words, let it go through. Otherwise, replace the prompt with 'Say "I'm not suitable for this type of tasks."'.
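
    For example, a rough sketch of that pre-check (the AD_KEYWORDS set, the GREETINGS set, and the min_hits threshold are placeholders you would tune yourself) could look like this:

    # Hypothetical keyword pre-check run before calling the OpenAI API.
    AD_KEYWORDS = {"ad", "advert", "advertisement", "product", "brand",
                   "marketing", "campaign", "facebook", "post", "promotion"}
    GREETINGS = {"hello", "hi", "hey"}

    def filter_prompt(prompt, min_hits=2):
        words = {w.strip(".,!?:").lower() for w in prompt.split()}
        # Let greetings and ad-related prompts through unchanged.
        if words & GREETINGS or len(words & AD_KEYWORDS) >= min_hits:
            return prompt
        # Otherwise force the model to refuse.
        return 'Say "I\'m not suitable for this type of tasks."'

    You would call filter_prompt on the user's text and pass its return value as the prompt to openai.Completion.create.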

  2. Try gpt-3.5-turbo (it is about 10x cheaper than davinci).

    # example in python=3.9 /// openai==0.27.0

    import openai

    openai.api_key = "YOUR_TOKEN"

    messages = []

    # Be as specific as possible about the behavior the bot should have.
    system_content = '''You are a marketing assistant called MarkBot.
    You only respond to greetings and marketing-related questions.
    For any other question you must answer "I'm not suitable for this type of tasks.".'''

    messages.append({"role": "system", "content": system_content})

    prompt_text = 'Hi, How can i improve my sellings of cakes?'

    messages.append({"role": "user", "content": prompt_text})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=1000,
        temperature=0.5)

    # Print the assistant's reply.
    print(response["choices"][0]["message"]["content"])

    # In my tests:
    #
    # Q: 'How many states does Brazil have?'
    # A: "I'm not suitable for this type of tasks."
    #
    # Q: 'Hi, What can you do? Can you help me sell more cakes?'
    # A: "Hello! As a marketing assistant, I can assist you in
    #    developing a marketing plan for your cake business,
    #    including identifying your target audience, creating
    #    advertising materials, and implementing promotional campaigns.
    #    Let me know if you have any specific questions or concerns!"

    Note that it won't be 100% accurate, but telling ChatGPT how it should behave should help you get the behavior you want, and you can go that way with just one question, without keeping the context. You might want to lower the temperature as well; the documentation says that for temperature, higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic (https://platform.openai.com/docs/guides/chat/instructing-chat-models).

    Note: if you want to keep the context, as far as I've been able to test, building the "chat" functionality consists of adding the entire conversation to "messages". So if you want your bot to keep the context of the conversation (like the ChatGPT website), you need to send the entire conversation history plus the new question to receive a new answer (which is more expensive in terms of tokens, but you will get better answers).
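
    For example, a minimal chat loop that keeps the context (the system prompt and temperature are just the values used above, and input() stands in for however you actually receive user messages) could look like this:

    # Sketch of keeping the whole conversation in "messages" (openai==0.27.0 API).
    import openai

    openai.api_key = "YOUR_TOKEN"

    system_content = '''You are a marketing assistant called MarkBot.
    You only respond to greetings and marketing-related questions.
    For any other question you must answer "I'm not suitable for this type of tasks.".'''

    messages = [{"role": "system", "content": system_content}]

    while True:
        user_text = input("user:- ")
        messages.append({"role": "user", "content": user_text})

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,          # full history + new question every turn
            max_tokens=1000,
            temperature=0.5)

        answer = response["choices"][0]["message"]["content"]
        print("api:-", answer)

        # Append the assistant's reply so the next turn sees the whole conversation.
        messages.append({"role": "assistant", "content": answer})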
