
Thursday 30 November 2023

A simple chat app using the openai client.chat.completions endpoint (openai version >= 1.2.0).

import streamlit as st
import openai
from openai import OpenAI
from dotenv import load_dotenv, find_dotenv

# Load environment variables (e.g. OPENAI_API_KEY) from a .env file
load_dotenv(find_dotenv())

st.title('OpenAI Chat App')

# Create the client; it picks up OPENAI_API_KEY from the environment
client = OpenAI()

# Text box for the user's message
user_input = st.text_input("You: ", "")

if st.button('Send'):
    # Send the system prompt and the user's message to the Chat Completions API
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input}
        ],
        temperature=0
    )
    # Show the assistant's reply
    st.write('Assistant: ', response.choices[0].message.content)

This is a Python script that uses the Streamlit library to create a web application and the OpenAI API to generate responses from the gpt-3.5-turbo model.

Here’s a breakdown of what each part of the code does:

  1. Importing necessary libraries:
import streamlit as st
import openai
from openai import OpenAI
from dotenv import load_dotenv, find_dotenv

These lines import the necessary libraries. streamlit is a library for building web applications, openai is the OpenAI Python client (OpenAI is the client class introduced in the 1.x releases), and dotenv is used to load environment variables from a .env file.
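
The client.chat.completions interface used here requires openai >= 1.2.0 (as noted at the top), so it can be worth confirming which version is installed; a minimal check:

from importlib.metadata import version

# Print the installed openai package version; it should be >= 1.2.0
# for the client.chat.completions interface used in this app.
print(version("openai"))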

  2. Loading environment variables:
load_dotenv(find_dotenv())

This line loads environment variables from a .env file: find_dotenv() locates the nearest .env file in your project directory, and load_dotenv() loads its contents into the environment. This is typically used to keep sensitive information like API keys out of the source code.
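
The .env file itself would typically contain a single line such as OPENAI_API_KEY=sk-... (the openai client looks for the OPENAI_API_KEY variable by default). A small sketch to confirm the key was actually loaded:

import os
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())

# The OpenAI client reads this variable automatically if it is set
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY not found - check your .env file")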

  3. Setting up the Streamlit app:
st.title('OpenAI Chat App')

This line sets the title of the web application to ‘OpenAI Chat App’.

  4. Creating an OpenAI client:
client = OpenAI()

This line creates an instance of the OpenAI client, which is used to interact with the OpenAI API.
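
With no arguments, OpenAI() reads the OPENAI_API_KEY environment variable loaded above. The key can also be passed explicitly, which amounts to the same thing:

import os
from openai import OpenAI

# Equivalent to OpenAI() when OPENAI_API_KEY is set in the environment
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])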

  5. Getting user input:
user_input = st.text_input("You: ", "")

This line creates a text input field in the web application where the user can enter their message.
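
Recent Streamlit releases (1.24+) also ship chat-specific widgets; a sketch of the same input using st.chat_input, if you prefer a chat-style box, could look like this:

import streamlit as st

# st.chat_input returns None until the user submits a message
user_input = st.chat_input("Say something")
if user_input:
    st.write("You said:", user_input)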

  6. Sending the user’s message to the OpenAI API and displaying the response:
if st.button('Send'):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input}
        ],
        temperature=0
    )
    st.write('Assistant: ', response.choices[0].message.content)

When the ‘Send’ button is clicked, this block of code sends the user’s message to the OpenAI API, along with a system message that sets the behavior of the assistant. The response from the API is then displayed in the web application.
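
Note that this version sends only the latest message, so the assistant has no memory of earlier turns. One way to carry the conversation forward is to keep the messages list in Streamlit's st.session_state; a rough sketch (the history key name is just illustrative, and OPENAI_API_KEY is assumed to be set):

import streamlit as st
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keep the running conversation in session state across reruns
if "history" not in st.session_state:
    st.session_state.history = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

user_input = st.text_input("You: ", "")
if st.button('Send') and user_input:
    st.session_state.history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=st.session_state.history,
        temperature=0,
    )
    reply = response.choices[0].message.content
    st.session_state.history.append({"role": "assistant", "content": reply})
    st.write('Assistant: ', reply)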

The temperature parameter controls the randomness of the model’s output. A value of 0 makes the output essentially deterministic: the model almost always picks the most likely next token, so repeated requests tend to return the same response.
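
Higher values make repeated requests more varied, while temperature=0 tends to give the same answer every time. A quick way to see the difference (the prompt here is just an example):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for temp in (0, 1):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Name a colour."}],
        temperature=temp,
    )
    print(f"temperature={temp}:", response.choices[0].message.content)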