ChatGPT

This script sends sequential queries to ChatGPT, read from an input.csv file, and saves the answers to an output.csv file.

Usage

Being able to send queries to AI chatbots automatically means you can create high volumes of good-quality content without intervention. For example, you could use this script to ask ChatGPT to create an outline for a book, and then ask it to write each chapter in sequence; this tends to produce better responses than asking it to write the whole book in one go. Alternatively, you could instruct it to read a document and then provide an analysis, step by step.
The script also estimates the cost of the work, allowing you to manage your financial commitment. (Don’t worry, though – as a new user of OpenAI you receive free credits, and even after they have been consumed, the costs are low.)
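As an illustration of the cost arithmetic, here is how the price of a single request is worked out from the per-thousand-token rates (the figures below are the gpt-4-1106-preview rates listed in the script; check OpenAI’s pricing page for current values):

```python
# Rates per 1,000 tokens for gpt-4-1106-preview (from the script below)
INPUT_COST_PER_1K = 0.01   # dollars per 1K prompt tokens
OUTPUT_COST_PER_1K = 0.03  # dollars per 1K completion tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request: prompt tokens plus completion tokens."""
    return (input_tokens / 1000) * INPUT_COST_PER_1K + \
           (output_tokens / 1000) * OUTPUT_COST_PER_1K

# A 500-token prompt with an 800-token reply:
# 0.5 * $0.01 + 0.8 * $0.03 = $0.005 + $0.024 = $0.029
print(f"${request_cost(500, 800):.3f}")  # prints $0.029
```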

Prerequisites

  1. You need an account with OpenAI – sign up at https://platform.openai.com to get started
  2. Once you have the account you will need your own API key. At the moment you can create one from the API keys page of your OpenAI account, although its location may change in future. (If in doubt, use Google to help you find the API page.)
  3. Now install the openai library. This script uses the pre-1.0 interface (openai.ChatCompletion), which was removed in openai 1.0, so pin an older release:
pip install "openai<1.0"

How to use it

  1. Save the script below in the folder of your choice.
  2. Edit the file and insert your OpenAI API key
  3. Also edit the model if desired (this script uses “gpt-4-1106-preview”, which may not be available in future, so please check with OpenAI)
  4. Create an input.csv file, which contains the prompts in the first column to be sent to OpenAI.
  5. The output is saved as output.csv
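The prompts are sent in order within a single conversation, so later rows can build on earlier answers. For the book example above, a minimal input.csv (the prompts here are just illustrations) might contain:

```
"Create a chapter outline for a short book about home coffee roasting."
"Write chapter 1, following the outline you just produced."
"Write chapter 2, following the same outline."
```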
import csv
import openai
import time
# In openai versions before 1.0, the error classes live in the openai.error submodule
from openai.error import OpenAIError

# Replace 'YOUR_API_KEY' with your actual OpenAI API key
openai.api_key = 'YOUR_API_KEY'

# PRICING (per 1K tokens)
# model	input	output
# gpt-4-1106-preview	$0.01	$0.03
# gpt-4-1106-vision-preview	$0.01	$0.03
# gpt-4	$0.03	$0.06
# gpt-4-32k	$0.06	$0.12
# gpt-3.5-turbo-1106	$0.0010	$0.0020
# gpt-3.5-turbo-instruct	$0.0015	$0.0020

# more here: https://openai.com/pricing


# Cost per thousand tokens for the model used below (gpt-4-1106-preview);
# edit these values if you change the model
input_token_cost_per_thousand = 0.01
output_token_cost_per_thousand = 0.03
###########################################



# Function to interact with ChatGPT and save responses to a CSV file with retries and rate limiting
def process_queries_with_retries_and_rate_limit(input_file, output_file, max_retries=3, retry_delay=5, max_requests_per_minute=60):
    with open(input_file, 'r') as input_csv, open(output_file, 'w', newline='') as output_csv:
        input_reader = csv.reader(input_csv)
        output_writer = csv.writer(output_csv)
        output_writer.writerow(['Input', 'Response', 'Input Tokens', 'Output Tokens', 'Request Cost'])
        
        conversation_state = []  # List to maintain conversation state including system and user messages
        total_cost = 0.0
        
        # Calculate how many seconds to wait between requests to stay under the rate limit
        min_time_between_requests = 60.0 / max_requests_per_minute

        last_request_time = None

        for row in input_reader:
            if not row:
                continue

            input_query = row[0]
            print(f"Processing query: {input_query}")
            
            for retry in range(max_retries):
                try:
                    # Implement rate limiting logic
                    if last_request_time is not None:
                        elapsed_time = time.time() - last_request_time
                        if elapsed_time < min_time_between_requests:
                            time_to_wait = min_time_between_requests - elapsed_time
                            print(f"Rate limited. Waiting for {time_to_wait:.2f} seconds.")
                            time.sleep(time_to_wait)
                    
                    response = openai.ChatCompletion.create(
                        model="gpt-4-1106-preview", ###EDIT HERE TO CHANGE THE LANGUAGE MODEL
                        messages=conversation_state + [
                            {"role": "user", "content": input_query}
                        ]
                    )
                    
                    if 'choices' in response and response['choices']:
                        # Append the latest user and assistant messages to the conversation state
                        conversation_state.append({"role": "user", "content": input_query})
                        conversation_state.append(response['choices'][0]['message'])

                    input_tokens = response['usage']['prompt_tokens']
                    output_tokens = response['usage']['completion_tokens']
                    input_cost = (input_tokens / 1000) * input_token_cost_per_thousand
                    output_cost = (output_tokens / 1000) * output_token_cost_per_thousand
                    request_cost = input_cost + output_cost
                    total_cost += request_cost

                    output_writer.writerow([
                        input_query,
                        response['choices'][0]['message']['content'],
                        input_tokens,
                        output_tokens,
                        request_cost
                    ])

                    last_request_time = time.time()
                    break

                # Catch the OpenAI API's base exception class
                except OpenAIError as e:
                    print(f"OpenAI API Error: {e}")
                    if retry == max_retries - 1:
                        print(f"Max retries reached for query: {input_query}. Skipping.")
                    else:
                        time.sleep(retry_delay)

                
        print(f"Total cost of the routine: ${total_cost:.2f}")

if __name__ == "__main__":
    input_file_path = "input.csv"
    output_file_path = "output.csv"

    process_queries_with_retries_and_rate_limit(input_file_path, output_file_path)
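The retry pattern used in the script can be seen in isolation in the sketch below (the helper name and the flaky example function are my own, not part of the script): a call is attempted up to max_retries times, sleeping retry_delay seconds between failures and re-raising once the attempts are exhausted.

```python
import time

def call_with_retries(fn, max_retries=3, retry_delay=0.0):
    """Call fn(), retrying on any exception; re-raise after the last attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt == max_retries - 1:
                raise
            time.sleep(retry_delay)

# Example: a function that fails twice, then succeeds on the third attempt
attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("temporary failure")
    return "ok"

result = call_with_retries(flaky, max_retries=3)
print(result)  # prints ok
```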