Google Gemini: chat script

For my Gemini scripts you will need a Gemini API key; don’t worry, they’re easy to obtain. Apply for your key at https://aistudio.google.com/app/apikey. At the time of writing, some users may need to use a US proxy to obtain the API key. If the link doesn’t work, Google “Gemini API key” for the correct page.

This script allows you to send sequential linked queries to Gemini in a chat format, drawn from the chat.csv file. The answers are saved in the chatoutput.csv file. Unlike the non-chat script below, this one maintains a continuous conversation, so each prompt is answered as part of the same chat.

Usage

This is similar to the ChatGPT script, except that it does not use the OpenAI API and therefore costs nothing to run. It does require a little more setting up. Being able to send queries automatically to AI chatbots means you can create high volumes of good-quality content without intervention. For example, you can use this script to ask Gemini to write a story outline, and then ask it to complete each chapter, prompt by prompt.

Prerequisites

  1. You need a Gemini account
  2. You need a Gemini API key. Apply here: https://aistudio.google.com/app/apikey
  3. Now install the Gemini API library:
pip install -q -U google-generativeai

How to use it

  1. Save the script below in the folder of your choice.
  2. Edit the script and insert your API key.
  3. Create a chat.csv file containing, in the first column, the prompts to be sent to Gemini.
  4. The output is saved as chatoutput.csv.
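For illustration, the chat.csv file in step 3 could be generated like this (the prompts here are just examples; any linked sequence of questions works):

```python
import csv

# Example only: three linked prompts that rely on the shared chat context
prompts = [
    ["Write a three-chapter outline for a short mystery story."],
    ["Write chapter 1 based on that outline."],
    ["Write chapter 2, continuing from chapter 1."],
]

with open("chat.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(prompts)
```

Each row becomes one turn of the conversation, in order.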
import pathlib
import textwrap
import csv
import time
import google.generativeai as genai

# THIS PRODUCES LINKED CONVERSATIONAL CHATS

#  !!! CAUTION: Hardcoding API keys is highly discouraged - prefer an environment variable !!!
api_key = "YOUR-API-KEY"
genai.configure(api_key=api_key)
model = genai.GenerativeModel('gemini-pro')
chat = model.start_chat(history=[])

# Rate limiting (adjust sleep time if needed)
RATE_LIMIT_SLEEP_SECONDS = 5  

def process_query(query):
    """Sends a query to Gemini and handles potential rate limiting"""
    try:
        response = chat.send_message(query)
        return response.text
    except Exception as e:  # rate-limit errors arrive as google.api_core.exceptions.ResourceExhausted, not genai.exceptions
        print(f"Request failed ({e}). Waiting {RATE_LIMIT_SLEEP_SECONDS} seconds before retrying...")
        time.sleep(RATE_LIMIT_SLEEP_SECONDS)
        return process_query(query)  # Retry the query

# Load queries from input.csv
input_filepath = pathlib.Path("chat.csv")
with input_filepath.open('r', newline='', encoding='utf-8') as csvfile:
    reader = csv.reader(csvfile)
    queries = list(reader)  

# Process queries and save responses
output_filepath = pathlib.Path("chatoutput.csv")
with output_filepath.open('w', newline='', encoding='utf-8') as csvfile:
    writer = csv.writer(csvfile)
    for query in queries:
        response = process_query(query[0])  # Assume query in the first column
        writer.writerow([query[0], response])
        print(f"Query: {query[0]}\nResponse: {textwrap.shorten(response, width=80)}\n")
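The fixed five-second retry in process_query works, but a bounded exponential backoff is gentler on the API and avoids unbounded recursion. Here is a minimal sketch; the `send` parameter stands in for `chat.send_message`, and the function name is my own:

```python
import time

def send_with_backoff(send, query, max_retries=5, base_delay=1.0):
    """Retry `send(query)` with exponentially growing delays.

    `send` is any callable that may raise on rate limiting;
    the delays grow 1s, 2s, 4s, ... up to max_retries attempts,
    after which the last exception is re-raised.
    """
    for attempt in range(max_retries):
        try:
            return send(query)
        except Exception as e:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.0f}s")
            time.sleep(delay)
```

You would call `send_with_backoff(chat.send_message, query)` in place of the recursive retry.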

Google Gemini: non-chat script

As with the chat script above, you will need a Gemini API key; apply for one at https://aistudio.google.com/app/apikey. At the time of writing, some users may need to use a US proxy to obtain the key. If the link doesn’t work, Google “Gemini API key” for the correct page.

This script allows you to send sequential queries to Gemini, drawn from the input.csv file. The answers are saved in the output.csv file. This is not a chat, so the prompts are responded to independently.

Usage

This is similar to the ChatGPT script, except that it does not use the OpenAI API and therefore costs nothing to run. It does require a little more setting up. Being able to send queries automatically to AI chatbots means you can create high volumes of good-quality content without intervention. For example, you can use this script to ask Gemini to write a product review in one shot, and then repeat the prompt for different products.

Prerequisites

  1. You need a Gemini account
  2. You need a Gemini API key. Apply here: https://aistudio.google.com/app/apikey
  3. Now install the Gemini API library:
pip install -q -U google-generativeai

How to use it

  1. Save the script below in the folder of your choice.
  2. Create an input.csv file containing, in the first column, the prompts to be sent to Gemini.
  3. The output is saved as output.csv.
import pathlib
import textwrap
import csv
import time

import google.generativeai as genai

#  !!! CAUTION: Hardcoding API keys is highly discouraged - prefer an environment variable !!!
api_key = "YOUR-API-KEY"
genai.configure(api_key=api_key)
model = genai.GenerativeModel('gemini-pro')

# Rate limiting (adjust sleep time if needed)
RATE_LIMIT_SLEEP_SECONDS = 5  

def process_query(query):
    """Sends a query to Gemini and handles potential rate limiting"""
    try:
        response = model.generate_content(query)
        return response.text
    except Exception as e:  # rate-limit errors arrive as google.api_core.exceptions.ResourceExhausted, not genai.exceptions
        print(f"Request failed ({e}). Waiting {RATE_LIMIT_SLEEP_SECONDS} seconds before retrying...")
        time.sleep(RATE_LIMIT_SLEEP_SECONDS)
        return process_query(query)  # Retry the query

# Load queries from input.csv
input_filepath = pathlib.Path("input.csv")
with input_filepath.open('r', newline='', encoding='utf-8') as csvfile:
    reader = csv.reader(csvfile)
    queries = list(reader)  

# Process queries and save responses
output_filepath = pathlib.Path("output.csv")
with output_filepath.open('w', newline='', encoding='utf-8') as csvfile:
    writer = csv.writer(csvfile)
    for query in queries:
        response = process_query(query[0])  # Assume query in the first column
        writer.writerow([query[0], response])
        print(f"Query: {query[0]}\nResponse: {textwrap.shorten(response, width=80)}\n")
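Once the script has run, you may want to spot-check the saved pairs. A small helper for that (the function name is my own; it assumes the two-column prompt/response layout the script writes):

```python
import csv

def show_results(path="output.csv", width=60):
    """Print each prompt/response pair saved by the script, truncated for readability."""
    with open(path, newline="", encoding="utf-8") as f:
        for prompt, response in csv.reader(f):
            print(f"{prompt[:width]} -> {response[:width]}")
```

Call `show_results()` from the same folder, or pass "chatoutput.csv" to inspect the chat script's output instead.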


Bard – DEPRECATED

Bard has been superseded by Google Gemini – you can find my Gemini scripts in the AI category.

This script allows you to send sequential queries to Bard, drawn from the input.csv file. The answers are saved in the output.csv file.

Usage

This is similar to the ChatGPT script, except that this does not use the OpenAI API, and therefore costs nothing to run. It does require a bit more setting up. Being able to send queries automatically to AI Chatbots means you can create high volumes of good quality content without intervention. For example, you could use this script to ask Bard to create an outline for a book, and then ask it to write each chapter in sequence. This tends to produce better quality responses than asking it to write a book in one go. Alternatively you could instruct it to read a document and then provide an analysis, step by step.

Prerequisites

  1. You need a Bard account
  2. Once you have the account, log in with a Chrome browser and locate the __Secure-1PSID cookie. Right-click anywhere on the Bard page once you’ve logged in, then click “Inspect”. In the console that opens, click the “Application” tab; in the “Storage” pane, expand the Cookies dropdown and click the https://bard.google.com item, which presents all the cookies for that page. Copy the long string associated with the __Secure-1PSID cookie (be careful to choose the right one) and paste it into the script at the point shown.
  3. Now install the bardapi and requests libraries:
pip install requests
pip install bardapi

How to use it

  1. Save the script below in the folder of your choice.
  2. Edit the file and insert your Bard cookie string
  3. Also edit the prompts and timings if desired – you can change the speed of responses. In this script previous questions and answers are provided as context.
  4. Create an input.csv file, which contains the prompts in the first column to be sent to Bard.
  5. The output is saved as output.csv
import csv
from bardapi import Bard
import time
import os
import re
import requests

# Set up a reusable session
session = requests.Session()
session.headers = {
    "Host": "bard.google.com",
    "X-Same-Domain": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    "Origin": "https://bard.google.com",
    "Referer": "https://bard.google.com/",
}

# Set cookies for the session
token = 'your_Bard_cookie_very_long_string_of_digits_etc_etc_etc.'  
session.cookies.set("__Secure-1PSID", token)

# Create a Bard instance with the reusable session
bard_instance = Bard(token=token, session=session, timeout=30)

# Open input and output CSV files with error handling
input_file = open('input.csv', 'r', encoding='utf-8', errors='replace')
output_file = open('output.csv', 'w', encoding='utf-8', newline='')

# Create CSV writers
input_csv = csv.reader(input_file)
output_csv = csv.writer(output_file)

# Write headers to output CSV
# output_csv.writerow(['Prompt', 'Response'])

# Set the desired API call rate (1 call per minute)
calls_per_minute = 1
interval = 60 / calls_per_minute

# Regular expression pattern to match file paths in prompts (edit the prefix to match your own machine)
file_path_pattern = r"C:/Users/Steve/.*?\.txt"

context = ""

# Iterate through prompts and generate responses
for row in input_csv:
    prompt = row[0]

    # Check if the prompt contains a file path
    file_paths = re.findall(file_path_pattern, prompt)

    if file_paths:
        # Upload and generate a response for each file path found
        for file_path in file_paths:
            # Read the content of the file with the specified encoding and error handling
            with open(file_path, 'r', encoding='utf-8', errors='replace') as file:
                file_content = file.read()

            # Replace the file path with the content in the prompt
            prompt = prompt.replace(file_path, file_content)

    if not context:
        # If context is empty, set the prompt without the previous conversation context
        full_prompt = "Please answer this question: " + prompt
        print("Prompt:", full_prompt)
    else:
        # Otherwise, include the previous conversation context
        full_prompt = (
            "Please consider the previous conversation we have had, which I am recording for you here within the <context> tags: <context> "
            + context
            + " </context> Now, bearing in mind the conversation so far within the context tags which you have already responded to - no need to answer any of those questions again -  please answer this question: "
            + prompt
        )
        print("Prompt:", full_prompt)

    # Send an API request and get a response - print to check.
    response = bard_instance.get_answer(full_prompt)
    response_content = response['content']
    print(response_content)

    # Update context after each loop - deprecated in Bard
    # context = response_content

    # Write to file
    output_csv.writerow([response_content])

    # Introduce a delay to limit the rate of API calls
    time.sleep(interval)

# Close files
input_file.close()
output_file.close()


ChatGPT

This script allows you to send sequential queries to ChatGPT, drawn from the input.csv file. The answers are saved in the output.csv file.

Usage

Being able to send queries automatically to AI Chatbots means you can create high volumes of good quality content without intervention. For example, you could use this script to ask ChatGPT to create an outline for a book, and then ask it to write each chapter in sequence. This tends to produce better quality responses than asking it to write a book in one go. Alternatively you could instruct it to read a document and then provide an analysis, step by step.
In this script there is an estimate of the cost of the work, allowing you to manage your financial commitment. (Don’t worry though – as a new user of OpenAI you receive free credits, and even after they have been consumed, the costs are low.)
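As a rough sketch of that cost arithmetic, the helper below is illustrative rather than part of the script; the default rates match the gpt-4-1106-preview prices listed in the code ($0.01 per 1K input tokens, $0.03 per 1K output tokens):

```python
def estimate_cost(input_tokens, output_tokens,
                  input_rate=0.01, output_rate=0.03):
    """Estimate the dollar cost of one request, given per-1K-token rates."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# A 500-token prompt with a 1,500-token reply:
print(round(estimate_cost(500, 1500), 4))  # 0.05
```

For other models, substitute the rates from the pricing table in the script.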

Prerequisites

  1. You need an account with OpenAI – sign up at https://platform.openai.com
  2. Once you have the account you will need your own API key. At the moment you can create one from the API keys page of your OpenAI account, although the location may change in future. (If in doubt, use Google to help you find the API page.)
  3. Now install the openai library
pip install openai

How to use it

  1. Save the script below in the folder of your choice.
  2. Edit the file and insert your OpenAI API key
  3. Also edit the model if desired (in this script the model used is “gpt-4-1106-preview”. This may not be available in future, so please check with OpenAI)
  4. Create an input.csv file, which contains the prompts in the first column to be sent to OpenAI.
  5. The output is saved as output.csv
import csv
import openai
import time
# This script uses the pre-1.0 openai library (pip install "openai<1.0")
from openai.error import OpenAIError

# Replace 'YOUR_API_KEY' with your actual OpenAI API key
openai.api_key = 'YOUR API KEY'

# PRICING
# gpt-4-1106-preview	$0.01 / 1K tokens	$0.03 / 1K tokens
# gpt-4-1106-vision-preview	$0.01 / 1K tokens	$0.03 / 1K tokens
# gpt-4	$0.03 / 1K tokens	$0.06 / 1K tokens
# gpt-4-32k	$0.06 / 1K tokens	$0.12 / 1K tokens
# gpt-3.5-turbo-1106	$0.0010 / 1K tokens	$0.0020 / 1K tokens
# gpt-3.5-turbo-instruct	$0.0015 / 1K tokens	$0.0020 / 1K tokens

# more here: https://openai.com/pricing


# Cost per token information (per thousand tokens)
input_token_cost_per_thousand = 0.01
output_token_cost_per_thousand = 0.03
###########################################



# Function to interact with ChatGPT and save responses to a CSV file with retries and rate limiting
def process_queries_with_retries_and_rate_limit(input_file, output_file, max_retries=3, retry_delay=5, max_requests_per_minute=60):
    with open(input_file, 'r') as input_csv, open(output_file, 'w', newline='') as output_csv:
        input_reader = csv.reader(input_csv)
        output_writer = csv.writer(output_csv)
        output_writer.writerow(['Input', 'Response', 'Input Tokens', 'Output Tokens', 'Total Cost'])
        
        conversation_state = []  # List to maintain conversation state including system and user messages
        total_cost = 0.0
        
        # Calculate how many seconds to wait between requests to stay under the rate limit
        min_time_between_requests = 60.0 / max_requests_per_minute

        last_request_time = None

        for row in input_reader:
            if not row:
                continue

            input_query = row[0]
            print(f"Processing query: {input_query}")
            
            for retry in range(max_retries):
                try:
                    # Implement rate limiting logic
                    if last_request_time is not None:
                        elapsed_time = time.time() - last_request_time
                        if elapsed_time < min_time_between_requests:
                            time_to_wait = min_time_between_requests - elapsed_time
                            print(f"Rate limited. Waiting for {time_to_wait:.2f} seconds.")
                            time.sleep(time_to_wait)
                    
                    response = openai.ChatCompletion.create(
                        model="gpt-4-1106-preview", ###EDIT HERE TO CHANGE THE LANGUAGE MODEL
                        messages=conversation_state + [
                            {"role": "user", "content": input_query}
                        ]
                    )
                    
                    if 'choices' in response and response['choices']:
                        # Append the latest user and assistant messages to the conversation state
                        conversation_state.append({"role": "user", "content": input_query})
                        conversation_state.append(response['choices'][0]['message'])

                    input_tokens = response['usage']['prompt_tokens']
                    output_tokens = response['usage']['completion_tokens']
                    query_cost = ((input_tokens / 1000) * input_token_cost_per_thousand
                                  + (output_tokens / 1000) * output_token_cost_per_thousand)
                    total_cost += query_cost

                    output_writer.writerow([
                        input_query,
                        response['choices'][0]['message']['content'],
                        input_tokens,
                        output_tokens,
                        query_cost
                    ])

                    last_request_time = time.time()
                    break
                # Catch OpenAI's API exception
                except OpenAIError as e:
                    print(f"OpenAI API Error: {e}")
                    time.sleep(retry_delay)
                    if retry == max_retries - 1:
                        print(f"Max retries reached for query: {input_query}. Skipping.")

                
        print(f"Total cost of the routine: ${total_cost:.2f}")

if __name__ == "__main__":
    input_file_path = "input.csv"
    output_file_path = "output.csv"

    process_queries_with_retries_and_rate_limit(input_file_path, output_file_path)