How to count tokens with tiktoken
tiktoken is a fast open-source tokenizer by OpenAI. Given a text string (e.g., "tiktoken is great!") and an encoding (e.g., "cl100k_base"), a tokenizer can split the text string into a list of tokens (e.g., ["t", "ik", "token", " is", " great", "!"]). Splitting text strings into tokens is useful because GPT models see text in the form of tokens. Knowing how many tokens are in a text string can tell you (a) whether the string is too long for a text model to process and (b) how much an OpenAI API call costs (as usage is priced by token).
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
Encodings

Encodings specify how text is converted into tokens. Different models use different encodings. tiktoken supports three encodings used by OpenAI models:

| Encoding name | OpenAI models |
|---|---|
| cl100k_base | gpt-4, gpt-3.5-turbo, text-embedding-ada-002 |
| p50k_base | Codex models, text-davinci-002, text-davinci-003 |
| r50k_base (or gpt2) | GPT-3 models like davinci |

You can retrieve the encoding for a model using tiktoken.encoding_for_model() as follows:
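For example, a minimal sketch of looking up an encoding by model name (the model name here is just one of the examples from the table above):

```python
import tiktoken

# look up the encoding used by a given model
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(encoding.name)  # cl100k_base
```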
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
0. Install tiktoken

If needed, install tiktoken with pip:

%pip install --upgrade tiktoken

1. Import tiktoken

import tiktoken

2. Load an encoding

Use tiktoken.get_encoding() to load an encoding by name. The first time this runs, it will require an internet connection to download. Later runs won't need an internet connection.
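For example, loading the cl100k_base encoding by name (a minimal illustration of the call described above):

```python
encoding = tiktoken.get_encoding("cl100k_base")
```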
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
3. Turn text into tokens with encoding.encode()

The .encode() method converts a text string into a list of token integers.

encoding.encode("tiktoken is great!")
[83, 1609, 5963, 374, 2294, 0]

Count tokens by counting the length of the list returned by .encode().
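As a small sketch of that idea, a helper that counts the tokens in a string for a given encoding (the helper name num_tokens_from_string is our own, not part of tiktoken):

```python
import tiktoken

def num_tokens_from_string(string: str, encoding_name: str) -> int:
    """Return the number of tokens in a text string."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(string))

num_tokens_from_string("tiktoken is great!", "cl100k_base")  # 6
```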
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
4. Turn tokens into text with encoding.decode()

.decode() converts a list of token integers to a string.

encoding.decode([83, 1609, 5963, 374, 2294, 0])
'tiktoken is great!'

Warning: although .decode() can be applied to single tokens, beware that it can be lossy for tokens that aren't on utf-8 boundaries. For single tokens, .decode_single_token_bytes() safely converts a single integer token to the bytes it represents.
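For example, decoding the example token list one token at a time into bytes (output shown as a comment, based on the token list above):

```python
[encoding.decode_single_token_bytes(token) for token in [83, 1609, 5963, 374, 2294, 0]]
# [b't', b'ik', b'token', b' is', b' great', b'!']
```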
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
5. Comparing encodings Different encodings vary in how they split words, group spaces, and handle non-English characters. Using the methods above, we can compare different encodings on a few example strings.
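A minimal sketch of such a comparison (the helper name compare_encodings is our own):

```python
import tiktoken

def compare_encodings(example_string: str) -> None:
    """Print how three encodings split the same string."""
    print(f'\nExample string: "{example_string}"')
    for encoding_name in ["r50k_base", "p50k_base", "cl100k_base"]:
        encoding = tiktoken.get_encoding(encoding_name)
        token_integers = encoding.encode(example_string)
        token_bytes = [encoding.decode_single_token_bytes(token) for token in token_integers]
        print(f"\n{encoding_name}: {len(token_integers)} tokens")
        print(f"token integers: {token_integers}")
        print(f"token bytes: {token_bytes}")

compare_encodings("antidisestablishmentarianism")
```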
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
6. Counting tokens for chat completions API calls ChatGPT models like gpt-3.5-turbo and gpt-4 use tokens in the same way as older completions models, but because of their message-based formatting, it's more difficult to count how many tokens will be used by a conversation.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
Below is an example function for counting tokens for messages passed to gpt-3.5-turbo or gpt-4. Note that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee. In particular, requests that use the optional functions input will consume extra tokens on top of the estimates calculated below:
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to count tokens with tiktoken
def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model in {
        "gpt-3.5-turbo-0613",
        "gpt-3.5-turbo-16k-0613",
        "gpt-4-0314",
        "gpt-4-32k-0314",
        "gpt-4-0613",
        "gpt-4-32k-0613",
    }:
        tokens_per_message = 3
        tokens_per_name = 1
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows {role/name}\n{content}\n
        tokens_per_name = -1  # if there's a name, the role is omitted
    elif "gpt-3.5-turbo" in model:
        print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
    elif "gpt-4" in model:
        print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
        return num_tokens_from_messages(messages, model="gpt-4-0613")
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with assistant
    return num_tokens

# let's verify the function above matches the OpenAI API response
import openai

example_messages = [
    {
        "role": "system",
        "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English.",
    },
    {
        "role": "system",
        "name": "example_user",
        "content": "New synergies will help drive top-line growth.",
    },
    {
        "role": "system",
        "name": "example_assistant",
        "content": "Things working well together will increase revenue.",
    },
    {
        "role": "system",
        "name": "example_user",
        "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.",
    },
    {
        "role": "system",
        "name": "example_assistant",
        "content": "Let's talk later when we're less busy about how to do better.",
    },
    {
        "role": "user",
        "content": "This late pivot means we don't have time to boil the ocean for the client deliverable.",
    },
]

for model in [
    "gpt-3.5-turbo-0301",
    "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo",
    "gpt-4-0314",
    "gpt-4-0613",
    "gpt-4",
]:
    print(model)
    # example token count from the function defined above
    print(f"{num_tokens_from_messages(example_messages, model)} prompt tokens counted by num_tokens_from_messages().")
    # example token count from the OpenAI API
    response = openai.ChatCompletion.create(
        model=model,
        messages=example_messages,
        temperature=0,
        max_tokens=1,  # we're only counting input tokens here, so let's not waste tokens on the output
    )
    print(f'{response["usage"]["prompt_tokens"]} prompt tokens counted by the OpenAI API.')
    print()
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to call functions with chat models - Part 1
This notebook covers how to use the Chat Completions API in combination with external functions to extend the capabilities of GPT models. functions is an optional parameter in the Chat Completion API which can be used to provide function specifications. The purpose of this is to enable models to generate function arguments which adhere to the provided specifications. Note that the API will not actually execute any function calls. It is up to developers to execute function calls using model outputs.
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Part 2
If the functions parameter is provided then by default the model will decide when it is appropriate to use one of the functions. The API can be forced to use a specific function by setting the function_call parameter to {'name': '<insert-function-name>'}. The API can also be forced to not use any function by setting the function_call parameter to 'none'. If a function is used, the output will contain 'finish_reason': 'function_call' in the response, as well as a function_call object that has the name of the function and the generated function arguments.
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Overview
Overview This notebook contains the following 2 sections: How to generate function arguments: Specify a set of functions and use the API to generate function arguments. How to call functions with model generated arguments: Close the loop by actually executing functions with model generated arguments.
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Installation
!pip install scipy
!pip install tenacity
!pip install tiktoken
!pip install termcolor
!pip install openai
!pip install requests

import json
import openai
import requests
from tenacity import retry, wait_random_exponential, stop_after_attempt
from termcolor import colored

GPT_MODEL = 'gpt-3.5-turbo-0613'
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Utilities
Utilities

First let's define a few utilities for making calls to the Chat Completions API and for maintaining and keeping track of the conversation state.

@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, functions=None, function_call=None, model=GPT_MODEL):
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer ' + openai.api_key,
    }
    json_data = {'model': model, 'messages': messages}
    if functions is not None:
        json_data.update({'functions': functions})
    if function_call is not None:
        json_data.update({'function_call': function_call})
    try:
        response = requests.post(
            'https://api.openai.com/v1/chat/completions',
            headers=headers,
            json=json_data,
        )
        return response
    except Exception as e:
        print('Unable to generate ChatCompletion response')
        print(f'Exception: {e}')
        return e
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Pretty Print
def pretty_print_conversation(messages):
    role_to_color = {
        'system': 'red',
        'user': 'green',
        'assistant': 'blue',
        'function': 'magenta',
    }
    for message in messages:
        if message['role'] == 'system':
            print(colored(f"system: {message['content']}", role_to_color[message['role']]))
        elif message['role'] == 'user':
            print(colored(f"user: {message['content']}", role_to_color[message['role']]))
        elif message['role'] == 'assistant' and message.get('function_call'):
            print(colored(f"assistant: {message['function_call']}", role_to_color[message['role']]))
        elif message['role'] == 'assistant' and not message.get('function_call'):
            print(colored(f"assistant: {message['content']}", role_to_color[message['role']]))
        elif message['role'] == 'function':
            print(colored(f"function ({message['name']}): {message['content']}", role_to_color[message['role']]))
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Basic Concepts
Basic concepts

Let's create some function specifications to interface with a hypothetical weather API. We'll pass these function specifications to the Chat Completions API in order to generate function arguments that adhere to the specification.

functions = [
    {
        'name': 'get_current_weather',
        'description': 'Get the current weather',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The city and state, e.g. San Francisco, CA',
                },
                'format': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit'],
                    'description': "The temperature unit to use. Infer this from the user's location.",
                },
            },
            'required': ['location', 'format'],
        },
    },
    {
        'name': 'get_n_day_weather_forecast',
        'description': 'Get an N-day weather forecast',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The city and state, e.g. San Francisco, CA',
                },
                'format': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit'],
                    'description': "The temperature unit to use. Infer this from the user's location.",
                },
                'num_days': {
                    'type': 'integer',
                    'description': 'The number of days to forecast',
                },
            },
            'required': ['location', 'format', 'num_days'],
        },
    },
]
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Weather Query
If we prompt the model about the current weather, it will respond with some clarifying questions.

messages = []
messages.append({'role': 'system', 'content': "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({'role': 'user', 'content': "What's the weather like today"})
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()['choices'][0]['message']
messages.append(assistant_message)
assistant_message
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Weather Query Response
{'role': 'assistant', 'content': 'In which city and state would you like to know the current weather?'}

Once we provide the missing information, it will generate the appropriate function arguments for us.

messages.append({'role': 'user', 'content': "I'm in Glasgow, Scotland."})
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()['choices'][0]['message']
messages.append(assistant_message)
assistant_message
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Weather Query Different Prompt
If we prompt it differently, we can get it to target the other function we've told it about.

messages = []
messages.append({'role': 'system', 'content': "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({'role': 'user', 'content': 'what is the weather going to be like in Glasgow, Scotland over the next x days'})
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()['choices'][0]['message']
messages.append(assistant_message)
assistant_message
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Weather Query Response 2
{'role': 'assistant', 'content': 'Sure, I can help you with that. Please provide me with the number of days you want to forecast for.'}

Once again, the model is asking us for clarification because it doesn't have enough information yet. In this case it already knows the location for the forecast, but it needs to know how many days are required in the forecast.

messages.append({'role': 'user', 'content': '5 days'})
chat_response = chat_completion_request(
    messages, functions=functions
)
chat_response.json()['choices'][0]
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Force Specific Function
Forcing the use of specific functions or no function

We can force the model to use a specific function, for example get_n_day_weather_forecast, by using the function_call argument. By doing so, we force the model to make assumptions about how to use it.

# in this cell we force the model to use get_n_day_weather_forecast
messages = []
messages.append({'role': 'system', 'content': "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({'role': 'user', 'content': 'Give me a weather report for Toronto, Canada.'})
chat_response = chat_completion_request(
    messages, functions=functions, function_call={'name': 'get_n_day_weather_forecast'}
)
chat_response.json()['choices'][0]['message']
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Without Forcing Function
# if we don't force the model to use get_n_day_weather_forecast it may not
messages = []
messages.append({'role': 'system', 'content': "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({'role': 'user', 'content': 'Give me a weather report for Toronto, Canada.'})
chat_response = chat_completion_request(
    messages, functions=functions
)
chat_response.json()['choices'][0]['message']
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Force No Function
We can also force the model to not use a function at all. By doing so we prevent it from producing a proper function call.

messages = []
messages.append({'role': 'system', 'content': "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({'role': 'user', 'content': 'Give me the current weather (use Celsius) for Toronto, Canada.'})
chat_response = chat_completion_request(
    messages, functions=functions, function_call='none'
)
chat_response.json()['choices'][0]['message']
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - SQL Functions
How to call functions with model generated arguments In our next example, we'll demonstrate how to execute functions whose inputs are model-generated, and use this to implement an agent that can answer questions for us about a database. For simplicity we'll use the Chinook sample database. Note: SQL generation can be high-risk in a production environment since models are not perfectly reliable at generating correct SQL.
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - SQL Function Specification
Specifying a function to execute SQL queries

First let's define some helpful utility functions to extract data from a SQLite database.

import sqlite3

conn = sqlite3.connect('data/Chinook.db')
print('Opened database successfully')
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Database Schema Functions
def get_table_names(conn):
    'Return a list of table names.'
    table_names = []
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
    for table in tables.fetchall():
        table_names.append(table[0])
    return table_names


def get_column_names(conn, table_name):
    'Return a list of column names.'
    column_names = []
    columns = conn.execute(f'PRAGMA table_info({table_name});').fetchall()
    for col in columns:
        column_names.append(col[1])
    return column_names


def get_database_info(conn):
    'Return a list of dicts containing the table name and columns for each table in the database.'
    table_dicts = []
    for table_name in get_table_names(conn):
        columns_names = get_column_names(conn, table_name)
        table_dicts.append({'table_name': table_name, 'column_names': columns_names})
    return table_dicts
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Database Schema Representation
database_schema_dict = get_database_info(conn)
database_schema_string = '\n'.join(
    [
        f"Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}"
        for table in database_schema_dict
    ]
)
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - SQL Function Specification 2
As before, we'll define a function specification for the function we'd like the API to generate arguments for. Notice that we are inserting the database schema into the function specification. This will be important for the model to know about.

functions = [
    {
        'name': 'ask_database',
        'description': 'Use this function to answer user questions about music. Input should be a fully formed SQL query.',
        'parameters': {
            'type': 'object',
            'properties': {
                'query': {
                    'type': 'string',
                    'description': f"""
                        SQL query extracting info to answer the user's question.
                        SQL should be written using this database schema:
                        {database_schema_string}
                        The query should be returned in plain text, not in JSON.
                        """,
                },
            },
            'required': ['query'],
        },
    },
]
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - Execute SQL Queries
Executing SQL queries

Now let's implement the function that will actually execute queries against the database.

def ask_database(conn, query):
    'Function to query SQLite database with a provided SQL query.'
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f'query failed with error: {e}'
    return results


def execute_function_call(message):
    if message['function_call']['name'] == 'ask_database':
        query = json.loads(message['function_call']['arguments'])['query']
        results = ask_database(conn, query)
    else:
        results = f"Error: function {message['function_call']['name']} does not exist"
    return results
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - SQL Query Example
messages = []
messages.append({'role': 'system', 'content': 'Answer user questions by generating SQL queries against the Chinook Music Database.'})
messages.append({'role': 'user', 'content': 'Hi, who are the top 5 artists by number of tracks?'})
chat_response = chat_completion_request(messages, functions)
assistant_message = chat_response.json()['choices'][0]['message']
messages.append(assistant_message)
if assistant_message.get('function_call'):
    results = execute_function_call(assistant_message)
    messages.append({'role': 'function', 'name': assistant_message['function_call']['name'], 'content': results})
pretty_print_conversation(messages)
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
How to call functions with chat models - SQL Query Example 2
messages.append({'role': 'user', 'content': 'What is the name of the album with the most tracks?'})
chat_response = chat_completion_request(messages, functions)
assistant_message = chat_response.json()['choices'][0]['message']
messages.append(assistant_message)
if assistant_message.get('function_call'):
    results = execute_function_call(assistant_message)
    messages.append({'role': 'function', 'content': results, 'name': assistant_message['function_call']['name']})
pretty_print_conversation(messages)
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
Data preparation and analysis for chat model fine-tuning
This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. The method shown here corresponds to the current fine-tuning method for gpt-3.5-turbo; for legacy fine-tuning of models like babbage-002 and davinci-002, see the legacy fine-tuning page.
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
We first load the chat dataset from an example JSONL file.

import json

data_path = "data/toy_chat_fine_tuning.jsonl"

# Load the dataset
with open(data_path, 'r', encoding='utf-8') as f:
    dataset = [json.loads(line) for line in f]

# Initial dataset stats
print("Num examples:", len(dataset))
print("First example:")
for message in dataset[0]["messages"]:
    print(message)
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
We can perform a variety of error checks to validate that each conversation in the dataset adheres to the format expected by the fine-tuning API. Errors are categorized based on their nature for easier debugging.

Data Type Check: Checks whether each entry in the dataset is a dictionary (dict). Error type: data_type.
Presence of Message List: Checks if a messages list is present in each entry. Error type: missing_messages_list.
Message Keys Check: Validates that each message in the messages list contains the keys role and content. Error type: message_missing_key.
Unrecognized Keys in Messages: Logs if a message has keys other than role, content, and name. Error type: message_unrecognized_key.
Role Validation: Ensures the role is one of "system", "user", or "assistant". Error type: unrecognized_role.
Content Validation: Verifies that content has textual data and is a string. Error type: missing_content.
Assistant Message Presence: Checks that each conversation has at least one message from the assistant. Error type: example_missing_assistant_message.

The code below performs these checks and prints counts for each type of error found. This is useful for debugging and ensuring the dataset is ready for the next steps.
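The checking code itself is not included in this chunk; below is a minimal sketch of what such a validation loop might look like, assuming the dataset variable loaded above, with error keys matching the list in the text:

```python
from collections import defaultdict

format_errors = defaultdict(int)

for ex in dataset:
    # Data Type Check
    if not isinstance(ex, dict):
        format_errors["data_type"] += 1
        continue

    # Presence of Message List
    messages = ex.get("messages", None)
    if not messages:
        format_errors["missing_messages_list"] += 1
        continue

    for message in messages:
        # Message Keys Check
        if "role" not in message or "content" not in message:
            format_errors["message_missing_key"] += 1

        # Unrecognized Keys in Messages
        if any(k not in ("role", "content", "name") for k in message):
            format_errors["message_unrecognized_key"] += 1

        # Role Validation
        if message.get("role", None) not in ("system", "user", "assistant"):
            format_errors["unrecognized_role"] += 1

        # Content Validation
        content = message.get("content", None)
        if not content or not isinstance(content, str):
            format_errors["missing_content"] += 1

    # Assistant Message Presence
    if not any(message.get("role", None) == "assistant" for message in messages):
        format_errors["example_missing_assistant_message"] += 1

if format_errors:
    print("Found errors:")
    for k, v in format_errors.items():
        print(f"{k}: {v}")
else:
    print("No errors found")
```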
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
Let's define a few helpful utilities to be used in the rest of the notebook.

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

# not exact!
# simplified from https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3
    return num_tokens
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
def num_assistant_tokens_from_messages(messages):
    num_tokens = 0
    for message in messages:
        if message["role"] == "assistant":
            num_tokens += len(encoding.encode(message["content"]))
    return num_tokens
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
import numpy as np

def print_distribution(values, name):
    print(f"\n#### Distribution of {name}:")
    print(f"min / max: {min(values)}, {max(values)}")
    print(f"mean / median: {np.mean(values)}, {np.median(values)}")
    print(f"p5 / p95: {np.quantile(values, 0.05)}, {np.quantile(values, 0.95)}")
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
With some lightweight analysis we can identify potential issues in the dataset, like missing messages, and provide statistical insights into message and token counts.

Missing System/User Messages: Counts the number of conversations missing a "system" or "user" message. Such messages are critical for defining the assistant's behavior and initiating the conversation.
Number of Messages Per Example: Summarizes the distribution of the number of messages in each conversation, providing insight into dialogue complexity.
Total Tokens Per Example: Calculates and summarizes the distribution of the total number of tokens in each conversation. Important for understanding fine-tuning costs.
Tokens in Assistant's Messages: Calculates the number of tokens in the assistant's messages per conversation and summarizes this distribution. Useful for understanding the assistant's verbosity.
Token Limit Warnings: Checks if any examples exceed the maximum token limit (4096 tokens), as such examples will be truncated during fine-tuning, potentially resulting in data loss.
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
# Warnings and token counts
n_missing_system = 0
n_missing_user = 0
n_messages = []
convo_lens = []
assistant_message_lens = []

for ex in dataset:
    messages = ex["messages"]
    if not any(message["role"] == "system" for message in messages):
        n_missing_system += 1
    if not any(message["role"] == "user" for message in messages):
        n_missing_user += 1
    n_messages.append(len(messages))
    convo_lens.append(num_tokens_from_messages(messages))
    assistant_message_lens.append(num_assistant_tokens_from_messages(messages))

print("Num examples missing system message:", n_missing_system)
print("Num examples missing user message:", n_missing_user)
print_distribution(n_messages, "num_messages_per_example")
print_distribution(convo_lens, "num_total_tokens_per_example")
print_distribution(assistant_message_lens, "num_assistant_tokens_per_example")
n_too_long = sum(l > 4096 for l in convo_lens)
print(f"\n{n_too_long} examples may be over the 4096 token limit, they will be truncated during fine-tuning")
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
In this final section, we estimate the total number of tokens that will be used for fine-tuning, which allows us to approximate the cost. It is worth noting that the duration of the fine-tuning jobs will also increase with the token count.

# Pricing and default n_epochs estimate
MAX_TOKENS_PER_EXAMPLE = 4096

TARGET_EPOCHS = 3
MIN_TARGET_EXAMPLES = 100
MAX_TARGET_EXAMPLES = 25000
MIN_DEFAULT_EPOCHS = 1
MAX_DEFAULT_EPOCHS = 25

n_epochs = TARGET_EPOCHS
n_train_examples = len(dataset)
if n_train_examples * TARGET_EPOCHS < MIN_TARGET_EXAMPLES:
    n_epochs = min(MAX_DEFAULT_EPOCHS, MIN_TARGET_EXAMPLES // n_train_examples)
elif n_train_examples * TARGET_EPOCHS > MAX_TARGET_EXAMPLES:
    n_epochs = max(MIN_DEFAULT_EPOCHS, MAX_TARGET_EXAMPLES // n_train_examples)

n_billing_tokens_in_dataset = sum(min(MAX_TOKENS_PER_EXAMPLE, length) for length in convo_lens)
print(f"Dataset has ~{n_billing_tokens_in_dataset} tokens that will be charged for during training")
print(f"By default, you'll train for {n_epochs} epochs on this dataset")
print(f"By default, you'll be charged for ~{n_epochs * n_billing_tokens_in_dataset} tokens")

Dataset has ~4306 tokens that will be charged for during training
By default, you'll train for 20 epochs on this dataset
By default, you'll be charged for ~86120 tokens

See https://openai.com/pricing to estimate total costs.
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data preparation and analysis for chat model fine-tuning
Wikipedia article section: Curling at the 2022 Winter Olympics ==Results summary== ===Women's tournament=== ====Playoffs==== =====Gold medal game===== ''Sunday, 20 February, 9:05'' {{#lst:Curling at the 2022 Winter Olympics – Women's tournament|GM}} {{Player percentages | team1 = {{flagIOC|JPN|2022 Winter}} | [[Yurika Yoshida]] | 97% | [[Yumi Suzuki]] | 82% | [[Chinami Yoshida]] | 64% | [[Satsuki Fujisawa]] | 69% | teampct1 = 78% | team2 = {{flagIOC|GBR|2022 Winter}} | [[Hailey Duff]] | 90% | [[Jennifer Dodds]] | 89% | [[Vicky Wright]] | 89% | [[Eve Muirhead]] | 88% | teampct2 = 89% }}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Medal summary
Wikipedia article section: Curling at the 2022 Winter Olympics ==Medal summary== ===Medal table=== {{Medals table | caption = | host = | flag_template = flagIOC | event = 2022 Winter | team = | gold_CAN = 0 | silver_CAN = 0 | bronze_CAN = 1 | gold_ITA = 1 | silver_ITA = 0 | bronze_ITA = 0 | gold_NOR = 0 | silver_NOR = 1 | bronze_NOR = 0 | gold_SWE = 1 | silver_SWE = 0 | bronze_SWE = 2 | gold_GBR = 1 | silver_GBR = 1 | bronze_GBR = 0 | gold_JPN = 0 | silver_JPN = 1 | bronze_JPN = 0 }}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Men's tournament
Wikipedia article section: Curling at the 2022 Winter Olympics ==Results summary== ===Men's tournament=== ====Playoffs==== =====Gold medal game===== ''Saturday, 19 February, 14:50'' {{#lst:Curling at the 2022 Winter Olympics – Men's tournament|GM}} {{Player percentages | team1 = {{flagIOC|GBR|2022 Winter}} | [[Hammy McMillan Jr.]] | 95% | [[Bobby Lammie]] | 80% | [[Grant Hardie]] | 94% | [[Bruce Mouat]] | 89% | teampct1 = 90% | team2 = {{flagIOC|SWE|2022 Winter}} | [[Christoffer Sundgren]] | 99% | [[Rasmus Wranå]] | 95% | [[Oskar Eriksson]] | 93% | [[Niklas Edin]] | 87% | teampct2 = 94% }}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Medalists
Wikipedia article section: Curling at the 2022 Winter Olympics ==Medal summary== ===Medalists=== {| {{MedalistTable|type=Event|columns=1}} |- |Men<br/>{{DetailsLink|Curling at the 2022 Winter Olympics – Men's tournament}} |{{flagIOC|SWE|2022 Winter}}<br>[[Niklas Edin]]<br>[[Oskar Eriksson]]<br>[[Rasmus Wranå]]<br>[[Christoffer Sundgren]]<br>[[Daniel Magnusson (curler)|Daniel Magnusson]] |{{flagIOC|GBR|2022 Winter}}<br>[[Bruce Mouat]]<br>[[Grant Hardie]]<br>[[Bobby Lammie]]<br>[[Hammy McMillan Jr.]]<br>[[Ross Whyte]] |{{flagIOC|CAN|2022 Winter}}<br>[[Brad Gushue]]<br>[[Mark Nichols (curler)|Mark Nichols]]<br>[[Brett Gallant]]<br>[[Geoff Walker (curler)|Geoff Walker]]<br>[[Marc Kennedy]] |- |Women<br/>{{DetailsLink|Curling at the 2022 Winter Olympics – Women's tournament}} |{{flagIOC|GBR|2022 Winter}}<br>[[Eve Muirhead]]<br>[[Vicky Wright]]<br>[[Jennifer Dodds]]<br>[[Hailey Duff]]<br>[[Mili Smith]] |{{flagIOC|JPN|2022 Winter}}<br>[[Satsuki Fujisawa]]<br>[[Chinami Yoshida]]<br>[[Yumi Suzuki]]<br>[[Yurika Yoshida]]<br>[[Kotomi Ishizaki]] |{{flagIOC|SWE|2022 Winter}}<br>[[Anna Hasselborg]]<br>[[Sara McManus]]<br>[[Agnes Knochenhauer]]<br>[[Sofia Mabergs]]<br>[[Johanna Heldin]] |- |Mixed doubles<br/>{{DetailsLink|Curling at the 2022 Winter Olympics – Mixed doubles tournament}} |{{flagIOC|ITA|2022 Winter}}<br>[[Stefania Constantini]]<br>[[Amos Mosaner]] |{{flagIOC|NOR|2022 Winter}}<br>[[Kristin Skaslien]]<br>[[Magnus Nedregotten]] |{{flagIOC|SWE|2022 Winter}}<br>[[Almida de Val]]<br>[[Oskar Eriksson]] |}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Men's tournament (Bronze medal game)
Wikipedia article section: Curling at the 2022 Winter Olympics ==Results summary== ===Men's tournament=== ====Playoffs==== =====Bronze medal game===== ''Friday, 18 February, 14:05'' {{#lst:Curling at the 2022 Winter Olympics – Men's tournament|BM}} {{Player percentages | team1 = {{flagIOC|USA|2022 Winter}} | [[John Landsteiner]] | 80% | [[Matt Hamilton (curler)|Matt Hamilton]] | 86% | [[Chris Plys]] | 74% | [[John Shuster]] | 69% | teampct1 = 77% | team2 = {{flagIOC|CAN|2022 Winter}} | [[Geoff Walker (curler)|Geoff Walker]] | 84% | [[Brett Gallant]] | 86% | [[Mark Nichols (curler)|Mark Nichols]] | 78% | [[Brad Gushue]] | 78% | teampct2 = 82% }}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Teams (Mixed doubles)
Wikipedia article section: Curling at the 2022 Winter Olympics ==Teams== ===Mixed doubles=== {| class=wikitable |- !width=200|{{flagIOC|AUS|2022 Winter}} !width=200|{{flagIOC|CAN|2022 Winter}} !width=200|{{flagIOC|CHN|2022 Winter}} !width=200|{{flagIOC|CZE|2022 Winter}} !width=200|{{flagIOC|GBR|2022 Winter}} |- | '''Female:''' [[Tahli Gill]]<br> '''Male:''' [[Dean Hewitt]] | '''Female:''' [[Rachel Homan]]<br>'''Male:''' [[John Morris (curler)|John Morris]] | '''Female:''' [[Fan Suyuan]]<br>'''Male:''' [[Ling Zhi]] | '''Female:''' [[Zuzana Paulová]]<br>'''Male:''' [[Tomáš Paul]] | '''Female:''' [[Jennifer Dodds]]<br>'''Male:''' [[Bruce Mouat]] |- !width=200|{{flagIOC|ITA|2022 Winter}} !width=200|{{flagIOC|NOR|2022 Winter}} !width=200|{{flagIOC|SWE|2022 Winter}} !width=200|{{flagIOC|SUI|2022 Winter}} !width=200|{{flagIOC|USA|2022 Winter}} |- | '''Female:''' [[Stefania Constantini]]<br>'''Male:''' [[Amos Mosaner]] | '''Female:''' [[Kristin Skaslien]]<br>'''Male:''' [[Magnus Nedregotten]] | '''Female:''' [[Almida de Val]]<br>'''Male:''' [[Oskar Eriksson]] | '''Female:''' [[Jenny Perret]]<br>'''Male:''' [[Martin Rios]] | '''Female:''' [[Vicky Persinger]]<br>'''Male:''' [[Chris Plys]] |}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Women's tournament (Bronze medal game)
Wikipedia article section: Curling at the 2022 Winter Olympics ==Results summary== ===Women's tournament=== ====Playoffs==== =====Bronze medal game===== ''Saturday, 19 February, 20:05'' {{#lst:Curling at the 2022 Winter Olympics – Women's tournament|BM}} {{Player percentages | team1 = {{flagIOC|SUI|2022 Winter}} | [[Melanie Barbezat]] | 79% | [[Esther Neuenschwander]] | 75% | [[Silvana Tirinzoni]] | 81% | [[Alina Pätz]] | 64% | teampct1 = 75% | team2 = {{flagIOC|SWE|2022 Winter}} | [[Sofia Mabergs]] | 89% | [[Agnes Knochenhauer]] | 80% | [[Sara McManus]] | 81% | [[Anna Hasselborg]] | 76% | teampct2 = 82% }}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
Curling at the 2022 Winter Olympics - Mixed doubles tournament (Gold medal game)
Wikipedia article section: Curling at the 2022 Winter Olympics ==Results summary== ===Mixed doubles tournament=== ====Playoffs==== =====Gold medal game===== ''Tuesday, 8 February, 20:05'' {{#lst:Curling at the 2022 Winter Olympics – Mixed doubles tournament|GM}} {| class="wikitable" !colspan=4 width=400|Player percentages |- !colspan=2 width=200 style="white-space:nowrap;"| {{flagIOC|ITA|2022 Winter}} !colspan=2 width=200 style="white-space:nowrap;"| {{flagIOC|NOR|2022 Winter}} |- | [[Stefania Constantini]] || 83% | [[Kristin Skaslien]] || 70% |- | [[Amos Mosaner]] || 90% | [[Magnus Nedregotten]] || 69% |- | '''Total''' || 87% | '''Total''' || 69% |}
https://cookbook.openai.com/examples/chat_finetuning_data_prep
How to format inputs to ChatGPT models
ChatGPT is powered by gpt-3.5-turbo and gpt-4, OpenAI's most advanced models. You can build your own applications with gpt-3.5-turbo or gpt-4 using the OpenAI API. Chat models take a series of messages as input, and return an AI-written message as output. This guide illustrates the chat format with a few example API calls.
https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models
Import the openai library
# if needed, install and/or upgrade to the latest version of the OpenAI Python library
%pip install --upgrade openai

# import the OpenAI Python library for calling the OpenAI API
import openai
https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models
An example chat API call
A chat API call has two required inputs:

model: the name of the model you want to use (e.g., gpt-3.5-turbo, gpt-4, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613)
messages: a list of message objects, where each object has two required fields:
role: the role of the messenger (either system, user, or assistant)
content: the content of the message (e.g., Write me a beautiful poem)

Messages can also contain an optional name field, which gives the messenger a name. E.g., example-user, Alice, BlackbeardBot. Names may not contain spaces.
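As a minimal illustration of such a call, using the legacy openai.ChatCompletion interface that the rest of this guide uses (the knock-knock exchange is just an example conversation, matching the 'Orange who?' reply extracted below):

```python
import openai

# an example chat completion request with a system message and a short exchange
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
)
```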
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Extract just the reply with:
response['choices'][0]['message']['content']
'Orange who?'

Even non-conversation-based tasks can fit into the chat format, by placing the instruction in the first user message.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Tips for instructing gpt-3.5-turbo-0301
Best practices for instructing models may change from model version to model version. The advice that follows applies to gpt-3.5-turbo-0301 and may not apply to future models. System messages The system message can be used to prime the assistant with different personalities or behaviors.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Few-shot prompting
In some cases, it's easier to show the model what you want rather than tell the model what you want. One way to show the model what you want is with faked example messages.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Transforming the few-shot example above, we could write:
The business jargon translation example, but with example names for the example messages
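The request itself is not included in this chunk; a sketch of what it likely looks like, reusing the jargon-translation messages shown earlier in this document with name fields added to the example messages:

```python
# The business jargon translation example, but with example names for the example messages
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English."},
        {"role": "system", "name": "example_user", "content": "New synergies will help drive top-line growth."},
        {"role": "system", "name": "example_assistant", "content": "Things working well together will increase revenue."},
        {"role": "system", "name": "example_user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "system", "name": "example_assistant", "content": "Let's talk later when we're less busy about how to do better."},
        {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```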
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Not every attempt at engineering conversations will succeed at first.
If your first attempts fail, don't be afraid to experiment with different ways of priming or conditioning the model.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Counting tokens
When you submit your request, the API transforms the messages into a sequence of tokens. The number of tokens used affects: the cost of the request the time it takes to generate the response when the reply gets cut off from hitting the maximum token limit (4,096 for gpt-3.5-turbo or 8,192 for gpt-4)
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Token Counting Function
You can use the following function to count the number of tokens that a list of messages will use. Note that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee.
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
Token Counting Function Code
```python
def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model in {
        "gpt-3.5-turbo-0613",
        "gpt-3.5-turbo-16k-0613",
        "gpt-4-0314",
        "gpt-4-32k-0314",
        "gpt-4-0613",
        "gpt-4-32k-0613",
    }:
        tokens_per_message = 3
        tokens_per_name = 1
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows {role/name}\n{content}\n
        tokens_per_name = -1  # if there's a name, the role is omitted
    elif "gpt-3.5-turbo" in model:
        print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
    elif "gpt-4" in model:
        print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
        return num_tokens_from_messages(messages, model="gpt-4-0613")
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with assistant
    return num_tokens
```
https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
How to stream completions
By default, when you request a completion from the OpenAI API, the entire completion is generated before being sent back in a single response. If you're generating long completions, waiting for the response can take many seconds. To get responses sooner, you can 'stream' the completion as it's being generated. This allows you to start printing or processing the beginning of the completion before the full completion is finished.
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream completions (cont.)
To stream completions, set stream=True when calling the chat completions or completions endpoints. This will return an object that streams back the response as data-only server-sent events. Extract chunks from the delta field rather than the message field.
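For example, a minimal sketch of a streaming chat completion using the legacy openai.ChatCompletion interface used elsewhere in this document, accumulating content from each chunk's delta field:

```python
import openai

# request a streamed chat completion
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Count to 10, separated by commas."}],
    temperature=0,
    stream=True,  # stream back partial results as they are generated
)

collected_content = []
for chunk in response:
    delta = chunk["choices"][0]["delta"]  # streamed chunks carry a delta, not a message
    if "content" in delta:
        collected_content.append(delta["content"])
        print(delta["content"], end="")

full_reply = "".join(collected_content)
```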
https://cookbook.openai.com/examples/how_to_stream_completions
Downsides
Note that using stream=True in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate, which has implications for approved usage. Another small drawback of streaming responses is that the response no longer includes the usage field to tell you how many tokens were consumed. After receiving and combining all of the responses, you can calculate this yourself using tiktoken.
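A small sketch of that token accounting, assuming a full_reply string assembled from the streamed chunks as in the sketch above:

```python
import tiktoken

# count the completion tokens in the combined streamed reply
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
num_completion_tokens = len(encoding.encode(full_reply))
print(f"~{num_completion_tokens} completion tokens in the streamed reply")
```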
https://cookbook.openai.com/examples/how_to_stream_completions
Example code
Below, this notebook shows:

What a typical chat completion response looks like
What a streaming chat completion response looks like
How much time is saved by streaming a chat completion
How to stream non-chat completions (used by older models like text-davinci-003)
https://cookbook.openai.com/examples/how_to_stream_completions
Example code (cont.)
# imports
import openai  # for OpenAI API calls
import time  # for measuring time duration of API calls

1. What a typical chat completion response looks like

With a typical ChatCompletions API call, the response is first computed and then returned all at once.
https://cookbook.openai.com/examples/how_to_stream_completions
Example code (cont.)
The reply can be extracted with response['choices'][0]['message']. The content of the reply can be extracted with response['choices'][0]['message']['content'].

reply = response['choices'][0]['message']
print(f"Extracted reply: {reply}")

reply_content = response['choices'][0]['message']['content']
print(f"Extracted content: {reply_content}")
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion
With a streaming API call, the response is sent back incrementally in chunks via an event stream. In Python, you can iterate over these events with a for loop.
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
As you can see above, streaming responses have a delta field rather than a message field. delta can hold things like:

a role token (e.g., {'role': 'assistant'})
a content token (e.g., {'content': '\n\n'})
nothing (e.g., {}), when the stream is over
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
3. How much time is saved by streaming a chat completion Now let's ask gpt-3.5-turbo to count to 100 again, and see how long it takes.
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
# Example of an OpenAI ChatCompletion request with stream=True
# https://platform.openai.com/docs/guides/chat

# record the time before the request is sent
start_time = time.time()
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
# send a ChatCompletion request to count to 100
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'user', 'content': 'Count to 100, with a comma between each number and no newlines. E.g., 1, 2, 3, ...'}
    ],
    temperature=0,
)
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
# calculate the time it took to receive the response
response_time = time.time() - start_time

# print the time delay and text received
print(f"Full response received {response_time:.2f} seconds after request")
print(f"Full response received: {response}")
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream a chat completion (cont.)
The reply can be extracted with response['choices'][0]['message']. The content of the reply can be extracted with response['choices'][0]['message']['content'].

reply = response['choices'][0]['message']
print(f"Extracted reply: {reply}")

reply_content = response['choices'][0]['message']['content']
print(f"Extracted content: {reply_content}")
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003)
A typical completion request With a typical Completions API call, the text is first computed and then returned all at once.
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# Example of an OpenAI Completion request
# https://beta.openai.com/docs/api-reference/completions/create

# record the time before the request is sent
start_time = time.time()
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# send a Completion request to count to 100
response = openai.Completion.create(
    model='text-davinci-002',
    prompt='1,2,3,',
    max_tokens=193,
    temperature=0,
)
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# calculate the time it took to receive the response
response_time = time.time() - start_time

# extract the text from the response
completion_text = response['choices'][0]['text']
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# print the time delay and text received
print(f"Full response received {response_time:.2f} seconds after request")
print(f"Full text received: {completion_text}")
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
A streaming completion request With a streaming Completions API call, the text is sent back via a series of events. In Python, you can iterate over these events with a for loop.
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# Example of an OpenAI Completion request, using the stream=True option
# https://beta.openai.com/docs/api-reference/completions/create

# record the time before the request is sent
start_time = time.time()
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# send a Completion request to count to 100
response = openai.Completion.create(
    model='text-davinci-002',
    prompt='1,2,3,',
    max_tokens=193,
    temperature=0,
    stream=True,  # this time, we set stream=True
)
https://cookbook.openai.com/examples/how_to_stream_completions
How to stream non-chat completions (used by older models like text-davinci-003) (cont.)
# create variables to collect the stream of events
collected_events = []
completion_text = ''

# iterate through the stream of events
for event in response:
    event_time = time.time() - start_time  # calculate the time delay of the event
    collected_events.append(event)  # save the event response
    event_text = event['choices'][0]['text']  # extract the text
    completion_text += event_text  # append the text

# print the time delay and text received
print(f"Full response received {event_time:.2f} seconds after request")
print(f"Full text received: {completion_text}")

However, with the streaming request, we received the first token after 0.18 seconds, and subsequent tokens every ~0.01-0.02 seconds
https://cookbook.openai.com/examples/how_to_stream_completions
Techniques to improve reliability
When GPT-3 fails on a task, what should you do? Search for a better prompt that elicits more reliable answers? Invest in thousands of examples to fine-tune a custom model? Assume the model is incapable of the task, and move on? There is no simple answer - it depends. However, if your task involves logical reasoning or complexity, consider trying the techniques in this article to build more reliable, high-performing prompts.
https://cookbook.openai.com/techniques_to_improve_reliability
Why GPT-3 fails on complex tasks
If you were asked to multiply 13 by 17, would the answer pop immediately into your mind? For most of us, probably not. Yet, that doesn't mean humans are incapable of two-digit multiplication. With a few seconds, and some pen and paper, it's not too taxing to work out that 13 x 17 = 130 + 70 + 21 = 221. Similarly, if you give GPT-3 a task that's too complex to do in the time it takes to calculate its next token, it may confabulate an incorrect guess. Yet, akin to humans, that doesn't necessarily mean the model is incapable of the task. With some time and space to reason things out, the model still may be able to answer reliably.
https://cookbook.openai.com/techniques_to_improve_reliability
As an example, if you ask text-davinci-002...
As an example, if you ask text-davinci-002 the following math problem about juggling balls, it answers incorrectly:

Q: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?
A: There are 8 blue golf balls.

Does this mean that GPT-3 cannot do simple math problems? No; in fact, it turns out that by prompting the model with Let's think step by step, the model solves the problem reliably:

Q: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?
A: Let's think step by step.
There are 16 balls in total. Half of the balls are golf balls. That means that there are 8 golf balls. Half of the golf balls are blue. That means that there are 4 blue golf balls.

Of course, it's hard to tell from only a single example whether this Let's think step by step trick actually works in general or just got lucky on this particular problem. But it really does work. On a benchmark of word math problems, the Let's think step by step trick raised GPT-3's solve rate massively, from a worthless 18% to a decent 79%!
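A minimal sketch of applying this trick through the legacy Completions API; the prompt text comes from the example above, and the max_tokens and temperature settings are our own illustrative choices:

```python
import openai

prompt = (
    "Q: A juggler has 16 balls. Half of the balls are golf balls "
    "and half of the golf balls are blue. How many blue golf balls are there?\n"
    "A: Let's think step by step."  # appending this phrase elicits step-by-step reasoning
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["text"])
```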
https://cookbook.openai.com/techniques_to_improve_reliability
Model capabilities depend on context
When learning to work with GPT-3, one common conceptual mistake is to believe that its capabilities are fixed across all contexts. E.g., if GPT-3 gets a simple logic question wrong, then it must be incapable of simple logic. But as the Let's think step by step example illustrates, apparent failures of GPT-3 can sometimes be remedied with a better prompt that helps the model steer itself toward the correct output.
https://cookbook.openai.com/techniques_to_improve_reliability
How to improve reliability on complex tasks
The rest of this article shares techniques for improving reliability of large language models on complex tasks. Although some of the techniques are specific to certain types of problems, many of them are built upon general principles that can be applied to a wide range of tasks, e.g.:

Give clearer instructions
Split complex tasks into simpler subtasks
Structure the instruction to keep the model on task
Prompt the model to explain before answering
Ask for justifications of many possible answers, and then synthesize
Generate many outputs, and then use the model to pick the best one
Fine-tune custom models to maximize performance

Split complex tasks into simpler tasks

One way to give a model more time and space to think is to break tasks into simpler pieces.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 1
As an example, consider a task where we ask the model a multiple-choice question about some text - in this case, a game of Clue. When asked directly, text-davinci-002 isn't able to put clues 3 & 5 together, and answers incorrectly:
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 2
Use the following clues to answer the following multiple-choice question. Clues: 1. Miss Scarlett was the only person in the lounge. 2. The person with the pipe was in the kitchen. 3. Colonel Mustard was the only person in the observatory. 4. Professor Plum was not in the library nor the billiard room. 5. The person with the candlestick was in the observatory.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 3
Question: Was Colonel Mustard in the observatory with the candlestick? (a) Yes; Colonel Mustard was in the observatory with the candlestick (b) No; Colonel Mustard was not in the observatory with the candlestick (c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick Solution:
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 4
(c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick Although clues 3 and 5 establish that Colonel Mustard was the only person in the observatory and that the person in the observatory had the candlestick, the model fails to combine them into a correct answer of (a) Yes.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 5
However, instead of asking for the answer directly, we can split the task into three pieces:
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 6
First, go through the clues one by one and consider whether the clue is potentially relevant
Second, combine the relevant clues to reason out the answer to the question
Third, write the final answer: either (a), (b), or (c)
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 7
Use the following clues to answer the following multiple-choice question, using the following procedure: (1) First, go through the clues one by one and consider whether the clue is potentially relevant (2) Second, combine the relevant clues to reason out the answer to the question (3) Third, map the answer to one of the multiple choice answers: either (a), (b), or (c)
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 8
Clues: 1. Miss Scarlett was the only person in the lounge. 2. The person with the pipe was in the kitchen. 3. Colonel Mustard was the only person in the observatory. 4. Professor Plum was not in the library nor the billiard room. 5. The person with the candlestick was in the observatory.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 9
Question: Was Colonel Mustard in the observatory with the candlestick? (a) Yes; Colonel Mustard was in the observatory with the candlestick (b) No; Colonel Mustard was not in the observatory with the candlestick (c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 10
Solution: (1) First, go through the clues one by one and consider whether the clue is potentially relevant: Clue 1: Miss Scarlett was the only person in the lounge. This clue is not relevant. Clue 2: The person with the pipe was in the kitchen. This clue is not relevant. Clue 3: Colonel Mustard was the only person in the observatory. This clue is relevant.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 11
Clue 4: Professor Plum was not in the library nor the billiard room. This clue is not relevant. Clue 5: The person with the candlestick was in the observatory. This clue is relevant. (2) Second, combine the relevant clues to reason out the answer to the question: Clue 3 and Clue 5 both say that the person with the candlestick was in the observatory. Colonel Mustard was the only person in the observatory, so Colonel Mustard was the person with the candlestick.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 12
(3) Third, write the final answer: (a) Yes; Colonel Mustard was in the observatory with the candlestick By giving the model more time and space to think, and guiding it along a reasoning plan, it's able to figure out the correct answer of (a) Yes.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 13
Another benefit of splitting complex instructions into smaller subtasks is that it can help keep the model focused on each subtask.
https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability - Part 14
For example, if we ask text-davinci-002 to summarize a text in its original language, the model can lapse back into English:
https://cookbook.openai.com/techniques_to_improve_reliability