Chat

object Chat

Holds all available Chat models. Chat models are used to generate text in a conversational manner: each request carries the conversation history, which serves as the model's conversational memory.

Note that GPT-4 endpoints are only available to paying customers.
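
The constants in this object are plain model-name strings, so they can be passed anywhere a chat model id is expected. The following is a minimal, illustrative sketch that calls the chat completions REST endpoint using only the Kotlin/JVM standard library; the request-building code, the environment variable, and the hard-coded model string are assumptions for the example, not part of this object. In real code you would substitute one of the constants below, for example Chat.GPT_4.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical setup: the API key is read from the environment.
    val apiKey = System.getenv("OPENAI_API_KEY")

    // In practice this would be a constant from this object, e.g. Chat.GPT_4.
    val model = "gpt-4"

    // Minimal chat completions payload: the model name plus the running
    // conversation (system prompt + user message).
    val body = """
        {
          "model": "$model",
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"}
          ]
        }
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.openai.com/v1/chat/completions"))
        .header("Authorization", "Bearer $apiKey")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    // The raw JSON response contains the generated assistant message.
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}

The response body is the raw JSON returned by the endpoint; the generated message is found under choices[0].message.content.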

Properties

Points to the currently supported version of gpt-3.5-turbo.

Snapshot of gpt-3.5-turbo from March 1st, 2023. This model version will be deprecated on June 13th, 2024. Has a context window of 4,096 tokens with training data up to September 2021.

Snapshot of gpt-3.5-turbo from June 13th, 2023. This model version will be deprecated on June 13th, 2024. Has a context window of 4,096 tokens with training data up to September 2021.

Has a context window of 16,385 tokens with training data up to September 2021. This model has improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens.

Points to the currently supported version of gpt-3.5-turbo-16k.

Snapshot of gpt-3.5-turbo-16k from June 13th, 2023. This model version will be deprecated on June 13th, 2024. Has a context window of 16,385 tokens with training data up to September 2021.

const val GPT_4: String

Points to the currently supported version of gpt-4.

const val GPT_4_0314: String

Snapshot of gpt-4 from March 14th, 2023 with function calling support. This model version will be deprecated on June 13th, 2024. Has a context window of 8,192 tokens with training data up to September 2021.

const val GPT_4_0613: String

Snapshot of gpt-4 from June 13th, 2023 with improved function calling support. Has a context window of 8,192 tokens with training data up to September 2021.

const val GPT_4_1106_PREVIEW: String

gpt-4 Turbo. Has a context window of 128,000 tokens with training data up to April 2023. This model has improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens.

const val GPT_4_32k: String

Points to the currently supported version of gpt-4 with a 32k context window.

Snapshot of gpt-4-32k from March 14th, 2023 with function calling support. This model version will be deprecated on June 13th, 2024. Has a context window of 32,768 tokens with training data up to September 2021.

Snapshot of gpt-4-32k from June 13th, 2023 with improved function calling support. Has a context window of 32,768 tokens with training data up to September 2021.

gpt-4 Turbo with vision. Has a context window of 128,000 tokens with training data up to April 2023. Has the same capabilities as GPT_4_1106_PREVIEW, but can also understand images.
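
For the vision-capable model, only the shape of the message content changes: a single user message may mix text parts and image parts. The payload below is an illustrative sketch; the image URL is a placeholder, and gpt-4-vision-preview is the published name of the vision preview model described above. The body can be sent exactly like the request in the earlier sketch.

// Illustrative request body for the vision preview model. The image URL is a placeholder.
val visionBody = """
    {
      "model": "gpt-4-vision-preview",
      "max_tokens": 300,
      "messages": [
        {
          "role": "user",
          "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}}
          ]
        }
      ]
    }
""".trimIndent()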