CompletionRequest

data class CompletionRequest @JvmOverloads constructor(
    var model: String,
    var prompt: Any? = null,
    var suffix: String? = null,
    var maxTokens: Int? = null,
    var temperature: Number? = null,
    var topP: Number? = null,
    var n: Int? = null,
    var stream: Boolean? = null,
    var logprobs: Int? = null,
    var echo: Boolean? = null,
    var stop: Any? = null,
    var presencePenalty: Number? = null,
    var frequencyPenalty: Number? = null,
    var bestOf: Int? = null,
    var logitBias: Map<String, Int>? = null,
    var user: String? = null
)

Holds the configurable options that can be sent to the OpenAI Completions API. For most use cases, you only need to set model and prompt. For more detailed descriptions of each option, refer to the Completions Wiki.

prompt can be either a single String, a List<String>, or a String[]. Providing multiple prompts is called batching, and it can be used to reduce rate-limit errors. Batching causes the CompletionResponse to contain multiple choices, one per prompt.
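As a sketch of a batched request (the model name and prompts here are illustrative, not part of this API):

```kotlin
// Batch two prompts in a single request; the response will contain
// one choice per prompt. The model name is illustrative.
val request = CompletionRequest(
    model = "davinci",
    prompt = listOf(
        "Translate 'hello' to French:",
        "Translate 'goodbye' to French:"
    ),
    maxTokens = 16
)
```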

You should not set stream yourself; it is managed internally by the library's streaming methods.

Constructors

constructor(
    model: String,
    prompt: Any? = null,
    suffix: String? = null,
    maxTokens: Int? = null,
    temperature: Number? = null,
    topP: Number? = null,
    n: Int? = null,
    stream: Boolean? = null,
    logprobs: Int? = null,
    echo: Boolean? = null,
    stop: Any? = null,
    presencePenalty: Number? = null,
    frequencyPenalty: Number? = null,
    bestOf: Int? = null,
    logitBias: Map<String, Int>? = null,
    user: String? = null
)

Create a CompletionRequest instance. Using the Builder is recommended instead.

Types

class Builder

Builder is a helper class to build a CompletionRequest instance with a stable API. It provides methods for setting the properties of the CompletionRequest object. The build method returns a new CompletionRequest instance with the specified properties.
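A hypothetical usage sketch, assuming the builder is obtained via a static factory and its setter methods mirror the property names listed below:

```kotlin
// Builder usage sketch; the builder() factory and setter names are
// assumed to mirror the property names, not confirmed by this page.
val request = CompletionRequest.builder()
    .model("davinci")
    .prompt("Say this is a test")
    .maxTokens(16)
    .temperature(0.7)
    .build()
```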

object Companion

Properties

var bestOf: Int?

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).
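For example, the server can generate several candidates and return only the best ones (values here are illustrative; bestOf must be greater than or equal to n):

```kotlin
// Generate 5 candidate completions server-side, return the best 2.
// Note: bestOf must be >= n, and extra candidates consume tokens.
val request = CompletionRequest(
    model = "davinci",
    prompt = "Write a tagline for a coffee shop:",
    bestOf = 5,
    n = 2
)
```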

var echo: Boolean?

Echo back the prompt in addition to the completion.

var frequencyPenalty: Number?

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

var logitBias: Map<String, Int>?

Modify the likelihood of specified tokens appearing in the completion.
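The map keys are token IDs (as strings) and the values are bias amounts from -100 to 100; token IDs are tokenizer-specific, so the ID below is purely illustrative:

```kotlin
// Suppress a specific token by mapping its token ID to a large
// negative bias. The token ID shown is illustrative; look up real
// IDs with the tokenizer for your chosen model.
val request = CompletionRequest(
    model = "davinci",
    prompt = "Say this is a test",
    logitBias = mapOf("50256" to -100)
)
```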

var logprobs: Int?

Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens.

var maxTokens: Int?

The maximum number of tokens to generate in the completion.

var model: String

ID of the model to use.

var n: Int?

How many completions to generate for each prompt.

var presencePenalty: Number?

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

var prompt: Any?

The prompt(s) to generate completions for (either a String, List<String>, or String[])

var stop: Any?

Up to 4 sequences where the API will stop generating further tokens.
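stop accepts either a single String or a list of up to 4 sequences (the values here are illustrative):

```kotlin
// Stop generation at a blank line or at a custom sentinel string.
// Up to 4 stop sequences may be provided.
val request = CompletionRequest(
    model = "davinci",
    prompt = "List three colors:",
    stop = listOf("\n\n", "END")
)
```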

var stream: Boolean?

Whether to stream back partial progress.

var suffix: String?

The suffix that comes after a completion of inserted text.

var temperature: Number?

What sampling temperature to use, between 0 and 2.

var topP: Number?

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

var user: String?

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.