ballerinax/openai.chat Ballerina library
Overview
This is a generated connector for the OpenAI Chat API OpenAPI Specification. OpenAI is an American artificial intelligence research laboratory consisting of a non-profit corporation and a for-profit subsidiary. OpenAI conducts AI research with the declared intention of promoting and developing friendly AI. The OpenAI Chat API provides a way to access the state-of-the-art ChatGPT models developed by OpenAI for a variety of tasks.
Prerequisites
Before using this connector in your Ballerina application, complete the following:
- Create an OpenAI account.
- Obtain an API key from the OpenAI platform.
Quick start
To use the OpenAI Chat connector in your Ballerina application, update the .bal file as follows:
Step 1: Import the connector
First, import the ballerinax/openai.chat module into the Ballerina project.
import ballerinax/openai.chat;
Step 2: Create a new connector instance
Create and initialize a chat:Client with the obtained apiKey.
chat:Client chatClient = check new ({ auth: { token: "sk-XXXXXXXXX" } });
Step 3: Invoke the connector operation
- Now you can use the operations available within the connector. The following is an example of creating a conversation with the gpt-3.5-turbo model.

public function main() returns error? {
    chat:CreateChatCompletionRequest req = {
        model: "gpt-3.5-turbo",
        messages: [{role: "user", content: "What is Ballerina?"}]
    };
    chat:CreateChatCompletionResponse res = check chatClient->/chat/completions.post(req);
}

- Use the bal run command to compile and run the Ballerina program.
Clients
openai.chat: Client
This is a generated connector for the [OpenAI API](https://platform.openai.com/docs/api-reference/introduction) specification. Use the OpenAI API to access state-of-the-art language models that can complete sentences, transcribe audio, and generate images. The API also supports natural language processing tasks such as text classification, entity recognition, and sentiment analysis. By using the OpenAI API, you can incorporate advanced AI capabilities into your own applications and services.
Constructor
Gets invoked to initialize the connector.
To use the OpenAI API, you will need an API key. You can sign up for an API key by creating an account on the OpenAI website and following the provided instructions.
init (ConnectionConfig config, string serviceUrl)
- config ConnectionConfig - The configurations to be used when initializing the connector
- serviceUrl string (default "https://api.openai.com/v1") - URL of the target service
post chat/completions
function post chat/completions(CreateChatCompletionRequest payload) returns CreateChatCompletionResponse|error
Creates a model response for the given chat conversation.
Parameters
- payload CreateChatCompletionRequest - The chat completion request
Return Type
- CreateChatCompletionResponse|error
Records
openai.chat: ChatCompletionFunctionParameters
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
openai.chat: ChatCompletionFunctions
Fields
- name string - The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- description string? - The description of what the function does.
- parameters ChatCompletionFunctionParameters? - The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
openai.chat: ChatCompletionRequestMessage
Fields
- role string - The role of the message's author. One of system, user, assistant, or function.
- content string - The contents of the message. content is required for all messages except assistant messages with function calls.
- name string? - The name of the author of this message. name is required if the role is function, and it should be the name of the function whose response is in the content. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
- function_call ChatCompletionRequestMessage_function_call? - The name and arguments of a function that should be called, as generated by the model.
openai.chat: ChatCompletionRequestMessage_function_call
The name and arguments of a function that should be called, as generated by the model.
Fields
- name string? - The name of the function to call.
- arguments string? - The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
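The records above describe the full function-calling round trip: declare a function in the request, then inspect the model's function_call in the response. The following is a minimal sketch against the post chat/completions operation; the function name, JSON-schema payload, and prompt are illustrative, not part of this library.

```ballerina
import ballerina/io;
import ballerinax/openai.chat;

public function main() returns error? {
    chat:Client chatClient = check new ({auth: {token: "sk-XXXXXXXXX"}});

    // Describe a function the model is allowed to call (illustrative schema).
    chat:ChatCompletionFunctions getWeather = {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    };

    chat:CreateChatCompletionRequest req = {
        model: "gpt-3.5-turbo",
        messages: [{role: "user", content: "What is the weather in Colombo?"}],
        functions: [getWeather]
    };

    chat:CreateChatCompletionResponse res = check chatClient->/chat/completions.post(req);

    // The model may answer directly or ask for a function call.
    chat:ChatCompletionRequestMessage_function_call? fnCall = res.choices[0]?.message?.function_call;
    if fnCall is chat:ChatCompletionRequestMessage_function_call {
        // arguments is a JSON string generated by the model; validate it before use.
        io:println("Call ", fnCall?.name, " with ", fnCall?.arguments);
    } else {
        io:println(res.choices[0]?.message?.content);
    }
}
```

As the arguments field notes, the model may emit invalid JSON or hallucinate parameters, so parse and validate the string before dispatching to your own function.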
openai.chat: ChatCompletionResponseMessage
Fields
- role string - The role of the author of this message.
- content string? - The contents of the message.
- function_call ChatCompletionRequestMessage_function_call? - The name and arguments of a function that should be called, as generated by the model.
openai.chat: ChatCompletionStreamResponseDelta
Fields
- role string? - The role of the author of this message.
- content string? - The contents of the chunk message.
- function_call ChatCompletionRequestMessage_function_call? - The name and arguments of a function that should be called, as generated by the model.
openai.chat: ClientHttp1Settings
Provides settings related to HTTP/1.x protocol.
Fields
- keepAlive KeepAlive(default http:KEEPALIVE_AUTO) - Specifies whether to reuse a connection for multiple requests
- chunking Chunking(default http:CHUNKING_AUTO) - The chunking behaviour of the request
- proxy ProxyConfig? - Proxy server related options
openai.chat: ConnectionConfig
Provides a set of configurations for controlling the behaviours when communicating with a remote HTTP endpoint.
Fields
- auth BearerTokenConfig - Configurations related to client authentication
- httpVersion HttpVersion(default http:HTTP_2_0) - The HTTP version understood by the client
- http1Settings ClientHttp1Settings? - Configurations related to HTTP/1.x protocol
- http2Settings ClientHttp2Settings? - Configurations related to HTTP/2 protocol
- timeout decimal(default 60) - The maximum time to wait (in seconds) for a response before closing the connection
- forwarded string(default "disable") - The choice of setting forwarded/x-forwarded header
- poolConfig PoolConfiguration? - Configurations associated with request pooling
- cache CacheConfig? - HTTP caching related configurations
- compression Compression(default http:COMPRESSION_AUTO) - Specifies the way of handling compression (accept-encoding) header
- circuitBreaker CircuitBreakerConfig? - Configurations associated with the behaviour of the Circuit Breaker
- retryConfig RetryConfig? - Configurations associated with retrying
- responseLimits ResponseLimitConfigs? - Configurations associated with inbound response size limits
- secureSocket ClientSecureSocket? - SSL/TLS-related options
- proxy ProxyConfig? - Proxy server related options
- validation boolean(default true) - Enables the inbound payload validation functionality provided by the constraint package. Enabled by default
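A ConnectionConfig can be passed to the client constructor to tune the underlying HTTP behaviour. The following is a sketch using the field names listed above; the retry sub-fields (count, interval) follow the standard Ballerina http:RetryConfig shape and are an assumption here, as is the placeholder token.

```ballerina
import ballerinax/openai.chat;

public function main() returns error? {
    // A longer timeout and basic retries for flaky networks.
    chat:ConnectionConfig config = {
        auth: {token: "sk-XXXXXXXXX"},
        timeout: 120,
        retryConfig: {count: 3, interval: 2}
    };
    chat:Client chatClient = check new (config);
}
```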
openai.chat: CreateChatCompletionRequest
Fields
- model string - ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
- messages ChatCompletionRequestMessage[] - A list of messages comprising the conversation so far. Example Python code.
- functions ChatCompletionFunctions[]? - A list of functions the model may generate JSON inputs for.
- function_call string|record { name string }? - Controls how the model responds to function calls. "none" means the model does not call a function and responds to the end-user. "auto" means the model can pick between responding to the end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present; "auto" is the default if functions are present.
- temperature decimal?(default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
- top_p decimal?(default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- n int?(default 1) - How many chat completion choices to generate for each input message.
- 'stream boolean?(default false) - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Example Python code.
- max_tokens int? - The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. Example Python code for counting tokens.
- presence_penalty decimal?(default 0) - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more information about frequency and presence penalties.
- frequency_penalty decimal?(default 0) - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties.
- logit_bias record {}? - Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
- user string? - A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
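Putting several of these fields together, the following is a sketch of a tuned request; the values and message texts are illustrative only.

```ballerina
chat:CreateChatCompletionRequest req = {
    model: "gpt-3.5-turbo",
    messages: [
        {role: "system", content: "You are a concise assistant."},
        {role: "user", content: "Summarize what Ballerina is in one sentence."}
    ],
    temperature: 0.2, // focused output; top_p is left at its default, per the note above
    n: 1,
    max_tokens: 100,
    user: "end-user-123"
};
```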
openai.chat: CreateChatCompletionResponse
Fields
- id string -
- 'object string -
- created int -
- model string -
- choices CreateChatCompletionResponse_choices[] -
- usage CreateChatCompletionResponse_usage? -
openai.chat: CreateChatCompletionResponse_choices
Fields
- index int? -
- message ChatCompletionResponseMessage? -
- finish_reason string? -
openai.chat: CreateChatCompletionResponse_usage
Fields
- prompt_tokens int -
- completion_tokens int -
- total_tokens int -
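The response records above nest the answer inside choices and the token accounting inside usage; both message and usage are optional fields, so they are read with optional field access. A sketch, assuming res is a chat:CreateChatCompletionResponse already returned by the client:

```ballerina
// Extract the assistant's reply (may be absent for function-call responses).
string? answer = res.choices[0]?.message?.content;

// Report token consumption when the API includes it.
chat:CreateChatCompletionResponse_usage? usage = res?.usage;
if usage is chat:CreateChatCompletionResponse_usage {
    io:println("Prompt tokens: ", usage.prompt_tokens,
        ", total: ", usage.total_tokens);
}
```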
openai.chat: CreateChatCompletionStreamResponse
Fields
- id string -
- 'object string -
- created int -
- model string -
- choices CreateChatCompletionStreamResponse_choices[] -
openai.chat: CreateChatCompletionStreamResponse_choices
Fields
- index int? -
- delta ChatCompletionStreamResponseDelta? -
- finish_reason string? -
openai.chat: ProxyConfig
Proxy server configurations to be used with the HTTP client endpoint.
Fields
- host string(default "") - Host name of the proxy server
- port int(default 0) - Proxy server port
- userName string(default "") - Proxy server username
- password string(default "") - Proxy server password
Import
import ballerinax/openai.chat;
Metadata
Released date: over 1 year ago
Version: 1.1.0
License: Apache-2.0
Compatibility
Platform: any
Ballerina version: 2201.4.1
Pull count
Total: 4665
Current version: 486
Keywords
AI/Chat
OpenAI
Cost/Paid
GPT-3.5
ChatGPT
Vendor/OpenAI