Module ai
ballerinax/ai Ballerina library
Overview
This module provides APIs for building AI agents using Large Language Models (LLMs).
AI agents use LLMs to process natural language inputs, generate responses, and make decisions based on given instructions. These agents can be designed for various tasks, such as answering questions, automating workflows, or interacting with external systems.
Prerequisites
Before using this module in your Ballerina application, you must first select an LLM provider and obtain the necessary configuration to connect to it. The module currently supports the following LLM providers; you can obtain the required configuration using the instructions below.
OpenAI Provider
- Create an OpenAI account.
- Obtain an API key by following these instructions.
Azure OpenAI Provider
- Create an Azure account.
- Create an Azure OpenAI resource.
- Obtain the tokens. Refer to the Azure OpenAI Authentication guide to learn how to generate and use tokens.
Anthropic Provider
- Create an Anthropic account.
- Obtain an API key by following these instructions.
MistralAI Provider
- Create a Mistral account.
- Obtain an API key by following these instructions
Ollama Provider
- Install Ollama on your system.
- Download the required LLM models using the Ollama CLI.
- Ensure the Ollama service is running before making API requests.
- Refer to the Ollama documentation for additional configuration details.
Quickstart
To use the ai module in your Ballerina application, update the .bal file as follows:
Step 1: Import the module
Import the ai module.
import ballerinax/ai;
Step 2: Define the System Prompt
A system prompt guides the AI's behavior, tone, and response style, defining its role and interaction with users.
ai:SystemPrompt systemPrompt = {
    role: "Math Tutor",
    instructions: string `You are a helpful math tutor. Explain concepts clearly with examples and provide step-by-step solutions.`
};
Step 3: Define the Model Provider
The ai module supports multiple LLM providers. Here's how to define the OpenAI provider:
final ai:OpenAiProvider openAiModel = check new ("openAiApiKey", modelType = ai:GPT_4O);
Step 4: Define the tools
An agent tool extends the AI's abilities beyond text-based responses, enabling interaction with external systems or dynamic tasks. Define tools as shown below:
# Returns the sum of two numbers
# + a - first number
# + b - second number
# + return - sum of the numbers
@ai:AgentTool
isolated function sum(int a, int b) returns int => a + b;

@ai:AgentTool
isolated function mult(int a, int b) returns int => a * b;
Constraints for defining tools:
- The function must be marked isolated.
- Parameters should be a subtype of anydata.
- The tool should return a subtype of anydata|http:Response|stream<anydata, error>|error.
- Tool documentation enhances LLM performance but is optional.
Step 5: Define the Memory
The ai module manages memory for individual user sessions using the Memory. By default, agents are configured with a memory that has a predefined capacity. To create a stateless agent, set the memory to () when defining the agent. Additionally, you can customize the memory capacity or provide your own memory implementation. Here's how to initialize the default memory with a new capacity:
final ai:Memory memory = new ai:MessageWindowChatMemory(20);
Step 6: Define the Agent
Create a Ballerina AI agent using the configurations created earlier:
final ai:Agent mathTutorAgent = check new (
    systemPrompt = systemPrompt,
    model = openAiModel,
    tools = [sum, mult], // Pass array of function pointers annotated with @ai:AgentTool
    memory = memory
);
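As noted in Step 5, passing () as the memory produces a stateless agent. A minimal sketch reusing the configuration above:

final ai:Agent statelessAgent = check new (
    systemPrompt = systemPrompt,
    model = openAiModel,
    tools = [sum, mult],
    memory = () // No memory; each query is handled independently
);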
Step 7: Invoke the Agent
Finally, invoke the agent by calling the run method:
mathTutorAgent->run("What is 8 + 9 multiplied by 10", sessionId = "student-one");
If using the agent with a single session, you can omit the sessionId parameter.
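The sketch below captures and prints the agent's answer, assuming run returns the final response as a string (or an ai:Error on failure):

import ballerina/io;

public function main() returns error? {
    string answer = check mathTutorAgent->run("What is 8 + 9 multiplied by 10", sessionId = "student-one");
    io:println(answer);
}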
Examples
The ai module provides practical examples illustrating usage in various scenarios. Explore these examples, covering the following use cases:
- Personal AI Assistant - Demonstrates how to implement a personal AI assistant using the Ballerina AI module along with Google Calendar and Gmail integrations
Functions
extractToolsFromOpenApiJsonSpec
function extractToolsFromOpenApiJsonSpec(map<json> openApiSpec, *AdditionInfoFlags additionInfoFlags) returns HttpApiSpecification & readonly|Error
Extracts the HTTP tools from the given OpenAPI specification provided as JSON.
Parameters
- openApiSpec map<json> - A valid OpenAPI specification in JSON format
- additionInfoFlags *AdditionInfoFlags - Flags to extract additional information from the OpenAPI specification
Return Type
- HttpApiSpecification & readonly|Error - A record with the list of extracted tools and the service URL (if available)
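A sketch of invoking this function; the spec file name and flag usage here are illustrative:

import ballerina/io;
import ballerinax/ai;

public function main() returns error? {
    // Load an OpenAPI spec from disk; any map<json> value works here
    json specJson = check io:fileReadJson("openapi.json");
    map<json> openApiSpec = check specJson.ensureType();
    ai:HttpApiSpecification & readonly apiSpec =
        check ai:extractToolsFromOpenApiJsonSpec(openApiSpec, extractDescription = true);
    io:println(string `Extracted ${apiSpec.tools.length()} tools`);
}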
extractToolsFromOpenApiSpecFile
function extractToolsFromOpenApiSpecFile(string filePath, *AdditionInfoFlags additionInfoFlags) returns HttpApiSpecification & readonly|Error
Extracts the HTTP tools from the given OpenAPI specification file.
Parameters
- filePath string - Path to the OpenAPI specification file (should be JSON or YAML)
- additionInfoFlags *AdditionInfoFlags - Flags to extract additional information from the OpenAPI specification
Return Type
- HttpApiSpecification & readonly|Error - A record with the list of extracted tools and the service URL (if available)
getToolConfigs
function getToolConfigs(FunctionTool[] tools) returns ToolConfig[]
Generates an array of ToolConfig from the given list of function pointers.
Parameters
- tools FunctionTool[] - Array of function pointers annotated with the @ai:AgentTool annotation
Return Type
- ToolConfig[] - Array of ai:ToolConfig instances
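For example, the quickstart's tool functions could be converted as follows (a sketch assuming sum and mult are annotated with @ai:AgentTool):

ai:ToolConfig[] toolConfigs = ai:getToolConfigs([sum, mult]);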
getTools
function getTools(BaseAgent agent) returns Tool[]
Get the tools registered with the agent.
Parameters
- agent BaseAgent - Agent instance
Return Type
- Tool[] - Array of tools registered with the agent
parseOpenApiSpec
function parseOpenApiSpec(map<json> openApiSpec) returns OpenApiSpec|UnsupportedOpenApiVersion|OpenApiParsingError
Parses the given OpenAPI specification provided as JSON into an OpenApiSpec object.
Parameters
- openApiSpec map<json> - A valid OpenAPI specification in JSON format
Return Type
- OpenApiSpec|UnsupportedOpenApiVersion|OpenApiParsingError - An OpenApiSpec object
run
function run(BaseAgent agent, string query, int maxIter, string|map<json> context, boolean verbose, string sessionId) returns record {| steps (ExecutionResult|ExecutionError)[], answer string |}
Execute the agent for a given user's query.
Parameters
- agent BaseAgent - Agent to be executed
- query string - Natural language commands to the agent
- maxIter int - Maximum number of iterations the agent will run to execute the task (default: 5)
- verbose boolean - If true, prints the reasoning steps (default: true)
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the memory
Return Type
- record {| steps (ExecutionResult|ExecutionError)[], answer string |} - Returns the execution steps tracing the agent's reasoning and outputs from the tools
updateMemory
function updateMemory(Memory memory, string sessionId, ChatMessage message)
Updates the given memory with the provided chat message for the specified session.
Classes
ai: Executor
An executor to perform step-by-step execution of the agent.
Constructor
Initialize the executor with the agent and the query.
init (BaseAgent agent, string sessionId, *ExecutionProgress progress)
hasNext
function hasNext() returns boolean
Checks whether agent has more steps to execute.
Return Type
- boolean - True if agent has more steps to execute, false otherwise
reason
function reason() returns json|TaskCompletedError|LlmError
Reason the next step of the agent.
Return Type
- json|TaskCompletedError|LlmError - generated LLM response during the reasoning or an error if the reasoning fails
act
function act(json llmResponse) returns ExecutionResult|LlmChatResponse|ExecutionError
Execute the next step of the agent.
Parameters
- llmResponse json - LLM response containing the tool to be executed and the raw LLM output
Return Type
- ExecutionResult|LlmChatResponse|ExecutionError - Observations from the tool execution, a chat response, or an execution error
update
function update(ExecutionStep step)
Update the agent with an execution step.
Parameters
- step ExecutionStep - Latest step to be added to the history
next
function next() returns record {| value ExecutionResult|LlmChatResponse|ExecutionError|Error |}?
Reason and execute the next step of the agent.
Return Type
- record {| value ExecutionResult|LlmChatResponse|ExecutionError|Error |}? - A record with ExecutionResult, chat response or an error
Fields
- progress ExecutionProgress - Contains the current execution progress for the agent and the query
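A sketch of driving the executor manually with hasNext, reason, and act; the agent value and query are assumptions:

import ballerina/io;
import ballerinax/ai;

function runStepByStep(ai:BaseAgent agent) {
    ai:Executor executor = new (agent, "session-1", query = "What is 8 + 9?");
    while executor.hasNext() {
        json|ai:TaskCompletedError|ai:LlmError llmResponse = executor.reason();
        if llmResponse is ai:TaskCompletedError|ai:LlmError {
            break; // Stop on reasoning failure or task completion
        }
        ai:ExecutionResult|ai:LlmChatResponse|ai:ExecutionError result = executor.act(llmResponse);
        if result is ai:LlmChatResponse {
            io:println(result.content); // Final chat response from the LLM
        }
    }
}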
ai: HttpServiceToolKit
Defines an HTTP toolkit. This is a special type of toolkit that can be used to invoke HTTP resources. The toolkit must be initialized with the service URL and the HTTP tools that belong to a single API.
Constructor
Initializes the toolkit with the given service URL and HTTP tools.
init (string serviceUrl, HttpTool[] httpTools, ClientConfiguration clientConfig, map<string|string[]> headers)
getTools
function getTools() returns ToolConfig[]
Useful to retrieve the Tools extracted from the HttpTools.
Return Type
- ToolConfig[] - An array of Tools corresponding to the HttpTools
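A sketch combining spec extraction with the toolkit; the fallback service URL is illustrative:

import ballerinax/ai;

public function main() returns error? {
    ai:HttpApiSpecification & readonly apiSpec = check ai:extractToolsFromOpenApiSpecFile("openapi.yaml");
    ai:HttpServiceToolKit toolKit = check new (apiSpec.serviceUrl ?: "http://localhost:9090", apiSpec.tools);
    ai:ToolConfig[] tools = toolKit.getTools();
}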
ai: Iterator
An iterator to iterate over the agent's execution.
Constructor
Initialize the iterator with the agent and the query.
init (BaseAgent agent, string sessionId, *ExecutionProgress progress)
- agent BaseAgent - Agent instance to be executed
- sessionId string - The ID associated with the agent memory
- progress *ExecutionProgress -
iterator
function iterator() returns object {
public function next() returns record {|ExecutionResult|LlmChatResponse|ExecutionError|Error value;|}?;
}
Iterate over the agent's execution steps.
Return Type
- object { public function next() returns record {|ExecutionResult|LlmChatResponse|ExecutionError|Error value;|}?; } - a record with the execution step or an error if the agent failed
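A sketch of consuming the iterator with foreach; the agent value and query are assumptions:

function traceExecution(ai:BaseAgent agent) {
    ai:Iterator agentIterator = new (agent, "session-1", query = "What is 8 + 9?");
    foreach ai:ExecutionResult|ai:LlmChatResponse|ai:ExecutionError|ai:Error step in agentIterator {
        // Inspect each tool result, chat response, or error as the agent progresses
    }
}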
ai: MessageWindowChatMemory
Provides an in-memory chat message window with a limit on stored messages.
Constructor
Initializes a new memory window with a default or given size.
init (int size)
- size int 10 - The maximum capacity for stored messages
get
function get(string sessionId) returns ChatMessage[]|MemoryError
Retrieves a copy of all stored messages, with an optional system prompt.
Parameters
- sessionId string - The ID associated with the memory
Return Type
- ChatMessage[]|MemoryError - A copy of the messages, or an ai:Error if the operation fails
update
function update(string sessionId, ChatMessage message) returns MemoryError?
Adds a message to the window.
Parameters
- sessionId string - The ID associated with the memory
- message ChatMessage - The ChatMessage to store or use as system prompt
Return Type
- MemoryError? - nil on success, or an ai:Error if the operation fails
delete
function delete(string sessionId) returns MemoryError?
Removes all messages from the memory.
Parameters
- sessionId string - The ID associated with the memory
Return Type
- MemoryError? - nil on success, or an ai:Error if the operation fails
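A sketch of using the window memory directly; the ai:USER role constant is assumed from the ROLE enum:

import ballerinax/ai;

public function main() returns error? {
    ai:MessageWindowChatMemory memory = new (20);
    ai:ChatUserMessage userMessage = {role: ai:USER, content: "Hi, I need help with fractions."};
    check memory.update("session-1", userMessage);
    ai:ChatMessage[] history = check memory.get("session-1");
}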
ai: ToolStore
Constructor
Register tools to the agent. These tools will be used by the LLM to perform tasks.
init ((BaseToolKit|ToolConfig|FunctionTool)... tools)
- tools (BaseToolKit|ToolConfig|FunctionTool)... - A list of tools that are available to the LLM
Fields
Clients
ai: Agent
Represents an agent.
Constructor
Initialize an Agent.
init (*AgentConfiguration config)
- config *AgentConfiguration - Configuration used to initialize an agent
run
Executes the agent for a given user query.
ai: AnthropicProvider
AnthropicProvider is a client class that provides an interface for interacting with Anthropic language models.
Constructor
Initializes the Anthropic model with the given connection configuration and model configuration.
init (string apiKey, ANTHROPIC_MODEL_NAMES modelType, string apiVersion, string serviceUrl, int maxTokens, decimal temperature, *ConnectionConfig connectionConfig)
- apiKey string - The Anthropic API key
- modelType ANTHROPIC_MODEL_NAMES - The Anthropic model name
- apiVersion string - The Anthropic API version (e.g., "2023-06-01")
- serviceUrl string ANTHROPIC_SERVICE_URL - The base URL of Anthropic API endpoint
- maxTokens int DEFAULT_MAX_TOKEN_COUNT - The upper limit for the number of tokens in the response generated by the model
- temperature decimal DEFAULT_TEMPERATURE - The temperature for controlling randomness in the model's output
- connectionConfig *ConnectionConfig - Additional HTTP connection configuration
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Uses the Anthropic API to generate a response.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] (default []) - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion (not used in this implementation)
Return Type
- ChatAssistantMessage|LlmError - Chat response or an error in case of failures
ai: AzureOpenAiProvider
AzureOpenAiProvider is a client class that provides an interface for interacting with Azure-hosted OpenAI language models.
Constructor
Initializes the Azure OpenAI model with the given connection configuration and model configuration.
init (string serviceUrl, string apiKey, string deploymentId, string apiVersion, int maxTokens, decimal temperature, *ConnectionConfig connectionConfig)
- serviceUrl string - The base URL of Azure OpenAI API endpoint
- apiKey string - The Azure OpenAI API key
- deploymentId string - The deployment identifier for the specific model deployment in Azure
- apiVersion string - The Azure OpenAI API version (e.g., "2023-07-01-preview")
- maxTokens int DEFAULT_MAX_TOKEN_COUNT - The upper limit for the number of tokens in the response generated by the model
- temperature decimal DEFAULT_TEMPERATURE - The temperature for controlling randomness in the model's output
- connectionConfig *ConnectionConfig - Additional HTTP connection configuration
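For example, with placeholder values:

final ai:AzureOpenAiProvider azureOpenAiModel = check new (
    "https://<resource-name>.openai.azure.com",
    "<azure-api-key>",
    "<deployment-id>",
    "2023-07-01-preview"
);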
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Sends a chat request to the OpenAI model with the given messages and tools.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion
Return Type
- ChatAssistantMessage|LlmError - Function to be called, chat response, or an error in case of failures
ai: ChatClient
A client class for interacting with a chat service.
Constructor
Initializes the ChatClient with the provided service URL and configuration.
init (string serviceUrl, *ChatClientConfiguration clientConfig)
- serviceUrl string - The base URL of the chat service.
- clientConfig *ChatClientConfiguration - Configuration options for the chat client.
post chat
function post chat(ChatReqMessage request) returns ChatRespMessage|error
Handles incoming chat messages by sending a request to the chat service.
Parameters
- request ChatReqMessage - The chat request message to be sent.
Return Type
- ChatRespMessage|error - A ChatRespMessage containing the response from the chat service, or an error if the request fails.
ai: FunctionCallAgent
Function call agent. This agent uses the OpenAI function call API to perform the tool selection.
Constructor
Initialize an Agent.
init (ModelProvider model, (BaseToolKit|ToolConfig|FunctionTool)[] tools, Memory? memory)
- model ModelProvider - LLM model instance
- tools (BaseToolKit|ToolConfig|FunctionTool)[] - Tools to be used by the agent
- memory Memory? new MessageWindowChatMemory() - The memory associated with the agent.
run
function run(string query, int maxIter, string|map<json> context, boolean verbose, string sessionId) returns record {| steps (ExecutionResult|ExecutionError)[], answer string |}
Execute the agent for a given user's query.
Parameters
- query string - Natural language commands to the agent
- maxIter int (default 5) - Maximum number of iterations the agent will run to execute the task
- verbose boolean (default true) - If true, prints the reasoning steps
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the agent memory
Return Type
- record {| steps (ExecutionResult|ExecutionError)[], answer string |} - Returns the execution steps tracing the agent's reasoning and outputs from the tools
parseLlmResponse
function parseLlmResponse(json llmResponse) returns LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError
Parse the function calling API response and extract the tool to be executed.
Parameters
- llmResponse json - Raw LLM response
Return Type
- LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError - A record containing the tool decided by the LLM, chat response or an error if the response is invalid
selectNextTool
function selectNextTool(ExecutionProgress progress, string sessionId) returns json|LlmError
Use LLM to decide the next tool/step based on the function calling APIs.
Parameters
- progress ExecutionProgress - Execution progress with the current query and execution history
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the agent memory
Return Type
- json|LlmError - LLM response containing the tool or chat response (or an error if the call fails)
ai: MistralAiProvider
MistralAiProvider is a client class that provides an interface for interacting with Mistral AI language models.
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Uses the function call API to determine the next function to be called.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion
Return Type
- ChatAssistantMessage|LlmError - A ChatAssistantMessage, or an LlmError in case of failures
ai: OllamaProvider
Represents a client for interacting with Ollama models.
Constructor
Initializes the client with the given connection configuration and model configuration.
init (string modelType, string serviceUrl, *OllamaModelParameters modleParameters, *ConnectionConfig connectionConfig)
- modelType string - The Ollama model name
- serviceUrl string OLLAMA_DEFAULT_SERVICE_URL - The base URL for the Ollama API endpoint
- modleParameters *OllamaModelParameters - Additional model parameters
- connectionConfig *ConnectionConfig - Additional connection configuration
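For example, assuming the llama3.2 model has been pulled and the Ollama service is running at the default URL:

final ai:OllamaProvider ollamaModel = check new ("llama3.2");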
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Sends a chat request to the Ollama model with the given messages and tools.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] (default []) - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion
Return Type
- ChatAssistantMessage|LlmError - Function to be called, chat response, or an error in case of failures
ai: OpenAiProvider
OpenAiProvider is a client class that provides an interface for interacting with OpenAI language models.
Constructor
Initializes the OpenAI model with the given connection configuration and model configuration.
init (string apiKey, OPEN_AI_MODEL_NAMES modelType, string serviceUrl, int maxTokens, decimal temperature, *ConnectionConfig connectionConfig)
- apiKey string - The OpenAI API key
- modelType OPEN_AI_MODEL_NAMES - The OpenAI model name
- serviceUrl string OPENAI_SERVICE_URL - The base URL of OpenAI API endpoint
- maxTokens int DEFAULT_MAX_TOKEN_COUNT - The upper limit for the number of tokens in the response generated by the model
- temperature decimal DEFAULT_TEMPERATURE - The temperature for controlling randomness in the model's output
- connectionConfig *ConnectionConfig - Additional HTTP connection configuration
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Sends a chat request to the OpenAI model with the given messages and tools.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion
Return Type
- ChatAssistantMessage|LlmError - Function to be called, chat response, or an error in case of failures
ai: ReActAgent
A ReAct agent that uses ReAct prompting to answer questions using tools.
Constructor
Initialize an Agent.
init (ModelProvider model, (BaseToolKit|ToolConfig|FunctionTool)[] tools, Memory? memory)
- model ModelProvider - LLM model instance
- tools (BaseToolKit|ToolConfig|FunctionTool)[] - Tools to be used by the agent
- memory Memory? new MessageWindowChatMemory() - The memory associated with the agent.
run
function run(string query, int maxIter, string|map<json> context, boolean verbose, string sessionId) returns record {| steps (ExecutionResult|ExecutionError)[], answer string |}
Execute the agent for a given user's query.
Parameters
- query string - Natural language commands to the agent
- maxIter int (default 5) - Maximum number of iterations the agent will run to execute the task
- verbose boolean (default true) - If true, prints the reasoning steps
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the agent memory
Return Type
- record {| steps (ExecutionResult|ExecutionError)[], answer string |} - Returns the execution steps tracing the agent's reasoning and outputs from the tools
parseLlmResponse
function parseLlmResponse(json llmResponse) returns LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError
Parse the ReAct LLM response and extract the tool to be executed.
Parameters
- llmResponse json - Raw LLM response
Return Type
- LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError - A record containing the tool decided by the LLM, chat response or an error if the response is invalid
selectNextTool
function selectNextTool(ExecutionProgress progress, string sessionId) returns json|LlmError
Use LLM to decide the next tool/step based on the ReAct prompting.
Parameters
- progress ExecutionProgress - Execution progress with the current query and execution history
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the agent memory
Return Type
- json|LlmError - LLM response containing the tool or chat response (or an error if the call fails)
Service types
ai: ChatService
Defines a chat service interface that handles incoming chat messages.
post chat
function post chat(ChatReqMessage request) returns ChatRespMessage|error
Parameters
- request ChatReqMessage - The incoming chat request message
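A sketch of a service implementing this interface; the listener's port argument and the agent wiring are assumptions:

import ballerinax/ai;

listener ai:Listener chatListener = new (8080);

service ai:ChatService on chatListener {
    resource function post chat(ai:ChatReqMessage request) returns ai:ChatRespMessage|error {
        // Delegate the user's message to an agent defined elsewhere (e.g., the quickstart's
        // mathTutorAgent), assuming run returns the final answer as a string
        string answer = check mathTutorAgent->run(request.message, sessionId = request.sessionId);
        return {message: answer};
    }
}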
Enums
ai: AgentType
Represents the different types of agents supported by the module.
Members
ai: ANTHROPIC_MODEL_NAMES
Model types for Anthropic
Members
ai: EncodingStyle
Members
ai: HeaderStyle
Members
ai: HttpMethod
Supported HTTP methods.
Members
ai: InputType
Supported input types by the Tool schemas.
Members
ai: MISTRAL_AI_MODEL_NAMES
Model types for Mistral AI
Members
ai: OPEN_AI_MODEL_NAMES
Model types for OpenAI
Members
ai: ParameterLocation
Members
ai: ROLE
Roles for the chat messages.
Members
Listeners
ai: Listener
A server listener for handling chat service requests.
attach
function attach(ChatService chatService, string[]|string? name) returns error?
detach
function detach(ChatService chatService) returns error?
Parameters
- chatService ChatService -
'start
function 'start() returns error?
gracefulStop
function gracefulStop() returns error?
immediateStop
function immediateStop() returns error?
Annotations
ai: AgentTool
Represents the annotation of a function tool.
Records
ai: AdditionInfoFlags
Defines additional information to be extracted from the OpenAPI specification.
Fields
- extractDescription boolean(default false) - Flag to extract description of parameters and schema attributes from the OpenAPI specification
- extractDefault boolean(default false) - Flag to extract default values of parameters and schema attributes from the OpenAPI specification
ai: AgentConfiguration
Provides a set of configurations for the agent.
Fields
- systemPrompt SystemPrompt - The system prompt assigned to the agent
- model ModelProvider - The model used by the agent
- tools (BaseToolKit|ToolConfig|FunctionTool)[](default []) - The tools available for the agent
- agentType AgentType(default FUNCTION_CALL_AGENT) - Type of the agent
- maxIter int(default 5) - The maximum number of iterations the agent performs to complete the task
- verbose boolean(default false) - Specifies whether verbose logging is enabled
ai: AllOfInputSchema
Defines an allOf input field in the schema. Follows OpenAPI 3.x specification.
Fields
- Fields Included from *BaseInputSchema
- allOf JsonSubSchema[] - List of possible input types
ai: AllOfSchema
All of schema object.
Fields
- Fields Included from *BaseSchema
- allOf Schema[] - List of schemas that should match
ai: AnyOfInputSchema
Defines an anyOf input field in the schema. Follows OpenAPI 3.x specification.
Fields
- Fields Included from *BaseInputSchema
- anyOf JsonSubSchema[] - List of possible input types
ai: AnyOfSchema
Any of schema object.
Fields
- Fields Included from *BaseSchema
- anyOf Schema[] - List of schemas that should match
- discriminator? Discriminator - Discriminator
ai: ArrayInputSchema
Defines an array input field in the schema.
Fields
- Fields Included from *BaseInputTypeSchema
- 'type ARRAY(default ARRAY) - Input data type. Should be ARRAY.
- items JsonSubSchema - Schema of the array items
- default? json[] - Default value for the array
ai: ArraySchema
Array schema object.
Fields
- Fields Included from *BaseTypeSchema
- 'type ARRAY(default ARRAY) - Type of the array schema
- uniqueItems? boolean - Whether the array items are unique
- items Schema - Schema of the array items
- minItems? int - Minimum number of items in the array
- maxItems? int - Maximum number of items in the array
- properties? never - Not allowed properties
ai: BaseInputSchema
Defines a base input type schema.
Fields
- description? string - Description of the input
- default? json - Default value of the input
- nullable? boolean - Indicates whether the value can be null.
ai: BaseInputTypeSchema
Defines a base input schema with type field.
Fields
- Fields Included from *BaseInputSchema
- 'type InputType - Input data type
ai: BasePrimitiveTypeSchema
Base primitive type schema object.
Fields
- Fields Included from *BaseTypeSchema
- properties? never - Cannot have properties in a primitive type schema
- items? never - Cannot have items in a primitive type schema
ai: BaseSchema
Base schema object.
Fields
- description? string - Description of the schema
- default? json - Default value of the schema
- nullable? boolean - Whether the value is nullable
- 'xml? XmlSchema - Xml schema
- \$ref? never - Not allowed $ref property
ai: BaseTypeSchema
Base type schema object.
Fields
- Fields Included from *BaseSchema
- 'type string - Type of the schema
- anyOf? never - Not allowed anyOf
- oneOf? never - Not allowed oneOf
- allOf? never - Not allowed allOf
- not? never - Not allowed not
ai: BooleanSchema
Boolean schema object.
Fields
- Fields Included from *BasePrimitiveTypeSchema
- 'type BOOLEAN - Type of the boolean schema
ai: ChatAssistantMessage
Assistant chat message record.
Fields
- role ASSISTANT - Role of the message
- content string?(default ()) - The contents of the assistant message. Required unless tool_calls or function_call is specified
- name? string - An optional name for the participant. Provides the model information to differentiate between participants of the same role
- toolCalls FunctionCall[]?(default ()) - The tool calls generated by the model, such as function calls
ai: ChatClientConfiguration
Represents the configuration for a chat client.
ai: ChatCompletionFunctions
Function definitions for function calling API.
Fields
- name string - Name of the function
- description string - Description of the function
- parameters? JsonInputSchema - Parameters of the function
ai: ChatFunctionMessage
Function message record.
Fields
- role FUNCTION - Role of the message
- content string?(default ()) - Content of the message
- name string - Name of the function when the message is a function call
- id? string - Identifier for the tool call
ai: ChatReqMessage
Represents a request message for the chat service.
Fields
- sessionId string - A unique identifier for the chat session.
- message string - The content of the chat message sent by the user.
ai: ChatRespMessage
Represents a response message from the chat service.
Fields
- message string - The response message generated by the chat service.
ai: ChatSystemMessage
System chat message record.
Fields
- role SYSTEM - Role of the message
- content string - Content of the message
- name? string - An optional name for the participant. Provides the model information to differentiate between participants of the same role
ai: ChatUserMessage
User chat message record.
Fields
- role USER - Role of the message
- content string - Content of the message
- name? string - An optional name for the participant. Provides the model information to differentiate between participants of the same role
ai: Components
Map of component objects.
Fields
- requestBodies? map<RequestBody|Reference> - A map of reusable request body objects
ai: ConnectionConfig
Configurations for controlling the behaviours when communicating with a remote HTTP endpoint.
Fields
- httpVersion HttpVersion(default http:HTTP_2_0) - The HTTP version understood by the client
- http1Settings? ClientHttp1Settings - Configurations related to HTTP/1.x protocol
- http2Settings? ClientHttp2Settings - Configurations related to HTTP/2 protocol
- timeout decimal(default 60) - The maximum time to wait (in seconds) for a response before closing the connection
- forwarded string(default "disable") - The choice of setting the forwarded/x-forwarded header
- poolConfig? PoolConfiguration - Configurations associated with request pooling
- cache? CacheConfig - HTTP caching related configurations
- compression Compression(default http:COMPRESSION_AUTO) - Specifies the way of handling the compression (accept-encoding) header
- circuitBreaker? CircuitBreakerConfig - Configurations associated with the behaviour of the Circuit Breaker
- retryConfig? RetryConfig - Configurations associated with retrying
- responseLimits? ResponseLimitConfigs - Configurations associated with inbound response size limits
- secureSocket? ClientSecureSocket - SSL/TLS-related options
- proxy? ProxyConfig - Proxy server related options
- validation boolean(default true) - Enables the inbound payload validation functionality provided by the constraint package. Enabled by default
ai: ConstantValueSchema
Defines a constant value field in the schema.
Fields
- 'const json - The constant value.
ai: Discriminator
Discriminator object.
Fields
- propertyName string - Name of the property that specifies the type
ai: Encoding
Describes an encoding definition applied to a schema property.
Fields
- style? string - Describes how a specific property value will be serialized depending on its type
- explode? boolean - When this is true, property values of type array or object generate separate parameters for each value of the array, or key-value-pair of the map
- contentType? string - The Content-Type for encoding a specific property
ai: ExecutionError
Fields
- llmResponse json - Response generated by the LLM
- 'error LlmInvalidGenerationError|ToolExecutionError|MemoryError - Error caused during the execution
- observation string - Observation on the caused error as additional instruction to the LLM
ai: ExecutionProgress
Execution progress record
Fields
- query string - Question to the agent
- history ExecutionStep[](default []) - Execution history up to the current action
ai: ExecutionResult
Execution result information
Fields
- tool LlmToolResponse - Tool decided by the LLM during the reasoning
- observation anydata|error - Observations produced by the tool during the execution
ai: ExecutionStep
Execution step information
Fields
- llmResponse json - Response generated by the LLM
- observation anydata|error - Observations produced by the tool during the execution
ai: FunctionCall
Function call record
Fields
- name string - Name of the function
- arguments string - Arguments of the function
- id? string - Identifier for the tool call
ai: Header
Describes HTTP headers.
Fields
- required? boolean - Whether this header parameter is mandatory
- description? string - A brief description of the header parameter
- allowEmptyValue? string - Whether empty value is allowed
- style? HeaderStyle - Describes how a specific property value will be serialized depending on its type
- explode? boolean - When this is true, property values of type array or object generate separate parameters for each value of the array, or key-value-pair of the map
- schema? Schema - Schema of the header parameter
- \$ref? never - Not allowed $ref
ai: HttpApiSpecification
Provides extracted tools and service URL from the OpenAPI specification.
Fields
- serviceUrl? string - Extracted service URL from the OpenAPI specification, if available
- tools HttpTool[] - Extracted HTTP tools from the OpenAPI specification
ai: HttpOutput
Defines an HTTP output record for requests.
Fields
- code int - HTTP status code of the response
- path string - HTTP path URL with encoded parameters
- body? json|xml - Response payload
ai: HttpTool
Defines an HTTP tool. This is a special type of tool that can be used to invoke HTTP resources.
Fields
- name string - Name of the HTTP resource tool
- description string - Description of the HTTP resource tool used by the LLM
- method HttpMethod - HTTP method type (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)
- path string - Path of the HTTP resource
- parameters? map<ParameterSchema> - Path and query parameter definitions of the HTTP resource
- requestBody? RequestBodySchema - Request body definition of the HTTP resource
ai: IntegerSchema
Integer schema object.
Fields
- Fields Included from *BasePrimitiveTypeSchema
- 'type INTEGER - Type of the integer schema
- format? string - Format of the value
- minimum? int - Minimum value of the integer value
- maximum? int - Maximum value of the integer value
- exclusiveMinimum? boolean - Whether the minimum value is exclusive
- exclusiveMaximum? boolean - Whether the maximum value is exclusive
- multipleOf? int - Multiplier of the integer value
ai: InternalValueSchema
Defines an internal value field in the schema.
Fields
- Fields Included from *ConstantValueSchema
- const json
ai: LlmChatResponse
A chat response by the LLM.
Fields
- content string - A text response to the question
ai: LlmToolResponse
Tool selected by the LLM to be executed by the agent
Fields
- name string - Name of the selected tool
- arguments map<json>?(default {}) - Input to the tool
- id? string - Identifier for the tool call
ai: MediaType
Defines media type of a parameter, response body or header.
Fields
- schema Schema(default {}) - Schema of the content
ai: NotInputSchema
Defines a not input field in the schema. Follows OpenAPI 3.x specification.
Fields
- Fields Included from *BaseInputSchema
- not JsonSubSchema - Schema that is not accepted as an input
ai: NotSchema
Not schema object.
Fields
- Fields Included from *BaseSchema
- not Schema - Schema that should not match
ai: NumberSchema
Number schema object.
Fields
- Fields Included from *BasePrimitiveTypeSchema
- format? string - Format of the value
- exclusiveMinimum? boolean - Whether the minimum value is exclusive
- exclusiveMaximum? boolean - Whether the maximum value is exclusive
ai: ObjectInputSchema
Defines an object input field in the schema.
Fields
- Fields Included from *BaseInputTypeSchema
- 'type OBJECT(default OBJECT) - Input data type. Should be OBJECT.
- required? string[] - List of required properties
- properties? map<JsonSubSchema> - Schema of the object properties
ai: ObjectSchemaType1
Defines an object schema where the type is specified and the properties are optional.
Fields
- Fields Included from *BaseTypeSchema
- 'type OBJECT - Type of the object schema
- minProperties? int - Minimum number of properties in the object
- maxProperties? int - Maximum number of properties in the object
- discriminator? Discriminator - Discriminator
- items? never - Not allowed items. Distinction between array and object
ai: ObjectSchemaType2
Defines an object schema with the properties defined and the type unspecified.
Fields
- Fields Included from *ObjectSchemaType1
- type "object"
- minProperties int
- maxProperties int
- required boolean|string[]
- properties map<Schema>
- additionalProperties boolean|IntegerSchema|NumberSchema|StringSchema|BooleanSchema|ArraySchema|ObjectSchemaType1|ObjectSchemaType2|OneOfSchema|AllOfSchema|AnyOfSchema|NotSchema|Reference
- discriminator Discriminator
- items never
- anyOf never
- oneOf never
- allOf never
- not never
- description string
- default json
- nullable boolean
- xml XmlSchema
- \$ref never
- anydata...
- 'type? never - To match when type is not specified, but properties are specified
ai: OllamaModelParameters
Represents the model parameters for Ollama text generation. These parameters control the behavior and output of the model.
Fields
- mirostat 0|1|2 (default 0) - Enable Mirostat sampling for controlling perplexity. 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
- mirostatEta float(default 0.1) - Influences how quickly the algorithm responds to feedback from the generated text. A lower value results in slower adjustments, while a higher value makes the model more responsive.
- mirostatTau float(default 5.0) - Controls the balance between coherence and diversity of the output. A lower value results in more focused and coherent text.
- numCtx int(default 2048) - Sets the size of the context window used to generate the next token.
- repeatLastN int(default 64) - Sets how far back the model should look to prevent repetition. 0 = disabled, -1 = num_ctx
- repeatPenalty float(default 1.1) - Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient.
- temperature float(default 0.8) - Controls the creativity of the model's responses. A higher value makes the output more diverse, while a lower value makes it more focused.
- seed int(default 0) - Sets the random number seed for deterministic text generation. A specific value ensures the same output for identical prompts.
- numPredict int(default -1) - Maximum number of tokens to generate. -1 allows infinite generation.
- topK int(default 40) - Controls randomness by selecting the top-k most likely next words. A higher value (e.g., 100) increases diversity, while a lower value (e.g., 10) makes responses more conservative.
- topP float(default 0.9) - Controls randomness by considering the cumulative probability of choices. A higher value (e.g., 0.95) increases diversity, while a lower value (e.g., 0.5) makes responses more conservative.
- minP float(default 0.0) - Ensures a balance between quality and variety. Filters out low-probability tokens relative to the highest probability token.
ai: OneOfInputSchema
Defines a oneOf input field in the schema. Follows OpenAPI 3.x specification.
Fields
- oneOf JsonSubSchema[] - List of possible input types
- nullable? boolean - Indicates whether the value can be null.
- description? string - Description of the input
ai: OneOfSchema
One of schema object.
Fields
- Fields Included from *BaseSchema
- oneOf Schema[] - List of schemas that should match
- discriminator? Discriminator - Discriminator
ai: OpenApiSpec
OpenAPI Specification 3.1.0
Fields
- openapi string - OpenAPI version
- servers? Server[] - Server information
- paths? Paths - Server resource definitions
- components? Components - Reference objects
ai: Operation
Describes a single API operation on a path.
Fields
- tags? string[] - A list of tags for API documentation control
- summary? string - A short summary of what the operation does
- description? string - A detailed explanation of the operation behavior
- operationId? string - Operation ID for referencing the operation
- requestBody? RequestBody|Reference - The request body applicable for this operation
ai: Parameter
Describes a single operation parameter.
Fields
- name string - Name of the parameter
- 'in ParameterLocation - The location of the parameter
- required? boolean - Whether the parameter is mandatory
- description? string - A brief description of the parameter
- allowEmptyValue? boolean - Whether empty value is allowed
- style? EncodingStyle - Describes how a specific property value will be serialized depending on its type
- explode? boolean - When this is true, property values of type array or object generate separate parameters for each value of the array, or key-value-pair of the map
- schema? Schema - Schema of the parameter
- nullable? boolean - Null value is allowed
ai: ParameterSchema
Defines an HTTP parameter schema (can be a query parameter or path parameter).
Fields
- description? string - A brief description of the parameter
- required? boolean - Whether the parameter is mandatory
- style? EncodingStyle - Describes how a specific property value will be serialized depending on its type
- explode? boolean - When this is true, property values of type array or object generate separate parameters for each value of the array, or key-value-pair of the map.
- nullable? boolean - Null value is allowed
- allowEmptyValue? boolean - Whether empty value is allowed
- mediaType? string - Content type of the schema
- schema JsonSubSchema - Parameter schema
ai: PathItem
Describes a single path item.
Fields
- description? string - Description of the path item
- summary? string - Summary of the path item
- get? Operation - GET operation
- post? Operation - POST operation
- put? Operation - PUT operation
- delete? Operation - DELETE operation
- options? Operation - OPTIONS operation
- head? Operation - HEAD operation
- patch? Operation - PATCH operation
- trace? Operation - TRACE operation
- servers? Server[] - Server information for the path
- \$ref? never - Not allowed $ref
ai: Paths
Map of pathItem objects.
Fields
- PathItem|Reference... - Rest field
ai: PrimitiveInputSchema
Defines a primitive input field in the schema.
Fields
- Fields Included from *BaseInputTypeSchema
- format? string - Format of the input. This is not applicable for BOOLEAN type.
- pattern? string - Pattern of the input. This is only applicable for STRING type.
- 'enum? (PrimitiveType?)[] - Enum values of the input. This is only applicable for STRING type.
- default? PrimitiveType - Default value of the input
ai: Reference
Defines a reference object.
Fields
- \$ref string - Reference to a component
- summary? string - Short description of the target component
- 'xml? XmlSchema - Xml schema
- description? string - Detailed description of the target component
ai: RequestBody
Describes a single request body.
Fields
- description? string - A brief description of the request body. This could contain examples of use.
- required? boolean - Whether the request body is mandatory in the request.
ai: RequestBodySchema
Fields
- description? string - A brief description of the request body
- mediaType? string - Content type of the request body
- schema JsonSubSchema - Request body schema
ai: Response
Describes a single response from an API Operation.
Fields
- description? string - A short description of the response
- \$ref? never - Not allowed $ref
ai: Responses
Describes the responses from an API Operation.
Fields
- Response|Reference... - Rest field
ai: Server
Server information object.
Fields
- url string - A URL to the target host
- description? string - An optional string describing the host designated by the URL
ai: StringSchema
String schema object.
Fields
- Fields Included from *BasePrimitiveTypeSchema
- 'type STRING(default STRING) - Type of the string schema
- format? string - Format of the string
- minLength? int - Minimum length of the string value
- maxLength? int - Maximum length of the string value
- pattern? string - Regular expression pattern of the string value
- 'enum? (PrimitiveType?)[] - Enum values of the string value
ai: SystemPrompt
Represents the system prompt given to the agent.
Fields
- role string - The role or responsibility assigned to the agent
- instructions string - Specific instructions for the agent
ai: Tool
This is the tool used by LLMs during reasoning. This tool is the same as the ToolConfig record, but it has a clear separation between the variables that should be generated with the help of the LLMs and the constants that are defined by the users.
Fields
- name string - Name of the tool
- description string - Description of the tool
- variables? JsonInputSchema - Variables that should be generated with the help of the LLMs
- constants map<json>(default {}) - Constants that are defined by the users
- caller function() () - Function that should be called to execute the tool
ai: ToolAnnotationConfig
Defines the configuration of the Tool annotation.
Fields
- name? string - The name of the tool. If not provided, defaults to the function pointer name.
- description? string - A description of the tool. This is used by LLMs to understand the tool's behavior. If not provided, the doc comment is used as the description.
- parameters? ObjectInputSchema? - The input schema expected by the tool. If the tool does not expect any input, this should be null.
If not provided, the input schema is generated automatically.
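For example, the annotation can override the generated name and description while leaving the input schema to be derived automatically (a sketch):

@ai:AgentTool {
    name: "multiplyNumbers",
    description: "Multiplies two integers and returns the product"
}
isolated function mult(int a, int b) returns int => a * b;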
ai: ToolConfig
Defines a tool. This is the only tool type directly understood by the agent. All other tool types are converted to this type using toolkits.
Fields
- name string - Name of the tool
- description string - A description of the tool. This is used by the LLMs to understand the behavior of the tool.
- parameters JsonInputSchema?(default ()) - Input schema expected by the tool. If the tool doesn't expect any input, this should be null.
- caller FunctionTool - Pointer to the function that should be called when the tool is invoked.
ai: ToolOutput
Output from executing an action
Fields
- value anydata|error - Output value of the tool
ai: XmlSchema
Fields
- name? string - Replaces the name of the element/attribute used for the described schema property.
- namespace? string - The URI of the namespace definition.
- prefix? string - The prefix to be used for the name.
- attribute? boolean - Declares whether the property definition translates to an attribute instead of an element.
- wrapped? boolean - May be used only for an array definition.
Errors
ai: Error
Defines the common error type for the module.
ai: HttpResponseParsingError
Any error occurred during parsing HTTP response is classified under this error type.
ai: HttpServiceToolKitError
Errors that occur while running the HTTP service toolkit.
ai: IncompleteSpecificationError
Errors due to incomplete OpenAPI specification.
ai: InvalidParameterDefinition
Errors thrown due to an invalid parameter definition that does not include either a schema or content.
ai: InvalidReferenceError
Errors due to invalid or broken references in the OpenAPI specification.
ai: LlmConnectionError
Errors occurred during LLM generation due to connection.
ai: LlmError
Any error occurred during LLM generation is classified under this error type.
ai: LlmInvalidGenerationError
Errors occurred due to invalid LLM generation.
ai: LlmInvalidResponseError
Errors occurred due to unexpected responses from the LLM.
ai: MaxIterationExceededError
Represents an error that occurs when the maximum number of iterations has been exceeded.
ai: MemoryError
Represents errors that occur during memory-related operations.
ai: MissingHttpParameterError
Errors occurred due to missing mandatory path or query parameters.
ai: OpenApiParsingError
Any error occurred during parsing OpenAPI specification is classified under this error type.
ai: ParsingStackOverflowError
Stack overflow errors due to a lengthy OpenAPI specification or cyclic references in the specification.
ai: TaskCompletedError
Errors occurred due to termination of the Agent's execution.
ai: ToolExecutionError
Errors during tool execution.
ai: ToolInvalidInputError
Errors occurred due to invalid input to the tool generated by the LLM.
ai: ToolInvalidOutputError
Errors due to unexpected output from the tool.
ai: ToolNotFoundError
Errors occurred due to invalid tool name generated by the LLM.
ai: UnsupportedMediaTypeError
Errors due to unsupported media type.
ai: UnsupportedOpenApiVersion
Errors due to unsupported OpenAPI specification version.
ai: UnsupportedSerializationError
Errors occurred due to unsupported path parameter serializations.
Object types
ai: BaseAgent
parseLlmResponse
function parseLlmResponse(json llmResponse) returns LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError
Parse the llm response and extract the tool to be executed.
Parameters
- llmResponse json - Raw LLM response
Return Type
- LlmToolResponse|LlmChatResponse|LlmInvalidGenerationError - A record containing the tool decided by the LLM, chat response or an error if the response is invalid
selectNextTool
function selectNextTool(ExecutionProgress progress, string sessionId) returns json|LlmError
Use LLM to decide the next tool/step.
Parameters
- progress ExecutionProgress - Execution progress with the current query and execution history
- sessionId string (default DEFAULT_SESSION_ID) - The ID associated with the agent memory
Return Type
- json|LlmError - LLM response containing the tool or chat response (or an error if the call fails)
run
function run(string query, int maxIter, string|map<json> context, boolean verbose, string sessionId) returns record {| steps (ExecutionResult|ExecutionError)[], answer string |}
ai: BaseToolKit
Allows implementing custom toolkits by extending this type. Toolkits can help define new types of tools so that the agent can understand them.
getTools
function getTools() returns ToolConfig[]
Useful to retrieve the Tools extracted from the Toolkit.
Return Type
- ToolConfig[] - An array of Tools
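A minimal sketch of a custom toolkit; the tool itself is illustrative:

import ballerinax/ai;

# Returns a greeting for the given name
@ai:AgentTool
isolated function greet(string name) returns string => string `Hello, ${name}!`;

isolated class GreetingToolKit {
    *ai:BaseToolKit;

    // Expose the greet function as a ToolConfig understood by the agent
    public isolated function getTools() returns ai:ToolConfig[] {
        return ai:getToolConfigs([greet]);
    }
}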
ai: Memory
Represents the memory interface for the agents.
get
function get(string sessionId) returns ChatMessage[]|MemoryError
Retrieves all stored chat messages.
Parameters
- sessionId string - The ID associated with the memory
Return Type
- ChatMessage[]|MemoryError - An array of messages or an ai:Error
update
function update(string sessionId, ChatMessage message) returns MemoryError?
Adds a chat message to the memory.
Parameters
- sessionId string - The ID associated with the memory
- message ChatMessage - The message to store
Return Type
- MemoryError? - nil on success, or an ai:Error if the operation fails
delete
function delete(string sessionId) returns MemoryError?
Deletes all stored messages.
Parameters
- sessionId string - The ID associated with the memory
Return Type
- MemoryError? - nil on success, or an ai:Error if the operation fails
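A minimal sketch of a custom implementation backed by an in-memory map, assuming ChatMessage values are anydata and can be cloned:

import ballerinax/ai;

isolated class SimpleMemory {
    *ai:Memory;
    private final map<ai:ChatMessage[]> sessions = {};

    public isolated function get(string sessionId) returns ai:ChatMessage[]|ai:MemoryError {
        lock {
            ai:ChatMessage[]? messages = self.sessions[sessionId];
            return messages is () ? [] : messages.clone();
        }
    }

    public isolated function update(string sessionId, ai:ChatMessage message) returns ai:MemoryError? {
        lock {
            ai:ChatMessage[] messages = self.sessions[sessionId] ?: [];
            messages.push(message.clone());
            self.sessions[sessionId] = messages;
        }
    }

    public isolated function delete(string sessionId) returns ai:MemoryError? {
        lock {
            _ = self.sessions.removeIfExists(sessionId);
        }
    }
}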
ai: ModelProvider
Represents an extendable client for interacting with an AI model.
chat
function chat(ChatMessage[] messages, ChatCompletionFunctions[] tools, string? stop) returns ChatAssistantMessage|LlmError
Sends a chat request to the model with the given messages and tools.
Parameters
- messages ChatMessage[] - List of chat messages
- tools ChatCompletionFunctions[] (default []) - Tool definitions to be used for the tool call
- stop string? (default ()) - Stop sequence to stop the completion
Return Type
- ChatAssistantMessage|LlmError - Function to be called, chat response, or an error in case of failures
Union types
ai: PrimitiveType
PrimitiveType
Primitive types supported by the Tool schemas.
ai: JsonInputSchema
JsonInputSchema
Defines a JSON input schema
ai: JsonSubSchema
JsonSubSchema
Defines a JSON sub schema
ai: PrimitiveTypeSchema
PrimitiveTypeSchema
Primitive type schema object.
ai: ObjectSchema
ObjectSchema
Defines an object schema.
ai: Schema
Schema
Defines an OpenAPI schema.
ai: ChatMessage
ChatMessage
Chat message record.
Function types
ai: FunctionTool
function() ()
FunctionTool
Represents a type alias for an isolated function, representing a function tool.
Import
import ballerinax/ai;
Metadata
Version: 1.0.0
License: Apache-2.0
Compatibility
Platform: java21
Ballerina version: 2201.12.0
GraalVM compatible: Yes
Keywords
AI/Agent
Cost/Freemium
Agent
AI