Module data.csv
ballerina/data.csv Ballerina library
Functions
parseBytes
function parseBytes(byte[] csvBytes, ParseOptions options, typedesc<record {}[]|anydata[]> t) returns t|Error
Parse a byte[] as a subtype of record {}[] or anydata[][].
byte[] csvBytes = check io:fileReadBytes("example.csv");
record {int id; string name;}[] csv1 = check csv:parseBytes(csvBytes);
[int, string][] csv2 = check csv:parseBytes(csvBytes);
record {|int id;|}[] csv3 = check csv:parseBytes(csvBytes);
record {int id;}[] csv4 = check csv:parseBytes(csvBytes, {skipLines: [1]});
Parameters
- csvBytes byte[] - Source CSV byte array
- options ParseOptions (default {}) - Options to be used for filtering in the projection
- t typedesc<record {}[]|anydata[]> (default <>) - Target type
Return Type
- t|Error - On success, returns a value belonging to the given target type, else returns a csv:Error value.
parseList
function parseList(string[][] csvList, ParseListOptions options, typedesc<record {}[]|anydata[]> t) returns t|Error
Parse a string array of arrays as a subtype of record {}[] or anydata[][].
string[][] csvList = [["1", "John"], ["2", "Jane"]];
[int, string][] csv1 = check csv:parseList(csvList);
record {|int id;|}[] csv2 = check csv:parseList(csvList, {customHeaders: ["id", "name"]});
record {int id;}[] csv3 = check csv:parseList(csvList, {skipLines: [1], customHeaders: ["id", "name"]});
Parameters
- csvList string[][] - Source Ballerina string array of arrays value
- options ParseListOptions (default {}) - Options to be used for filtering in the projection
- t typedesc<record {}[]|anydata[]> (default <>) - Target type
Return Type
- t|Error - On success, returns a value belonging to the given target type, else returns a csv:Error value.
parseStream
function parseStream(stream<byte[], error?> csvByteStream, ParseOptions options, typedesc<record {}[]|anydata[]> t) returns t|Error
Parse a CSV byte block stream as a subtype of record {}[] or anydata[][].
stream<byte[], io:Error?> csvByteStream = check io:fileReadBlocksAsStream("example.csv");
record {int id; string name;}[] csv1 = check csv:parseStream(csvByteStream);
stream<byte[], io:Error?> csvByteStream2 = check io:fileReadBlocksAsStream("example.csv");
[int, string][] csv2 = check csv:parseStream(csvByteStream2);
stream<byte[], io:Error?> csvByteStream3 = check io:fileReadBlocksAsStream("example.csv");
record {|int id;|}[] csv3 = check csv:parseStream(csvByteStream3);
stream<byte[], io:Error?> csvByteStream4 = check io:fileReadBlocksAsStream("example.csv");
record {int id;}[] csv4 = check csv:parseStream(csvByteStream4, {skipLines: [1]});
Parameters
- csvByteStream stream<byte[], error?> - Source CSV byte block stream
- options ParseOptions (default {}) - Options to be used for filtering in the projection
- t typedesc<record {}[]|anydata[]> (default <>) - Target type
Return Type
- t|Error - On success, returns a value belonging to the given target type, else returns a csv:Error value.
parseString
function parseString(string csvString, ParseOptions options, typedesc<record {}[]|anydata[]> t) returns t|Error
Parse a CSV string as a subtype of record {}[] or anydata[][].
string csvString = string `id,name
1,John
3,Jane`;
record {int id; string name;}[] csv1 = check csv:parseString(csvString);
[int, string][] csv2 = check csv:parseString(csvString);
record {|int id;|}[] csv3 = check csv:parseString(csvString);
record {int id;}[] csv4 = check csv:parseString(csvString, {skipLines: [1]});
Parameters
- csvString string - Source CSV string value
- options ParseOptions (default {}) - Options to be used for filtering in the projection
- t typedesc<record {}[]|anydata[]> (default <>) - Target type
Return Type
- t|Error - On success, returns a value belonging to the given target type, else returns a csv:Error value.
transform
function transform(record {}[] csvRecords, TransformOptions options, typedesc<record {}[]|anydata[]> t) returns t|Error
Transform a value of type record {}[] into a subtype of record {}[] or anydata[][].
record {int id; string name;}[] csvRecords = [{id: 1, name: "John"}, {id: 2, name: "Jane"}];
[int, string][] csv1 = check csv:transform(csvRecords);
record {|int id;|}[] csv2 = check csv:transform(csvRecords);
record {int id;}[] csv3 = check csv:transform(csvRecords, {skipLines: [1]});
Parameters
- csvRecords record {}[] - Source Ballerina record array value
- options TransformOptions (default {}) - Options to be used for filtering in the projection
- t typedesc<record {}[]|anydata[]> (default <>) - Target type
Return Type
- t|Error - On success, returns a value belonging to the given target type, else returns a csv:Error value.
Enums
data.csv: LineTerminator
Enum representing possible line terminators.
Members
LF ("\n")
CRLF ("\r\n")
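As a brief illustration (not taken from the module docs), a line terminator can be selected through the lineTerminator field of ParseOptions; the CSV content below is made up:

import ballerina/data.csv;
import ballerina/io;

public function main() returns error? {
    // CSV content that uses CRLF line terminators.
    string csvString = "id,name\r\n1,John\r\n2,Jane";
    // Restrict parsing to CRLF (the default accepts both LF and CRLF).
    record {int id; string name;}[] rows = check csv:parseString(csvString, {lineTerminator: csv:CRLF});
    io:println(rows);
}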
data.csv: NilValue
Enum representing possible nil values.
Members
()
Annotations
data.csv: Name
The annotation is used to overwrite the existing record field name.
Records
data.csv: NameConfig
Defines the name to be used in place of the record field name.
Fields
- value string - The name to be used in place of the record field name
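A minimal sketch of applying the annotation; the Person record, its field names, and the CSV content are illustrative only:

import ballerina/data.csv;
import ballerina/io;

// Map the CSV header "first_name" onto the record field firstName.
type Person record {
    int id;
    @csv:Name {value: "first_name"}
    string firstName;
};

public function main() returns error? {
    string csvString = "id,first_name\n1,John\n2,Jane";
    Person[] people = check csv:parseString(csvString);
    io:println(people);
}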
data.csv: Options
Represents options for data projection.
Fields
- allowDataProjection record {nilAsOptionalField boolean, absentAsNilableType boolean}|false (default {}) - Allows data projection with specific settings. This configuration can be either a record or false. If it is a record, it contains the nilAsOptionalField and absentAsNilableType options. If it is set to false, data projection is not allowed.
- enableConstraintValidation boolean (default true) - If true, enables validation of constraints during processing.
- outputWithHeaders boolean (default false) - If true, when the result is a list, it will contain the headers as the first row.
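As a hedged sketch based on the field descriptions above (the record types and CSV content are made up), the projection settings can be passed to any of the parse functions:

import ballerina/data.csv;
import ballerina/io;

public function main() returns error? {
    string csvString = "id,name\n1,John\n2,Jane";

    // Disable projection: the target type is expected to cover the source fields exactly.
    record {|int id; string name;|}[] strict = check csv:parseString(csvString, {allowDataProjection: false});
    io:println(strict);

    // Enable projection with explicit settings for nil and absent field handling.
    record {int id;}[] projected = check csv:parseString(csvString, {
        allowDataProjection: {nilAsOptionalField: true, absentAsNilableType: true}
    });
    io:println(projected);
}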
data.csv: ParseListOptions
Represents options for treating a list as a record.
Fields
- Fields Included from *Options
- headerRows Unsigned32 (default 0) - If 0, all rows in the source data are treated as data rows. Otherwise, specifies the header rows (starting from 1) in the source data.
- customHeaders? string[] - Specifies the header names of the source data. These values overwrite the header values in the header rows. This field is mandatory if headerRows is greater than 1.
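A short, illustrative sketch of these two fields with parseList; the data values are made up and the header-resolution behavior is summarized from the descriptions above:

import ballerina/data.csv;
import ballerina/io;

public function main() returns error? {
    // The first row carries the header names; headerRows marks it as a header row.
    string[][] csvList = [["id", "name"], ["1", "John"], ["2", "Jane"]];
    record {int id; string name;}[] withHeaderRow = check csv:parseList(csvList, {headerRows: 1});
    io:println(withHeaderRow);

    // No header row in the source, so the header names are supplied explicitly.
    string[][] dataOnly = [["1", "John"], ["2", "Jane"]];
    record {int id; string name;}[] named = check csv:parseList(dataOnly, {customHeaders: ["id", "name"]});
    io:println(named);
}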
data.csv: ParseOptions
Represents the options for parsing data.
Fields
- Fields Included from *Options
- delimiter Char (default ",") - The delimiter character used for separating fields in the data.
- encoding string (default "UTF-8") - The character encoding of the data.
- locale string (default "en_US") - The locale used for parsing.
- textEnclosure Char (default "\"") - The character used to enclose text fields.
- escapeChar Char (default "\\") - The character used for escaping.
- lineTerminator LineTerminator|LineTerminator[] (default [LF, CRLF]) - The line terminator(s) used in the data.
- nilValue NilValue? (default ()) - The value to represent nil.
- comment Char (default "#") - The character used to indicate comments in the data.
- header Unsigned32? (default 0) - Specifies whether the header is present and, if so, the number of header lines.
- customHeadersIfHeadersAbsent string[]? (default ()) - Custom headers for the data, if headers are absent.
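An illustrative sketch (the CSV content is made up) combining a few of these options; the comment-line handling is assumed from the comment field description:

import ballerina/data.csv;
import ballerina/io;

public function main() returns error? {
    // Semicolon-delimited content preceded by a comment line; the header row is still present.
    string csvString = "# exported data\nid;name\n1;John\n2;Jane";
    record {int id; string name;}[] rows = check csv:parseString(csvString, {delimiter: ";", comment: "#"});
    io:println(rows);
}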
data.csv: TransformOptions
Represents options for transforming a record array into the target type.
Fields
- Fields Included from *Options
- headerOrder string[]? (default ()) - Specifies the order of the headers in the source data. If the expected type is a subtype of record {}[], this parameter is ignored.
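A minimal sketch of headerOrder with transform, assuming the field determines the column order of the resulting rows; the record values are made up:

import ballerina/data.csv;
import ballerina/io;

public function main() returns error? {
    record {int id; string name;}[] csvRecords = [{id: 1, name: "John"}, {id: 2, name: "Jane"}];
    // Emit the name column before the id column in each resulting row.
    [string, int][] rows = check csv:transform(csvRecords, {headerOrder: ["name", "id"]});
    io:println(rows);
}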
Errors
data.csv: Error
Represents an error.
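A small, illustrative sketch of handling this error type; the invalid CSV content is made up:

import ballerina/data.csv;
import ballerina/io;

public function main() {
    // "one" cannot be converted to int, so the call returns a csv:Error.
    record {int id;}[]|csv:Error result = csv:parseString("id\none");
    if result is csv:Error {
        io:println("CSV parsing failed: ", result.message());
    }
}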
Import
import ballerina/data.csv;
Metadata
Released date: 6 months ago
Version: 0.1.0
License: Apache-2.0
Compatibility
Platform: java17
Ballerina version: 2201.10.0
GraalVM compatible: Yes
Pull count
Total: 996
Current version: 995
Keywords
csv