ballerinax/googleapis.vision Ballerina library
Overview
This is a generated connector for the Google Cloud Vision API v1 OpenAPI specification. The Google Cloud Vision API integrates Google Vision features, including image labeling; face, logo, and landmark detection; optical character recognition (OCR); and detection of explicit content, into applications.
Prerequisites
Before using this connector in your Ballerina application, complete the following:
- Create a Google account
- Obtain tokens - Follow the Google OAuth 2.0 flow to obtain a client ID, client secret, refresh URL, and refresh token
Quickstart
To use the Google Cloud Vision connector in your Ballerina application, update the .bal file as follows:
Step 1: Import connector
First, import the ballerinax/googleapis.vision module into the Ballerina project.
import ballerinax/googleapis.vision;
Step 2: Create a new connector instance
Create a vision:ClientConfig with the OAuth2 tokens obtained, and initialize the connector with it.
vision:ClientConfig clientConfig = {
    auth: {
        clientId: <CLIENT_ID>,
        clientSecret: <CLIENT_SECRET>,
        refreshUrl: <REFRESH_URL>,
        refreshToken: <REFRESH_TOKEN>
    }
};
vision:Client baseClient = check new Client(clientConfig);
Step 3: Invoke connector operation
Now you can use the operations available within the connector. Note that they are in the form of remote operations.
The following example shows how to run image detection.
Note that this example also requires the ballerina/log module (import ballerina/log;).
public function main() {
    vision:BatchAnnotateImagesRequest request = {
        requests: [
            {
                features: [
                    {'type: "WEB_DETECTION"}
                ],
                image: {
                    'source: {
                        imageUri: "https://thumbs.dreamstime.com/b/golden-retriever-dog-21668976.jpg"
                    }
                }
            }
        ]
    };
    vision:BatchAnnotateImagesResponse|error response = baseClient->runImageDetectionAndAnnotation(request);
    if (response is vision:BatchAnnotateImagesResponse) {
        log:printInfo(response.toString());
    } else {
        log:printError(response.message());
    }
}
Use the bal run command to compile and run the Ballerina program.
Clients
googleapis.vision: Client
This is a generated connector for Google Cloud Vision API v1 OpenAPI specification. Integrates Google Vision features, including image labeling, face, logo, and landmark detection, optical character recognition (OCR), and detection of explicit content, into applications.
Constructor
Gets invoked to initialize the connector.
The connector initialization requires setting the API credentials.
Create a Google account and obtain tokens by following this guide.
init (ConnectionConfig config, string serviceUrl)
- config ConnectionConfig - The configurations to be used when initializing the connector
- serviceUrl string "https://vision.googleapis.com/" - URL of the target service
annotateFiles
function annotateFiles(BatchAnnotateFilesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns BatchAnnotateFilesResponse|error
Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff", and "image/gif" are supported. This service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file provided (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation for each image extracted.
Parameters
- payload BatchAnnotateFilesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
Return Type
- BatchAnnotateFilesResponse|error - Successful response
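As an illustrative sketch of calling this operation with a client initialized as in the Quickstart (the inputConfig, gcsSource, mimeType, and pages field names are assumptions mirroring the REST API's AnnotateFileRequest; verify them against the module's record definitions):

```ballerina
vision:BatchAnnotateFilesRequest fileRequest = {
    requests: [
        {
            // Input file reference; gs://my-bucket/report.pdf is a placeholder URI.
            inputConfig: {
                gcsSource: {uri: "gs://my-bucket/report.pdf"},
                mimeType: "application/pdf"
            },
            features: [{'type: "DOCUMENT_TEXT_DETECTION"}],
            // At most 5 pages are processed per file.
            pages: [1, 2, 3]
        }
    ]
};
vision:BatchAnnotateFilesResponse|error fileResponse = baseClient->annotateFiles(fileRequest);
```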
asyncBatchAnnotate
function asyncBatchAnnotate(AsyncBatchAnnotateFilesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).
Parameters
- payload AsyncBatchAnnotateFilesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
runImageDetectionAndAnnotation
function runImageDetectionAndAnnotation(BatchAnnotateImagesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns BatchAnnotateImagesResponse|error
Run image detection and annotation for a batch of images.
Parameters
- payload BatchAnnotateImagesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
Return Type
- BatchAnnotateImagesResponse|error - Successful response
asyncBatchImageAnnotate
function asyncBatchImageAnnotate(AsyncBatchAnnotateImagesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service writes image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.
Parameters
- payload AsyncBatchAnnotateImagesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
getProjectOperation
function getProjectOperation(string name, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, string? filter, int? pageSize, string? pageToken) returns Operation|error
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
Parameters
- name string - The name of the operation resource.
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- filter string? (default ()) - The standard list filter.
- pageSize int? (default ()) - The standard list page size.
- pageToken string? (default ()) - The standard list page token.
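For example, the Operation returned by an asynchronous call can be polled with this operation. This is only a sketch: the operation name is a placeholder, and the done field is an assumption based on the google.longrunning.Operation shape.

```ballerina
// Obtain the operation name from the Operation returned by an async call,
// e.g. asyncBatchAnnotate; "operations/abc123" is only a placeholder.
vision:Operation|error op = baseClient->getProjectOperation("operations/abc123");
if (op is vision:Operation) {
    // `done` is assumed to follow google.longrunning.Operation.
    if (op?.done == true) {
        log:printInfo("Long-running operation completed");
    }
}
```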
deleteProductSet
function deleteProductSet(string name, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Empty|error
Permanently deletes a ProductSet. Products and ReferenceImages in the ProductSet are not deleted. The actual image files are not deleted from Google Cloud Storage.
Parameters
- name string - Required. Resource name of the ProductSet to delete. Format is: projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
pathProductSet
function pathProductSet(string name, ProductSet payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, string? updateMask) returns ProductSet|error
Makes changes to a ProductSet resource. Only display_name can be updated currently. Possible errors: * Returns NOT_FOUND if the ProductSet does not exist. * Returns INVALID_ARGUMENT if display_name is present in update_mask but missing from the request or longer than 4096 characters.
Parameters
- name string - The resource name of the ProductSet. Format is: projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID. This field is ignored when creating a ProductSet.
- payload ProductSet -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- updateMask string? (default ()) - The FieldMask that specifies which fields to update. If update_mask isn't specified, all mutable fields are to be updated. The only valid mask path is display_name.
Return Type
- ProductSet|error - Successful response
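A minimal update sketch, assuming the generated ProductSet record exposes a displayName field and that the optional query parameters are defaultable so updateMask can be passed as a named argument:

```ballerina
vision:ProductSet updatedSet = {displayName: "Winter catalog"};
// Only display_name is currently mutable, so list it in the update mask.
vision:ProductSet|error result = baseClient->pathProductSet(
    "projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID",
    updatedSet, updateMask = "display_name");
```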
listProducts
function listProducts(string name, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, int? pageSize, string? pageToken) returns ListProductsInProductSetResponse|error
Lists the Products in a ProductSet, in an unspecified order. If the ProductSet does not exist, the products field of the response will be empty. Possible errors: * Returns INVALID_ARGUMENT if page_size is greater than 100 or less than 1.
Parameters
- name string - Required. The ProductSet resource for which to retrieve Products. Format is: projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- pageSize int? (default ()) - The maximum number of items to return. Default 10, maximum 100.
- pageToken string? (default ()) - The next_page_token returned from a previous List request, if any.
Return Type
- ListProductsInProductSetResponse|error - Successful response
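Paging through all Products in a ProductSet could be sketched as follows, inside a function that can return error (the nextPageToken response field is an assumption based on the REST API's list conventions):

```ballerina
string productSetName = "projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID";
string? pageToken = ();
while true {
    vision:ListProductsInProductSetResponse page =
        check baseClient->listProducts(productSetName, pageSize = 100, pageToken = pageToken);
    // Process page?.products here.
    string? next = page?.nextPageToken;
    if next is () || next == "" {
        break;
    }
    pageToken = next;
}
```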
addProduct
function addProduct(string name, AddProductToProductSetRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Empty|error
Adds a Product to the specified ProductSet. If the Product is already present, no change is made. One Product can be added to at most 100 ProductSets. Possible errors: * Returns NOT_FOUND if the Product or the ProductSet doesn't exist.
Parameters
- name string - Required. The resource name for the ProductSet to modify. Format is: projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID
- payload AddProductToProductSetRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
cancelAsyncOperation
function cancelAsyncOperation(string name, CancelOperationRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Empty|error
Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.
Parameters
- name string - The name of the operation resource to be cancelled.
- payload CancelOperationRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
removeProduct
function removeProduct(string name, RemoveProductFromProductSetRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Empty|error
Removes a Product from the specified ProductSet.
Parameters
- name string - Required. The resource name for the ProductSet to modify. Format is: projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID
- payload RemoveProductFromProductSetRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
annotateBatchFiles
function annotateBatchFiles(string parent, BatchAnnotateFilesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns BatchAnnotateFilesResponse|error
Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff", and "image/gif" are supported. This service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file provided (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation for each image extracted.
Parameters
- parent string - Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us: USA country only, asia: East Asia areas, like Japan, Taiwan, eu: the European Union. Example: projects/project-A/locations/eu.
- payload BatchAnnotateFilesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
Return Type
- BatchAnnotateFilesResponse|error - Successful response
asyncFileBatchAnnotate
function asyncFileBatchAnnotate(string parent, AsyncBatchAnnotateFilesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).
Parameters
- parent string - Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us: USA country only, asia: East Asia areas, like Japan, Taiwan, eu: the European Union. Example: projects/project-A/locations/eu.
- payload AsyncBatchAnnotateFilesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
annotateBatchImages
function annotateBatchImages(string parent, BatchAnnotateImagesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns BatchAnnotateImagesResponse|error
Run image detection and annotation for a batch of images.
Parameters
- parent string - Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us: USA country only, asia: East Asia areas, like Japan, Taiwan, eu: the European Union. Example: projects/project-A/locations/eu.
- payload BatchAnnotateImagesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
Return Type
- BatchAnnotateImagesResponse|error - Successful response
asyncImageBatchAnnotate
function asyncImageBatchAnnotate(string parent, AsyncBatchAnnotateImagesRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service writes image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.
Parameters
- parent string - Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us: USA country only, asia: East Asia areas, like Japan, Taiwan, eu: the European Union. Example: projects/project-A/locations/eu.
- payload AsyncBatchAnnotateImagesRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
listProductSetsInUnspecifiedOrder
function listProductSetsInUnspecifiedOrder(string parent, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, int? pageSize, string? pageToken) returns ListProductSetsResponse|error
Lists ProductSets in an unspecified order. Possible errors: * Returns INVALID_ARGUMENT if page_size is greater than 100, or less than 1.
Parameters
- parent string - Required. The project from which ProductSets should be listed. Format is projects/PROJECT_ID/locations/LOC_ID.
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- pageSize int? (default ()) - The maximum number of items to return. Default 10, maximum 100.
- pageToken string? (default ()) - The next_page_token returned from a previous List request, if any.
Return Type
- ListProductSetsResponse|error - Successful response
createProductSet
function createProductSet(string parent, ProductSet payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, string? productSetId) returns ProductSet|error
Creates and returns a new ProductSet resource. Possible errors: * Returns INVALID_ARGUMENT if display_name is missing, or is longer than 4096 characters.
Parameters
- parent string - Required. The project in which the ProductSet should be created. Format is projects/PROJECT_ID/locations/LOC_ID.
- payload ProductSet -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- productSetId string? (default ()) - A user-supplied resource id for this ProductSet. If set, the server will attempt to use this value as the resource id. If it is already in use, an error is returned with code ALREADY_EXISTS. Must be at most 128 characters long. It cannot contain the character /.
Return Type
- ProductSet|error - Successful response
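A creation sketch (the displayName field is assumed to be the generated counterpart of display_name; "spring-catalog" is a hypothetical productSetId, which must be at most 128 characters and must not contain /):

```ballerina
vision:ProductSet newSet = {displayName: "Spring catalog"};
vision:ProductSet|error created = baseClient->createProductSet(
    "projects/PROJECT_ID/locations/LOC_ID", newSet, productSetId = "spring-catalog");
```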
importReferenceImages
function importReferenceImages(string parent, ImportProductSetsRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Asynchronous API that imports a list of reference images to specified product sets based on a list of image information. The google.longrunning.Operation API can be used to keep track of the progress and results of the request. Operation.metadata contains BatchOperationMetadata (progress). Operation.response contains ImportProductSetsResponse (results). The input source of this method is a CSV file on Google Cloud Storage. For the format of the CSV file, see ImportProductSetsGcsSource.csv_file_uri.
Parameters
- parent string - Required. The project in which the ProductSets should be imported. Format is projects/PROJECT_ID/locations/LOC_ID.
- payload ImportProductSetsRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
listProductvision
function listProductvision(string parent, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, int? pageSize, string? pageToken) returns ListProductsResponse|error
Lists products in an unspecified order. Possible errors: * Returns INVALID_ARGUMENT if page_size is greater than 100 or less than 1.
Parameters
- parent string - Required. The project OR ProductSet from which Products should be listed. Format: projects/PROJECT_ID/locations/LOC_ID
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- pageSize int? (default ()) - The maximum number of items to return. Default 10, maximum 100.
- pageToken string? (default ()) - The next_page_token returned from a previous List request, if any.
Return Type
- ListProductsResponse|error - Successful response
createProductResource
function createProductResource(string parent, Product payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, string? productId) returns Product|error
Creates and returns a new product resource. Possible errors: * Returns INVALID_ARGUMENT if display_name is missing or longer than 4096 characters. * Returns INVALID_ARGUMENT if description is longer than 4096 characters. * Returns INVALID_ARGUMENT if product_category is missing or invalid.
Parameters
- parent string - Required. The project in which the Product should be created. Format is projects/PROJECT_ID/locations/LOC_ID.
- payload Product -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- productId string? (default ()) - A user-supplied resource id for this Product. If set, the server will attempt to use this value as the resource id. If it is already in use, an error is returned with code ALREADY_EXISTS. Must be at most 128 characters long. It cannot contain the character /.
deleteAllProducts
function deleteAllProducts(string parent, PurgeProductsRequest payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType) returns Operation|error
Asynchronous API to delete all Products in a ProductSet or all Products that are in no ProductSet. If a Product is a member of the specified ProductSet in addition to other ProductSets, the Product will still be deleted. It is recommended not to delete the specified ProductSet until after this operation has completed. It is also recommended not to add any of the Products involved in the batch delete to a new ProductSet while this operation is running, because those Products may still end up deleted. It is not possible to undo the PurgeProducts operation; therefore, it is recommended to keep the CSV files used in ImportProductSets (if that was how you originally built the ProductSet) before starting PurgeProducts, in case you need to re-import the data after deletion. If the plan is to purge all of the Products from a ProductSet and then re-use the empty ProductSet to re-import new Products into it, you must wait until the PurgeProducts operation has finished for that ProductSet. The google.longrunning.Operation API can be used to keep track of the progress and results of the request. Operation.metadata contains BatchOperationMetadata (progress).
Parameters
- parent string - Required. The project and location in which the Products should be deleted. Format is `projects/PROJECT_ID/locations/LOC_ID`.
- payload PurgeProductsRequest -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
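As a sketch, a purge call with this connector could look like the following. The client initialization is the one shown in the Quickstart; the project/location placeholders and the `PurgeProductsRequest` field values (`deleteOrphanProducts` and `force`, which mirror the underlying REST API's purge request) are illustrative assumptions:

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

public function main() returns error? {
    vision:ClientConfig clientConfig = {
        auth: {
            clientId: "<CLIENT_ID>",
            clientSecret: "<CLIENT_SECRET>",
            refreshUrl: "<REFRESH_URL>",
            refreshToken: "<REFRESH_TOKEN>"
        }
    };
    vision:Client baseClient = check new (clientConfig);

    // Purge every Product that belongs to no ProductSet.
    // In the underlying API the purge only runs when `force` is true.
    vision:PurgeProductsRequest request = {
        deleteOrphanProducts: true,
        force: true
    };
    vision:Operation operation = check baseClient->deleteAllProducts(
        "projects/<PROJECT_ID>/locations/<LOC_ID>", request);
    // The returned long-running Operation can be polled for progress.
    log:printInfo(operation.toString());
}
```

Since the purge is irreversible, a dry run (leaving `force` unset) before the real call is a sensible precaution.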
listReferenceImages
function listReferenceImages(string parent, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, int? pageSize, string? pageToken) returns ListReferenceImagesResponse|error
Lists reference images. Possible errors: * Returns NOT_FOUND if the parent product does not exist. * Returns INVALID_ARGUMENT if the page_size is greater than 100, or less than 1.
Parameters
- parent string - Required. Resource name of the product containing the reference images. Format is `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- pageSize int? (default ()) - The maximum number of items to return. Default 10, maximum 100.
- pageToken string? (default ()) - A token identifying a page of results to be returned. This is the value of `nextPageToken` returned in a previous reference image list request. Defaults to the first page if not specified.
Return Type
- ListReferenceImagesResponse|error - Successful response
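Putting the paging parameters together, a sketch that walks all pages of reference images might look like this (the `referenceImages` and `nextPageToken` field names follow the underlying REST response; treat them as assumptions about the generated `ListReferenceImagesResponse` record):

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

// Lists every reference image under the given product, following nextPageToken.
function listAllReferenceImages(vision:Client baseClient, string parent) returns error? {
    string? pageToken = ();
    while true {
        // pageSize must be in [1, 100]; it defaults to 10 when unset.
        vision:ListReferenceImagesResponse page = check baseClient->listReferenceImages(
            parent, pageSize = 100, pageToken = pageToken);
        foreach var image in page?.referenceImages ?: [] {
            log:printInfo(image.toString());
        }
        pageToken = page?.nextPageToken;
        if pageToken is () || pageToken == "" {
            break;
        }
    }
}
```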
createReferenceImages
function createReferenceImages(string parent, ReferenceImage payload, string? xgafv, string? alt, string? callback, string? fields, string? quotaUser, string? uploadProtocol, string? uploadType, string? referenceImageId) returns ReferenceImage|error
Creates and returns a new ReferenceImage resource. The `bounding_poly` field is optional. If `bounding_poly` is not specified, the system will try to detect regions of interest in the image that are compatible with the product_category on the parent product. If it is specified, detection is ALWAYS skipped. The system converts polygons into non-rotated rectangles. Note that the pipeline will resize the image if the image resolution is too large to process (above 50MP). Possible errors: * Returns INVALID_ARGUMENT if the image_uri is missing or longer than 4096 characters. * Returns INVALID_ARGUMENT if the product does not exist. * Returns INVALID_ARGUMENT if bounding_poly is not provided, and nothing compatible with the parent product's product_category is detected. * Returns INVALID_ARGUMENT if bounding_poly contains more than 10 polygons.
Parameters
- parent string - Required. Resource name of the product in which to create the reference image. Format is `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
- payload ReferenceImage -
- xgafv string? (default ()) - V1 error format.
- alt string? (default ()) - Data format for response.
- callback string? (default ()) - JSONP
- fields string? (default ()) - Selector specifying which fields to include in a partial response.
- quotaUser string? (default ()) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadProtocol string? (default ()) - Upload protocol for media (e.g. "raw", "multipart").
- uploadType string? (default ()) - Legacy upload protocol for media (e.g. "media", "multipart").
- referenceImageId string? (default ()) - A user-supplied resource id for the ReferenceImage to be added. If set, the server will attempt to use this value as the resource id. If it is already in use, an error is returned with code ALREADY_EXISTS. Must be at most 128 characters long. It cannot contain the character `/`.
Return Type
- ReferenceImage|error - Successful response
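For example, registering a Cloud Storage image as a reference image could be sketched as follows (the bucket path and the `uri` field of `ReferenceImage`, which in the REST API must be a `gs://` URI, are assumptions):

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

function addReferenceImage(vision:Client baseClient) returns error? {
    // The REST API requires the image to live in Google Cloud Storage.
    vision:ReferenceImage newImage = {
        uri: "gs://my-bucket/shoe-front.jpg"
    };
    vision:ReferenceImage created = check baseClient->createReferenceImages(
        "projects/<PROJECT_ID>/locations/<LOC_ID>/products/<PRODUCT_ID>",
        newImage, referenceImageId = "shoe-front-1");
    log:printInfo(created.toString());
}
```

Omitting `referenceImageId` lets the server assign an id; omitting the bounding polygons lets the service detect regions of interest itself, as described above.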
Records
googleapis.vision: AddProductToProductSetRequest
Request message for the AddProductToProductSet
method.
Fields
- product string? - Required. The resource name for the Product to be added to this ProductSet. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`
googleapis.vision: AnnotateFileRequest
A request to annotate a single file, e.g. a PDF, TIFF or GIF file.
Fields
- features Feature[]? - Required. Requested features.
- imageContext ImageContext? - Image context and/or feature-specific parameters.
- inputConfig InputConfig? - The desired input location and metadata.
- pages int[]? - Pages of the file to perform image annotation. Pages start from 1; the first page of the file is page 1. At most 5 pages are supported per request. Pages can be negative. Page 1 means the first page. Page 2 means the second page. Page -1 means the last page. Page -2 means the second to the last page. If the file is GIF instead of PDF or TIFF, page refers to GIF frames. If this field is empty, by default the service performs image annotation for the first 5 pages of the file.
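To illustrate the paging semantics, a request that annotates only the first and last pages of a PDF could be built like this (the bucket URI is a placeholder; the `InputConfig` field names `gcsSource` and `mimeType` mirror the underlying REST API):

```ballerina
import ballerinax/googleapis.vision;

// Annotate page 1 and the last page (-1) of a PDF in Cloud Storage.
vision:AnnotateFileRequest fileRequest = {
    inputConfig: {
        gcsSource: {uri: "gs://my-bucket/contract.pdf"},
        mimeType: "application/pdf"
    },
    features: [{'type: "DOCUMENT_TEXT_DETECTION"}],
    pages: [1, -1]
};
```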
googleapis.vision: AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
Fields
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- inputConfig InputConfig? - The desired input location and metadata.
- responses AnnotateImageResponse[]? - Individual responses to images found within the file. This field will be empty if the `error` field is set.
- totalPages int? - This field gives the total number of pages in the file.
googleapis.vision: AnnotateImageRequest
Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features, and with context information.
Fields
- features Feature[]? - Requested features.
- image Image? - Client image to perform Google Cloud Vision API tasks over.
- imageContext ImageContext? - Image context and/or feature-specific parameters.
googleapis.vision: AnnotateImageResponse
Response to an image annotation request.
Fields
- context ImageAnnotationContext? - If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
- cropHintsAnnotation CropHintsAnnotation? - Set of crop hints that are used to generate new crops when serving images.
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- faceAnnotations FaceAnnotation[]? - If present, face detection has completed successfully.
- fullTextAnnotation TextAnnotation? - TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is as follows: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Refer to the TextAnnotation.TextProperty message definition below for more detail.
- imagePropertiesAnnotation ImageProperties? - Stores image properties, such as dominant colors.
- labelAnnotations EntityAnnotation[]? - If present, label detection has completed successfully.
- landmarkAnnotations EntityAnnotation[]? - If present, landmark detection has completed successfully.
- localizedObjectAnnotations LocalizedObjectAnnotation[]? - If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
- logoAnnotations EntityAnnotation[]? - If present, logo detection has completed successfully.
- productSearchResults ProductSearchResults? - Results for a product search request.
- safeSearchAnnotation SafeSearchAnnotation? - Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
- textAnnotations EntityAnnotation[]? - If present, text (OCR) detection has completed successfully.
- webDetection WebDetection? - Relevant information for the image from the Internet.
googleapis.vision: AsyncAnnotateFileRequest
An offline file annotation request.
Fields
- features Feature[]? - Required. Requested features.
- imageContext ImageContext? - Image context and/or feature-specific parameters.
- inputConfig InputConfig? - The desired input location and metadata.
- outputConfig OutputConfig? - The desired output location and metadata.
googleapis.vision: AsyncAnnotateFileResponse
The response for a single offline file annotation request.
Fields
- outputConfig OutputConfig? - The desired output location and metadata.
googleapis.vision: AsyncBatchAnnotateFilesRequest
Multiple async file annotation requests are batched into a single service call.
Fields
- parent string? - Optional. Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no parent is specified, a region will be chosen automatically. Supported location-ids: `us` (USA country only), `asia` (East Asia areas, like Japan, Taiwan), `eu` (the European Union). Example: `projects/project-A/locations/eu`.
- requests AsyncAnnotateFileRequest[]? - Required. Individual async file annotation requests for this batch.
googleapis.vision: AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
Fields
- responses AsyncAnnotateFileResponse[]? - The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
googleapis.vision: AsyncBatchAnnotateImagesRequest
Request for async image annotation for a list of images.
Fields
- outputConfig OutputConfig? - The desired output location and metadata.
- parent string? - Optional. Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no parent is specified, a region will be chosen automatically. Supported location-ids: `us` (USA country only), `asia` (East Asia areas, like Japan, Taiwan), `eu` (the European Union). Example: `projects/project-A/locations/eu`.
- requests AnnotateImageRequest[]? - Required. Individual image annotation requests for this batch.
googleapis.vision: AsyncBatchAnnotateImagesResponse
Response to an async batch image annotation request.
Fields
- outputConfig OutputConfig? - The desired output location and metadata.
googleapis.vision: BatchAnnotateFilesRequest
A list of requests to annotate files using the BatchAnnotateFiles API.
Fields
- parent string? - Optional. Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no parent is specified, a region will be chosen automatically. Supported location-ids: `us` (USA country only), `asia` (East Asia areas, like Japan, Taiwan), `eu` (the European Union). Example: `projects/project-A/locations/eu`.
- requests AnnotateFileRequest[]? - Required. The list of file annotation requests. Right now we support only one AnnotateFileRequest in BatchAnnotateFilesRequest.
googleapis.vision: BatchAnnotateFilesResponse
A list of file annotation responses.
Fields
- responses AnnotateFileResponse[]? - The list of file annotation responses, each response corresponding to each AnnotateFileRequest in BatchAnnotateFilesRequest.
googleapis.vision: BatchAnnotateImagesRequest
Multiple image annotation requests are batched into a single service call.
Fields
- parent string? - Optional. Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no parent is specified, a region will be chosen automatically. Supported location-ids: `us` (USA country only), `asia` (East Asia areas, like Japan, Taiwan), `eu` (the European Union). Example: `projects/project-A/locations/eu`.
- requests AnnotateImageRequest[]? - Required. Individual image annotation requests for this batch.
googleapis.vision: BatchAnnotateImagesResponse
Response to a batch image annotation request.
Fields
- responses AnnotateImageResponse[]? - Individual responses to image annotation requests within the batch.
googleapis.vision: BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Fields
- endTime string? - The time when the batch request is finished and google.longrunning.Operation.done is set to true.
- state string? - The current state of the batch operation.
- submitTime string? - The time when the batch request was submitted to the server.
googleapis.vision: Block
Logical element on the page.
Fields
- blockType string? - Detected block type (text, image etc) for this block.
- boundingBox BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results on the block. Range [0, 1].
- paragraphs Paragraph[]? - List of paragraphs in this block (if this block is of type text).
- property TextProperty? - Additional information detected on the structural component.
googleapis.vision: BoundingPoly
A bounding polygon for the detected image annotation.
Fields
- normalizedVertices NormalizedVertex[]? - The bounding polygon normalized vertices.
- vertices Vertex[]? - The bounding polygon vertices.
googleapis.vision: CancelOperationRequest
The request message for Operations.CancelOperation.
googleapis.vision: ClientHttp1Settings
Provides settings related to HTTP/1.x protocol.
Fields
- keepAlive KeepAlive (default http:KEEPALIVE_AUTO) - Specifies whether to reuse a connection for multiple requests
- chunking Chunking (default http:CHUNKING_AUTO) - The chunking behaviour of the request
- proxy ProxyConfig? - Proxy server related options
googleapis.vision: Color
Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor's `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.
Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
    float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
    return new java.awt.Color(
        protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
    float red = (float) color.getRed();
    float green = (float) color.getGreen();
    float blue = (float) color.getBlue();
    float denominator = 255.0f;
    Color.Builder resultBuilder = Color.newBuilder()
        .setRed(red / denominator)
        .setGreen(green / denominator)
        .setBlue(blue / denominator);
    int alpha = color.getAlpha();
    if (alpha != 255) {
        resultBuilder.setAlpha(
            FloatValue.newBuilder().setValue(((float) alpha) / denominator).build());
    }
    return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objectivec
// ...
static UIColor* fromProto(Color* protocolor) {
    float red = [protocolor red];
    float green = [protocolor green];
    float blue = [protocolor blue];
    FloatValue* alpha_wrapper = [protocolor alpha];
    float alpha = 1.0;
    if (alpha_wrapper != nil) {
        alpha = [alpha_wrapper value];
    }
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
    CGFloat red, green, blue, alpha;
    if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
        return nil;
    }
    Color* result = [[Color alloc] init];
    [result setRed:red];
    [result setGreen:green];
    [result setBlue:blue];
    if (alpha <= 0.9999) {
        [result setAlpha:floatWrapperWithValue(alpha)];
    }
    [result autorelease];
    return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
    var redFrac = rgb_color.red || 0.0;
    var greenFrac = rgb_color.green || 0.0;
    var blueFrac = rgb_color.blue || 0.0;
    var red = Math.floor(redFrac * 255);
    var green = Math.floor(greenFrac * 255);
    var blue = Math.floor(blueFrac * 255);
    if (!('alpha' in rgb_color)) {
        return rgbToCssColor(red, green, blue);
    }
    var alphaFrac = rgb_color.alpha.value || 0.0;
    var rgbParams = [red, green, blue].join(',');
    return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
    var rgbNumber = new Number((red << 16) | (green << 8) | blue);
    var hexString = rgbNumber.toString(16);
    var missingZeros = 6 - hexString.length;
    var resultBuilder = ['#'];
    for (var i = 0; i < missingZeros; i++) {
        resultBuilder.push('0');
    }
    resultBuilder.push(hexString);
    return resultBuilder.join('');
};
// ...
```
Fields
- alpha float? - The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)`. This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
- blue float? - The amount of blue in the color as a value in the interval [0, 1].
- green float? - The amount of green in the color as a value in the interval [0, 1].
- red float? - The amount of red in the color as a value in the interval [0, 1].
googleapis.vision: ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Fields
- color Color? - Represents a color in the RGBA color space. See the `Color` record above for the full description and language-specific conversion examples.
- pixelFraction float? - The fraction of pixels the color occupies in the image. Value in range [0, 1].
- score float? - Image-specific score for this color. Value in range [0, 1].
googleapis.vision: ConnectionConfig
Provides a set of configurations for controlling the behaviours when communicating with a remote HTTP endpoint.
Fields
- auth BearerTokenConfig|OAuth2RefreshTokenGrantConfig - Configurations related to client authentication
- httpVersion HttpVersion (default http:HTTP_2_0) - The HTTP version understood by the client
- http1Settings ClientHttp1Settings? - Configurations related to HTTP/1.x protocol
- http2Settings ClientHttp2Settings? - Configurations related to HTTP/2 protocol
- timeout decimal (default 60) - The maximum time to wait (in seconds) for a response before closing the connection
- forwarded string (default "disable") - The choice of setting `forwarded`/`x-forwarded` header
- poolConfig PoolConfiguration? - Configurations associated with request pooling
- cache CacheConfig? - HTTP caching related configurations
- compression Compression (default http:COMPRESSION_AUTO) - Specifies the way of handling compression (`accept-encoding`) header
- circuitBreaker CircuitBreakerConfig? - Configurations associated with the behaviour of the Circuit Breaker
- retryConfig RetryConfig? - Configurations associated with retrying
- responseLimits ResponseLimitConfigs? - Configurations associated with inbound response size limits
- secureSocket ClientSecureSocket? - SSL/TLS-related options
- proxy ProxyConfig? - Proxy server related options
- validation boolean (default true) - Enables the inbound payload validation functionality provided by the constraint package. Enabled by default
googleapis.vision: CropHint
Single crop hint that is used to generate a new crop when serving an image.
Fields
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of this being a salient region. Range [0, 1].
- importanceFraction float? - Fraction of importance of this salient region with respect to the original image.
googleapis.vision: CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
Fields
- cropHints CropHint[]? - Crop hint results.
googleapis.vision: CropHintsParams
Parameters for crop hints annotation request.
Fields
- aspectRatios float[]? - Aspect ratios in floats, representing the ratio of the width to the height of the image. For example, if the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect ratios provided after the 16th are ignored.
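For instance, requesting 16:9 and 4:3 crop candidates via an `ImageContext` (which carries a `cropHintsParams` field) might be sketched as:

```ballerina
import ballerinax/googleapis.vision;

// Width/height ratios as floats: 16:9 is about 1.77777, 4:3 is about 1.33333.
vision:ImageContext context = {
    cropHintsParams: {
        aspectRatios: [1.77777, 1.33333]
    }
};
```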
googleapis.vision: DetectedBreak
Detected start or end of a structural component.
Fields
- isPrefix boolean? - True if break prepends the element.
- 'type string? - Detected break type.
googleapis.vision: DetectedLanguage
Detected language for a structural component.
Fields
- confidence float? - Confidence of detected language. Range [0, 1].
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Fields
- colors ColorInfo[]? - RGB color values with their score and pixel fraction.
googleapis.vision: Empty
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for `Empty` is an empty JSON object `{}`.
googleapis.vision: EntityAnnotation
Set of detected entity features.
Fields
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Deprecated. Use `score` instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
- description string? - Entity textual description, expressed in its `locale` language.
- locale string? - The language code for the locale in which the entity textual `description` is expressed.
- locations LocationInfo[]? - The location information for the detected entity. Multiple `LocationInfo` elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
- mid string? - Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- properties Property[]? - Some entities may have optional user-supplied `Property` (name/value) fields, such as a score or string that qualifies the entity.
- score float? - Overall score of the result. Range [0, 1].
- topicality float? - The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
googleapis.vision: FaceAnnotation
A face annotation object contains the results of face detection.
Fields
- angerLikelihood string? - Anger likelihood.
- blurredLikelihood string? - Blurred likelihood.
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- detectionConfidence float? - Detection confidence. Range [0, 1].
- fdBoundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- headwearLikelihood string? - Headwear likelihood.
- joyLikelihood string? - Joy likelihood.
- landmarkingConfidence float? - Face landmarking confidence. Range [0, 1].
- landmarks Landmark[]? - Detected face landmarks.
- panAngle float? - Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
- rollAngle float? - Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
- sorrowLikelihood string? - Sorrow likelihood.
- surpriseLikelihood string? - Surprise likelihood.
- tiltAngle float? - Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
- underExposedLikelihood string? - Under-exposed likelihood.
googleapis.vision: Feature
The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple `Feature` objects can be specified in the `features` list.
Fields
- maxResults int? - Maximum number of results of this type. Does not apply to `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
- model string? - Model to use for the feature. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
- 'type string? - The feature type.
googleapis.vision: GcsDestination
The Google Cloud Storage location where the output will be written to.
Fields
- uri string? - Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by its corresponding input URI prefix. This field can either represent a gcs file prefix or gcs directory. In either case, the uri should be unique because in order to get all of the output files, you will need to do a wildcard gcs search on the uri prefix you provide. Examples: * File Prefix: gs://bucket-name/here/filenameprefix The output files will be created in gs://bucket-name/here/ and the names of the output files will begin with "filenameprefix". * Directory Prefix: gs://bucket-name/some/location/ The output files will be created in gs://bucket-name/some/location/ and the names of the output files could be anything because there was no filename prefix specified. If multiple outputs, each response is still AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
googleapis.vision: GcsSource
The Google Cloud Storage location where the input will be read from.
Fields
- uri string? - Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
googleapis.vision: GoogleCloudVisionV1p1beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
Fields
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- inputConfig GoogleCloudVisionV1p1beta1InputConfig? - The desired input location and metadata.
- responses GoogleCloudVisionV1p1beta1AnnotateImageResponse[]? - Individual responses to images found within the file. This field will be empty if the `error` field is set.
- totalPages int? - This field gives the total number of pages in the file.
googleapis.vision: GoogleCloudVisionV1p1beta1AnnotateImageResponse
Response to an image annotation request.
Fields
- context GoogleCloudVisionV1p1beta1ImageAnnotationContext? - If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
- cropHintsAnnotation GoogleCloudVisionV1p1beta1CropHintsAnnotation? - Set of crop hints that are used to generate new crops when serving images.
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- faceAnnotations GoogleCloudVisionV1p1beta1FaceAnnotation[]? - If present, face detection has completed successfully.
- fullTextAnnotation GoogleCloudVisionV1p1beta1TextAnnotation? - TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
- imagePropertiesAnnotation GoogleCloudVisionV1p1beta1ImageProperties? - Stores image properties, such as dominant colors.
- labelAnnotations GoogleCloudVisionV1p1beta1EntityAnnotation[]? - If present, label detection has completed successfully.
- landmarkAnnotations GoogleCloudVisionV1p1beta1EntityAnnotation[]? - If present, landmark detection has completed successfully.
- localizedObjectAnnotations GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation[]? - If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
- logoAnnotations GoogleCloudVisionV1p1beta1EntityAnnotation[]? - If present, logo detection has completed successfully.
- productSearchResults GoogleCloudVisionV1p1beta1ProductSearchResults? - Results for a product search request.
- safeSearchAnnotation GoogleCloudVisionV1p1beta1SafeSearchAnnotation? - Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
- textAnnotations GoogleCloudVisionV1p1beta1EntityAnnotation[]? - If present, text (OCR) detection has completed successfully.
- webDetection GoogleCloudVisionV1p1beta1WebDetection? - Relevant information for the image from the Internet.
googleapis.vision: GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
Fields
- outputConfig GoogleCloudVisionV1p1beta1OutputConfig? - The desired output location and metadata.
googleapis.vision: GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
Fields
- responses GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse[]? - The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
googleapis.vision: GoogleCloudVisionV1p1beta1Block
Logical element on the page.
Fields
- blockType string? - Detected block type (text, image, etc.) for this block.
- boundingBox GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results on the block. Range [0, 1].
- paragraphs GoogleCloudVisionV1p1beta1Paragraph[]? - List of paragraphs in this block (if this block is of type text).
- property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
googleapis.vision: GoogleCloudVisionV1p1beta1BoundingPoly
A bounding polygon for the detected image annotation.
Fields
- normalizedVertices GoogleCloudVisionV1p1beta1NormalizedVertex[]? - The bounding polygon normalized vertices.
- vertices GoogleCloudVisionV1p1beta1Vertex[]? - The bounding polygon vertices.
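Since `vertices` are pixel-scale points, a common consumer-side step is collapsing the polygon to an axis-aligned rectangle; a minimal sketch (hypothetical helper, Python for illustration):

```python
def bounding_rect(vertices: list[dict]) -> tuple[int, int, int, int]:
    """Return the axis-aligned (x_min, y_min, x_max, y_max) rectangle
    enclosing a BoundingPoly's pixel-scale `vertices`.
    Missing coordinates default to 0, matching the API's optional fields."""
    xs = [v.get("x", 0) for v in vertices]
    ys = [v.get("y", 0) for v in vertices]
    return min(xs), min(ys), max(xs), max(ys)
```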
googleapis.vision: GoogleCloudVisionV1p1beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Fields
- color Color? - Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor's `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
  float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
  return new java.awt.Color(
      protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
  float red = (float) color.getRed();
  float green = (float) color.getGreen();
  float blue = (float) color.getBlue();
  float denominator = 255.0f;
  Color.Builder resultBuilder = Color.newBuilder()
      .setRed(red / denominator)
      .setGreen(green / denominator)
      .setBlue(blue / denominator);
  int alpha = color.getAlpha();
  if (alpha != 255) {
    resultBuilder.setAlpha(
        FloatValue.newBuilder().setValue(((float) alpha) / denominator).build());
  }
  return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objectivec
// ...
static UIColor* fromProto(Color* protocolor) {
  float red = [protocolor red];
  float green = [protocolor green];
  float blue = [protocolor blue];
  FloatValue* alpha_wrapper = [protocolor alpha];
  float alpha = 1.0;
  if (alpha_wrapper != nil) {
    alpha = [alpha_wrapper value];
  }
  return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
  CGFloat red, green, blue, alpha;
  if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
    return nil;
  }
  Color* result = [[Color alloc] init];
  [result setRed:red];
  [result setGreen:green];
  [result setBlue:blue];
  if (alpha <= 0.9999) {
    [result setAlpha:floatWrapperWithValue(alpha)];
  }
  [result autorelease];
  return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
  var redFrac = rgb_color.red || 0.0;
  var greenFrac = rgb_color.green || 0.0;
  var blueFrac = rgb_color.blue || 0.0;
  var red = Math.floor(redFrac * 255);
  var green = Math.floor(greenFrac * 255);
  var blue = Math.floor(blueFrac * 255);
  if (!('alpha' in rgb_color)) {
    return rgbToCssColor(red, green, blue);
  }
  var alphaFrac = rgb_color.alpha.value || 0.0;
  var rgbParams = [red, green, blue].join(',');
  return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
  var rgbNumber = new Number((red << 16) | (green << 8) | blue);
  var hexString = rgbNumber.toString(16);
  var missingZeros = 6 - hexString.length;
  var resultBuilder = ['#'];
  for (var i = 0; i < missingZeros; i++) {
    resultBuilder.push('0');
  }
  resultBuilder.push(hexString);
  return resultBuilder.join('');
};
// ...
```
- pixelFraction float? - The fraction of pixels the color occupies in the image. Value in range [0, 1].
- score float? - Image-specific score for this color. Value in range [0, 1].
googleapis.vision: GoogleCloudVisionV1p1beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
Fields
- boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of this being a salient region. Range [0, 1].
- importanceFraction float? - Fraction of importance of this salient region with respect to the original image.
googleapis.vision: GoogleCloudVisionV1p1beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
Fields
- cropHints GoogleCloudVisionV1p1beta1CropHint[]? - Crop hint results.
googleapis.vision: GoogleCloudVisionV1p1beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Fields
- colors GoogleCloudVisionV1p1beta1ColorInfo[]? - RGB color values with their score and pixel fraction.
googleapis.vision: GoogleCloudVisionV1p1beta1EntityAnnotation
Set of detected entity features.
Fields
- boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Deprecated. Use `score` instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
- description string? - Entity textual description, expressed in its `locale` language.
- locale string? - The language code for the locale in which the entity textual `description` is expressed.
- locations GoogleCloudVisionV1p1beta1LocationInfo[]? - The location information for the detected entity. Multiple `LocationInfo` elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
- mid string? - Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- properties GoogleCloudVisionV1p1beta1Property[]? - Some entities may have optional user-supplied `Property` (name/value) fields, such as a score or string that qualifies the entity.
- score float? - Overall score of the result. Range [0, 1].
- topicality float? - The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p1beta1FaceAnnotation
A face annotation object contains the results of face detection.
Fields
- angerLikelihood string? - Anger likelihood.
- blurredLikelihood string? - Blurred likelihood.
- boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- detectionConfidence float? - Detection confidence. Range [0, 1].
- fdBoundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- headwearLikelihood string? - Headwear likelihood.
- joyLikelihood string? - Joy likelihood.
- landmarkingConfidence float? - Face landmarking confidence. Range [0, 1].
- landmarks GoogleCloudVisionV1p1beta1FaceAnnotationLandmark[]? - Detected face landmarks.
- panAngle float? - Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
- rollAngle float? - Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
- sorrowLikelihood string? - Sorrow likelihood.
- surpriseLikelihood string? - Surprise likelihood.
- tiltAngle float? - Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
- underExposedLikelihood string? - Under-exposed likelihood.
googleapis.vision: GoogleCloudVisionV1p1beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature).
Fields
- position GoogleCloudVisionV1p1beta1Position? - A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
- 'type string? - Face landmark type.
googleapis.vision: GoogleCloudVisionV1p1beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
Fields
- uri string? - Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by their corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory. In either case, the uri should be unique, because retrieving all of the output files requires a wildcard GCS search on the uri prefix you provide. Examples: * File prefix: gs://bucket-name/here/filenameprefix. The output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix". * Directory prefix: gs://bucket-name/some/location/. The output files will be created in gs://bucket-name/some/location/ and their names can be anything, because no filename prefix was specified. If there are multiple outputs, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
googleapis.vision: GoogleCloudVisionV1p1beta1GcsSource
The Google Cloud Storage location where the input will be read from.
Fields
- uri string? - Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
googleapis.vision: GoogleCloudVisionV1p1beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Fields
- pageNumber int? - If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
- uri string? - The URI of the file used to produce the image.
googleapis.vision: GoogleCloudVisionV1p1beta1ImageProperties
Stores image properties, such as dominant colors.
Fields
- dominantColors GoogleCloudVisionV1p1beta1DominantColorsAnnotation? - Set of dominant colors and their corresponding scores.
googleapis.vision: GoogleCloudVisionV1p1beta1InputConfig
The desired input location and metadata.
Fields
- content string? - File content, represented as a stream of bytes. Note: As with all `bytes` fields, protocol buffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
- gcsSource GoogleCloudVisionV1p1beta1GcsSource? - The Google Cloud Storage location where the input will be read from.
- mimeType string? - The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
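Since JSON requests carry `bytes` fields as base64, inline file content for the `content` field would be prepared along these lines (a sketch; the helper name is hypothetical, Python for illustration):

```python
import base64

def inline_content(file_bytes: bytes) -> str:
    """Base64-encode raw file bytes for InputConfig's `content` field,
    as JSON representations of protobuf `bytes` fields require."""
    return base64.b64encode(file_bytes).decode("ascii")

# e.g. inline_content(b"%PDF-1.4") == "JVBERi0xLjQ="
```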
googleapis.vision: GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
Fields
- boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p1beta1LocationInfo
Detected entity location information.
Fields
- latLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
googleapis.vision: GoogleCloudVisionV1p1beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
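Because normalized coordinates are fractions of the original image size, mapping them back to pixels is a simple scale (hypothetical helper, Python for illustration):

```python
def to_pixel(vertex: dict, width: int, height: int) -> tuple[int, int]:
    """Scale a NormalizedVertex (x, y in [0, 1]) to pixel coordinates
    in an original image of the given width and height."""
    return (round(vertex.get("x", 0.0) * width),
            round(vertex.get("y", 0.0) * height))
```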
googleapis.vision: GoogleCloudVisionV1p1beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
Fields
- createTime string? - The time when the batch request was received.
- state string? - Current state of the batch operation.
- updateTime string? - The time when the operation result was last updated.
googleapis.vision: GoogleCloudVisionV1p1beta1OutputConfig
The desired output location and metadata.
Fields
- batchSize int? - The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If `batch_size` = 20, then 5 JSON files, each containing 20 response protos, will be written under the prefix `gcs_destination.uri`. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
- gcsDestination GoogleCloudVisionV1p1beta1GcsDestination? - The Google Cloud Storage location where the output will be written to.
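The sharding arithmetic implied by `batchSize` (e.g. 100 pages at batch_size 20 yielding 5 output files) is a ceiling division; a sketch in Python:

```python
import math

def shard_count(total_pages: int, batch_size: int = 20) -> int:
    """Number of output JSON files written under gcs_destination.uri:
    each file holds at most batch_size response protos, one per page."""
    return math.ceil(total_pages / batch_size)
```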
googleapis.vision: GoogleCloudVisionV1p1beta1Page
Detected page from OCR.
Fields
- blocks GoogleCloudVisionV1p1beta1Block[]? - List of blocks of text, images, etc. on this page.
- confidence float? - Confidence of the OCR results on the page. Range [0, 1].
- height int? - Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
- property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- width int? - Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
googleapis.vision: GoogleCloudVisionV1p1beta1Paragraph
Structural unit of text representing a number of words in certain order.
Fields
- boundingBox GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the paragraph. Range [0, 1].
- property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- words GoogleCloudVisionV1p1beta1Word[]? - List of all words in this paragraph.
googleapis.vision: GoogleCloudVisionV1p1beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
- z float? - Z coordinate (or depth).
googleapis.vision: GoogleCloudVisionV1p1beta1Product
A Product contains ReferenceImages.
Fields
- description string? - User-provided metadata to be stored with this product. Must be at most 4096 characters long.
- displayName string? - The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
- name string? - The resource name of the product. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. This field is ignored when creating a product.
- productCategory string? - Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
- productLabels GoogleCloudVisionV1p1beta1ProductKeyValue[]? - Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction, which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Note that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M; otherwise, the product search pipeline will refuse to work for that ProductSet.
googleapis.vision: GoogleCloudVisionV1p1beta1ProductKeyValue
A product label represented as a key-value pair.
Fields
- 'key string? - The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
- value string? - The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
googleapis.vision: GoogleCloudVisionV1p1beta1ProductSearchResults
Results for a product search request.
Fields
- indexTime string? - Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
- productGroupedResults GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult[]? - List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
- results GoogleCloudVisionV1p1beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
Fields
- boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- objectAnnotations GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation[]? - List of generic predictions for the object in the bounding box.
- results GoogleCloudVisionV1p1beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
Fields
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p1beta1ProductSearchResultsResult
Information about a product.
Fields
- image string? - The resource name of the image from the product that is the closest match to the query.
- product GoogleCloudVisionV1p1beta1Product? - A Product contains ReferenceImages.
- score float? - A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
googleapis.vision: GoogleCloudVisionV1p1beta1Property
A `Property` consists of a user-supplied name/value pair.
Fields
- name string? - Name of the property.
- uint64Value string? - Value of numeric properties.
- value string? - Value of the property.
googleapis.vision: GoogleCloudVisionV1p1beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Fields
- adult string? - Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
- medical string? - Likelihood that this is a medical image.
- racy string? - Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
- spoof string? - Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
- violence string? - Likelihood that this image contains violent content.
googleapis.vision: GoogleCloudVisionV1p1beta1Symbol
A single symbol representation.
Fields
- boundingBox GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the symbol. Range [0, 1].
- property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- text string? - The actual UTF-8 representation of the symbol.
googleapis.vision: GoogleCloudVisionV1p1beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
Fields
- pages GoogleCloudVisionV1p1beta1Page[]? - List of pages detected by OCR.
- text string? - UTF-8 text detected on the pages.
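The Page -> Block -> Paragraph -> Word -> Symbol hierarchy can be walked as below to rebuild plain text (a simplified sketch in Python: real spacing comes from each symbol's detectedBreak property, which is ignored here in favour of single spaces and newlines):

```python
def assemble_text(annotation: dict) -> str:
    """Walk a TextAnnotation dict down to its symbols and rebuild text,
    joining symbols into words, words with spaces, paragraphs with newlines."""
    paragraphs = []
    for page in annotation.get("pages", []):
        for block in page.get("blocks", []):
            for para in block.get("paragraphs", []):
                words = [
                    "".join(s.get("text", "") for s in word.get("symbols", []))
                    for word in para.get("words", [])
                ]
                paragraphs.append(" ".join(words))
    return "\n".join(paragraphs)
```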
googleapis.vision: GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
Fields
- isPrefix boolean? - True if break prepends the element.
- 'type string? - Detected break type.
googleapis.vision: GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
Fields
- confidence float? - Confidence of detected language. Range [0, 1].
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
Additional information detected on the structural component.
Fields
- detectedBreak GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak? - Detected start or end of a structural component.
- detectedLanguages GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage[]? - A list of detected languages together with confidence.
googleapis.vision: GoogleCloudVisionV1p1beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Fields
- x int? - X coordinate.
- y int? - Y coordinate.
googleapis.vision: GoogleCloudVisionV1p1beta1WebDetection
Relevant information for the image from the Internet.
Fields
- bestGuessLabels GoogleCloudVisionV1p1beta1WebDetectionWebLabel[]? - The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
- fullMatchingImages GoogleCloudVisionV1p1beta1WebDetectionWebImage[]? - Fully matching images from the Internet. Can include resized copies of the query image.
- pagesWithMatchingImages GoogleCloudVisionV1p1beta1WebDetectionWebPage[]? - Web pages containing the matching images from the Internet.
- partialMatchingImages GoogleCloudVisionV1p1beta1WebDetectionWebImage[]? - Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- visuallySimilarImages GoogleCloudVisionV1p1beta1WebDetectionWebImage[]? - The visually similar image results.
- webEntities GoogleCloudVisionV1p1beta1WebDetectionWebEntity[]? - Deduced entities from similar images on the Internet.
googleapis.vision: GoogleCloudVisionV1p1beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
Fields
- description string? - Canonical description of the entity, in English.
- entityId string? - Opaque entity ID.
- score float? - Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
googleapis.vision: GoogleCloudVisionV1p1beta1WebDetectionWebImage
Metadata for online images.
Fields
- score float? - (Deprecated) Overall relevancy score for the image.
- url string? - The result image URL.
googleapis.vision: GoogleCloudVisionV1p1beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
Fields
- label string? - Label for extra metadata.
- languageCode string? - The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p1beta1WebDetectionWebPage
Metadata for web pages.
Fields
- fullMatchingImages GoogleCloudVisionV1p1beta1WebDetectionWebImage[]? - Fully matching images on the page. Can include resized copies of the query image.
- pageTitle string? - Title for the web page; may contain HTML markup.
- partialMatchingImages GoogleCloudVisionV1p1beta1WebDetectionWebImage[]? - Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- score float? - (Deprecated) Overall relevancy score for the web page.
- url string? - The result web page URL.
googleapis.vision: GoogleCloudVisionV1p1beta1Word
A word representation.
Fields
- boundingBox GoogleCloudVisionV1p1beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the word. Range [0, 1].
- property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- symbols GoogleCloudVisionV1p1beta1Symbol[]? - List of symbols in the word. The order of the symbols follows the natural reading order.
googleapis.vision: GoogleCloudVisionV1p2beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
Fields
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- inputConfig GoogleCloudVisionV1p2beta1InputConfig? - The desired input location and metadata.
- responses GoogleCloudVisionV1p2beta1AnnotateImageResponse[]? - Individual responses to images found within the file. This field will be empty if the `error` field is set.
- totalPages int? - This field gives the total number of pages in the file.
googleapis.vision: GoogleCloudVisionV1p2beta1AnnotateImageResponse
Response to an image annotation request.
Fields
- context GoogleCloudVisionV1p2beta1ImageAnnotationContext? - If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
- cropHintsAnnotation GoogleCloudVisionV1p2beta1CropHintsAnnotation? - Set of crop hints that are used to generate new crops when serving images.
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- faceAnnotations GoogleCloudVisionV1p2beta1FaceAnnotation[]? - If present, face detection has completed successfully.
- fullTextAnnotation GoogleCloudVisionV1p2beta1TextAnnotation? - TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
- imagePropertiesAnnotation GoogleCloudVisionV1p2beta1ImageProperties? - Stores image properties, such as dominant colors.
- labelAnnotations GoogleCloudVisionV1p2beta1EntityAnnotation[]? - If present, label detection has completed successfully.
- landmarkAnnotations GoogleCloudVisionV1p2beta1EntityAnnotation[]? - If present, landmark detection has completed successfully.
- localizedObjectAnnotations GoogleCloudVisionV1p2beta1LocalizedObjectAnnotation[]? - If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
- logoAnnotations GoogleCloudVisionV1p2beta1EntityAnnotation[]? - If present, logo detection has completed successfully.
- productSearchResults GoogleCloudVisionV1p2beta1ProductSearchResults? - Results for a product search request.
- safeSearchAnnotation GoogleCloudVisionV1p2beta1SafeSearchAnnotation? - Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
- textAnnotations GoogleCloudVisionV1p2beta1EntityAnnotation[]? - If present, text (OCR) detection has completed successfully.
- webDetection GoogleCloudVisionV1p2beta1WebDetection? - Relevant information for the image from the Internet.
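Since every field of this response record is optional, a caller has to guard each access. A minimal sketch in Ballerina (the helper name `logLabels` and its defaults are illustrative, not part of the connector):

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

// Hypothetical helper: check the 'error field first, then read the
// optional labelAnnotations array with safe defaults.
function logLabels(vision:GoogleCloudVisionV1p2beta1AnnotateImageResponse response) {
    if response?.'error is vision:Status {
        log:printError("annotation failed for this image");
        return;
    }
    vision:GoogleCloudVisionV1p2beta1EntityAnnotation[] labels = response?.labelAnnotations ?: [];
    foreach var label in labels {
        string description = label?.description ?: "(no description)";
        float score = label?.score ?: 0.0;
        log:printInfo(description + " score=" + score.toString());
    }
}
```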
googleapis.vision: GoogleCloudVisionV1p2beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
Fields
- outputConfig GoogleCloudVisionV1p2beta1OutputConfig? - The desired output location and metadata.
googleapis.vision: GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
Fields
- responses GoogleCloudVisionV1p2beta1AsyncAnnotateFileResponse[]? - The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
googleapis.vision: GoogleCloudVisionV1p2beta1Block
Logical element on the page.
Fields
- blockType string? - Detected block type (text, image, etc.) for this block.
- boundingBox GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results on the block. Range [0, 1].
- paragraphs GoogleCloudVisionV1p2beta1Paragraph[]? - List of paragraphs in this block (if this block is of type text).
- property GoogleCloudVisionV1p2beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
googleapis.vision: GoogleCloudVisionV1p2beta1BoundingPoly
A bounding polygon for the detected image annotation.
Fields
- normalizedVertices GoogleCloudVisionV1p2beta1NormalizedVertex[]? - The bounding polygon normalized vertices.
- vertices GoogleCloudVisionV1p2beta1Vertex[]? - The bounding polygon vertices.
googleapis.vision: GoogleCloudVisionV1p2beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Fields
- color Color? - Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor's `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.

Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
    float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
    return new java.awt.Color(
        protocolor.getRed(),
        protocolor.getGreen(),
        protocolor.getBlue(),
        alpha);
}

public static Color toProto(java.awt.Color color) {
    float red = (float) color.getRed();
    float green = (float) color.getGreen();
    float blue = (float) color.getBlue();
    float denominator = 255.0f;
    Color.Builder resultBuilder = Color.newBuilder()
        .setRed(red / denominator)
        .setGreen(green / denominator)
        .setBlue(blue / denominator);
    int alpha = color.getAlpha();
    if (alpha != 255) {
        resultBuilder.setAlpha(
            FloatValue.newBuilder()
                .setValue(((float) alpha) / denominator)
                .build());
    }
    return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objectivec
// ...
static UIColor* fromProto(Color* protocolor) {
    float red = [protocolor red];
    float green = [protocolor green];
    float blue = [protocolor blue];
    FloatValue* alpha_wrapper = [protocolor alpha];
    float alpha = 1.0;
    if (alpha_wrapper != nil) {
        alpha = [alpha_wrapper value];
    }
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
    CGFloat red, green, blue, alpha;
    if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
        return nil;
    }
    Color* result = [[Color alloc] init];
    [result setRed:red];
    [result setGreen:green];
    [result setBlue:blue];
    if (alpha <= 0.9999) {
        [result setAlpha:floatWrapperWithValue(alpha)];
    }
    [result autorelease];
    return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
    var redFrac = rgb_color.red || 0.0;
    var greenFrac = rgb_color.green || 0.0;
    var blueFrac = rgb_color.blue || 0.0;
    var red = Math.floor(redFrac * 255);
    var green = Math.floor(greenFrac * 255);
    var blue = Math.floor(blueFrac * 255);

    if (!('alpha' in rgb_color)) {
        return rgbToCssColor(red, green, blue);
    }

    var alphaFrac = rgb_color.alpha.value || 0.0;
    var rgbParams = [red, green, blue].join(',');
    return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
    var rgbNumber = new Number((red << 16) | (green << 8) | blue);
    var hexString = rgbNumber.toString(16);
    var missingZeros = 6 - hexString.length;
    var resultBuilder = ['#'];
    for (var i = 0; i < missingZeros; i++) {
        resultBuilder.push('0');
    }
    resultBuilder.push(hexString);
    return resultBuilder.join('');
};
// ...
```
- pixelFraction float? - The fraction of pixels the color occupies in the image. Value in range [0, 1].
- score float? - Image-specific score for this color. Value in range [0, 1].
googleapis.vision: GoogleCloudVisionV1p2beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
Fields
- boundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of this being a salient region. Range [0, 1].
- importanceFraction float? - Fraction of importance of this salient region with respect to the original image.
googleapis.vision: GoogleCloudVisionV1p2beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
Fields
- cropHints GoogleCloudVisionV1p2beta1CropHint[]? - Crop hint results.
googleapis.vision: GoogleCloudVisionV1p2beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Fields
- colors GoogleCloudVisionV1p2beta1ColorInfo[]? - RGB color values with their score and pixel fraction.
googleapis.vision: GoogleCloudVisionV1p2beta1EntityAnnotation
Set of detected entity features.
Fields
- boundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Deprecated. Use `score` instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
- description string? - Entity textual description, expressed in its `locale` language.
- locale string? - The language code for the locale in which the entity textual `description` is expressed.
- locations GoogleCloudVisionV1p2beta1LocationInfo[]? - The location information for the detected entity. Multiple `LocationInfo` elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
- mid string? - Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- properties GoogleCloudVisionV1p2beta1Property[]? - Some entities may have optional user-supplied `Property` (name/value) fields, such as a score or string that qualifies the entity.
- score float? - Overall score of the result. Range [0, 1].
- topicality float? - The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p2beta1FaceAnnotation
A face annotation object contains the results of face detection.
Fields
- angerLikelihood string? - Anger likelihood.
- blurredLikelihood string? - Blurred likelihood.
- boundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- detectionConfidence float? - Detection confidence. Range [0, 1].
- fdBoundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- headwearLikelihood string? - Headwear likelihood.
- joyLikelihood string? - Joy likelihood.
- landmarkingConfidence float? - Face landmarking confidence. Range [0, 1].
- landmarks GoogleCloudVisionV1p2beta1FaceAnnotationLandmark[]? - Detected face landmarks.
- panAngle float? - Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
- rollAngle float? - Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
- sorrowLikelihood string? - Sorrow likelihood.
- surpriseLikelihood string? - Surprise likelihood.
- tiltAngle float? - Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
- underExposedLikelihood string? - Under-exposed likelihood.
googleapis.vision: GoogleCloudVisionV1p2beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature).
Fields
- position GoogleCloudVisionV1p2beta1Position? - A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
- 'type string? - Face landmark type.
googleapis.vision: GoogleCloudVisionV1p2beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
Fields
- uri string? - Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by its corresponding input URI prefix. This field can either represent a gcs file prefix or gcs directory. In either case, the uri should be unique because in order to get all of the output files, you will need to do a wildcard gcs search on the uri prefix you provide. Examples: * File Prefix: gs://bucket-name/here/filenameprefix The output files will be created in gs://bucket-name/here/ and the names of the output files will begin with "filenameprefix". * Directory Prefix: gs://bucket-name/some/location/ The output files will be created in gs://bucket-name/some/location/ and the names of the output files could be anything because there was no filename prefix specified. If multiple outputs, each response is still AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
googleapis.vision: GoogleCloudVisionV1p2beta1GcsSource
The Google Cloud Storage location where the input will be read from.
Fields
- uri string? - Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
googleapis.vision: GoogleCloudVisionV1p2beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Fields
- pageNumber int? - If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
- uri string? - The URI of the file used to produce the image.
googleapis.vision: GoogleCloudVisionV1p2beta1ImageProperties
Stores image properties, such as dominant colors.
Fields
- dominantColors GoogleCloudVisionV1p2beta1DominantColorsAnnotation? - Set of dominant colors and their corresponding scores.
googleapis.vision: GoogleCloudVisionV1p2beta1InputConfig
The desired input location and metadata.
Fields
- content string? - File content, represented as a stream of bytes. Note: As with all `bytes` fields, protocol buffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
- gcsSource GoogleCloudVisionV1p2beta1GcsSource? - The Google Cloud Storage location where the input will be read from.
- mimeType string? - The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
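As a hedged illustration of these fields (the bucket and object names are hypothetical), an input config for a PDF stored in Google Cloud Storage could be built like this:

```ballerina
import ballerinax/googleapis.vision;

// Sketch only: GCS-backed input for a file annotation request.
// The object URI must point to a single object; wildcards are not supported.
vision:GoogleCloudVisionV1p2beta1InputConfig inputConfig = {
    gcsSource: {
        uri: "gs://my-bucket/docs/sample.pdf" // hypothetical object
    },
    mimeType: "application/pdf" // only application/pdf, image/tiff, image/gif
};
```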
googleapis.vision: GoogleCloudVisionV1p2beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
Fields
- boundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p2beta1LocationInfo
Detected entity location information.
Fields
- latLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
googleapis.vision: GoogleCloudVisionV1p2beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
googleapis.vision: GoogleCloudVisionV1p2beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
Fields
- createTime string? - The time when the batch request was received.
- state string? - Current state of the batch operation.
- updateTime string? - The time when the operation result was last updated.
googleapis.vision: GoogleCloudVisionV1p2beta1OutputConfig
The desired output location and metadata.
Fields
- batchSize int? - The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If `batch_size` = 20, then 5 json files each containing 20 response protos will be written under the prefix `gcs_destination.uri`. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
- gcsDestination GoogleCloudVisionV1p2beta1GcsDestination? - The Google Cloud Storage location where the output will be written to.
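As a hedged illustration of the batching arithmetic (the GCS prefix is hypothetical): with a batchSize of 20, a 100-page PDF produces 5 output JSON files of 20 responses each.

```ballerina
import ballerinax/googleapis.vision;

// Sketch only: sharded JSON output written under a GCS directory prefix.
vision:GoogleCloudVisionV1p2beta1OutputConfig outputConfig = {
    gcsDestination: {
        uri: "gs://my-bucket/vision-output/" // hypothetical directory prefix
    },
    batchSize: 20 // 100 pages / 20 per file = 5 output files
};
```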
googleapis.vision: GoogleCloudVisionV1p2beta1Page
Detected page from OCR.
Fields
- blocks GoogleCloudVisionV1p2beta1Block[]? - List of blocks of text, images etc on this page.
- confidence float? - Confidence of the OCR results on the page. Range [0, 1].
- height int? - Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
- property GoogleCloudVisionV1p2beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- width int? - Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
googleapis.vision: GoogleCloudVisionV1p2beta1Paragraph
Structural unit of text representing a number of words in certain order.
Fields
- boundingBox GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the paragraph. Range [0, 1].
- property GoogleCloudVisionV1p2beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- words GoogleCloudVisionV1p2beta1Word[]? - List of all words in this paragraph.
googleapis.vision: GoogleCloudVisionV1p2beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
- z float? - Z coordinate (or depth).
googleapis.vision: GoogleCloudVisionV1p2beta1Product
A Product contains ReferenceImages.
Fields
- description string? - User-provided metadata to be stored with this product. Must be at most 4096 characters long.
- displayName string? - The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
- name string? - The resource name of the product. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. This field is ignored when creating a product.
- productCategory string? - Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
- productLabels GoogleCloudVisionV1p2beta1ProductKeyValue[]? - Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Notice that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M, otherwise the product search pipeline will refuse to work for that ProductSet.
googleapis.vision: GoogleCloudVisionV1p2beta1ProductKeyValue
A product label represented as a key-value pair.
Fields
- 'key string? - The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
- value string? - The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
googleapis.vision: GoogleCloudVisionV1p2beta1ProductSearchResults
Results for a product search request.
Fields
- indexTime string? - Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
- productGroupedResults GoogleCloudVisionV1p2beta1ProductSearchResultsGroupedResult[]? - List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
- results GoogleCloudVisionV1p2beta1ProductSearchResultsResult[]? - List of results, one for each product match.
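A minimal sketch of consuming these results in Ballerina (the helper name `logProductMatches` and its fallback strings are illustrative, not part of the connector):

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

// Hypothetical helper: print each product match per grouped region,
// defaulting every optional field along the way.
function logProductMatches(vision:GoogleCloudVisionV1p2beta1ProductSearchResults results) {
    vision:GoogleCloudVisionV1p2beta1ProductSearchResultsGroupedResult[] groups =
        results?.productGroupedResults ?: [];
    foreach var grouped in groups {
        vision:GoogleCloudVisionV1p2beta1ProductSearchResultsResult[] matches =
            grouped?.results ?: [];
        foreach var m in matches {
            string name = m?.product?.displayName ?: "(unnamed product)";
            float score = m?.score ?: 0.0;
            log:printInfo(name + " score=" + score.toString());
        }
    }
}
```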
googleapis.vision: GoogleCloudVisionV1p2beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
Fields
- boundingPoly GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- objectAnnotations GoogleCloudVisionV1p2beta1ProductSearchResultsObjectAnnotation[]? - List of generic predictions for the object in the bounding box.
- results GoogleCloudVisionV1p2beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p2beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
Fields
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p2beta1ProductSearchResultsResult
Information about a product.
Fields
- image string? - The resource name of the image from the product that is the closest match to the query.
- product GoogleCloudVisionV1p2beta1Product? - A Product contains ReferenceImages.
- score float? - A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
googleapis.vision: GoogleCloudVisionV1p2beta1Property
A `Property` consists of a user-supplied name/value pair.
Fields
- name string? - Name of the property.
- uint64Value string? - Value of numeric properties.
- value string? - Value of the property.
googleapis.vision: GoogleCloudVisionV1p2beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Fields
- adult string? - Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
- medical string? - Likelihood that this is a medical image.
- racy string? - Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
- spoof string? - Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
- violence string? - Likelihood that this image contains violent content.
googleapis.vision: GoogleCloudVisionV1p2beta1Symbol
A single symbol representation.
Fields
- boundingBox GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the symbol. Range [0, 1].
- property GoogleCloudVisionV1p2beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- text string? - The actual UTF-8 representation of the symbol.
googleapis.vision: GoogleCloudVisionV1p2beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Refer to the TextAnnotation.TextProperty message definition below for more detail.
Fields
- pages GoogleCloudVisionV1p2beta1Page[]? - List of pages detected by OCR.
- text string? - UTF-8 text detected on the pages.
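The page hierarchy can be traversed in Ballerina. A hedged sketch (the helper name `collectWords` is hypothetical) that rebuilds each word from its symbols:

```ballerina
import ballerinax/googleapis.vision;

// Hypothetical helper: walk TextAnnotation -> Page -> Block -> Paragraph
// -> Word -> Symbol and return one string per detected word.
function collectWords(vision:GoogleCloudVisionV1p2beta1TextAnnotation annotation) returns string[] {
    string[] words = [];
    vision:GoogleCloudVisionV1p2beta1Page[] pages = annotation?.pages ?: [];
    foreach var page in pages {
        vision:GoogleCloudVisionV1p2beta1Block[] blocks = page?.blocks ?: [];
        foreach var block in blocks {
            vision:GoogleCloudVisionV1p2beta1Paragraph[] paragraphs = block?.paragraphs ?: [];
            foreach var paragraph in paragraphs {
                vision:GoogleCloudVisionV1p2beta1Word[] paragraphWords = paragraph?.words ?: [];
                foreach var word in paragraphWords {
                    string text = "";
                    vision:GoogleCloudVisionV1p2beta1Symbol[] symbols = word?.symbols ?: [];
                    foreach var symbol in symbols {
                        text += symbol?.text ?: "";
                    }
                    words.push(text);
                }
            }
        }
    }
    return words;
}
```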
googleapis.vision: GoogleCloudVisionV1p2beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
Fields
- isPrefix boolean? - True if break prepends the element.
- 'type string? - Detected break type.
googleapis.vision: GoogleCloudVisionV1p2beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
Fields
- confidence float? - Confidence of detected language. Range [0, 1].
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p2beta1TextAnnotationTextProperty
Additional information detected on the structural component.
Fields
- detectedBreak GoogleCloudVisionV1p2beta1TextAnnotationDetectedBreak? - Detected start or end of a structural component.
- detectedLanguages GoogleCloudVisionV1p2beta1TextAnnotationDetectedLanguage[]? - A list of detected languages together with confidence.
googleapis.vision: GoogleCloudVisionV1p2beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Fields
- x int? - X coordinate.
- y int? - Y coordinate.
googleapis.vision: GoogleCloudVisionV1p2beta1WebDetection
Relevant information for the image from the Internet.
Fields
- bestGuessLabels GoogleCloudVisionV1p2beta1WebDetectionWebLabel[]? - The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
- fullMatchingImages GoogleCloudVisionV1p2beta1WebDetectionWebImage[]? - Fully matching images from the Internet. Can include resized copies of the query image.
- pagesWithMatchingImages GoogleCloudVisionV1p2beta1WebDetectionWebPage[]? - Web pages containing the matching images from the Internet.
- partialMatchingImages GoogleCloudVisionV1p2beta1WebDetectionWebImage[]? - Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- visuallySimilarImages GoogleCloudVisionV1p2beta1WebDetectionWebImage[]? - The visually similar image results.
- webEntities GoogleCloudVisionV1p2beta1WebDetectionWebEntity[]? - Deduced entities from similar images on the Internet.
googleapis.vision: GoogleCloudVisionV1p2beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
Fields
- description string? - Canonical description of the entity, in English.
- entityId string? - Opaque entity ID.
- score float? - Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
googleapis.vision: GoogleCloudVisionV1p2beta1WebDetectionWebImage
Metadata for online images.
Fields
- score float? - (Deprecated) Overall relevancy score for the image.
- url string? - The result image URL.
googleapis.vision: GoogleCloudVisionV1p2beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
Fields
- label string? - Label for extra metadata.
- languageCode string? - The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p2beta1WebDetectionWebPage
Metadata for web pages.
Fields
- fullMatchingImages GoogleCloudVisionV1p2beta1WebDetectionWebImage[]? - Fully matching images on the page. Can include resized copies of the query image.
- pageTitle string? - Title for the web page; may contain HTML markup.
- partialMatchingImages GoogleCloudVisionV1p2beta1WebDetectionWebImage[]? - Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- score float? - (Deprecated) Overall relevancy score for the web page.
- url string? - The result web page URL.
googleapis.vision: GoogleCloudVisionV1p2beta1Word
A word representation.
Fields
- boundingBox GoogleCloudVisionV1p2beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the word. Range [0, 1].
- property GoogleCloudVisionV1p2beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- symbols GoogleCloudVisionV1p2beta1Symbol[]? - List of symbols in the word. The order of the symbols follows the natural reading order.
googleapis.vision: GoogleCloudVisionV1p3beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
Fields
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- inputConfig GoogleCloudVisionV1p3beta1InputConfig? - The desired input location and metadata.
- responses GoogleCloudVisionV1p3beta1AnnotateImageResponse[]? - Individual responses to images found within the file. This field will be empty if the `error` field is set.
- totalPages int? - This field gives the total number of pages in the file.
googleapis.vision: GoogleCloudVisionV1p3beta1AnnotateImageResponse
Response to an image annotation request.
Fields
- context GoogleCloudVisionV1p3beta1ImageAnnotationContext? - If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
- cropHintsAnnotation GoogleCloudVisionV1p3beta1CropHintsAnnotation? - Set of crop hints that are used to generate new crops when serving images.
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- faceAnnotations GoogleCloudVisionV1p3beta1FaceAnnotation[]? - If present, face detection has completed successfully.
- fullTextAnnotation GoogleCloudVisionV1p3beta1TextAnnotation? - TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Refer to the TextAnnotation.TextProperty message definition below for more detail.
- imagePropertiesAnnotation GoogleCloudVisionV1p3beta1ImageProperties? - Stores image properties, such as dominant colors.
- labelAnnotations GoogleCloudVisionV1p3beta1EntityAnnotation[]? - If present, label detection has completed successfully.
- landmarkAnnotations GoogleCloudVisionV1p3beta1EntityAnnotation[]? - If present, landmark detection has completed successfully.
- localizedObjectAnnotations GoogleCloudVisionV1p3beta1LocalizedObjectAnnotation[]? - If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
- logoAnnotations GoogleCloudVisionV1p3beta1EntityAnnotation[]? - If present, logo detection has completed successfully.
- productSearchResults GoogleCloudVisionV1p3beta1ProductSearchResults? - Results for a product search request.
- safeSearchAnnotation GoogleCloudVisionV1p3beta1SafeSearchAnnotation? - Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
- textAnnotations GoogleCloudVisionV1p3beta1EntityAnnotation[]? - If present, text (OCR) detection has completed successfully.
- webDetection GoogleCloudVisionV1p3beta1WebDetection? - Relevant information for the image from the Internet.
googleapis.vision: GoogleCloudVisionV1p3beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
Fields
- outputConfig GoogleCloudVisionV1p3beta1OutputConfig? - The desired output location and metadata.
googleapis.vision: GoogleCloudVisionV1p3beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
Fields
- responses GoogleCloudVisionV1p3beta1AsyncAnnotateFileResponse[]? - The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
googleapis.vision: GoogleCloudVisionV1p3beta1BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
Fields
- endTime string? - The time when the batch request is finished and google.longrunning.Operation.done is set to true.
- state string? - The current state of the batch operation.
- submitTime string? - The time when the batch request was submitted to the server.
googleapis.vision: GoogleCloudVisionV1p3beta1Block
Logical element on the page.
Fields
- blockType string? - Detected block type (text, image etc) for this block.
- boundingBox GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results on the block. Range [0, 1].
- paragraphs GoogleCloudVisionV1p3beta1Paragraph[]? - List of paragraphs in this block (if this block is of type text).
- property GoogleCloudVisionV1p3beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
googleapis.vision: GoogleCloudVisionV1p3beta1BoundingPoly
A bounding polygon for the detected image annotation.
Fields
- normalizedVertices GoogleCloudVisionV1p3beta1NormalizedVertex[]? - The bounding polygon normalized vertices.
- vertices GoogleCloudVisionV1p3beta1Vertex[]? - The bounding polygon vertices.
googleapis.vision: GoogleCloudVisionV1p3beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Fields
- color Color? - Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.

Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
  float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
  return new java.awt.Color(
      protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
  float red = (float) color.getRed();
  float green = (float) color.getGreen();
  float blue = (float) color.getBlue();
  float denominator = 255.0f;
  Color.Builder resultBuilder =
      Color.newBuilder()
          .setRed(red / denominator)
          .setGreen(green / denominator)
          .setBlue(blue / denominator);
  int alpha = color.getAlpha();
  if (alpha != 255) {
    resultBuilder.setAlpha(
        FloatValue.newBuilder().setValue(((float) alpha) / denominator).build());
  }
  return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objc
// ...
static UIColor* fromProto(Color* protocolor) {
  float red = [protocolor red];
  float green = [protocolor green];
  float blue = [protocolor blue];
  FloatValue* alpha_wrapper = [protocolor alpha];
  float alpha = 1.0;
  if (alpha_wrapper != nil) {
    alpha = [alpha_wrapper value];
  }
  return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
  CGFloat red, green, blue, alpha;
  if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
    return nil;
  }
  Color* result = [[Color alloc] init];
  [result setRed:red];
  [result setGreen:green];
  [result setBlue:blue];
  if (alpha <= 0.9999) {
    [result setAlpha:floatWrapperWithValue(alpha)];
  }
  [result autorelease];
  return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
  var redFrac = rgb_color.red || 0.0;
  var greenFrac = rgb_color.green || 0.0;
  var blueFrac = rgb_color.blue || 0.0;
  var red = Math.floor(redFrac * 255);
  var green = Math.floor(greenFrac * 255);
  var blue = Math.floor(blueFrac * 255);
  if (!('alpha' in rgb_color)) {
    return rgbToCssColor(red, green, blue);
  }
  var alphaFrac = rgb_color.alpha.value || 0.0;
  var rgbParams = [red, green, blue].join(',');
  return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
  var rgbNumber = new Number((red << 16) | (green << 8) | blue);
  var hexString = rgbNumber.toString(16);
  var missingZeros = 6 - hexString.length;
  var resultBuilder = ['#'];
  for (var i = 0; i < missingZeros; i++) {
    resultBuilder.push('0');
  }
  resultBuilder.push(hexString);
  return resultBuilder.join('');
};
// ...
```
- pixelFraction float? - The fraction of pixels the color occupies in the image. Value in range [0, 1].
- score float? - Image-specific score for this color. Value in range [0, 1].
googleapis.vision: GoogleCloudVisionV1p3beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
Fields
- boundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of this being a salient region. Range [0, 1].
- importanceFraction float? - Fraction of importance of this salient region with respect to the original image.
googleapis.vision: GoogleCloudVisionV1p3beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
Fields
- cropHints GoogleCloudVisionV1p3beta1CropHint[]? - Crop hint results.
googleapis.vision: GoogleCloudVisionV1p3beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Fields
- colors GoogleCloudVisionV1p3beta1ColorInfo[]? - RGB color values with their score and pixel fraction.
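The colors list pairs normalized RGB values with a score and a pixel fraction. As a language-agnostic sketch (written in Python over JSON-like dictionaries, with the field names taken from the records above), the highest-scoring dominant color can be picked and rendered as a hex string:

```python
def to_hex(color):
    """Convert normalized RGB floats (range [0, 1]) to a #rrggbb hex string."""
    r = int(color.get("red", 0.0) * 255)
    g = int(color.get("green", 0.0) * 255)
    b = int(color.get("blue", 0.0) * 255)
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

def dominant_color(annotation):
    """Return the hex string of the highest-scoring ColorInfo entry,
    or None when the colors list is absent or empty."""
    colors = annotation.get("colors", [])
    if not colors:
        return None
    best = max(colors, key=lambda c: c.get("score", 0.0))
    return to_hex(best.get("color", {}))
```

The same logic applies to the connector's typed records; the dictionary access here merely mirrors the JSON payload shape.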
googleapis.vision: GoogleCloudVisionV1p3beta1EntityAnnotation
Set of detected entity features.
Fields
- boundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Deprecated. Use score instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
- description string? - Entity textual description, expressed in its locale language.
- locale string? - The language code for the locale in which the entity textual description is expressed.
- locations GoogleCloudVisionV1p3beta1LocationInfo[]? - The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
- mid string? - Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- properties GoogleCloudVisionV1p3beta1Property[]? - Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
- score float? - Overall score of the result. Range [0, 1].
- topicality float? - The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p3beta1FaceAnnotation
A face annotation object contains the results of face detection.
Fields
- angerLikelihood string? - Anger likelihood.
- blurredLikelihood string? - Blurred likelihood.
- boundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- detectionConfidence float? - Detection confidence. Range [0, 1].
- fdBoundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- headwearLikelihood string? - Headwear likelihood.
- joyLikelihood string? - Joy likelihood.
- landmarkingConfidence float? - Face landmarking confidence. Range [0, 1].
- landmarks GoogleCloudVisionV1p3beta1FaceAnnotationLandmark[]? - Detected face landmarks.
- panAngle float? - Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
- rollAngle float? - Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
- sorrowLikelihood string? - Sorrow likelihood.
- surpriseLikelihood string? - Surprise likelihood.
- tiltAngle float? - Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
- underExposedLikelihood string? - Under-exposed likelihood.
googleapis.vision: GoogleCloudVisionV1p3beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature).
Fields
- position GoogleCloudVisionV1p3beta1Position? - A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
- 'type string? - Face landmark type.
googleapis.vision: GoogleCloudVisionV1p3beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
Fields
- uri string? - Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by their corresponding input URI prefix. This field can represent either a Google Cloud Storage file prefix or a directory. In either case, the uri should be unique, because to get all of the output files you will need to do a wildcard Google Cloud Storage search on the uri prefix you provide. Examples: * File prefix: gs://bucket-name/here/filenameprefix - the output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix". * Directory prefix: gs://bucket-name/some/location/ - the output files will be created in gs://bucket-name/some/location/ and their names could be anything, because no filename prefix was specified. If there are multiple outputs, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
googleapis.vision: GoogleCloudVisionV1p3beta1GcsSource
The Google Cloud Storage location where the input will be read from.
Fields
- uri string? - Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
googleapis.vision: GoogleCloudVisionV1p3beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Fields
- pageNumber int? - If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
- uri string? - The URI of the file used to produce the image.
googleapis.vision: GoogleCloudVisionV1p3beta1ImageProperties
Stores image properties, such as dominant colors.
Fields
- dominantColors GoogleCloudVisionV1p3beta1DominantColorsAnnotation? - Set of dominant colors and their corresponding scores.
googleapis.vision: GoogleCloudVisionV1p3beta1ImportProductSetsResponse
Response message for the ImportProductSets
method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
Fields
- referenceImages GoogleCloudVisionV1p3beta1ReferenceImage[]? - The list of reference_images that are imported successfully.
- statuses Status[]? - The rpc status for each ImportProductSet request, including both successes and errors. The number of statuses here matches the number of lines in the csv file, and statuses[i] stores the success or failure status of processing the i-th line of the csv, starting from line 0.
googleapis.vision: GoogleCloudVisionV1p3beta1InputConfig
The desired input location and metadata.
Fields
- content string? - File content, represented as a stream of bytes. Note: As with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
- gcsSource GoogleCloudVisionV1p3beta1GcsSource? - The Google Cloud Storage location where the input will be read from.
- mimeType string? - The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
googleapis.vision: GoogleCloudVisionV1p3beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
Fields
- boundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its language_code language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p3beta1LocationInfo
Detected entity location information.
Fields
- latLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
googleapis.vision: GoogleCloudVisionV1p3beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
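Because normalized vertices range from 0 to 1 relative to the original image, they must be scaled by the image dimensions before being drawn. A minimal sketch (in Python, over a JSON-like vertex dictionary):

```python
def denormalize(vertex, width, height):
    """Map a NormalizedVertex (coordinates in [0, 1]) onto integer pixel
    coordinates for an image of the given width and height."""
    return (int(vertex["x"] * width), int(vertex["y"] * height))
```

Plain Vertex records, by contrast, are already in pixel scale and need no conversion.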
googleapis.vision: GoogleCloudVisionV1p3beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
Fields
- createTime string? - The time when the batch request was received.
- state string? - Current state of the batch operation.
- updateTime string? - The time when the operation result was last updated.
googleapis.vision: GoogleCloudVisionV1p3beta1OutputConfig
The desired output location and metadata.
Fields
- batchSize int? - The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one PDF file with 100 pages, 100 response protos will be generated. If batch_size = 20, then 5 JSON files, each containing 20 response protos, will be written under the prefix gcs_destination.uri. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
- gcsDestination GoogleCloudVisionV1p3beta1GcsDestination? - The Google Cloud Storage location where the output will be written to.
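The sharding arithmetic implied by batch_size is simple division rounded up: the number of output files is the response count divided by batch_size, with a final partially filled file when the division is not exact. A small sketch of that calculation:

```python
import math

def shard_count(total_responses, batch_size=20):
    """Number of output JSON files written to Cloud Storage when
    total_responses response protos are split into groups of batch_size.
    The documented valid range for batch_size is [1, 100]; 20 is the default."""
    if not 1 <= batch_size <= 100:
        raise ValueError("batch_size must be in [1, 100]")
    return math.ceil(total_responses / batch_size)
```

For the 100-page PDF example above, shard_count(100, 20) gives the 5 output files the field description mentions.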
googleapis.vision: GoogleCloudVisionV1p3beta1Page
Detected page from OCR.
Fields
- blocks GoogleCloudVisionV1p3beta1Block[]? - List of blocks of text, images etc on this page.
- confidence float? - Confidence of the OCR results on the page. Range [0, 1].
- height int? - Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
- property GoogleCloudVisionV1p3beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- width int? - Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
googleapis.vision: GoogleCloudVisionV1p3beta1Paragraph
Structural unit of text representing a number of words in certain order.
Fields
- boundingBox GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the paragraph. Range [0, 1].
- property GoogleCloudVisionV1p3beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- words GoogleCloudVisionV1p3beta1Word[]? - List of all words in this paragraph.
googleapis.vision: GoogleCloudVisionV1p3beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
- z float? - Z coordinate (or depth).
googleapis.vision: GoogleCloudVisionV1p3beta1Product
A Product contains ReferenceImages.
Fields
- description string? - User-provided metadata to be stored with this product. Must be at most 4096 characters long.
- displayName string? - The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
- name string? - The resource name of the product. Format is: projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID. This field is ignored when creating a product.
- productCategory string? - Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
- productLabels GoogleCloudVisionV1p3beta1ProductKeyValue[]? - Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction, which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Note that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M; otherwise, the product search pipeline will refuse to work for that ProductSet.
googleapis.vision: GoogleCloudVisionV1p3beta1ProductKeyValue
A product label represented as a key-value pair.
Fields
- 'key string? - The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
- value string? - The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
googleapis.vision: GoogleCloudVisionV1p3beta1ProductSearchResults
Results for a product search request.
Fields
- indexTime string? - Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
- productGroupedResults GoogleCloudVisionV1p3beta1ProductSearchResultsGroupedResult[]? - List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
- results GoogleCloudVisionV1p3beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p3beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
Fields
- boundingPoly GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- objectAnnotations GoogleCloudVisionV1p3beta1ProductSearchResultsObjectAnnotation[]? - List of generic predictions for the object in the bounding box.
- results GoogleCloudVisionV1p3beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p3beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
Fields
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its language_code language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p3beta1ProductSearchResultsResult
Information about a product.
Fields
- image string? - The resource name of the image from the product that is the closest match to the query.
- product GoogleCloudVisionV1p3beta1Product? - A Product contains ReferenceImages.
- score float? - A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
googleapis.vision: GoogleCloudVisionV1p3beta1Property
A Property consists of a user-supplied name/value pair.
Fields
- name string? - Name of the property.
- uint64Value string? - Value of numeric properties.
- value string? - Value of the property.
googleapis.vision: GoogleCloudVisionV1p3beta1ReferenceImage
A ReferenceImage represents a product image and its associated metadata, such as bounding boxes.
Fields
- boundingPolys GoogleCloudVisionV1p3beta1BoundingPoly[]? - Optional. Bounding polygons around the areas of interest in the reference image. If this field is empty, the system will try to detect regions of interest. At most 10 bounding polygons will be used. The provided shape is converted into a non-rotated rectangle. Once converted, the small edge of the rectangle must be greater than or equal to 300 pixels. The aspect ratio must be 1:4 or less (i.e. 1:3 is ok; 1:5 is not).
- name string? - The resource name of the reference image. Format is: projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID/referenceImages/IMAGE_ID. This field is ignored when creating a reference image.
- uri string? - Required. The Google Cloud Storage URI of the reference image. The URI must start with gs://.
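The boundingPolys constraints above (provided shapes are converted to non-rotated rectangles whose smaller edge must be at least 300 pixels, with an aspect ratio of 1:4 or less) can be pre-checked client-side before submitting a reference image. A minimal sketch of that validation, assuming the polygon has already been reduced to its rectangle dimensions:

```python
def valid_reference_rect(width, height):
    """Check the documented constraints on a reference-image bounding
    rectangle: the smaller edge must be >= 300 pixels and the aspect
    ratio must be 1:4 or less (so 1:3 passes, 1:5 fails)."""
    small, large = min(width, height), max(width, height)
    return small >= 300 and large <= 4 * small
```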
googleapis.vision: GoogleCloudVisionV1p3beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Fields
- adult string? - Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
- medical string? - Likelihood that this is a medical image.
- racy string? - Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
- spoof string? - Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
- violence string? - Likelihood that this image contains violent content.
googleapis.vision: GoogleCloudVisionV1p3beta1Symbol
A single symbol representation.
Fields
- boundingBox GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the symbol. Range [0, 1].
- property GoogleCloudVisionV1p3beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- text string? - The actual UTF-8 representation of the symbol.
googleapis.vision: GoogleCloudVisionV1p3beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
Fields
- pages GoogleCloudVisionV1p3beta1Page[]? - List of pages detected by OCR.
- text string? - UTF-8 text detected on the pages.
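The Page -> Block -> Paragraph -> Word -> Symbol hierarchy can be walked to rebuild text with custom formatting (the top-level text field already holds the flat concatenation). A sketch in Python over the JSON payload shape, joining symbols and separating words with spaces:

```python
def collect_text(annotation):
    """Rebuild text by walking TextAnnotation -> Page -> Block ->
    Paragraph -> Word -> Symbol, concatenating each symbol's text
    and inserting a space at every word boundary."""
    out = []
    for page in annotation.get("pages", []):
        for block in page.get("blocks", []):
            for para in block.get("paragraphs", []):
                for word in para.get("words", []):
                    for sym in word.get("symbols", []):
                        out.append(sym.get("text", ""))
                    out.append(" ")  # word boundary
    return "".join(out).strip()
```

A production version would also inspect each component's property.detectedBreak to reproduce line and paragraph breaks instead of plain spaces.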
googleapis.vision: GoogleCloudVisionV1p3beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
Fields
- isPrefix boolean? - True if break prepends the element.
- 'type string? - Detected break type.
googleapis.vision: GoogleCloudVisionV1p3beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
Fields
- confidence float? - Confidence of detected language. Range [0, 1].
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p3beta1TextAnnotationTextProperty
Additional information detected on the structural component.
Fields
- detectedBreak GoogleCloudVisionV1p3beta1TextAnnotationDetectedBreak? - Detected start or end of a structural component.
- detectedLanguages GoogleCloudVisionV1p3beta1TextAnnotationDetectedLanguage[]? - A list of detected languages together with confidence.
googleapis.vision: GoogleCloudVisionV1p3beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Fields
- x int? - X coordinate.
- y int? - Y coordinate.
googleapis.vision: GoogleCloudVisionV1p3beta1WebDetection
Relevant information for the image from the Internet.
Fields
- bestGuessLabels GoogleCloudVisionV1p3beta1WebDetectionWebLabel[]? - The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
- fullMatchingImages GoogleCloudVisionV1p3beta1WebDetectionWebImage[]? - Fully matching images from the Internet. Can include resized copies of the query image.
- pagesWithMatchingImages GoogleCloudVisionV1p3beta1WebDetectionWebPage[]? - Web pages containing the matching images from the Internet.
- partialMatchingImages GoogleCloudVisionV1p3beta1WebDetectionWebImage[]? - Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- visuallySimilarImages GoogleCloudVisionV1p3beta1WebDetectionWebImage[]? - The visually similar image results.
- webEntities GoogleCloudVisionV1p3beta1WebDetectionWebEntity[]? - Deduced entities from similar images on the Internet.
googleapis.vision: GoogleCloudVisionV1p3beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
Fields
- description string? - Canonical description of the entity, in English.
- entityId string? - Opaque entity ID.
- score float? - Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
googleapis.vision: GoogleCloudVisionV1p3beta1WebDetectionWebImage
Metadata for online images.
Fields
- score float? - (Deprecated) Overall relevancy score for the image.
- url string? - The result image URL.
googleapis.vision: GoogleCloudVisionV1p3beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
Fields
- label string? - Label for extra metadata.
- languageCode string? - The BCP-47 language code for label, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p3beta1WebDetectionWebPage
Metadata for web pages.
Fields
- fullMatchingImages GoogleCloudVisionV1p3beta1WebDetectionWebImage[]? - Fully matching images on the page. Can include resized copies of the query image.
- pageTitle string? - Title for the web page; may contain HTML markup.
- partialMatchingImages GoogleCloudVisionV1p3beta1WebDetectionWebImage[]? - Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- score float? - (Deprecated) Overall relevancy score for the web page.
- url string? - The result web page URL.
googleapis.vision: GoogleCloudVisionV1p3beta1Word
A word representation.
Fields
- boundingBox GoogleCloudVisionV1p3beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the word. Range [0, 1].
- property GoogleCloudVisionV1p3beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- symbols GoogleCloudVisionV1p3beta1Symbol[]? - List of symbols in the word. The order of the symbols follows the natural reading order.
googleapis.vision: GoogleCloudVisionV1p4beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
Fields
- 'error Status? - The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- inputConfig GoogleCloudVisionV1p4beta1InputConfig? - The desired input location and metadata.
- responses GoogleCloudVisionV1p4beta1AnnotateImageResponse[]? - Individual responses to images found within the file. This field will be empty if the error field is set.
- totalPages int? - This field gives the total number of pages in the file.
googleapis.vision: GoogleCloudVisionV1p4beta1AnnotateImageResponse
Response to an image annotation request.
Fields
- context GoogleCloudVisionV1p4beta1ImageAnnotationContext? - If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
- cropHintsAnnotation GoogleCloudVisionV1p4beta1CropHintsAnnotation? - Set of crop hints that are used to generate new crops when serving images.
- 'error Status? - The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- faceAnnotations GoogleCloudVisionV1p4beta1FaceAnnotation[]? - If present, face detection has completed successfully.
- fullTextAnnotation GoogleCloudVisionV1p4beta1TextAnnotation? - TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
- imagePropertiesAnnotation GoogleCloudVisionV1p4beta1ImageProperties? - Stores image properties, such as dominant colors.
- labelAnnotations GoogleCloudVisionV1p4beta1EntityAnnotation[]? - If present, label detection has completed successfully.
- landmarkAnnotations GoogleCloudVisionV1p4beta1EntityAnnotation[]? - If present, landmark detection has completed successfully.
- localizedObjectAnnotations GoogleCloudVisionV1p4beta1LocalizedObjectAnnotation[]? - If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
- logoAnnotations GoogleCloudVisionV1p4beta1EntityAnnotation[]? - If present, logo detection has completed successfully.
- productSearchResults GoogleCloudVisionV1p4beta1ProductSearchResults? - Results for a product search request.
- safeSearchAnnotation GoogleCloudVisionV1p4beta1SafeSearchAnnotation? - Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
- textAnnotations GoogleCloudVisionV1p4beta1EntityAnnotation[]? - If present, text (OCR) detection has completed successfully.
- webDetection GoogleCloudVisionV1p4beta1WebDetection? - Relevant information for the image from the Internet.
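For illustration, here is a minimal sketch of handling per-image results from a decoded JSON response of this shape. Python is used as a language-agnostic sketch; the `summarize` helper and the sample values are ours, not part of the connector. The key point follows from the field definitions above: a populated `error` field carries a `Status`, while annotation fields are only present when their detection succeeded.

```python
# Hypothetical decoded JSON for a batch annotation response; field names
# follow the record definitions above, the values are made up.
batch_response = {
    "responses": [
        {"labelAnnotations": [{"description": "dog", "score": 0.98}]},
        {"error": {"code": 3, "message": "Bad image data."}},
    ]
}

def summarize(responses):
    """Split per-image responses into successes and Status-style errors."""
    ok, failed = [], []
    for r in responses:
        # The presence of `error` marks a failed annotation for that image.
        (failed if "error" in r else ok).append(r)
    return ok, failed

ok, failed = summarize(batch_response["responses"])
```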
googleapis.vision: GoogleCloudVisionV1p4beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
Fields
- outputConfig GoogleCloudVisionV1p4beta1OutputConfig? - The desired output location and metadata.
googleapis.vision: GoogleCloudVisionV1p4beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
Fields
- responses GoogleCloudVisionV1p4beta1AsyncAnnotateFileResponse[]? - The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
googleapis.vision: GoogleCloudVisionV1p4beta1AsyncBatchAnnotateImagesResponse
Response to an async batch image annotation request.
Fields
- outputConfig GoogleCloudVisionV1p4beta1OutputConfig? - The desired output location and metadata.
googleapis.vision: GoogleCloudVisionV1p4beta1BatchAnnotateFilesResponse
A list of file annotation responses.
Fields
- responses GoogleCloudVisionV1p4beta1AnnotateFileResponse[]? - The list of file annotation responses, each response corresponding to each AnnotateFileRequest in BatchAnnotateFilesRequest.
googleapis.vision: GoogleCloudVisionV1p4beta1BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Fields
- endTime string? - The time when the batch request is finished and google.longrunning.Operation.done is set to true.
- state string? - The current state of the batch operation.
- submitTime string? - The time when the batch request was submitted to the server.
googleapis.vision: GoogleCloudVisionV1p4beta1Block
Logical element on the page.
Fields
- blockType string? - Detected block type (text, image etc) for this block.
- boundingBox GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results on the block. Range [0, 1].
- paragraphs GoogleCloudVisionV1p4beta1Paragraph[]? - List of paragraphs in this block (if this block is of type text).
- property GoogleCloudVisionV1p4beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
googleapis.vision: GoogleCloudVisionV1p4beta1BoundingPoly
A bounding polygon for the detected image annotation.
Fields
- normalizedVertices GoogleCloudVisionV1p4beta1NormalizedVertex[]? - The bounding polygon normalized vertices.
- vertices GoogleCloudVisionV1p4beta1Vertex[]? - The bounding polygon vertices.
googleapis.vision: GoogleCloudVisionV1p4beta1Celebrity
A Celebrity is a group of Faces with an identity.
Fields
- description string? - The Celebrity's description.
- displayName string? - The Celebrity's display name.
- name string? - The resource name of the preloaded Celebrity. Has the format `builtin/{mid}`.
googleapis.vision: GoogleCloudVisionV1p4beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Fields
- color Color? - Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor's `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0f; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha <= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!('alpha' in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(','); return ['rgba(', rgbParams, ',', alphaFrac, ')'].join(''); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red << 16) | (green << 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = ['#']; for (var i = 0; i < missingZeros; i++) { resultBuilder.push('0'); } resultBuilder.push(hexString); return resultBuilder.join(''); }; // ...
- pixelFraction float? - The fraction of pixels the color occupies in the image. Value in range [0, 1].
- score float? - Image-specific score for this color. Value in range [0, 1].
googleapis.vision: GoogleCloudVisionV1p4beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
Fields
- boundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of this being a salient region. Range [0, 1].
- importanceFraction float? - Fraction of importance of this salient region with respect to the original image.
googleapis.vision: GoogleCloudVisionV1p4beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
Fields
- cropHints GoogleCloudVisionV1p4beta1CropHint[]? - Crop hint results.
googleapis.vision: GoogleCloudVisionV1p4beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Fields
- colors GoogleCloudVisionV1p4beta1ColorInfo[]? - RGB color values with their score and pixel fraction.
googleapis.vision: GoogleCloudVisionV1p4beta1EntityAnnotation
Set of detected entity features.
Fields
- boundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Deprecated. Use `score` instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
- description string? - Entity textual description, expressed in its `locale` language.
- locale string? - The language code for the locale in which the entity textual `description` is expressed.
- locations GoogleCloudVisionV1p4beta1LocationInfo[]? - The location information for the detected entity. Multiple `LocationInfo` elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
- mid string? - Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- properties GoogleCloudVisionV1p4beta1Property[]? - Some entities may have optional user-supplied `Property` (name/value) fields, such as a score or string that qualifies the entity.
- score float? - Overall score of the result. Range [0, 1].
- topicality float? - The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p4beta1FaceAnnotation
A face annotation object contains the results of face detection.
Fields
- angerLikelihood string? - Anger likelihood.
- blurredLikelihood string? - Blurred likelihood.
- boundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- detectionConfidence float? - Detection confidence. Range [0, 1].
- fdBoundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- headwearLikelihood string? - Headwear likelihood.
- joyLikelihood string? - Joy likelihood.
- landmarkingConfidence float? - Face landmarking confidence. Range [0, 1].
- landmarks GoogleCloudVisionV1p4beta1FaceAnnotationLandmark[]? - Detected face landmarks.
- panAngle float? - Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
- recognitionResult GoogleCloudVisionV1p4beta1FaceRecognitionResult[]? - Additional recognition information. Only computed if image_context.face_recognition_params is provided, and a match is found to a Celebrity in the input CelebritySet. This field is sorted in order of decreasing confidence values.
- rollAngle float? - Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
- sorrowLikelihood string? - Sorrow likelihood.
- surpriseLikelihood string? - Surprise likelihood.
- tiltAngle float? - Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
- underExposedLikelihood string? - Under-exposed likelihood.
googleapis.vision: GoogleCloudVisionV1p4beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature).
Fields
- position GoogleCloudVisionV1p4beta1Position? - A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
- 'type string? - Face landmark type.
googleapis.vision: GoogleCloudVisionV1p4beta1FaceRecognitionResult
Information about a face's identity.
Fields
- celebrity GoogleCloudVisionV1p4beta1Celebrity? - A Celebrity is a group of Faces with an identity.
- confidence float? - Recognition confidence. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p4beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
Fields
- uri string? - Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by its corresponding input URI prefix. This field can either represent a gcs file prefix or gcs directory. In either case, the uri should be unique because in order to get all of the output files, you will need to do a wildcard gcs search on the uri prefix you provide. Examples: * File Prefix: gs://bucket-name/here/filenameprefix The output files will be created in gs://bucket-name/here/ and the names of the output files will begin with "filenameprefix". * Directory Prefix: gs://bucket-name/some/location/ The output files will be created in gs://bucket-name/some/location/ and the names of the output files could be anything because there was no filename prefix specified. If multiple outputs, each response is still AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
googleapis.vision: GoogleCloudVisionV1p4beta1GcsSource
The Google Cloud Storage location where the input will be read from.
Fields
- uri string? - Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
googleapis.vision: GoogleCloudVisionV1p4beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Fields
- pageNumber int? - If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
- uri string? - The URI of the file used to produce the image.
googleapis.vision: GoogleCloudVisionV1p4beta1ImageProperties
Stores image properties, such as dominant colors.
Fields
- dominantColors GoogleCloudVisionV1p4beta1DominantColorsAnnotation? - Set of dominant colors and their corresponding scores.
googleapis.vision: GoogleCloudVisionV1p4beta1ImportProductSetsResponse
Response message for the ImportProductSets
method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
Fields
- referenceImages GoogleCloudVisionV1p4beta1ReferenceImage[]? - The list of reference_images that are imported successfully.
- statuses Status[]? - The rpc status for each ImportProductSet request, including both successes and errors. The number of statuses here matches the number of lines in the csv file, and statuses[i] stores the success or failure status of processing the i-th line of the csv, starting from line 0.
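The `statuses` field is index-aligned with the input CSV: `statuses[i]` is the outcome for line `i` of the file, starting from 0, and a `Status` with code 0 is a success in the `google.rpc` error model. A minimal sketch of recovering the failing line numbers (Python for illustration; the `failed_lines` helper and sample statuses are ours, not part of the connector):

```python
def failed_lines(statuses):
    """Map each non-OK Status to its CSV line number (statuses[i] <-> line i)."""
    # In the google.rpc error model, code 0 (OK) means the line was imported.
    return [i for i, s in enumerate(statuses) if s.get("code", 0) != 0]

statuses = [{"code": 0}, {"code": 3, "message": "Invalid image URI."}, {"code": 0}]
```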
googleapis.vision: GoogleCloudVisionV1p4beta1InputConfig
The desired input location and metadata.
Fields
- content string? - File content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
- gcsSource GoogleCloudVisionV1p4beta1GcsSource? - The Google Cloud Storage location where the input will be read from.
- mimeType string? - The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
googleapis.vision: GoogleCloudVisionV1p4beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
Fields
- boundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p4beta1LocationInfo
Detected entity location information.
Fields
- latLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
googleapis.vision: GoogleCloudVisionV1p4beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
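Because normalized vertices are expressed in the [0, 1] range relative to the original image, converting them back to pixel coordinates is a simple scale by the image's width and height. A sketch (Python for illustration; the `to_pixel` helper and dimensions are ours):

```python
def to_pixel(vertex, width, height):
    """Scale a normalized vertex (coordinates in [0, 1]) to pixel coordinates."""
    return {"x": round(vertex["x"] * width), "y": round(vertex["y"] * height)}

# Denormalize a bounding polygon for a 640x480 image.
poly = [{"x": 0.1, "y": 0.25}, {"x": 0.5, "y": 0.75}]
pixels = [to_pixel(v, 640, 480) for v in poly]
# pixels[0] == {"x": 64, "y": 120}
```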
googleapis.vision: GoogleCloudVisionV1p4beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
Fields
- createTime string? - The time when the batch request was received.
- state string? - Current state of the batch operation.
- updateTime string? - The time when the operation result was last updated.
googleapis.vision: GoogleCloudVisionV1p4beta1OutputConfig
The desired output location and metadata.
Fields
- batchSize int? - The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If `batch_size` = 20, then 5 json files, each containing 20 response protos, will be written under the prefix `gcs_destination.uri`. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
- gcsDestination GoogleCloudVisionV1p4beta1GcsDestination? - The Google Cloud Storage location where the output will be written to.
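The sharding arithmetic implied by `batchSize` is just a ceiling division over the page count. A sketch of the worked example from the field description (Python for illustration; the `output_file_count` helper name is ours):

```python
import math

def output_file_count(page_count, batch_size=20):
    """Number of output JSON files written for one file, per the batch_size rule."""
    # Each output file holds up to batch_size response protos (default 20).
    return math.ceil(page_count / batch_size)

# The example from the field description: a 100-page PDF with batch_size = 20
# yields 5 JSON files of 20 response protos each.
assert output_file_count(100, 20) == 5
```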
googleapis.vision: GoogleCloudVisionV1p4beta1Page
Detected page from OCR.
Fields
- blocks GoogleCloudVisionV1p4beta1Block[]? - List of blocks of text, images etc on this page.
- confidence float? - Confidence of the OCR results on the page. Range [0, 1].
- height int? - Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
- property GoogleCloudVisionV1p4beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- width int? - Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
googleapis.vision: GoogleCloudVisionV1p4beta1Paragraph
Structural unit of text representing a number of words in certain order.
Fields
- boundingBox GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the paragraph. Range [0, 1].
- property GoogleCloudVisionV1p4beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- words GoogleCloudVisionV1p4beta1Word[]? - List of all words in this paragraph.
googleapis.vision: GoogleCloudVisionV1p4beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
- z float? - Z coordinate (or depth).
googleapis.vision: GoogleCloudVisionV1p4beta1Product
A Product contains ReferenceImages.
Fields
- description string? - User-provided metadata to be stored with this product. Must be at most 4096 characters long.
- displayName string? - The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
- name string? - The resource name of the product. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. This field is ignored when creating a product.
- productCategory string? - Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
- productLabels GoogleCloudVisionV1p4beta1ProductKeyValue[]? - Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Notice that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M, otherwise the product search pipeline will refuse to work for that ProductSet.
googleapis.vision: GoogleCloudVisionV1p4beta1ProductKeyValue
A product label represented as a key-value pair.
Fields
- 'key string? - The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
- value string? - The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
googleapis.vision: GoogleCloudVisionV1p4beta1ProductSearchResults
Results for a product search request.
Fields
- indexTime string? - Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
- productGroupedResults GoogleCloudVisionV1p4beta1ProductSearchResultsGroupedResult[]? - List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
- results GoogleCloudVisionV1p4beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p4beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
Fields
- boundingPoly GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- objectAnnotations GoogleCloudVisionV1p4beta1ProductSearchResultsObjectAnnotation[]? - List of generic predictions for the object in the bounding box.
- results GoogleCloudVisionV1p4beta1ProductSearchResultsResult[]? - List of results, one for each product match.
googleapis.vision: GoogleCloudVisionV1p4beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
Fields
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: GoogleCloudVisionV1p4beta1ProductSearchResultsResult
Information about a product.
Fields
- image string? - The resource name of the image from the product that is the closest match to the query.
- product GoogleCloudVisionV1p4beta1Product? - A Product contains ReferenceImages.
- score float? - A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
googleapis.vision: GoogleCloudVisionV1p4beta1Property
A `Property` consists of a user-supplied name/value pair.
Fields
- name string? - Name of the property.
- uint64Value string? - Value of numeric properties.
- value string? - Value of the property.
googleapis.vision: GoogleCloudVisionV1p4beta1ReferenceImage
A `ReferenceImage` represents a product image and its associated metadata, such as bounding boxes.
Fields
- boundingPolys GoogleCloudVisionV1p4beta1BoundingPoly[]? - Optional. Bounding polygons around the areas of interest in the reference image. If this field is empty, the system will try to detect regions of interest. At most 10 bounding polygons will be used. The provided shape is converted into a non-rotated rectangle. Once converted, the small edge of the rectangle must be greater than or equal to 300 pixels. The aspect ratio must be 1:4 or less (i.e. 1:3 is ok; 1:5 is not).
- name string? - The resource name of the reference image. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID/referenceImages/IMAGE_ID`. This field is ignored when creating a reference image.
- uri string? - Required. The Google Cloud Storage URI of the reference image. The URI must start with `gs://`.
googleapis.vision: GoogleCloudVisionV1p4beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Fields
- adult string? - Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
- medical string? - Likelihood that this is a medical image.
- racy string? - Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
- spoof string? - Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
- violence string? - Likelihood that this image contains violent content.
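Each vertical above is reported as a string value of the Vision API's Likelihood enum (`UNKNOWN`, `VERY_UNLIKELY`, `UNLIKELY`, `POSSIBLE`, `LIKELY`, `VERY_LIKELY`, ordered least to most likely), so filtering on them means comparing ordinals, not strings. A sketch (Python for illustration; the `is_flagged` helper and threshold choice are ours):

```python
# Ordinal ranking of the Vision API Likelihood enum, least to most likely.
LIKELIHOOD = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]
RANK = {name: i for i, name in enumerate(LIKELIHOOD)}

def is_flagged(annotation, threshold="LIKELY"):
    """True if any safe-search vertical meets or exceeds the threshold."""
    verticals = ("adult", "medical", "racy", "spoof", "violence")
    return any(RANK[annotation.get(v, "UNKNOWN")] >= RANK[threshold] for v in verticals)

safe = {"adult": "VERY_UNLIKELY", "racy": "POSSIBLE", "violence": "UNLIKELY"}
risky = {"adult": "VERY_LIKELY", "racy": "LIKELY"}
```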
googleapis.vision: GoogleCloudVisionV1p4beta1Symbol
A single symbol representation.
Fields
- boundingBox GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the symbol. Range [0, 1].
- property GoogleCloudVisionV1p4beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- text string? - The actual UTF-8 representation of the symbol.
googleapis.vision: GoogleCloudVisionV1p4beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is as follows: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
Fields
- pages GoogleCloudVisionV1p4beta1Page[]? - List of pages detected by OCR.
- text string? - UTF-8 text detected on the pages.
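When the flat `text` field is not enough, the hierarchy above can be walked directly: each Page holds Blocks, each Block Paragraphs, each Paragraph Words, and each Word single-character Symbols. A simplified sketch of reassembling a block's text (Python for illustration; it joins words with plain spaces and ignores `detectedBreak`, which is how the API actually encodes whitespace; the `block_text` helper and sample data are ours):

```python
def block_text(block):
    """Reassemble text from a Block by walking Paragraph -> Word -> Symbol."""
    words = []
    for paragraph in block.get("paragraphs", []):
        for word in paragraph.get("words", []):
            # Each symbol carries one UTF-8 character of the word.
            words.append("".join(s["text"] for s in word.get("symbols", [])))
    return " ".join(words)

# Hypothetical minimal block with one paragraph of two words.
block = {"paragraphs": [{"words": [
    {"symbols": [{"text": "O"}, {"text": "C"}, {"text": "R"}]},
    {"symbols": [{"text": "o"}, {"text": "k"}]},
]}]}
```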
googleapis.vision: GoogleCloudVisionV1p4beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
Fields
- isPrefix boolean? - True if break prepends the element.
- 'type string? - Detected break type.
googleapis.vision: GoogleCloudVisionV1p4beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
Fields
- confidence float? - Confidence of detected language. Range [0, 1].
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p4beta1TextAnnotationTextProperty
Additional information detected on the structural component.
Fields
- detectedBreak GoogleCloudVisionV1p4beta1TextAnnotationDetectedBreak? - Detected start or end of a structural component.
- detectedLanguages GoogleCloudVisionV1p4beta1TextAnnotationDetectedLanguage[]? - A list of detected languages together with confidence.
googleapis.vision: GoogleCloudVisionV1p4beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Fields
- x int? - X coordinate.
- y int? - Y coordinate.
googleapis.vision: GoogleCloudVisionV1p4beta1WebDetection
Relevant information for the image from the Internet.
Fields
- bestGuessLabels GoogleCloudVisionV1p4beta1WebDetectionWebLabel[]? - The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
- fullMatchingImages GoogleCloudVisionV1p4beta1WebDetectionWebImage[]? - Fully matching images from the Internet. Can include resized copies of the query image.
- pagesWithMatchingImages GoogleCloudVisionV1p4beta1WebDetectionWebPage[]? - Web pages containing the matching images from the Internet.
- partialMatchingImages GoogleCloudVisionV1p4beta1WebDetectionWebImage[]? - Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- visuallySimilarImages GoogleCloudVisionV1p4beta1WebDetectionWebImage[]? - The visually similar image results.
- webEntities GoogleCloudVisionV1p4beta1WebDetectionWebEntity[]? - Deduced entities from similar images on the Internet.
googleapis.vision: GoogleCloudVisionV1p4beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
Fields
- description string? - Canonical description of the entity, in English.
- entityId string? - Opaque entity ID.
- score float? - Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
googleapis.vision: GoogleCloudVisionV1p4beta1WebDetectionWebImage
Metadata for online images.
Fields
- score float? - (Deprecated) Overall relevancy score for the image.
- url string? - The result image URL.
googleapis.vision: GoogleCloudVisionV1p4beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
Fields
- label string? - Label for extra metadata.
- languageCode string? - The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: GoogleCloudVisionV1p4beta1WebDetectionWebPage
Metadata for web pages.
Fields
- fullMatchingImages GoogleCloudVisionV1p4beta1WebDetectionWebImage[]? - Fully matching images on the page. Can include resized copies of the query image.
- pageTitle string? - Title for the web page; may contain HTML markup.
- partialMatchingImages GoogleCloudVisionV1p4beta1WebDetectionWebImage[]? - Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
- score float? - (Deprecated) Overall relevancy score for the web page.
- url string? - The result web page URL.
googleapis.vision: GoogleCloudVisionV1p4beta1Word
A word representation.
Fields
- boundingBox GoogleCloudVisionV1p4beta1BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the word. Range [0, 1].
- property GoogleCloudVisionV1p4beta1TextAnnotationTextProperty? - Additional information detected on the structural component.
- symbols GoogleCloudVisionV1p4beta1Symbol[]? - List of symbols in the word. The order of the symbols follows the natural reading order.
googleapis.vision: GroupedResult
Information about the products similar to a single product in a query image.
Fields
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- objectAnnotations ObjectAnnotation[]? - List of generic predictions for the object in the bounding box.
- results Result[]? - List of results, one for each product match.
googleapis.vision: Image
Client image to perform Google Cloud Vision API tasks over.
Fields
- content string? - Image content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateImages requests. It does not work for AsyncBatchAnnotateImages requests.
- 'source ImageSource? - External image source (Google Cloud Storage or web URL image location).
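As a sketch of how these two fields are used (the base64 string and bucket path are placeholders, not values from this module): an image is supplied either inline via `content` or by reference via `'source`.

```ballerina
import ballerinax/googleapis.vision;

// Option 1: inline image bytes, base64-encoded for JSON (placeholder value).
vision:Image inlineImage = {content: "aGVsbG8="};

// Option 2: an external location; the bucket and object names are hypothetical.
vision:Image remoteImage = {'source: {imageUri: "gs://my-bucket/photo.jpg"}};
```

Note the quoted identifier `'source`, which Ballerina requires because `source` is a reserved word.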
googleapis.vision: ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Fields
- pageNumber int? - If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
- uri string? - The URI of the file used to produce the image.
googleapis.vision: ImageContext
Image context and/or feature-specific parameters.
Fields
- cropHintsParams CropHintsParams? - Parameters for crop hints annotation request.
- languageHints string[]? - List of languages to use for TEXT_DETECTION. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting `language_hints` is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Text detection returns an error if one or more of the specified languages is not one of the supported languages.
- latLongRect LatLongRect? - Rectangle determined by min and max `LatLng` pairs.
- productSearchParams ProductSearchParams? - Parameters for a product search request.
- textDetectionParams TextDetectionParams? - Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
- webDetectionParams WebDetectionParams? - Parameters for web detection request.
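A minimal `ImageContext` sketch combining the fields above; the hint and flag values are illustrative only (hints are usually best left empty):

```ballerina
import ballerinax/googleapis.vision;

vision:ImageContext imageContext = {
    // Rarely needed; a wrong hint significantly hurts results.
    languageHints: ["en"],
    // Also return confidence scores for plain TEXT_DETECTION.
    textDetectionParams: {enableTextDetectionConfidenceScore: true}
};
```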
googleapis.vision: ImageProperties
Stores image properties, such as dominant colors.
Fields
- dominantColors DominantColorsAnnotation? - Set of dominant colors and their corresponding scores.
googleapis.vision: ImageSource
External image source (Google Cloud Storage or web URL image location).
Fields
- gcsImageUri string? - Use `image_uri` instead. The Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. See Google Cloud Storage Request URIs for more info.
- imageUri string? - The URI of the source image. Can be either: 1. A Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. See Google Cloud Storage Request URIs for more info. 2. A publicly-accessible image HTTP/HTTPS URL. When fetching images from HTTP/HTTPS URLs, Google cannot guarantee that the request will be completed. Your request may fail if the specified host denies the request (e.g. due to request throttling or DOS prevention), or if Google throttles requests to the site for abuse prevention. You should not depend on externally-hosted images for production applications. When both `gcs_image_uri` and `image_uri` are specified, `image_uri` takes precedence.
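Since `image_uri` accepts both forms and takes precedence over the deprecated `gcs_image_uri`, it is generally enough to set `imageUri` alone (URIs below are placeholders):

```ballerina
import ballerinax/googleapis.vision;

// A Google Cloud Storage object (object versioning is not supported).
vision:ImageSource gcsSource = {imageUri: "gs://my-bucket/images/receipt.jpg"};

// A publicly accessible URL; avoid for production, since the fetch can fail.
vision:ImageSource webSource = {imageUri: "https://example.com/sample.png"};
```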
googleapis.vision: ImportProductSetsGcsSource
The Google Cloud Storage location for a csv file which preserves a list of ImportProductSetRequests in each line.
Fields
- csvFileUri string? - The Google Cloud Storage URI of the input csv file. The URI must start with `gs://`. The format of the input csv file should be one image per line. In each line, there are 8 columns. 1. image-uri 2. image-id 3. product-set-id 4. product-id 5. product-category 6. product-display-name 7. labels 8. bounding-poly The `image-uri`, `product-set-id`, `product-id`, and `product-category` columns are required. All other columns are optional. If the `ProductSet` or `Product` specified by the `product-set-id` and `product-id` values does not exist, then the system will create a new `ProductSet` or `Product` for the image. In this case, the `product-display-name` column refers to display_name, the `product-category` column refers to product_category, and the `labels` column refers to product_labels. The `image-id` column is optional but must be unique if provided. If it is empty, the system will automatically assign a unique id to the image. The `product-display-name` column is optional. If it is empty, the system sets the display_name field for the product to a space (" "). You can update the `display_name` later by using the API. If a `Product` with the specified `product-id` already exists, then the system ignores the `product-display-name`, `product-category`, and `labels` columns. The `labels` column (optional) is a line containing a list of comma-separated key-value pairs, in the following format: "key_1=value_1,key_2=value_2,...,key_n=value_n" The `bounding-poly` column (optional) identifies one region of interest from the image in the same manner as `CreateReferenceImage`. If you do not specify the `bounding-poly` column, then the system will try to detect regions of interest automatically. At most one `bounding-poly` column is allowed per line. If the image contains multiple regions of interest, add a line to the CSV file that includes the same product information, and the `bounding-poly` values for each region of interest. The `bounding-poly` column must contain an even number of comma-separated numbers, in the format "p1_x,p1_y,p2_x,p2_y,...,pn_x,pn_y". Use non-negative integers for absolute bounding polygons, and float values in [0, 1] for normalized bounding polygons. The system will resize the image if the image resolution is too large to process (larger than 20MP).
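A rough illustration of the 8-column line format described above (all URIs, IDs, and labels are invented, and the quoting style is an assumption). The second and third lines show one image with two regions of interest by repeating the same product information with different bounding-poly values:

```csv
gs://my-bucket/shoe1.jpg,image-01,set-01,product-01,apparel-v2,Running shoe,"color=red,size=42","0.1,0.1,0.9,0.1,0.9,0.9,0.1,0.9"
gs://my-bucket/shoe2.jpg,image-02,set-01,product-02,apparel-v2,Trail shoe,color=blue,"10,10,400,10,400,300,10,300"
gs://my-bucket/shoe2.jpg,image-03,set-01,product-02,apparel-v2,Trail shoe,color=blue,"420,10,800,10,800,300,420,300"
```

The first line uses normalized float coordinates in [0, 1]; the other two use absolute pixel integers.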
googleapis.vision: ImportProductSetsInputConfig
The input content for the `ImportProductSets` method.
Fields
- gcsSource ImportProductSetsGcsSource? - The Google Cloud Storage location for a csv file which preserves a list of ImportProductSetRequests in each line.
googleapis.vision: ImportProductSetsRequest
Request message for the `ImportProductSets` method.
Fields
- inputConfig ImportProductSetsInputConfig? - The input content for the `ImportProductSets` method.
googleapis.vision: ImportProductSetsResponse
Response message for the `ImportProductSets` method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
Fields
- referenceImages ReferenceImage[]? - The list of reference_images that are imported successfully.
- statuses Status[]? - The rpc status for each ImportProductSet request, including both successes and errors. The number of statuses here matches the number of lines in the csv file, and statuses[i] stores the success or failure status of processing the i-th line of the csv, starting from line 0.
googleapis.vision: InputConfig
The desired input location and metadata.
Fields
- content string? - File content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
- gcsSource GcsSource? - The Google Cloud Storage location where the input will be read from.
- mimeType string? - The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
googleapis.vision: KeyValue
A product label represented as a key-value pair.
Fields
- 'key string? - The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
- value string? - The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
googleapis.vision: Landmark
A face-specific landmark (for example, a face feature).
Fields
- position Position? - A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
- 'type string? - Face landmark type.
googleapis.vision: LatLng
An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
Fields
- latitude decimal? - The latitude in degrees. It must be in the range [-90.0, +90.0].
- longitude decimal? - The longitude in degrees. It must be in the range [-180.0, +180.0].
googleapis.vision: LatLongRect
Rectangle determined by min and max `LatLng` pairs.
Fields
- maxLatLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
- minLatLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
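For example, a `LatLongRect` restricting results to a rough bounding box (the coordinates are arbitrary sample values; note the `d` suffix for Ballerina decimal literals, matching the `decimal?` field types):

```ballerina
import ballerinax/googleapis.vision;

vision:LatLongRect rect = {
    minLatLng: {latitude: 6.85d, longitude: 79.80d},
    maxLatLng: {latitude: 6.95d, longitude: 79.90d}
};
```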
googleapis.vision: ListOperationsResponse
The response message for Operations.ListOperations.
Fields
- nextPageToken string? - The standard List next-page token.
- operations Operation[]? - A list of operations that matches the specified filter in the request.
googleapis.vision: ListProductSetsResponse
Response message for the `ListProductSets` method.
Fields
- nextPageToken string? - Token to retrieve the next page of results, or empty if there are no more results in the list.
- productSets ProductSet[]? - List of ProductSets.
googleapis.vision: ListProductsInProductSetResponse
Response message for the `ListProductsInProductSet` method.
Fields
- nextPageToken string? - Token to retrieve the next page of results, or empty if there are no more results in the list.
- products Product[]? - The list of Products.
googleapis.vision: ListProductsResponse
Response message for the `ListProducts` method.
Fields
- nextPageToken string? - Token to retrieve the next page of results, or empty if there are no more results in the list.
- products Product[]? - List of products.
googleapis.vision: ListReferenceImagesResponse
Response message for the `ListReferenceImages` method.
Fields
- nextPageToken string? - The next_page_token returned from a previous List request, if any.
- pageSize int? - The maximum number of items to return. Default 10, maximum 100.
- referenceImages ReferenceImage[]? - The list of reference images.
googleapis.vision: LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
Fields
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: LocationInfo
Detected entity location information.
Fields
- latLng LatLng? - An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
googleapis.vision: NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
googleapis.vision: OAuth2RefreshTokenGrantConfig
OAuth2 Refresh Token Grant Configs
Fields
- Fields Included from *OAuth2RefreshTokenGrantConfig
- refreshUrl string(default "https://accounts.google.com/o/oauth2/token") - Refresh URL
googleapis.vision: ObjectAnnotation
Prediction for what the object in the bounding box is.
Fields
- languageCode string? - The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
- mid string? - Object ID that should align with EntityAnnotation mid.
- name string? - Object name, expressed in its `language_code` language.
- score float? - Score of the result. Range [0, 1].
googleapis.vision: Operation
This resource represents a long-running operation that is the result of a network API call.
Fields
- done boolean? - If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
- 'error Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- metadata record {}? - Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
- name string? - The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
- response record {}? - The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
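The `done`/`'error`/`response` contract above can be sketched as a simple status check (a rough illustration, not a polling loop from this module):

```ballerina
import ballerina/log;
import ballerinax/googleapis.vision;

function reportOperation(vision:Operation op) {
    if op?.done != true {
        log:printInfo("operation still in progress");
    } else if op?.'error is vision:Status {
        // Completed with failure; the Status carries code, message, and details.
        log:printError("operation failed");
    } else {
        // Completed successfully; `response` holds the result message, if any.
        log:printInfo("operation completed");
    }
}
```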
googleapis.vision: OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
Fields
- createTime string? - The time when the batch request was received.
- state string? - Current state of the batch operation.
- updateTime string? - The time when the operation result was last updated.
googleapis.vision: OutputConfig
The desired output location and metadata.
Fields
- batchSize int? - The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If `batch_size` = 20, then 5 json files each containing 20 response protos will be written under the prefix `gcs_destination.uri`. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
- gcsDestination GcsDestination? - The Google Cloud Storage location where the output will be written to.
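To make the `batchSize` arithmetic concrete, a sketch assuming `GcsDestination` exposes a `uri` field as in the underlying API (the bucket path is hypothetical):

```ballerina
import ballerinax/googleapis.vision;

vision:OutputConfig outputConfig = {
    batchSize: 20, // 100 pages / 20 protos per file = 5 output JSON files
    gcsDestination: {uri: "gs://my-bucket/ocr-output/"}
};
```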
googleapis.vision: Page
Detected page from OCR.
Fields
- blocks Block[]? - List of blocks of text, images etc on this page.
- confidence float? - Confidence of the OCR results on the page. Range [0, 1].
- height int? - Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
- property TextProperty? - Additional information detected on the structural component.
- width int? - Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
googleapis.vision: Paragraph
Structural unit of text representing a number of words in certain order.
Fields
- boundingBox BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the paragraph. Range [0, 1].
- property TextProperty? - Additional information detected on the structural component.
- words Word[]? - List of all words in this paragraph.
googleapis.vision: Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Fields
- x float? - X coordinate.
- y float? - Y coordinate.
- z float? - Z coordinate (or depth).
googleapis.vision: Product
A Product contains ReferenceImages.
Fields
- description string? - User-provided metadata to be stored with this product. Must be at most 4096 characters long.
- displayName string? - The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
- name string? - The resource name of the product. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. This field is ignored when creating a product.
- productCategory string? - Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
- productLabels KeyValue[]? - Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Notice that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M, otherwise the product search pipeline will refuse to work for that ProductSet.
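A sketch of a `Product` with labels (all names invented); note that integer-like values are still passed as strings, and that `'key` is a quoted identifier because `key` is reserved in Ballerina:

```ballerina
import ballerinax/googleapis.vision;

vision:Product product = {
    displayName: "Denim jacket",
    productCategory: "apparel-v2",
    productLabels: [
        {'key: "style", value: "denim"},
        // Integer values are provided as strings, e.g. "1199".
        {'key: "price", value: "1199"}
    ]
};
```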
googleapis.vision: ProductSearchParams
Parameters for a product search request.
Fields
- boundingPoly BoundingPoly? - A bounding polygon for the detected image annotation.
- filter string? - The filtering expression. This can be used to restrict search results based on Product labels. We currently support an AND of OR of key-value expressions, where each expression within an OR must have the same key. An '=' should be used to connect the key and value. For example, "(color = red OR color = blue) AND brand = Google" is acceptable, but "(color = red OR brand = Google)" is not acceptable. "color: red" is not acceptable because it uses a ':' instead of an '='.
- productCategories string[]? - The list of product categories to search in. Currently, we only consider the first category, and either "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1", or "general-v1" should be specified. The legacy categories "homegoods", "apparel", and "toys" are still supported but will be deprecated. For new products, please use "homegoods-v2", "apparel-v2", or "toys-v2" for better product search accuracy. It is recommended to migrate existing products to these categories as well.
- productSet string? - The resource name of a ProductSet to be searched for similar images. Format is: `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
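The filter grammar described above (an AND of ORs, each OR group over a single key, joined with '=') looks like this in practice; the project and set names are placeholders:

```ballerina
import ballerinax/googleapis.vision;

vision:ProductSearchParams params = {
    productSet: "projects/my-project/locations/us-west1/productSets/my-set",
    // Only the first category is considered.
    productCategories: ["apparel-v2"],
    // Valid: each OR group uses one key. "(color = red OR brand = Google)" would be rejected.
    filter: "(color = red OR color = blue) AND brand = Google"
};
```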
googleapis.vision: ProductSearchResults
Results for a product search request.
Fields
- indexTime string? - Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
- productGroupedResults GroupedResult[]? - List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
- results Result[]? - List of results, one for each product match.
googleapis.vision: ProductSet
A ProductSet contains Products. A ProductSet can contain a maximum of 1 million reference images. If the limit is exceeded, periodic indexing will fail.
Fields
- displayName string? - The user-provided name for this ProductSet. Must not be empty. Must be at most 4096 characters long.
- indexError Status? - The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- indexTime string? - Output only. The time at which this ProductSet was last indexed. Query results will reflect all updates before this time. If this ProductSet has never been indexed, this timestamp is the default value "1970-01-01T00:00:00Z". This field is ignored when creating a ProductSet.
- name string? - The resource name of the ProductSet. Format is: `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`. This field is ignored when creating a ProductSet.
googleapis.vision: ProductSetPurgeConfig
Config to control which ProductSet contains the Products to be deleted.
Fields
- productSetId string? - The ProductSet that contains the Products to delete. If a Product is a member of product_set_id in addition to other ProductSets, the Product will still be deleted.
googleapis.vision: Property
A `Property` consists of a user-supplied name/value pair.
Fields
- name string? - Name of the property.
- uint64Value string? - Value of numeric properties.
- value string? - Value of the property.
googleapis.vision: ProxyConfig
Proxy server configurations to be used with the HTTP client endpoint.
Fields
- host string(default "") - Host name of the proxy server
- port int(default 0) - Proxy server port
- userName string(default "") - Proxy server username
- password string(default "") - Proxy server password
googleapis.vision: PurgeProductsRequest
Request message for the `PurgeProducts` method.
Fields
- deleteOrphanProducts boolean? - If delete_orphan_products is true, all Products that are not in any ProductSet will be deleted.
- force boolean? - The default value is false. Override this value to true to actually perform the purge.
- productSetPurgeConfig ProductSetPurgeConfig? - Config to control which ProductSet contains the Products to be deleted.
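Because `force` defaults to false, a purge request is effectively a dry run unless explicitly overridden. A sketch (the ProductSet ID is invented):

```ballerina
import ballerinax/googleapis.vision;

vision:PurgeProductsRequest purgeRequest = {
    productSetPurgeConfig: {productSetId: "my-product-set"},
    force: true // false (the default) does not delete; true actually performs the purge
};
```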
googleapis.vision: ReferenceImage
A `ReferenceImage` represents a product image and its associated metadata, such as bounding boxes.
Fields
- boundingPolys BoundingPoly[]? - Optional. Bounding polygons around the areas of interest in the reference image. If this field is empty, the system will try to detect regions of interest. At most 10 bounding polygons will be used. The provided shape is converted into a non-rotated rectangle. Once converted, the small edge of the rectangle must be greater than or equal to 300 pixels. The aspect ratio must be 1:4 or less (i.e. 1:3 is ok; 1:5 is not).
- name string? - The resource name of the reference image. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID/referenceImages/IMAGE_ID`. This field is ignored when creating a reference image.
- uri string? - Required. The Google Cloud Storage URI of the reference image. The URI must start with `gs://`.
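A `ReferenceImage` sketch, assuming `BoundingPoly` takes a `vertices` array of `Vertex` points as elsewhere in this module (the URI and coordinates are invented):

```ballerina
import ballerinax/googleapis.vision;

vision:ReferenceImage refImage = {
    uri: "gs://my-bucket/products/jacket-front.jpg",
    // Optional: omit boundingPolys to let the system detect regions of interest.
    boundingPolys: [
        {vertices: [{x: 0, y: 0}, {x: 600, y: 0}, {x: 600, y: 800}, {x: 0, y: 800}]}
    ]
};
```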
googleapis.vision: RemoveProductFromProductSetRequest
Request message for the `RemoveProductFromProductSet` method.
Fields
- product string? - Required. The resource name for the Product to be removed from this ProductSet. Format is: `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
googleapis.vision: Result
Information about a product.
Fields
- image string? - The resource name of the image from the product that is the closest match to the query.
- product Product? - A Product contains ReferenceImages.
- score float? - A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
googleapis.vision: SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Fields
- adult string? - Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
- medical string? - Likelihood that this is a medical image.
- racy string? - Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
- spoof string? - Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
- violence string? - Likelihood that this image contains violent content.
googleapis.vision: Status
The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields
- code int? - The status code, which should be an enum value of google.rpc.Code.
- details record {}[]? - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string? - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
googleapis.vision: Symbol
A single symbol representation.
Fields
- boundingBox BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the symbol. Range [0, 1].
- property TextProperty? - Additional information detected on the structural component.
- text string? - The actual UTF-8 representation of the symbol.
googleapis.vision: TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
Fields
- pages Page[]? - List of pages detected by OCR.
- text string? - UTF-8 text detected on the pages.
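The Page -> Block -> Paragraph -> Word -> Symbol hierarchy can be walked to reassemble word text. This sketch assumes `Block` exposes a `paragraphs` field as in the underlying API:

```ballerina
import ballerinax/googleapis.vision;

// Collect the text of every OCR-detected word by traversing the hierarchy.
function collectWords(vision:TextAnnotation textAnn) returns string[] {
    string[] words = [];
    vision:Page[] pages = textAnn?.pages ?: [];
    foreach vision:Page page in pages {
        vision:Block[] blocks = page?.blocks ?: [];
        foreach vision:Block blk in blocks {
            vision:Paragraph[] paragraphs = blk?.paragraphs ?: [];
            foreach vision:Paragraph para in paragraphs {
                vision:Word[] wordList = para?.words ?: [];
                foreach vision:Word word in wordList {
                    string text = "";
                    vision:Symbol[] symbols = word?.symbols ?: [];
                    foreach vision:Symbol sym in symbols {
                        // Symbols are in natural reading order.
                        text += sym?.text ?: "";
                    }
                    words.push(text);
                }
            }
        }
    }
    return words;
}
```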
googleapis.vision: TextDetectionParams
Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
Fields
- enableTextDetectionConfidenceScore boolean? - By default, Cloud Vision API only includes confidence score for DOCUMENT_TEXT_DETECTION result. Set the flag to true to include confidence score for TEXT_DETECTION as well.
googleapis.vision: TextProperty
Additional information detected on the structural component.
Fields
- detectedBreak DetectedBreak? - Detected start or end of a structural component.
- detectedLanguages DetectedLanguage[]? - A list of detected languages together with confidence.
googleapis.vision: Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Fields
- x int? - X coordinate.
- y int? - Y coordinate.
googleapis.vision: WebDetection
Relevant information for the image from the Internet.
Fields
- bestGuessLabels WebLabel[]? - The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
- fullMatchingImages WebImage[]? - Fully matching images from the Internet. Can include resized copies of the query image.
- pagesWithMatchingImages WebPage[]? - Web pages containing the matching images from the Internet.
- partialMatchingImages WebImage[]? - Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example, an original image will likely have partial matching for its crops.
- visuallySimilarImages WebImage[]? - The visually similar image results.
- webEntities WebEntity[]? - Deduced entities from similar images on the Internet.
googleapis.vision: WebDetectionParams
Parameters for web detection request.
Fields
- includeGeoResults boolean? - Whether to include results derived from the geo information in the image.
googleapis.vision: WebEntity
Entity deduced from similar images on the Internet.
Fields
- description string? - Canonical description of the entity, in English.
- entityId string? - Opaque entity ID.
- score float? - Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
googleapis.vision: WebImage
Metadata for online images.
Fields
- score float? - (Deprecated) Overall relevancy score for the image.
- url string? - The result image URL.
googleapis.vision: WebLabel
Label to provide extra metadata for the web detection.
Fields
- label string? - Label for extra metadata.
- languageCode string? - The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
googleapis.vision: WebPage
Metadata for web pages.
Fields
- fullMatchingImages WebImage[]? - Fully matching images on the page. Can include resized copies of the query image.
- pageTitle string? - Title for the web page; may contain HTML markup.
- partialMatchingImages WebImage[]? - Partial matching images on the page. Those images are similar enough to share some key-point features. For example, an original image will likely have partial matching for its crops.
- score float? - (Deprecated) Overall relevancy score for the web page.
- url string? - The result web page URL.
googleapis.vision: Word
A word representation.
Fields
- boundingBox BoundingPoly? - A bounding polygon for the detected image annotation.
- confidence float? - Confidence of the OCR results for the word. Range [0, 1].
- property TextProperty? - Additional information detected on the structural component.
- symbols Symbol[]? - List of symbols in the word. The order of the symbols follows the natural reading order.
Import
import ballerinax/googleapis.vision;
Metadata
Released date: over 1 year ago
Version: 1.6.1
License: Apache-2.0
Compatibility
Platform: any
Ballerina version: 2201.4.1
GraalVM compatible: Yes
Pull count
Total: 5
Current version: 2
Keywords
AI/Images
Cost/Freemium
Vendor/Google