GoogleCloudMlV1__PredictionInputArgs

data class GoogleCloudMlV1__PredictionInputArgs(val batchSize: Output<String>? = null, val dataFormat: Output<GoogleCloudMlV1__PredictionInputDataFormat>, val inputPaths: Output<List<String>>, val maxWorkerCount: Output<String>? = null, val modelName: Output<String>? = null, val outputDataFormat: Output<GoogleCloudMlV1__PredictionInputOutputDataFormat>? = null, val outputPath: Output<String>, val region: Output<String>, val runtimeVersion: Output<String>? = null, val signatureName: Output<String>? = null, val uri: Output<String>? = null, val versionName: Output<String>? = null) : ConvertibleToJava<GoogleCloudMlV1__PredictionInputArgs>

Represents input parameters for a prediction job.

Constructors

fun GoogleCloudMlV1__PredictionInputArgs(batchSize: Output<String>? = null, dataFormat: Output<GoogleCloudMlV1__PredictionInputDataFormat>, inputPaths: Output<List<String>>, maxWorkerCount: Output<String>? = null, modelName: Output<String>? = null, outputDataFormat: Output<GoogleCloudMlV1__PredictionInputOutputDataFormat>? = null, outputPath: Output<String>, region: Output<String>, runtimeVersion: Output<String>? = null, signatureName: Output<String>? = null, uri: Output<String>? = null, versionName: Output<String>? = null)
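
For example, a minimal construction might look like the sketch below. The bucket paths, project and model names, and the Json enum constant are illustrative assumptions; check them against the generated enums and your own resources.

import com.pulumi.core.Output

// Sketch only: paths, project/model names, and the enum constant are placeholders.
val predictionInput = GoogleCloudMlV1__PredictionInputArgs(
    dataFormat = Output.of(GoogleCloudMlV1__PredictionInputDataFormat.Json),
    inputPaths = Output.of(listOf("gs://my-bucket/batch-inputs/*")),
    outputPath = Output.of("gs://my-bucket/batch-outputs/"),
    region = Output.of("us-central1"),
    modelName = Output.of("projects/my-project/models/my-model"),
)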

Functions

open override fun toJava(): GoogleCloudMlV1__PredictionInputArgs

Properties

val batchSize: Output<String>? = null

Optional. Number of records per batch, defaults to 64. The service will buffer batch_size records in memory before invoking one TensorFlow prediction call internally, so take the record size and available memory into consideration when setting this parameter.

val dataFormat: Output<GoogleCloudMlV1__PredictionInputDataFormat>

The format of the input data files.

val inputPaths: Output<List<String>>

The Cloud Storage location of the input data files. May contain wildcards.

val maxWorkerCount: Output<String>? = null

Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.

val modelName: Output<String>? = null

Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"

val outputDataFormat: Output<GoogleCloudMlV1__PredictionInputOutputDataFormat>? = null

Optional. Format of the output data files; defaults to JSON.

val outputPath: Output<String>

The output Google Cloud Storage location.

val region: Output<String>

The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.

val runtimeVersion: Output<String>? = null

Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available, such as when the model is specified by uri.

val signatureName: Output<String>? = null

Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".

val uri: Output<String>? = null

Use this field if you want to specify a Google Cloud Storage path for the model to use.

val versionName: Output<String>? = null

Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_name, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
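
To target a specific version rather than the model's default version, versionName can be supplied in place of modelName. A sketch, reusing the placeholder names from the earlier example:

// Sketch: targets a specific model version instead of the model's default version.
val versionedInput = GoogleCloudMlV1__PredictionInputArgs(
    dataFormat = Output.of(GoogleCloudMlV1__PredictionInputDataFormat.Json),
    inputPaths = Output.of(listOf("gs://my-bucket/batch-inputs/*")),
    outputPath = Output.of("gs://my-bucket/batch-outputs/"),
    region = Output.of("us-central1"),
    versionName = Output.of("projects/my-project/models/my-model/versions/v2"),
)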