EvaluationJobArgs

data class EvaluationJobArgs(
    val annotationSpecSet: Output<String>? = null,
    val description: Output<String>? = null,
    val evaluationJobConfig: Output<GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs>? = null,
    val labelMissingGroundTruth: Output<Boolean>? = null,
    val modelVersion: Output<String>? = null,
    val project: Output<String>? = null,
    val schedule: Output<String>? = null
) : ConvertibleToJava<EvaluationJobArgs>

Creates an evaluation job. Auto-naming is currently not supported for this resource.

Constructors

constructor(
    annotationSpecSet: Output<String>? = null,
    description: Output<String>? = null,
    evaluationJobConfig: Output<GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs>? = null,
    labelMissingGroundTruth: Output<Boolean>? = null,
    modelVersion: Output<String>? = null,
    project: Output<String>? = null,
    schedule: Output<String>? = null
)

Properties

val annotationSpecSet: Output<String>? = null

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
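The resource-name format above can be assembled with a small helper. This is a sketch; `annotationSpecSetName` is a hypothetical convenience function, not part of the generated API.

```kotlin
// Hypothetical helper: builds the AnnotationSpecSet resource name in the
// documented "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
// format, so the string template lives in one place.
fun annotationSpecSetName(projectId: String, specSetId: String): String =
    "projects/$projectId/annotationSpecSets/$specSetId"

// Usage:
// annotationSpecSetName("my-project", "my-spec-set")
// → "projects/my-project/annotationSpecSets/my-spec-set"
```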

val description: Output<String>? = null

Description of the job. The description can be up to 25,000 characters long.

val evaluationJobConfig: Output<GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs>? = null

Configuration details for the evaluation job.

val labelMissingGroundTruth: Output<Boolean>? = null

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

val modelVersion: Output<String>? = null

The AI Platform Prediction model version (see /ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
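The model-version name follows the same resource-name pattern. As above, `modelVersionName` is a hypothetical helper shown only to illustrate the format.

```kotlin
// Hypothetical helper: builds the model-version resource name in the documented
// "projects/{project_id}/models/{model_name}/versions/{version_name}" format.
fun modelVersionName(projectId: String, modelName: String, versionName: String): String =
    "projects/$projectId/models/$modelName/versions/$versionName"

// Usage:
// modelVersionName("my-project", "my-model", "v1")
// → "projects/my-project/models/my-model/versions/v1"
```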

val project: Output<String>? = null

val schedule: Output<String>? = null

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format (see /scheduler/docs/configuring/cron-job-schedules) or in an English-like format (see /appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
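The rounding rule above can be sketched as a small function. Note that the service itself performs this rounding; `effectiveIntervalDays` is a hypothetical illustration of the arithmetic, not an API call.

```kotlin
import kotlin.math.roundToLong

// Sketch of the documented rounding: the interval is rounded to the nearest
// whole day, with a minimum of one day. A 50-hour interval therefore runs
// every 2 days (50 / 24 ≈ 2.08, rounded to 2).
fun effectiveIntervalDays(intervalHours: Long): Long {
    val days = (intervalHours.toDouble() / 24).roundToLong()
    return maxOf(1L, days)
}

// Usage:
// effectiveIntervalDays(50) → 2
// effectiveIntervalDays(10) → 1 (minimum of one day)
```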

Functions

open override fun toJava(): EvaluationJobArgs