GoogleCloudMlV1__TrainingInputArgs
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job (/ai-platform/training/docs/training-jobs).
Constructors
Functions
Properties
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
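As an illustration, a request body enabling this flag might look like the following Python dict. This is a hedged sketch, not a complete job spec: the field names follow the camelCase REST names shown on this page, while the module name and bucket path are hypothetical placeholders.

```python
# Sketch of a TrainingInput body that turns on interactive shell access.
# "trainer.task" and the gs:// path are hypothetical placeholders.
training_input = {
    "scaleTier": "BASIC",
    "pythonModule": "trainer.task",
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
    # With this set, shells become reachable at the URIs reported in
    # TrainingOutput.web_access_uris once the job is running.
    "enableWebAccess": True,
}
```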
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn more at /ai-platform/training/docs/cmek.
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type (/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu). Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about /ai-platform/training/docs/distributed-training-containers.
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
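The requirement above (evaluatorType must be present when scaleTier is CUSTOM and evaluatorCount is greater than zero) can be sketched as a small client-side validator. This is an illustrative helper, not part of the API; the function name is our own.

```python
def check_evaluator_settings(training_input: dict) -> None:
    """Raise ValueError if evaluatorType is missing when required.

    Per the documented rule: evaluatorType must be present when
    scaleTier is CUSTOM and evaluatorCount is greater than zero.
    """
    if (training_input.get("scaleTier") == "CUSTOM"
            and int(training_input.get("evaluatorCount", 0)) > 0
            and not training_input.get("evaluatorType")):
        raise ValueError("evaluatorType is required when scaleTier is "
                         "CUSTOM and evaluatorCount > 0")

# A config that requests evaluators but omits evaluatorType fails:
try:
    check_evaluator_settings({"scaleTier": "CUSTOM", "evaluatorCount": 2})
    missing_type_rejected = False
except ValueError:
    missing_type_rejected = True
```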
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about /ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about /ai-platform/training/docs/distributed-training-containers.
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; see /ai-platform/training/docs/machine-types#compute-engine-machine-types. Alternatively, you can use certain legacy machine types in this field; see /ai-platform/training/docs/machine-types#legacy-machine-types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about /ai-platform/training/docs/using-tpus#configuring_a_custom_tpu_machine.
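The three categories of masterType value described above can be sketched side by side. These are illustrative configs only; the specific machine type names are examples of each category, not recommendations.

```python
# One example per category of masterType value described above.
compute_engine_master = {"scaleTier": "CUSTOM", "masterType": "n1-highmem-8"}
legacy_master = {"scaleTier": "CUSTOM", "masterType": "complex_model_m"}
tpu_master = {"scaleTier": "CUSTOM", "masterType": "cloud_tpu"}
```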
Optional. The full name of the VPC network (/vpc/docs/vpc) to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn more at /ai-platform/training/docs/vpc-peering.
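The documented format can be built with a trivial helper; the function name is our own, and the values below are the example values from the text.

```python
def vpc_network_name(project_number: int, network: str) -> str:
    """Build the peered-network name in the documented format
    projects/{project}/global/networks/{network}, where {project} is a
    project number and {network} is a network name."""
    return f"projects/{project_number}/global/networks/{network}"

# With the example values from the text:
name = vpc_network_name(12345, "myVPC")
# → "projects/12345/global/networks/myVPC"
```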
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type (/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu). Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about /ai-platform/training/docs/distributed-training-containers.
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
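The pairing rule above (setting the count implies setting the type) is shown in this hedged sketch; the machine types and counts are illustrative, and the camelCase REST field names from this page are assumed.

```python
# Sketch of a CUSTOM-tier config that adds parameter servers.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-standard-8",
    # Because parameterServerCount is greater than zero,
    # parameterServerType must be set as well.
    "parameterServerCount": "2",
    "parameterServerType": "n1-standard-4",
}
```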
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available:
* Python '3.7' is available when runtime_version is set to '1.15' or later.
* Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'.
* Python '2.7' is available when runtime_version is set to '1.15' or earlier.
Read more about the Python versions available at /ml-engine/docs/runtime-version-list.
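The availability rules above can be expressed as a small lookup function. This helper is our own illustration (it compares (major, minor) tuples, which matches the ranges stated above), not an API call.

```python
def available_python_versions(runtime_version: str) -> list:
    """Return the Python versions available for a runtime version,
    per the three rules stated above."""
    major, minor = (int(p) for p in runtime_version.split(".")[:2])
    versions = []
    if (major, minor) >= (1, 15):   # '1.15' or later
        versions.append("3.7")
    if (1, 4) <= (major, minor) <= (1, 14):  # '1.4' through '1.14'
        versions.append("3.5")
    if (major, minor) <= (1, 15):   # '1.15' or earlier
        versions.append("2.7")
    return versions
```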
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more at /ai-platform/training/docs/custom-service-account. If not specified, the AI Platform Training Google-managed service account is used by default.
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type (/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu). Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about /ai-platform/training/docs/distributed-training-containers.
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for /ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
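Putting the worker fields together with the others described above, a complete CUSTOM-tier request body might look like this hedged sketch. The machine types, counts, module name, bucket path, and region are illustrative placeholders; worker and master use the same Compute Engine machine-type category, as required.

```python
# Hedged sketch of a full CUSTOM-tier TrainingInput body.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-highcpu-16",
    "workerType": "n1-highcpu-16",        # same category as masterType
    "workerCount": "4",                   # > 0, so workerType is required
    "parameterServerType": "n1-standard-8",
    "parameterServerCount": "2",
    "runtimeVersion": "2.1",
    "pythonVersion": "3.7",               # available for runtime >= 1.15
    "pythonModule": "trainer.task",                       # hypothetical
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"], # hypothetical
    "region": "us-central1",
}
```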