elasticInferenceAccelerators

@JvmName(name = "awfiidkdphugbkis")
suspend fun elasticInferenceAccelerators(value: Output<List<LaunchTemplateElasticInferenceAcceleratorArgs>>)

Parameters

value

Amazon Elastic Inference is no longer available. An elastic inference accelerator to associate with the instance. Elastic inference accelerators are resources you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads. You cannot specify accelerators from different generations in the same request.


@JvmName(name = "ssecruwtmcngejac")
suspend fun elasticInferenceAccelerators(vararg values: Output<LaunchTemplateElasticInferenceAcceleratorArgs>)


@JvmName(name = "okephcbgbjtubbnk")
suspend fun elasticInferenceAccelerators(values: List<Output<LaunchTemplateElasticInferenceAcceleratorArgs>>)
@JvmName(name = "wkcwejxsjyqnwiar")
suspend fun elasticInferenceAccelerators(vararg values: LaunchTemplateElasticInferenceAcceleratorArgs)

Parameters

values

Amazon Elastic Inference is no longer available. The elastic inference accelerators to associate with the instance. Elastic inference accelerators are resources you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads. You cannot specify accelerators from different generations in the same request.


@JvmName(name = "smdehocisrlgcjcf")
suspend fun elasticInferenceAccelerators(argument: List<suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit>)
@JvmName(name = "tyammjruhixtcxdh")
suspend fun elasticInferenceAccelerators(vararg argument: suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit)
@JvmName(name = "ktnwpfvmoqptwgph")
suspend fun elasticInferenceAccelerators(argument: suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit)

Parameters

argument

Amazon Elastic Inference is no longer available. An elastic inference accelerator to associate with the instance. Elastic inference accelerators are resources you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads. You cannot specify accelerators from different generations in the same request.
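

Because Amazon Elastic Inference has been discontinued, the builder overloads above are relevant only when maintaining pre-existing launch templates. The following non-runnable sketch shows how the single-lambda overload might be invoked inside a Pulumi Kotlin program; the surrounding `launchTemplate`/`launchTemplateData` nesting and the `type`/`count` property names are assumptions based on typical Pulumi Kotlin DSL conventions and the corresponding CloudFormation properties, not verified against this exact API.

```kotlin
// Sketch only: applies solely to legacy templates, since Amazon Elastic
// Inference is no longer available. Resource nesting and property names
// ("type", "count") are assumptions.
val template = launchTemplate("legacy-template") {
    args {
        launchTemplateData {
            // Uses the single type-safe builder overload:
            // elasticInferenceAccelerators(argument: suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit)
            elasticInferenceAccelerators {
                type("eia2.medium") // accelerator type (assumed property name)
                count(1)            // accelerators per instance (assumed property name)
            }
        }
    }
}
```

The `vararg` and `List` overloads accept multiple accelerators, but all of them must come from the same generation within a single request.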