elasticInferenceAccelerators

@JvmName(name = "nxvtotkeaoyfsvak")
suspend fun elasticInferenceAccelerators(value: Output<List<LaunchTemplateElasticInferenceAcceleratorArgs>>)

Parameters

value

The elastic inference accelerators to associate with the instance. Elastic inference accelerators are resources you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads. You cannot specify accelerators from different generations in the same request. Starting April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
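A minimal sketch of the `Output`-based overload, assuming the `com.pulumi.aws.ec2.kotlin.inputs` args data class and that the `type`/`count` property names mirror the underlying EC2 launch template fields (both assumptions; this is not a definitive usage):

```kotlin
import com.pulumi.core.Output
import com.pulumi.aws.ec2.kotlin.inputs.LaunchTemplateElasticInferenceAcceleratorArgs

// Wrap a pre-built list in an Output, e.g. when the accelerator
// configuration is derived from another resource or stack config.
val accelerators: Output<List<LaunchTemplateElasticInferenceAcceleratorArgs>> =
    Output.of(
        listOf(
            LaunchTemplateElasticInferenceAcceleratorArgs(
                type = Output.of("eia2.medium"), // assumed accelerator type value
                count = Output.of(1)
            )
        )
    )

// Inside the launch template args builder:
// elasticInferenceAccelerators(accelerators)
```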


@JvmName(name = "iighiuqcruprlcon")
suspend fun elasticInferenceAccelerators(vararg values: Output<LaunchTemplateElasticInferenceAcceleratorArgs>)


@JvmName(name = "vxonbdlstpdbdomi")
suspend fun elasticInferenceAccelerators(values: List<Output<LaunchTemplateElasticInferenceAcceleratorArgs>>)

@JvmName(name = "qscxqngcrkgnugvc")
suspend fun elasticInferenceAccelerators(vararg values: LaunchTemplateElasticInferenceAcceleratorArgs)

Parameters

values

The elastic inference accelerators to associate with the instance. See the `value` parameter above for the full description, including the Amazon Elastic Inference end-of-support notice.
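The vararg overload accepts plain args values directly. A sketch under the same assumptions about the args data class and the builder type name (`LaunchTemplateArgsBuilder` is assumed here):

```kotlin
import com.pulumi.core.Output
import com.pulumi.aws.ec2.kotlin.LaunchTemplateArgsBuilder
import com.pulumi.aws.ec2.kotlin.inputs.LaunchTemplateElasticInferenceAcceleratorArgs

// Hypothetical helper: attach a single accelerator from within the args builder.
suspend fun LaunchTemplateArgsBuilder.withSingleAccelerator() {
    elasticInferenceAccelerators(
        LaunchTemplateElasticInferenceAcceleratorArgs(
            type = Output.of("eia1.medium"), // assumed accelerator type value
            count = Output.of(1)
        )
    )
}
```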


@JvmName(name = "rdvqqjrximeupkto")
suspend fun elasticInferenceAccelerators(argument: List<suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit>)

@JvmName(name = "fdurkgphuxepxdej")
suspend fun elasticInferenceAccelerators(vararg argument: suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit)

@JvmName(name = "lhltrnlqdywmeeda")
suspend fun elasticInferenceAccelerators(argument: suspend LaunchTemplateElasticInferenceAcceleratorArgsBuilder.() -> Unit)

Parameters

argument

The elastic inference accelerators to associate with the instance, each configured through a type-safe builder lambda. See the `value` parameter above for the full description, including the Amazon Elastic Inference end-of-support notice.
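A sketch of the builder-lambda overload inside a full resource declaration. The top-level `launchTemplate` function, the `instanceType` setter, and the nested `type`/`count` setter names follow Pulumi Kotlin DSL conventions but are assumptions, not confirmed signatures:

```kotlin
import com.pulumi.aws.ec2.kotlin.launchTemplate

suspend fun example() {
    launchTemplate("dl-template") {
        args {
            instanceType("p2.xlarge") // assumed setter name
            // Each lambda configures one accelerator via the nested builder.
            elasticInferenceAccelerators({
                type("eia2.medium") // assumed setter name
                count(1)            // assumed setter name
            })
        }
    }
}
```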