Package-level declarations
Types
The configuration for the kernels in a SageMaker image running as a CodeEditor app.
The container configuration for a SageMaker image.
The Amazon Elastic File System (EFS) storage configuration for a SageMaker image.
The configuration for the kernels in a SageMaker image running as a JupyterLab app.
The configuration for the file system and kernels in a SageMaker image running as a KernelGateway app.
Details of an instance group in a SageMaker HyperPod cluster.
Defines the configuration for attaching additional storage to the instances in the SageMaker HyperPod cluster instance group.
The lifecycle configuration for a SageMaker HyperPod cluster.
Specifies parameters specific to the orchestrator, for example the EKS cluster to use.
Specifies parameters related to EKS as the orchestrator, for example the EKS cluster that the HyperPod nodes will attach to.
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC.
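A minimal sketch of the orchestrator and VPC settings described above, using hypothetical Kotlin data classes for illustration rather than the generated types in this package; the field names and the example ARN and IDs are assumptions:

```kotlin
// Illustrative only: hypothetical shapes mirroring the descriptions above,
// not the generated types in this package.
data class EksOrchestratorConfig(
    val clusterArn: String               // the EKS cluster the HyperPod nodes attach to
)

data class OrchestratorConfig(
    val eks: EksOrchestratorConfig
)

data class VpcConfig(
    val securityGroupIds: List<String>,  // controls traffic to and from the resources
    val subnets: List<String>            // subnets the jobs and models can reach
)

fun main() {
    val orchestrator = OrchestratorConfig(
        eks = EksOrchestratorConfig(
            clusterArn = "arn:aws:eks:us-east-1:123456789012:cluster/example-cluster"
        )
    )
    val vpc = VpcConfig(
        securityGroupIds = listOf("sg-0123456789abcdef0"),
        subnets = listOf("subnet-0123456789abcdef0")
    )
    println(orchestrator)
    println(vpc)
}
```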
The batch transform input for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The baseline constraints resource for a monitoring job.
The CSV format.
Container image configuration object for the monitoring job.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The inputs for a monitoring job.
The dataset format of the data to monitor.
The endpoint for a monitoring job.
The JSON format.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Information about where and how to store the results of a monitoring job.
The baseline statistics resource for a monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
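A minimal sketch of the monitoring-job cluster, resources, and stopping-condition entries above, using hypothetical Kotlin data classes for illustration rather than the generated types in this package; the field names and example values are assumptions:

```kotlin
// Illustrative only: hypothetical shapes mirroring the monitoring-job
// descriptions above, not the generated types in this package.
data class MonitoringClusterConfig(
    val instanceType: String,     // e.g. an ml.* instance type
    val instanceCount: Int,
    val volumeSizeInGb: Int
)

data class MonitoringResources(
    val clusterConfig: MonitoringClusterConfig
)

data class StoppingCondition(
    val maxRuntimeInSeconds: Int  // time limit for the monitoring job
)

fun main() {
    val resources = MonitoringResources(
        clusterConfig = MonitoringClusterConfig(
            instanceType = "ml.m5.xlarge",
            instanceCount = 1,
            volumeSizeInGb = 20
        )
    )
    val stop = StoppingCondition(maxRuntimeInSeconds = 3600)
    println(resources)
    println(stop)
}
```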
The CodeEditor app settings.
A custom SageMaker image.
Properties related to the Amazon Elastic Block Store (EBS) volume. Must be provided if the storage type is Amazon EBS, and must not be provided otherwise.
A collection of settings that apply to spaces of Amazon SageMaker Studio. These settings are specified when the CreateDomain or UpdateDomain API is called.
Default storage settings for a space.
A collection of settings that are required to start the docker-proxy server.
The JupyterLab app settings.
The JupyterServer app settings.
The kernel gateway app settings.
A collection of settings that apply to an RSessionGateway app.
A collection of settings that configure user interaction with the RStudioServerPro app.
A collection of settings that update the current configuration for the RStudioServerPro Domain-level app.
A collection of Domain settings.
Specifies options when sharing an Amazon SageMaker Studio notebook. These settings are specified as part of DefaultUserSettings when the CreateDomain API is called, and as part of UserSettings when the CreateUserProfile API is called.
Studio settings. If these settings are applied on a user level, they take priority over the settings applied on a domain level.
A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings when the CreateDomain API is called.
The TTL configuration of the feature group.
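A minimal sketch of a TTL setting as described above, using a hypothetical Kotlin data class for illustration rather than the generated type in this package; the unit values shown are assumptions:

```kotlin
// Illustrative only: a hypothetical shape mirroring the TTL description above,
// not the generated type in this package.
data class TtlDuration(
    val unit: String,   // assumed values such as "Seconds", "Minutes", "Hours", "Days", "Weeks"
    val value: Int
)

fun main() {
    // Expire online-store records 30 days after ingestion.
    println(TtlDuration(unit = "Days", value = 30))
}
```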
The capacity size configuration for the inference component.
The deployment configuration for the inference component.
The rolling update policy for the inference component.
The runtime configuration for the inference component.
The specification for the inference component.
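A minimal sketch of the inference-component capacity, rolling-update, and runtime entries above, using hypothetical Kotlin data classes for illustration rather than the generated types in this package; the field names and the "COPY_COUNT" value are assumptions:

```kotlin
// Illustrative only: hypothetical shapes mirroring the inference-component
// descriptions above, not the generated types in this package.
data class CapacitySize(
    val type: String,   // assumed values such as "COPY_COUNT" or "CAPACITY_PERCENT"
    val value: Int
)

data class RollingUpdatePolicy(
    val maximumBatchSize: CapacitySize,
    val waitIntervalInSeconds: Int
)

data class RuntimeConfig(
    val copyCount: Int  // number of copies of the inference component to run
)

fun main() {
    val policy = RollingUpdatePolicy(
        maximumBatchSize = CapacitySize(type = "COPY_COUNT", value = 1),
        waitIntervalInSeconds = 120
    )
    val runtime = RuntimeConfig(copyCount = 2)
    println(policy)
    println(runtime)
}
```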
Configuration specifying how to treat different headers. If no headers are specified, SageMaker base64 encodes the captured data by default.
The Amazon S3 location and configuration for storing inference request and response data.
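A minimal sketch of the data-capture header and storage entries above, using hypothetical Kotlin data classes for illustration rather than the generated types in this package; the field names and the example S3 URI are assumptions:

```kotlin
// Illustrative only: hypothetical shapes mirroring the data-capture
// descriptions above, not the generated types in this package.
data class CaptureContentTypeHeader(
    val csvContentTypes: List<String> = emptyList(),   // headers to record as CSV
    val jsonContentTypes: List<String> = emptyList()   // headers to record as JSON
)

data class DataStorageConfig(
    val destination: String,                            // S3 URI for captured request/response data
    val kmsKey: String? = null,
    val contentType: CaptureContentTypeHeader? = null   // when null, data is base64 encoded by default
)

fun main() {
    val storage = DataStorageConfig(
        destination = "s3://example-bucket/inference-capture/",
        contentType = CaptureContentTypeHeader(jsonContentTypes = listOf("application/json"))
    )
    println(storage)
}
```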
The metadata of the endpoint on which the inference experiment ran.
The configuration for the infrastructure that the model will be deployed to.
Contains information about the deployment options of a model.
The infrastructure configuration for deploying the model to a real-time inference endpoint.
The duration for which you want the inference experiment to run.
The configuration of the ShadowMode inference experiment type. Use this field to specify a production variant that takes all the inference requests, and a shadow variant to which Amazon SageMaker replicates a percentage of those requests. For the shadow variant, also specify the percentage of requests that Amazon SageMaker replicates to it.
The name and sampling percentage of a shadow variant.
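A minimal sketch of the shadow-mode entries above, using hypothetical Kotlin data classes for illustration rather than the generated types in this package; the field and variant names are assumptions:

```kotlin
// Illustrative only: hypothetical shapes mirroring the shadow-mode
// descriptions above, not the generated types in this package.
data class ShadowModelVariantConfig(
    val shadowModelVariantName: String,
    val samplingPercentage: Int     // share of requests replicated to the shadow variant
)

data class ShadowModeConfig(
    val sourceModelVariantName: String,                 // production variant that serves all requests
    val shadowModelVariants: List<ShadowModelVariantConfig>
)

fun main() {
    val shadowMode = ShadowModeConfig(
        sourceModelVariantName = "prod-variant",
        shadowModelVariants = listOf(
            ShadowModelVariantConfig(shadowModelVariantName = "shadow-variant", samplingPercentage = 20)
        )
    )
    println(shadowMode)
}
```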
The batch transform input for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The baseline constraints resource for a monitoring job.
The CSV format.
The dataset format of the data to monitor.
The endpoint for a monitoring job.
The JSON format.
Container image configuration object for the monitoring job.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The inputs for a monitoring job.
The ground truth input, provided in Amazon S3.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Information about where and how to store the results of a monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
Business details.
The content of the model card.
The intended usage of the model.
A linear graph metric.
An item in the metric groups.
An overview of the model.
An overview of the inference.
Metadata information related to the model package version.
The objective function that the model will optimize for.
The objective function that the training job is optimized for.
An optional Key Management Service key to encrypt, decrypt, and re-encrypt model card content for regulated workloads with highly sensitive data.
Metric data.
An overview of the training.
Details about any associated training jobs.
A training hyperparameter.
Training metric data.
Information about the user who created or modified an experiment, trial, trial component, lineage group, project, or model card.
The batch transform input for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The baseline constraints resource for a monitoring job.
The CSV format.
The dataset format of the data to monitor.
The endpoint for a monitoring job.
The JSON format.
Container image configuration object for the monitoring job.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The inputs for a monitoring job.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Information about where and how to store the results of a monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
An additional inference specification specifies details about inference jobs that can be run with models based on this model package. AdditionalInferenceSpecifications can be added to existing model packages using AdditionalInferenceSpecificationsToAdd.
Contains bias metrics for a model.
Describes the Docker container for the model package.
The metadata properties associated with the model package versions.
Describes the input source of a transform job and the way the transform job consumes it.
Represents the drift check baselines that can be used when the model monitor is set using the model package.
Represents the drift check bias baselines that can be used when the model monitor is set using the model package.
Contains explainability metrics for a model.
Represents the drift check data quality baselines that can be used when the model monitor is set using the model package.
Represents the drift check model quality baselines that can be used when the model monitor is set using the model package.
Sets the environment variables in the Docker container.
Contains explainability metrics for a model.
Represents a File Source Object.
Details about inference jobs that can be run with models based on this model package.
Metadata properties of the tracking entity, trial, or trial component.
Represents a Metric Source Object.
Specifies the access configuration file for the ML model.
The model card associated with the model package.
Metrics that measure the quality of the input data for a model.
Specifies the location of ML model data to deploy during endpoint creation.
A structure that contains model metrics reports.
Metrics that measure the quality of a model.
Describes the S3 data source.
Specifies the S3 location of ML model data to deploy.
An optional AWS Key Management Service key to encrypt, decrypt, and re-encrypt model package information for regulated workloads with highly sensitive data.
Specifies an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your Amazon SageMaker account or an algorithm in AWS Marketplace that you are subscribed to.
Details about the algorithm that was used to create the model package.
Details about the current status of the model package.
Represents the overall status of a model package.
Describes the input source of a transform job and the way the transform job consumes it.
Defines the input needed to run a transform job using the inference specification specified in the algorithm.
Describes the results of a transform job.
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
Contains data, such as the inputs and targeted instance types that are used in the process of validating the model package.
Specifies configurations for one or more transform jobs that Amazon SageMaker runs to test the model package.
The batch transform input for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The baseline constraints resource for a monitoring job.
The CSV format.
The dataset format of the data to monitor.
The endpoint for a monitoring job.
The JSON format.
Container image configuration object for the monitoring job.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The inputs for a monitoring job.
The ground truth input, provided in Amazon S3.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Information about where and how to store the results of a monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The batch transform input for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The configuration object that specifies the monitoring schedule and defines the monitoring job.
The baseline constraints resource for a monitoring job.
The CSV format.
The dataset format of the data to monitor.
The endpoint for a monitoring job.
The JSON format.
Container image configuration object for the monitoring job.
A summary of information about a monitoring job.
The inputs for a monitoring job.
Defines the monitoring job.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Information about where and how to store the results of a monitoring job.
Configuration details about the monitoring schedule.
The baseline statistics resource for a monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
The configuration of an OfflineStore.
The configuration of an OnlineStore.
The parallelism configuration applied to the pipeline.
A collection of settings that specify the maintenance schedule for the PartnerApp.
The definition of the pipeline. This can be either a JSON string or an Amazon S3 location.
The definition of the pipeline. This can be either a JSON string or an Amazon S3 location.
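A minimal sketch of the two shapes a pipeline definition can take as described above (an inline JSON string or an Amazon S3 location), using a hypothetical Kotlin sealed hierarchy for illustration rather than the generated types in this package; the field names are assumptions:

```kotlin
// Illustrative only: a hypothetical hierarchy mirroring the pipeline-definition
// descriptions above, not the generated types in this package.
sealed interface PipelineDefinition {
    // The pipeline definition supplied inline as a JSON string.
    data class Body(val pipelineDefinitionBody: String) : PipelineDefinition

    // The pipeline definition stored as an object in Amazon S3.
    data class S3Location(
        val bucket: String,
        val objectKey: String,
        val versionId: String? = null
    ) : PipelineDefinition
}

fun main() {
    val inline: PipelineDefinition =
        PipelineDefinition.Body("""{"Version":"2020-12-01","Steps":[]}""")
    val fromS3: PipelineDefinition =
        PipelineDefinition.S3Location(bucket = "example-pipelines", objectKey = "definition.json")
    println(inline)
    println(fromS3)
}
```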
Information about a parameter used to provision a product.
Provisioned ServiceCatalog details.
Input ServiceCatalog provisioning details.
The CodeEditor app settings.
A custom SageMaker image.
Properties related to the space's Amazon Elastic Block Store volume.
The JupyterServer app settings.
The JupyterServer app settings.
The kernel gateway app settings.
A collection of settings that apply to spaces of Amazon SageMaker Studio. These settings are specified when the CreateSpace API is called.
The CodeEditor app settings.
A custom SageMaker image.
Properties related to the Amazon Elastic Block Store volume.
Default storage settings for a space.
The JupyterLab app settings.
The JupyterServer app settings.
The kernel gateway app settings.
A collection of settings that configure user interaction with the RStudioServerPro app.
Specifies options when sharing an Amazon SageMaker Studio notebook. These settings are specified as part of DefaultUserSettings when the CreateDomain API is called, and as part of UserSettings when the CreateUserProfile API is called.
Studio settings. If these settings are applied on a user level, they take priority over the settings applied on a domain level.
A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings when the CreateDomain API is called.