Catalog Table Args
Provides a Glue Catalog Table Resource. You can refer to the Glue Developer Guide for a full explanation of the Glue Data Catalog functionality.
Example Usage
Basic Table
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.glue.CatalogTable;
import com.pulumi.aws.glue.CatalogTableArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var awsGlueCatalogTable = new CatalogTable("awsGlueCatalogTable", CatalogTableArgs.builder()
            .databaseName("MyCatalogDatabase")
            .name("MyCatalogTable")
            .build());
    }
}
Parquet Table for Athena
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.glue.CatalogTable;
import com.pulumi.aws.glue.CatalogTableArgs;
import com.pulumi.aws.glue.inputs.CatalogTableStorageDescriptorArgs;
import com.pulumi.aws.glue.inputs.CatalogTableStorageDescriptorColumnArgs;
import com.pulumi.aws.glue.inputs.CatalogTableStorageDescriptorSerDeInfoArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var awsGlueCatalogTable = new CatalogTable("awsGlueCatalogTable", CatalogTableArgs.builder()
            .databaseName("MyCatalogDatabase")
            .name("MyCatalogTable")
            .parameters(Map.ofEntries(
                Map.entry("EXTERNAL", "TRUE"),
                Map.entry("parquet.compression", "SNAPPY")
            ))
            .storageDescriptor(CatalogTableStorageDescriptorArgs.builder()
                .columns(
                    CatalogTableStorageDescriptorColumnArgs.builder()
                        .name("my_string")
                        .type("string")
                        .build(),
                    CatalogTableStorageDescriptorColumnArgs.builder()
                        .name("my_double")
                        .type("double")
                        .build(),
                    CatalogTableStorageDescriptorColumnArgs.builder()
                        .comment("")
                        .name("my_date")
                        .type("date")
                        .build(),
                    CatalogTableStorageDescriptorColumnArgs.builder()
                        .comment("")
                        .name("my_bigint")
                        .type("bigint")
                        .build(),
                    CatalogTableStorageDescriptorColumnArgs.builder()
                        .comment("")
                        .name("my_struct")
                        .type("struct<my_nested_string:string>")
                        .build())
                .inputFormat("org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat")
                .location("s3://my-bucket/event-streams/my-stream")
                .outputFormat("org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat")
                .serDeInfo(CatalogTableStorageDescriptorSerDeInfoArgs.builder()
                    .name("my-stream")
                    .parameters(Map.of("serialization.format", "1"))
                    .serializationLibrary("org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe")
                    .build())
                .build())
            .tableType("EXTERNAL_TABLE")
            .build());
    }
}
Import
Glue Tables can be imported with their catalog ID (usually AWS account ID), database name, and table name, e.g.,
$ pulumi import aws:glue/catalogTable:CatalogTable MyTable 123456789012:MyDatabase:MyTable
Properties
databaseName: Name of the metadata database where the table metadata resides. For Hive compatibility, this must be all lowercase. The following arguments are optional:
description: Description of the table.
parameters: Properties associated with this table, as a list of key-value pairs.
partitionIndices: Configuration block for a maximum of 3 partition indexes. See partition_index below; a combined sketch follows the partition keys entry.
partitionKeys: Configuration block of columns by which the table is partitioned. Only primitive types are supported as partition keys. See partition_keys below and the sketch that follows.
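The following is a minimal sketch of a table that declares partition keys together with a single partition index. It assumes the Pulumi Java input types CatalogTablePartitionKeyArgs and CatalogTablePartitionIndexArgs and the partitionKeys/partitionIndices builder methods on CatalogTableArgs; the database, table, and column names are illustrative placeholders, not part of the original examples.

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.aws.glue.CatalogTable;
import com.pulumi.aws.glue.CatalogTableArgs;
import com.pulumi.aws.glue.inputs.CatalogTablePartitionIndexArgs;
import com.pulumi.aws.glue.inputs.CatalogTablePartitionKeyArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // Illustrative table partitioned by year/month, with one partition index
        // covering both keys (a table accepts at most 3 partition indexes).
        var partitionedTable = new CatalogTable("partitionedTable", CatalogTableArgs.builder()
            .databaseName("MyCatalogDatabase")   // assumed existing Glue database
            .name("MyPartitionedTable")          // placeholder table name
            .partitionKeys(
                CatalogTablePartitionKeyArgs.builder()
                    .name("year")
                    .type("string")
                    .build(),
                CatalogTablePartitionKeyArgs.builder()
                    .name("month")
                    .type("string")
                    .build())
            .partitionIndices(CatalogTablePartitionIndexArgs.builder()
                .indexName("year_month_idx")     // placeholder index name
                .keys("year", "month")
                .build())
            .build());
    }
}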
storageDescriptor: Configuration block for information about the physical storage of this table. For more information, refer to the Glue Developer Guide. See storage_descriptor below.
targetTable: Configuration block of a target table for resource linking. See target_table below and the sketch that follows.
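The following is a minimal sketch of a resource-link table that points at a table shared from another catalog. It assumes the Pulumi Java input type CatalogTableTargetTableArgs with catalogId, databaseName, and name builders; the account ID, database, and table names are placeholders, not values from the original examples.

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.aws.glue.CatalogTable;
import com.pulumi.aws.glue.CatalogTableArgs;
import com.pulumi.aws.glue.inputs.CatalogTableTargetTableArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // Illustrative resource link to a table owned by another catalog/account.
        var linkedTable = new CatalogTable("linkedTable", CatalogTableArgs.builder()
            .databaseName("MyCatalogDatabase")       // assumed existing local database
            .name("MyLinkedTable")                   // placeholder local table name
            .targetTable(CatalogTableTargetTableArgs.builder()
                .catalogId("111122223333")           // placeholder owning account ID
                .databaseName("SharedDatabase")      // placeholder source database
                .name("SharedTable")                 // placeholder source table
                .build())
            .build());
    }
}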
viewExpandedText: If the table is a view, the expanded text of the view; otherwise null.
viewOriginalText: If the table is a view, the original text of the view; otherwise null.