Bucket Args
Creates a new bucket in Google Cloud Storage (GCS). Once a bucket has been created, its location can't be changed. For more information see the official documentation and API. Note: if the project ID is not set on the resource or in the provider block, it will be determined dynamically, which requires the Compute API to be enabled.
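As a minimal sketch, the project can be set explicitly on the resource so the provider never needs to perform that dynamic lookup; the project ID my-project and the resource name example below are placeholders, not values from this documentation:
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // Setting the project explicitly avoids the dynamic project lookup
        // that requires the Compute API to be enabled.
        var bucket = new Bucket("example", BucketArgs.builder()
            .project("my-project") // placeholder project ID; replace with your own
            .location("US")
            .build());
    }
}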
Example Usage
Creating A Private Bucket In Standard Storage, In The EU Region, Configured As A Static Website With CORS
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;
import com.pulumi.gcp.storage.inputs.BucketCorArgs;
import com.pulumi.gcp.storage.inputs.BucketWebsiteArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var static_site = new Bucket("static-site", BucketArgs.builder()
            .cors(BucketCorArgs.builder()
                .maxAgeSeconds(3600)
                .methods(
                    "GET",
                    "HEAD",
                    "PUT",
                    "POST",
                    "DELETE")
                .origins("http://image-store.com")
                .responseHeaders("*")
                .build())
            .forceDestroy(true)
            .location("EU")
            .uniformBucketLevelAccess(true)
            .website(BucketWebsiteArgs.builder()
                .mainPageSuffix("index.html")
                .notFoundPage("404.html")
                .build())
            .build());
    }
}
Life Cycle Settings For Storage Bucket Objects
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;
import com.pulumi.gcp.storage.inputs.BucketLifecycleRuleArgs;
import com.pulumi.gcp.storage.inputs.BucketLifecycleRuleActionArgs;
import com.pulumi.gcp.storage.inputs.BucketLifecycleRuleConditionArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var auto_expire = new Bucket("auto-expire", BucketArgs.builder()
            .forceDestroy(true)
            .lifecycleRules(
                BucketLifecycleRuleArgs.builder()
                    .action(BucketLifecycleRuleActionArgs.builder()
                        .type("Delete")
                        .build())
                    .condition(BucketLifecycleRuleConditionArgs.builder()
                        .age(3)
                        .build())
                    .build(),
                BucketLifecycleRuleArgs.builder()
                    .action(BucketLifecycleRuleActionArgs.builder()
                        .type("AbortIncompleteMultipartUpload")
                        .build())
                    .condition(BucketLifecycleRuleConditionArgs.builder()
                        .age(1)
                        .build())
                    .build())
            .location("US")
            .build());
    }
}
Enabling Public Access Prevention
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var auto_expire = new Bucket("auto-expire", BucketArgs.builder()
            .forceDestroy(true)
            .location("US")
            .publicAccessPrevention("enforced")
            .build());
    }
}
Import
Storage buckets can be imported using the name or project/name. If the project is not passed to the import command it will be inferred from the provider block or environment variables. If it cannot be inferred, it will be queried from the Compute API (this will fail if the API is not enabled), e.g.
$ pulumi import gcp:storage/bucket:Bucket image-store image-store-bucket
$ pulumi import gcp:storage/bucket:Bucket image-store tf-test-project/image-store-bucket
Note: the import command sets forceDestroy to false in state. If you've set it to true in config, run pulumi up to update the value set in state. If you delete this resource before updating the value, objects in the bucket will not be destroyed.
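A minimal sketch of a resource definition matching the imported bucket from the commands above is shown below; the US location is an assumption and must be adjusted to match the real bucket. Setting forceDestroy(true) here and running pulumi up brings the value recorded in state in line with the configuration.
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // Definition matching the imported bucket; "US" is an assumed location.
        var imageStore = new Bucket("image-store", BucketArgs.builder()
            .name("image-store-bucket") // the existing bucket's name
            .location("US")
            .forceDestroy(true) // import records false; `pulumi up` updates state to this value
            .build());
    }
}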
Properties
autoclass: The bucket's Autoclass configuration. Structure is documented below.
cors: The bucket's Cross-Origin Resource Sharing (CORS) configuration. Multiple blocks of this type are permitted. Structure is documented below.
customPlacementConfig: The bucket's custom location configuration, which specifies the individual regions that comprise a dual-region bucket. If the bucket is designated as a single- or multi-region bucket, the parameters are empty. Structure is documented below.
defaultEventBasedHold: Whether or not to automatically apply an eventBasedHold to new objects added to the bucket.
encryption: The bucket's encryption configuration. Structure is documented below.
forceDestroy: When deleting a bucket, this boolean option will delete all contained objects. Without it, attempting to delete a bucket that still contains objects will fail.
lifecycleRules: The bucket's Lifecycle Rules configuration. Multiple blocks of this type are permitted. Structure is documented below.
location: The GCS location.
logging: The bucket's Access & Storage Logs configuration. Structure is documented below.
publicAccessPrevention: Prevents public access to a bucket. Acceptable values are "inherited" or "enforced". If "inherited", the bucket uses public access prevention only if the bucket is subject to the public access prevention organization policy constraint. Defaults to "inherited".
requesterPays: Enables Requester Pays on a storage bucket.
retentionPolicy: Configuration of the bucket's data retention policy, governing how long objects in the bucket should be retained. Structure is documented below.
storageClass: The Storage Class of the new bucket. Supported values include: STANDARD, MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, ARCHIVE.
uniformBucketLevelAccess: Enables Uniform bucket-level access on a bucket.
versioning: The bucket's Versioning configuration. Structure is documented below (see the sketch after this list).
website: Configuration if the bucket acts as a website. Structure is documented below.
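As an illustrative sketch only, several of the properties above can be combined on one bucket; the NEARLINE storage class, the my-log-bucket log destination, and the archive prefix are assumptions, not values from this page:
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.gcp.storage.Bucket;
import com.pulumi.gcp.storage.BucketArgs;
import com.pulumi.gcp.storage.inputs.BucketLoggingArgs;
import com.pulumi.gcp.storage.inputs.BucketVersioningArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var archive = new Bucket("archive", BucketArgs.builder()
            .location("US")
            .storageClass("NEARLINE") // one of the supported storage classes
            .versioning(BucketVersioningArgs.builder() // keep noncurrent object generations
                .enabled(true)
                .build())
            .logging(BucketLoggingArgs.builder() // Access & Storage Logs configuration
                .logBucket("my-log-bucket") // placeholder destination bucket for logs
                .logObjectPrefix("archive")
                .build())
            .build());
    }
}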