diff --git a/website/content/docs/configuration/session-recording/create-storage-bucket.mdx b/website/content/docs/configuration/session-recording/create-storage-bucket.mdx index 0be073b73c..f1bc270097 100644 --- a/website/content/docs/configuration/session-recording/create-storage-bucket.mdx +++ b/website/content/docs/configuration/session-recording/create-storage-bucket.mdx @@ -61,7 +61,7 @@ Complete the following steps to create a storage bucket in Boundary. - **Access key ID**: (Required) The access key ID that AWS generates for the IAM user to use with the storage bucket. - **Secret access key**: (Required) The secret access key that AWS generates for the IAM user to use with this storage bucket. - - **Worker filter**: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - **Worker filter**: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - **Disable credential rotation**: (Optional) Prevents the AWS plugin from automatically rotating credentials. Although credentials are stored encrypted in Boundary, by default the [AWS plugin](https://github.com/hashicorp/boundary-plugin-aws) attempts to rotate the credentials you provide. The given credentials are used to create a new credential, and then the original credential is revoked. @@ -79,7 +79,7 @@ Complete the following steps to create a storage bucket in Boundary. For more information, refer to the AWS documentation for [Logging IAM and AWS STS API calls with AWS CloudTrail](https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html). 
- **Role tags**: An object with key-value pair attributes that is passed when you assume an IAM role. For more information, refer to the AWS documentation for [Passing session tags in AWS STS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html). - - **Worker filter**: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - **Worker filter**: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - **Disable credential rotation**: (Required) Prevents the AWS plugin from automatically rotating credentials. This option is required if you use dynamic credentials. @@ -110,7 +110,7 @@ The required fields for creating a storage bucket depend on whether you configur -bucket-name mybucket1 \ -plugin-name aws \ -scope-id o_1234567890 \ - -worker-filter '"dev" in "/tags/type"' \ + -worker-filter '"aws-worker" in "/tags/type"' \ -secret '{"access_key_id": "123456789", "secret_access_key": "123/456789/12345678"}' \ -attributes '{"region":"us-east-1","disable_credential_rotation":true}' ``` @@ -121,7 +121,7 @@ The required fields for creating a storage bucket depend on whether you configur - `bucket-name`: (Required) The name of the AWS bucket you want to associate with the Boundary storage bucket. - `plugin-name`: (Required) The name of the Boundary storage plugin. - `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope. - - `worker-filter`: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket.
+ - `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - `secret`: (Required) The AWS credentials to use. - `access_key_id`: (Required) The AWS access key to use. - `secret_access_key`: (Required) The AWS secret access key to use. @@ -155,7 +155,7 @@ The required fields for creating a storage bucket depend on whether you configur - `bucket-name`: (Required) The name of the AWS bucket you want to associate with the Boundary storage bucket. - `plugin-name`: (Required) The name of the Boundary storage plugin. - `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope. - - `worker-filter`: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - `attributes` or `-attr`: Attributes of the Amazon S3 storage bucket. - `role_arn`: (Required) The Amazon Resource Name (ARN) of the role that is attached to the EC2 instance that the self-managed worker runs on. - `role_external_id`: (Optional) A value that is required if you delegate third party access to your AWS resources.
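Worker filter expressions such as `"aws-worker" in "/tags/type"` match against tags declared in a self-managed worker's configuration file. The following HCL is an illustrative sketch, not taken from this page: the worker name, description, and tag values are assumptions, shown only to make the tag-to-filter relationship concrete.

```hcl
# Illustrative worker configuration; name, description, and tag
# values are placeholders. The storage bucket's worker filter
# evaluates against the tags block below, so the expression
# '"aws-worker" in "/tags/type"' matches this worker.
worker {
  name        = "storage-worker-1"
  description = "Self-managed worker with access to the storage bucket"

  tags {
    type = ["aws-worker"]
  }
}
```

A storage bucket creation fails if no registered worker matches the filter, so confirm that at least one worker advertises the tag before you reference it.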
@@ -173,7 +173,11 @@ The required fields for creating a storage bucket depend on whether you configur -The HCL code for creating a storage bucket is different depending on whether you configured the AWS S3 bucket with static or dynamic credentials. +The HCL code for creating a storage bucket is different depending on whether you configured the AWS S3 bucket with static or dynamic credentials. This page provides example configurations for a generic Terraform deployment. + +Refer to the [Boundary Terraform provider documentation](https://registry.terraform.io/providers/hashicorp/boundary/latest/docs) to learn about the requirements for the following example attributes. + +Support for Amazon S3 storage providers leverages the [Boundary AWS plugin](https://github.com/hashicorp/boundary-plugin-aws). @@ -201,7 +205,7 @@ resource "boundary_storage_bucket" "aws_static_credentials_example" { "access_key_id" = "aws_access_key_id_value", "secret_access_key" = "aws_secret_access_key_value" }) - worker_filter = "\"dev\" in \"/tags/type\"" + worker_filter = "\"aws-worker\" in \"/tags/type\"" } output "storage_bucket_id" { @@ -229,7 +233,7 @@ resource "boundary_storage_bucket" "aws_dynamic_credentials_example" { "role_arn" = "arn:aws:iam::123456789012:role/S3Access" "disable_credential_rotation" = true }) - worker_filter = "\"dev\" in \"/tags/type\"" + worker_filter = "\"s3-worker\" in \"/tags/type\"" } output "storage_bucket_id" { @@ -271,7 +275,7 @@ Complete the following steps to create a storage bucket in Boundary. - **Region**: (Optional) The region to configure the storage bucket for. - **Access key ID** (Required): The MinIO service account's access key to use with this storage bucket. - **Secret access key** (Required): The MinIO service account's secret key to use with this storage bucket. - - **Worker filter**: (Required) A filter that indicates which Boundary workers have access to the storage. 
The filter must match an existing worker in order to create a Boundary storage bucket. + - **Worker filter**: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - **Disable credential rotation**: (Optional) Controls whether the plugin will rotate the incoming credentials and manage a new MinIO service account. If this attribute is set to false, or not provided, the plugin will rotate the incoming credentials, using them to create a new MinIO service account, then delete the incoming credentials. 1. Click **Save**. @@ -288,7 +292,7 @@ Complete the following steps to create a storage bucket in Boundary. -plugin-name minio \ -scope-id o_1234567890 \ -bucket-prefix="foo/bar/zoo" \ - -worker-filter '"minio" in "/tags/type"' \ + -worker-filter '"minio-worker" in "/tags/type"' \ -attr endpoint_url="https://my-minio-instance.dev:9000" \ -attr region="REGION" \ -attr disable_credential_rotation=true \ @@ -301,7 +305,7 @@ Complete the following steps to create a storage bucket in Boundary. - `bucket-name`: (Required) Name of the MinIO bucket you want to associate with the Boundary storage bucket. - `plugin-name`: (Required) The name of the Boundary storage plugin. - `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope. - - `worker-filter`: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. 
Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - `secret`: (Required) The MinIO credentials to use. - `access_key_id` (Required): The MinIO service account's access key to use with this storage bucket. - `secret_access_key` (Required): The MinIO service account's secret key to use with this storage bucket. @@ -313,10 +317,45 @@ Complete the following steps to create a storage bucket in Boundary. This option must be set to `true` if you use dynamic credentials. - + + +This page provides example configurations for a generic Terraform deployment. + +Refer to the [Boundary Terraform provider documentation](https://registry.terraform.io/providers/hashicorp/boundary/latest/docs) to learn about the requirements for the following example attributes. + +Support for MinIO storage providers leverages the [Boundary MinIO plugin](https://github.com/hashicorp/boundary-plugin-minio). + +Apply the following Terraform configuration: + +```hcl +resource "boundary_storage_bucket" "minio_credentials_example" { + name = "My MinIO storage bucket" + description = "My first storage bucket" + scope_id = "o_1234567890" + plugin_name = "minio" + bucket_name = "mybucket1" + + attributes_json = jsonencode({ + "endpoint_url" = "https://my-minio-instance.dev:9000", + "disable_credential_rotation" = true + }) + + secrets_json = jsonencode({ + "access_key_id" = "minio_access_key_id_value", + "secret_access_key" = "minio_secret_access_key_value" + }) + worker_filter = "\"minio-worker\" in \"/tags/type\"" +} + +output "storage_bucket_id" { + value = boundary_storage_bucket.minio_credentials_example.id +} +``` + + Complete the following steps to create a storage bucket in Boundary using an S3-compliant storage provider. Hitachi Content Platform is used as an example below.
@@ -345,7 +384,7 @@ Complete the following steps to create a storage bucket in Boundary using an S3- - **Region**: (Optional) The region to configure the storage bucket for. - **Access key ID** (Required): The storage provider's service account's access key to use with this storage bucket. - **Secret access key** (Required): The storage provider's service account's secret key to use with this storage bucket. - - **Worker filter**: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - **Worker filter**: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - **Disable credential rotation**: (Optional) Controls whether the plugin will rotate the incoming credentials and manage a new storage service account. If this attribute is set to false, or not provided, the plugin will rotate the incoming credentials, using them to create a new storage service account, then delete the incoming credentials. Note that credential rotation is not supported for Hitachi Content Platform, and it may not function for other S3-compatible providers. 
@@ -364,7 +403,7 @@ Complete the following steps to create a storage bucket in Boundary using an S3- -plugin-name minio \ -scope-id o_1234567890 \ -bucket-prefix="foo/bar/zoo" \ - -worker-filter '"dev" in "/tags/type"' \ + -worker-filter '"storage-worker" in "/tags/type"' \ -attr endpoint_url="https://my-hitachi-instance.dev:9000" \ -attr region="REGION" \ -attr disable_credential_rotation=true \ @@ -378,7 +417,7 @@ Complete the following steps to create a storage bucket in Boundary using an S3- - `plugin-name`: (Required) The name of the Boundary storage plugin. Use the `minio` plugin for S3-compatible storage. - `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope. - - `worker-filter`: (Required) A filter that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. + - `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to [filter examples](/boundary/docs/concepts/filtering/worker-tags#example-worker-filter-for-storage-buckets) to learn about worker tags and filters. - `secret`: (Required) The storage provider's credentials to use. - `access_key_id` (Required): The storage provider's service account's access key to use with this storage bucket. - `secret_access_key` (Required): The storage provider's service account's secret key to use with this storage bucket. @@ -389,6 +428,42 @@ Complete the following steps to create a storage bucket in Boundary using an S3- Note that credential rotation is not supported for Hitachi Content Platform, and it may not function for other S3-compatible providers. + + + +This page provides example configurations for a generic Terraform deployment. 
+ +Refer to the [Boundary Terraform provider documentation](https://registry.terraform.io/providers/hashicorp/boundary/latest/docs) to learn about the requirements for the following example attributes. + +Support for S3-compliant storage providers leverages the [Boundary MinIO plugin](https://github.com/hashicorp/boundary-plugin-minio). + +Apply the following Terraform configuration: + +```hcl +resource "boundary_storage_bucket" "storage_credentials_example" { + name = "My storage bucket" + description = "My first storage bucket" + scope_id = "o_1234567890" + plugin_name = "minio" + bucket_name = "mybucket1" + + attributes_json = jsonencode({ + "endpoint_url" = "https://my-hitachi-instance.dev:9000", + "disable_credential_rotation" = true + }) + + secrets_json = jsonencode({ + "access_key_id" = "storage_access_key_id_value", + "secret_access_key" = "storage_secret_access_key_value" + }) + worker_filter = "\"storage-worker\" in \"/tags/type\"" +} + +output "storage_bucket_id" { + value = boundary_storage_bucket.storage_credentials_example.id +} +``` +
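Once a storage bucket exists, its ID is what you reference when you enable session recording on a target. The following HCL is a hedged sketch that assumes the `boundary_target` resource from the Boundary Terraform provider; the target name, scope ID, and port are placeholders, not values from this page.

```hcl
# Illustrative only: enables session recording on an SSH target and
# points it at the storage bucket created above. The name, scope_id,
# and default_port values are placeholders.
resource "boundary_target" "recorded_ssh_example" {
  name                     = "recorded-ssh-target"
  type                     = "ssh"
  scope_id                 = "p_1234567890"
  default_port             = 22
  enable_session_recording = true
  storage_bucket_id        = boundary_storage_bucket.storage_credentials_example.id
}
```

Referencing the resource attribute, rather than a hard-coded ID, lets Terraform order the two resources correctly so the bucket exists before the target is updated.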