mirror of https://github.com/hashicorp/boundary

Merge PKI & KMS docs (#4325)

* Merge PKI & KMS docs
* Update worker-configuration.mdx: remove deprecated kms flag
* docs: Fix TOC
* docs: Create redirects
* docs: Fix some broken links
* Modify overview page for post-0.15 changes
* Modify data-encryption docs for post-0.15 changes
* Modify configure-workers page for post-0.15 changes

Co-authored-by: Dan Heath <76443935+Dan-Heath@users.noreply.github.com>

parent 27a697f1ce
commit 50de448fee

---
layout: docs
page_title: KMS worker configuration
description: |-
  KMS worker-specific parameters.
---

# KMS worker configuration

This page describes configuration for workers that authenticate to upstreams
using a shared KMS. This mechanism both authenticates and auto-registers the
worker, and it requires no on-disk credential storage because the worker
reauthenticates through the trusted KMS each time it connects.

~> If you use 0.13+ workers with pre-0.13 controllers, you _must_ set
`use_deprecated_kms_auth_method` to `true`. Additionally, a worker registered
using this method _against a pre-0.13 controller_ can only register directly to
a controller, and cannot be used as part of multi-hop or Vault private access
capabilities.

KMS Workers require a `name` field. This specifies a unique name for this
worker within the Boundary cluster and _must be unique across workers_. The
`name` value can be:

- a direct name string (must be all lowercase)
- a reference to a file on disk (`file://`) from which the name is read
- an env var (`env://`) from which the name is read

KMS Workers accept an optional `description` field. The `description` value
can be:

- a direct description string
- a reference to a file on disk (`file://`) from which the description is read
- an env var (`env://`) from which the description is read

```hcl
worker {
  name = "example-worker"
  description = "An example worker"
  public_addr = "5.1.23.198"

  # Uncomment if using a 0.13+ worker against a pre-0.13 controller
  # use_deprecated_kms_auth_method = true
}
```

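Either indirection form can be sketched as follows; the environment variable
name and file path shown here are illustrative, not defaults:

```hcl
worker {
  # Read the worker name from an environment variable (illustrative name)
  name        = "env://BOUNDARY_WORKER_NAME"

  # Read the description from a file on disk (illustrative path)
  description = "file:///etc/boundary/worker-description.txt"
}
```
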
KMS Workers also require a `kms` block designated for `worker-auth`. This is
the KMS configuration used to authenticate workers and controllers to each
other, and it must be present. Example (not safe for production!):

```hcl
kms "aead" {
  purpose = "worker-auth"
  aead_type = "aes-gcm"
  key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
  key_id = "global_worker-auth"
}
```

The upstream controller or worker must have a `kms` block that references the
same key and purpose. (If a controller and a worker run in the same server
process, only one stanza is needed.) It is also possible to specify a `kms`
block with the `downstream-worker-auth` purpose. If specified, this is a
separate KMS used to authenticate new downstream nodes. Blocks with this
purpose can be specified multiple times, which allows a single upstream node
to authenticate to its own upstream with one key (via the `worker-auth`
purpose) and then serve as an authenticating upstream to nodes across various
networks, each with its own separate KMS system or key:

```hcl
kms "aead" {
  purpose = "downstream-worker-auth"
  aead_type = "aes-gcm"
  key = "XthZVtFtBD1Bw1XwAWhZKVrIwRhR7HcZ"
  key_id = "iot-nodes-auth"
}

kms "aead" {
  purpose = "downstream-worker-auth"
  aead_type = "aes-gcm"
  key = "OLFhJNbEb3umRjdhY15QKNEmNXokY1Iq"
  key_id = "production-nodes-auth"
}
```

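Correspondingly, a downstream node references the matching key under the
`worker-auth` purpose. A sketch, reusing the IoT key from the example above:

```hcl
# Sketch: on a downstream node, the same key material appears with the
# worker-auth purpose, pairing it with the upstream's
# downstream-worker-auth stanza.
kms "aead" {
  purpose = "worker-auth"
  aead_type = "aes-gcm"
  key = "XthZVtFtBD1Bw1XwAWhZKVrIwRhR7HcZ"
  key_id = "iot-nodes-auth"
}
```
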
The examples above encode key bytes directly in the configuration file because
they use the `aead` method, where you supply a key directly. In production you
should use a KMS such as AWS KMS, GCP CKMS, Azure Key Vault, or HashiCorp
Vault. For a complete guide to all available KMS types,
see our [KMS documentation](/boundary/docs/configuration/kms).
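
As a sketch of what a production stanza might look like, here is an `awskms`
block; the region and key id values are placeholders, and each KMS type has
its own set of parameters:

```hcl
# Sketch only: an AWS KMS-backed worker-auth key. The region and
# kms_key_id values are placeholders, not working defaults.
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-example00000"
}
```
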
## Complete configuration example

```hcl
listener "tcp" {
  purpose = "proxy"
  tls_disable = true
  address = "127.0.0.1"
}

worker {
  # The name attr must be unique across workers
  name = "demo-worker-1"
  description = "A default worker created for demonstration"

  # Workers must be able to reach upstreams on :9201
  initial_upstreams = [
    "10.0.0.1",
    "10.0.0.2",
    "10.0.0.3",
  ]

  public_addr = "myhost.mycompany.com"

  tags {
    type   = ["prod", "webservers"]
    region = ["us-east-1"]
  }

  # Uncomment if using a 0.13+ worker against a pre-0.13 controller
  # use_deprecated_kms_auth_method = true
}

# Must use the same key as the controller config
kms "aead" {
  purpose = "worker-auth"
  aead_type = "aes-gcm"
  key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
  key_id = "global_worker-auth"
}
```

The [`initial_upstreams`](/boundary/docs/configuration/worker/overview#initial_upstreams)
values are used to connect to upstream Boundary clusters.

## Resources

For more on how the `tags {}` block in the configuration above is used to
route sessions to the correct target, refer to the [Worker
Tags](/boundary/docs/concepts/filtering/worker-tags) page.
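
As a sketch of how such tags are consumed, a worker filter on a target (shown
here with the `egress_worker_filter` attribute; the filter syntax and
available attributes are covered on the linked page) might select the example
worker above:

```hcl
# Sketch only: matches workers whose "type" tag list contains "prod",
# such as demo-worker-1 in the complete example above.
egress_worker_filter = "\"prod\" in \"/tags/type\""
```
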
---
layout: docs
page_title: PKI worker configuration
description: |-
  PKI worker-specific parameters.
---

# PKI worker configuration

PKI Workers authenticate to Boundary using an activation token. They require
an accessible directory, defined by `auth_storage_path`, for credential
storage and rotation.

Example (not safe for production!):

```hcl
worker {
  auth_storage_path = "/var/lib/boundary"
  initial_upstreams = ["10.0.0.1"]
}
```

## Authorization methods

There are two mechanisms you can use to register a PKI worker to the cluster.

### Controller-led authorization flow

In this flow, the operator fetches an activation token from the controller's
`workers:create:controller-led` action (on the CLI, this is done via `boundary
workers create controller-led`). That activation token is given to the worker
via the `controller_generated_activation_token` parameter. You can supply it
directly, or indirectly via an env var or file by using the `env://` or
`file://` syntax:

```hcl
worker {
  auth_storage_path = "/var/lib/boundary"
  initial_upstreams = ["10.0.0.1"]
  controller_generated_activation_token = "neslat_........."
  # controller_generated_activation_token = "env://ACT_TOKEN"
  # controller_generated_activation_token = "file:///tmp/worker_act_token"
}
```

Once the worker starts, it reads this token and uses it to authorize to the
cluster. The token is one-time-use, so it is safe to leave it in the
configuration even after the worker has successfully authorized and
authenticated; it is unusable at that point.

Note: If this value is not present at worker startup and the worker is not yet
authorized, the worker prints and writes out the information needed for the
worker-led flow, described below. As long as the worker-led flow has not been
used to authorize the worker, providing the controller-generated activation
token and restarting the worker causes the worker to use it.

### Worker-led authorization flow

In this flow, the worker prints an authorization request token to two places:
the startup information written to stdout, and a file called
`auth_request_token` in the base of the configured `auth_storage_path`. This
token can be submitted to a controller at the `workers:create:worker-led`
path; on the CLI this is done via `boundary workers create worker-led
-worker-generated-auth-token`. No values are needed in the configuration file.

## KMS configuration

PKI Workers' credentials can be encrypted by including an optional `kms`
stanza with the purpose `worker-auth-storage`.

Example (not safe for production!):

```hcl
kms "aead" {
  purpose = "worker-auth-storage"
  aead_type = "aes-gcm"
  key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
  key_id = "worker-auth-storage"
}
```

## Session recording

<EnterpriseAlert product="boundary">This feature requires <a href="https://www.hashicorp.com/products/boundary">HCP Boundary or Boundary Enterprise</a></EnterpriseAlert>

[Session recording](/boundary/docs/configuration/session-recording) requires
at least one PKI worker with access to local and remote storage. PKI workers
used for session recording require an accessible directory, defined by
`recording_storage_path`, for storing in-progress session recordings. When a
session closes, the local session recording is moved to remote storage and
deleted locally.

Development example:

```hcl
worker {
  auth_storage_path = "/var/lib/boundary"
  initial_upstreams = ["10.0.0.1"]
  recording_storage_path = "/local/storage/directory"
}
```

~> **Note:** `name` and `description` are not valid config fields for PKI
workers. These fields are only valid for [KMS
Workers](/boundary/docs/configuration/worker/kms-worker). For PKI workers,
`name` and `description` can only be set through the API.