boundary/mini-docs day 2 - configuration/worker/index (#5730) (#5810)

* rework mini-docs day

* fix note callout

* Update website/content/docs/configuration/worker/index.mdx



* docs: Minor edits

---------

Co-authored-by: Ken Keller <104874953+mister-ken@users.noreply.github.com>
Co-authored-by: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Dan Heath 12 months ago committed by GitHub
parent 53f43edeef
commit d482e0bda2

All workers within Boundary use certificates and encryption keys to identify
themselves and protect data in transit. However, there are three different
ways to register them so that registration of workers can fit into any workflow: controller-led, worker-led, and via external KMS.
The sub-pages linked at the bottom of this page explain the differences in their configuration.
You must register workers using the worker-led or controller-led methods in the system with an API call. These workers require storage on disk to store the current set of credentials. Workers using an external KMS auto-register after authenticating. This makes them an easy mechanism to use for automatic scaling.
This also means they do not need to store
credentials locally; the KMS re-authenticates them each time they connect.
<Note title="Important">
Before version 0.15 of Boundary, there were two different types of workers, PKI & KMS workers.
If you are using pre-0.15 workers with pre-0.15 upstream configurations, please switch the documentation version to `0.13.x` - `0.14.x`. This will ensure you have the correct information.
</Note>
## Common worker parameters
The following fields apply to all registration mechanisms.
```hcl
worker {
  # ...
}
```
- `public_addr` - Specifies the public host or IP address (and optionally port)
  where clients can reach the worker for proxying. By default, it uses the
  address of the listener marked for `proxy` purpose. This is useful for cloud
  environments that do not bind a publicly accessible IP directly to a NIC on
  the host, such as an Amazon EIP.
You should omit this parameter in multi-hop configurations if this self-managed worker connects to an upstream HCP-managed worker.
This value can reference any of the following:
- a direct address string
- a file on disk (`file://`) from which to read the address
- an environment variable (`env://`) from which to read the address
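For example, each reference form might appear in the worker stanza as follows (a sketch; the address, file path, and environment variable name are placeholders):

```hcl
worker {
  # A direct address string (host, or host:port)
  public_addr = "worker1.example.com:9202"

  # Or read the address from a file on disk:
  # public_addr = "file:///etc/boundary.d/public_addr"

  # Or read the address from an environment variable:
  # public_addr = "env://BOUNDARY_PUBLIC_ADDR"
}
```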
- `initial_upstreams` - A list of hosts/IP addresses and optionally ports for
  reaching the Boundary cluster. The port will default to `:9201` if not
  specified. This value can be a direct access string array with the addresses,
  or it can refer to a file on disk (`file://`) from which the addresses will be
  read, or an environment variable (`env://`) from which to read the addresses. When
  using an environment variable or file, their contents must be formatted as a JSON array:
  `["127.0.0.1", "192.168.0.1", "10.0.0.1"]`
Self-managed workers connecting to HCP Boundary require the [`hcp_boundary_cluster_id`](/boundary/docs/configuration/worker/#hcp_boundary_cluster_id) parameter instead of `initial_upstreams`, unless you are configuring an HCP-managed worker as an ingress worker.
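As an illustration, the direct and indirect forms might look like this (a sketch; the addresses and the environment variable name are placeholders):

```hcl
worker {
  # A direct string array of upstream addresses; the port
  # defaults to :9201 when omitted
  initial_upstreams = ["10.0.0.1", "10.0.0.2:9201"]

  # Or read the addresses from an environment variable whose contents
  # are a JSON array, e.g. ["127.0.0.1", "192.168.0.1", "10.0.0.1"]:
  # initial_upstreams = "env://BOUNDARY_INITIAL_UPSTREAMS"
}
```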
- `hcp_boundary_cluster_id` - A string required to configure workers using worker-led or controller-led registration
  to connect to your HCP Boundary cluster rather than specifying
  `initial_upstreams`. This parameter is valid only for workers using the worker-led or controller-led
  registration method and for workers directly connected to HCP Boundary.
- `recording_storage_path` - A path to the local storage for recorded sessions.
  Boundary stores session recordings in the local storage while they are in progress.
  When the session is complete, Boundary moves the local session recording to remote storage and deletes the local copy.
- `recording_storage_minimum_available_capacity` - A value measured in bytes that
  defines the worker's local storage state. Boundary compares this value to the available local disk space found in the `recording_storage_path` and determines if a worker can perform session recording operations.
  The supported suffixes are kb, kib, mb, mib, gb, gib, tb, tib, which are not case sensitive. Example: 2GB, 2gb, 2GiB, 2gib.
The possible storage states based on the `recording_storage_minimum_available_capacity` are:
  - Available - The worker has storage above the threshold and can proxy sessions that have session recording enabled.
  - Low storage - The worker has storage below the threshold. It allows existing sessions to continue without interruption but prevents proxying new sessions that have session recording enabled. The worker cannot record new sessions or play back existing recordings.
  - Critically low storage - The worker falls below half the storage threshold. It forcefully closes existing sessions with session recording. The worker cannot record new sessions or play back existing recordings.
  - Out of storage - The worker is out of local disk space. It cannot record new sessions or play back existing recordings. The worker enters an unrecoverable state, requiring an administrator to intervene and resolve the issue.
  - Not configured - The worker lacks a configured local storage path.
  - Unknown - The worker starts with this default local storage state. This state indicates that the worker's local storage state is not yet known.
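A minimal sketch combining the two storage parameters (the path shown is a placeholder):

```hcl
worker {
  # Local storage for in-progress session recordings
  recording_storage_path = "/var/lib/boundary/recordings"

  # Threshold that determines the worker's storage state; suffixes
  # such as 2GB or 2gib are accepted and are not case sensitive
  recording_storage_minimum_available_capacity = "2GB"
}
```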
- `tags` - A map of key-value pairs where values are an array of strings. Most
commonly used for [filtering](/boundary/docs/concepts/filtering) targets a
worker can proxy via [worker
tags](/boundary/docs/concepts/filtering/worker-tags). On `SIGHUP`, the tags
set here will be re-parsed and new values used. It can also be a string
  referring to a file on disk (`file://`) or an environment variable (`env://`).
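For example, a worker might declare its tags like this (a sketch; the keys and values are placeholders):

```hcl
worker {
  # A map of key-value pairs whose values are arrays of strings,
  # usable in worker tag filters
  tags {
    type   = ["prod", "webservers"]
    region = ["us-east-1"]
  }
}
```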
## Signals
The `SIGHUP` signal causes a worker to reload its configuration file to pick up any updates for the `initial_upstreams` and `tags` values.
Boundary ignores other updated values.
The `SIGTERM` and `SIGINT` signals initiate a graceful shutdown on a worker. The worker waits for any sessions to drain
before shutting down. Workers in a graceful shutdown state do not receive any new work, including session proxying, from the control plane.
Multi-hop capabilities, including multi-hop sessions and Vault private access,
apply when a session or Vault credential request goes through more than one worker.
To enable this, you must connect two or more workers to each other in some
configuration. There are no limits on the number of workers allowed in a
multi-hop session configuration.
It helps to think of “upstream” and “downstream” nodes in the context of
multi-hop. If you view controllers as the “top” node of a multi-hop chain, any
worker connected to a node is "downstream" of that node. The node that any
particular worker connects to (whether another worker or a controller) is the
"upstream" of that node. For example, in the diagram below, Worker 2's upstream
is Worker 1, and its downstream is Worker 3.
You can configure [target worker filters][] with multi-hop workers to allow for
fine-grained control on which workers handle ingress and egress for session
traffic to a [target][]. Ingress worker filters specify the workers you use to initiate a session, and egress worker filters specify the workers you use to access targets.
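As a sketch of how such filters can relate to worker tags (the tag names and values here are placeholders, and targets are typically configured through the CLI, API, or Terraform rather than this file), filter expressions match against the workers' declared tags:

```hcl
# Hypothetical target configuration fragment; each expression matches
# workers whose tags contain the given value under the given key
ingress_worker_filter = "\"public\" in \"/tags/type\""
egress_worker_filter  = "\"prod\" in \"/tags/type\""
```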
### Multi-hop worker requirements
