# boundary-e2e-tests
This test suite tests Boundary in an end-to-end setting, using both the Boundary CLI and the Boundary Go API to exercise Boundary through various user workflows. It is designed to run in a variety of environments as long as the appropriate environment variables are set. The test suite itself uses the standard Go `testing` library.
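As a rough illustration of the CLI-driven style, a test can shell out to a command and capture its output, similar in spirit to the suite's `RunCommand` helper. The simplified `runCommand` below is a hypothetical sketch, not the real helper:

```go
// A minimal, hypothetical sketch of how an e2e test can drive the Boundary
// CLI from Go. The real suite provides a RunCommand helper; this simplified
// runCommand is illustrative only.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCommand executes a command and returns its combined stdout/stderr.
func runCommand(name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.String(), err
}

func main() {
	// `echo` keeps this sketch runnable without a boundary binary; a real
	// test would run something like runCommand("boundary", "targets", "list").
	out, err := runCommand("echo", "hello from the cli")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(out)
}
```

A real test would then assert on the command's output or exit status, failing the test via `t.Fatal` when the CLI misbehaves.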
One way to set up an environment is to use Enos to create the desired infrastructure.
## Getting Started
### Enos
Set up Enos as described here. Then, use the following commands to run tests:
```shell
cd enos
enos scenario list
# `run` executes the tests and destroys the associated infrastructure in one command
enos scenario run e2e_{scenario} builder:local
# `launch` executes the tests, but leaves the infrastructure online for debugging purposes
enos scenario launch e2e_{scenario} builder:local
enos scenario output   # displays any defined enos output
enos scenario destroy  # destroys the infrastructure
```
Enos scenarios set up the infrastructure, set the appropriate environment variables, and run the tests specified in the scenario file.
Note: To run the `e2e_host_aws` scenario, you will need access to the Boundary team's test AWS account.
### Local
Set the appropriate environment variables:
```shell
export BOUNDARY_ADDR= # e.g. http://127.0.0.1:9200
export E2E_PASSWORD_AUTH_METHOD_ID= # e.g. ampw_1234567890
export E2E_PASSWORD_ADMIN_LOGIN_NAME= # e.g. "admin"
export E2E_PASSWORD_ADMIN_PASSWORD= # e.g. "password"

# For e2e/host/static
export E2E_TARGET_IP= # e.g. 192.168.0.1
export E2E_SSH_KEY_PATH= # e.g. /Users/username/key.pem
export E2E_SSH_USER= # e.g. ubuntu

# For e2e/host/aws
export E2E_AWS_ACCESS_KEY_ID=
export E2E_AWS_SECRET_ACCESS_KEY=
export E2E_AWS_HOST_SET_FILTER1= # e.g. "tag:testtag=true"
export E2E_AWS_HOST_SET_IPS1= # e.g. "[\"1.2.3.4\", \"2.3.4.5\"]"
export E2E_AWS_HOST_SET_FILTER2= # e.g. "tag:testtagtwo=test"
export E2E_AWS_HOST_SET_IPS2= # e.g. "[\"1.2.3.4\"]"
export E2E_SSH_KEY_PATH= # e.g. /Users/username/key.pem
export E2E_SSH_USER= # e.g. ubuntu

# For e2e/credential/vault
export VAULT_ADDR= # e.g. http://127.0.0.1:8200
export VAULT_TOKEN=
export E2E_TARGET_IP= # e.g. 192.168.0.1
export E2E_SSH_KEY_PATH= # e.g. /Users/username/key.pem
export E2E_SSH_USER= # e.g. ubuntu
```
Then, run:
```shell
go test github.com/hashicorp/boundary/testing/e2e/target  # run target tests
go test ./target/  # run target tests if running from this directory
go test github.com/hashicorp/boundary/testing/e2e/target -v  # verbose
go test github.com/hashicorp/boundary/testing/e2e/target -v -run '^TestCreateTargetApi$'  # run a specific test
```
## Adding Tests
Tests live under this directory. Additional tests can be added to an existing Go package, or a new package can be created. If you create a new package, you will also need to create a new Enos scenario.
Enos is organized into scenarios, where a scenario describes the environment you want the tests to run in. One scenario might contain a Boundary cluster and a target; another might contain a Boundary cluster and a Vault instance. Scenarios can be found in `boundary/enos`.
To run these tests in CI, the `enos-run.yml` GitHub Actions workflow must be updated to include the new scenario (see the matrix).
## Development
To assist with iterating on tests against Enos-launched infrastructure, you can do the following:
Add the following snippet to print out environment variable information
# `c` is the output from `loadConfig()`
s, _ := json.MarshalIndent(c, "", "\t")
log.Printf("%s", s)
Launch an Enos scenario:

```shell
enos scenario launch e2e_{scenario} builder:local
enos scenario output
```
Take the printed environment variable information and export it in another terminal session:

```shell
export BOUNDARY_ADDR=
export E2E_PASSWORD_AUTH_METHOD_ID=
...
```
Run your tests:

```shell
go test -v {go package}
```