From 551ba2e525f60611ae3b5d64a0fada2415cff4a3 Mon Sep 17 00:00:00 2001 From: Liam Cervante Date: Wed, 10 Sep 2025 17:22:20 +0200 Subject: [PATCH] Implement controlling destroy functionality within Terraform Test (#37359) * Add ability to parse backend blocks present in a test file's run blocks, validate configuration (#36541) * Add ability to parse backend blocks from a run block * Add validation to avoid multiple backend blocks across run blocks that use the same internal state file. Update tests. * Add validation to avoid multiple backend blocks within a single run block. Update tests. * Remove use of quotes in diagnostic messages * Add validation to avoid backend blocks being used in plan run blocks. Update tests. * Correct local backend blocks in new test fixtures * Add test to show that different test files can use same backend block for same state key. * Add validation to enforce state-storage backend types are used * Remove TODO comment We only need to consider one file at a time when checking if a state_key already has a backend associated with it; parallelism in `terraform test` is scoped down to individual files. * Add validation to assert that the backend block must be in the first apply command for an internal state * Consolidate backend block validation inside a single if statement * Add initial version of validation that ensures a backend isn't re-used within a file * Explicitly set the state_key at the point of parsing the config TODO: What should be done with method (moduletest.Run).GetStateKey? * Update test fixture now that reusing backend configs has been made invalid * Add automated test showing validation of reused configuration blocks * Skip test due to flakiness, minor change to test config naming * Update test so it tolerates non-deterministic order run blocks are evaluated in * Remove unnecessary value assignment to r.StateKey * Replace use of GetStateKey() with accessing the state key that's now set during test config parsing * Fix bug so that run blocks using child modules get the correct state key set at parsing time * Update acceptance test to also cover scenario where root and child module state keys are in use * Update test name * Add newline to regex * Ensure consistent place where repeat backend error is raised from * Write leftover test state(s) to file (#36614) * Add additional validation that the backend used in a run is a supported type (#36648) * Prevent test run when leftover state data is present (#36685) * `test`: Set the initial state for a state files from a backend, allow the run that defines a backend to write state to the backend (#36646) * Allow use of backend block to set initial state for a state key * Note about alternative place to keep 'backend factories' * Allow the run block defining the backend to write state to it * Fix rebase * Change to accessing backend init functions via ContextOpts * Add tests demonstrating how runs containing backend blocks use and update persisted state * Fix test fixture * Address test failure due to trouble opening the state file This problem doesn't happen on MacOS, so I assume is due to the Linux environment of GitHub runners. * Fix issue with paths properly I hope * Fix defect in test assertion * Pivot back to approach introduced in 4afc3d7 * Let failing tests write to persistent state, add test case covering that. I split the acceptance tests into happy/unhappy paths for this, which required some of the helper functions' declarations to be raised up to package-level. 
* Change how we update internal state files, so that information about the associated backend is never lost * Fix UpdateStateFile * Ensure that the states map set by TestStateTransformer associates a backend with the correct run. * Misc spelling fixes in comments and a log * Replace state get/set functions with existing helpers (#36747) * Replace state get/set functions with existing helpers * Compare to string representation of state * Compare to string representation of state * Terraform Test: Allow skipping cleanup of entire test file or individual run blocks (#36729) * Add validation to enforce skip_cleanup=false cannot be used with backend blocks (#36857) * Integrate use of backend blocks in tests with skip_cleanup feature (#36848) * Fix nil pointer error, update test to not be table-driven * Make using a backend block implicitly set skip_cleanup to true * Stop state artefacts being created when a backend is in use and no cleanup errors have occurred * Return diagnostics so calling code knows if cleanup experienced issues or not * Update tests to show that when cleanup fails a state artefact is created * Add comment about why diag not returned * Bug fix - actually pull in the state from the state manager! * Split and simplify (?) tests to show the backend block can create and/or reuse prior state * Update test to use new fixtures, assert about state artefact. Fix nil pointer * Update test fixture in use, add guardrail for flakiness of forced error during cleanup * Refactor so resource ID set in only one place * Add documentation for using a `backend` block during `test` (#36832) * Add backend as a documented block in a run block * Add documentation about backend blocks in run blocks. * Make the relationship between backends and state keys more clear, other improvements * More test documentation (#36838) * Terraform Test: cleanup command (#36847) * Allow cleanup of states that depend on prior runs outputs (#36902) * terraform test: refactor graph edge calculation * create fake run block nodes during cleanup operation * tidy up TODOs * fix tests * remove old changes * Update internal/moduletest/graph/node_state_cleanup.go Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com> * Improve diagnostics around skip_cleanup conflicts (#37385) * Improve diagnostics around skip_cleanup conflicts * remove unused dynamic node * terraform test: refactor manifest file for simplicity (#37412) * test: refactor apply and plan functions so no run block is needed * terraform test: write and load state manifest files * Terraform Test: Allow skipping cleanup of entire test file or individual run blocks (#36729) * terraform test: add support for skip_cleanup attr * terraform test: add cleanup command * terraform test: add backend blocks * pause * fix tests * remove commented code * terraform test: make controlling destroy functionality experimental (#37419) * address comments * Update internal/moduletest/graph/node_state_cleanup.go Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com> --------- Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com> * add experimental changelog entries --------- Co-authored-by: Sarah French <15078782+SarahFrench@users.noreply.github.com> Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com> Co-authored-by: Samsondeen Dare --- .changes/footer-with-experiments.md | 4 + commands.go | 6 + internal/backend/local/test.go | 141 +- internal/cloud/test.go | 2 +- internal/cloud/test_test.go | 18 +- 
internal/command/arguments/test.go | 4 + internal/command/test.go | 273 ++- internal/command/test_cleanup.go | 145 ++ internal/command/test_test.go | 2173 ++++++++++++++--- .../backend-with-skip-cleanup/false/main.tf | 4 + .../false/main.tftest.hcl | 4 + .../backend-with-skip-cleanup/true/main.tf | 4 + .../true/main.tftest.hcl | 4 + .../command/testdata/test/cleanup/main.tf | 17 + .../testdata/test/cleanup/main.tftest.hcl | 26 + .../testdata/test/destroy_fail/main.tf | 2 +- .../test/non-existent-backend-type/main.tf | 10 + .../non-existent-backend-type/main.tftest.hcl | 9 + .../test/removed-backend-type/main.tf | 10 + .../test/removed-backend-type/main.tftest.hcl | 9 + .../child-module/main.tf | 10 + .../main.tf | 9 + .../main.tftest.hcl | 22 + .../test/reused-backend-config/main.tf | 10 + .../reused-backend-config/main.tftest.hcl | 15 + .../testdata/test/skip_cleanup/main.tf | 11 + .../test/skip_cleanup/main.tftest.hcl | 31 + .../testdata/test/skip_cleanup_simple/main.tf | 11 + .../test/skip_cleanup_simple/main.tftest.hcl | 7 + .../test/skip_cleanup_with_run_deps/main.tf | 20 + .../main.tftest.hcl | 23 + .../testdata/test/skip_file_cleanup/main.tf | 11 + .../test/skip_file_cleanup/main.tftest.hcl | 34 + .../no-prior-state/main.tf | 18 + .../no-prior-state/main.tftest.hcl | 15 + .../with-prior-state/main.tf | 18 + .../with-prior-state/main.tftest.hcl | 15 + .../with-prior-state/terraform.tfstate | 41 + internal/command/views/test.go | 54 +- internal/command/views/test_test.go | 105 +- internal/command/workdir/dir.go | 8 +- internal/configs/backend.go | 5 +- internal/configs/parser_config.go | 2 +- internal/configs/parser_config_dir_test.go | 22 + internal/configs/test_file.go | 203 +- .../backend_block_in_plan_run.tftest.hcl | 13 + ...ckend_block_in_second_apply_run.tftest.hcl | 13 + ...duplicate_backend_blocks_in_run.tftest.hcl | 12 + ...uplicate_backend_blocks_in_test.tftest.hcl | 18 + ...n_state_storage_backend_in_test.tftest.hcl | 7 + .../skip_cleanup_after_backend.tftest.hcl | 14 + .../valid-modules/with-tests-backend/main.tf | 11 + .../test_case_one.tftest.hcl | 22 + .../test_case_two.tftest.hcl | 15 + .../main.tf | 7 + .../test_file_one.tftest.hcl | 34 + .../test_file_two.tftest.hcl | 34 + internal/moduletest/graph/apply.go | 73 +- internal/moduletest/graph/eval_context.go | 158 +- .../moduletest/graph/eval_context_test.go | 2 +- .../moduletest/graph/node_state_cleanup.go | 148 +- internal/moduletest/graph/node_test_run.go | 47 +- .../moduletest/graph/node_test_run_cleanup.go | 105 + internal/moduletest/graph/plan.go | 55 +- .../moduletest/graph/test_graph_builder.go | 16 +- .../moduletest/graph/transform_context.go | 47 - .../moduletest/graph/transform_providers.go | 3 + .../graph/transform_state_cleanup.go | 24 +- .../moduletest/graph/transform_test_run.go | 52 +- internal/moduletest/graph/variables.go | 18 +- internal/moduletest/graph/wait.go | 25 +- internal/moduletest/opts.go | 85 + internal/moduletest/run.go | 109 +- internal/moduletest/run_test.go | 2 +- internal/moduletest/states/manifest.go | 558 +++++ internal/moduletest/states/states.go | 29 + internal/moduletest/suite.go | 20 +- internal/terraform/context_plan.go | 5 +- internal/terraform/graph_builder_plan.go | 7 +- internal/terraform/node_resource_abstract.go | 5 + .../node_resource_abstract_instance.go | 5 - 81 files changed, 4490 insertions(+), 893 deletions(-) create mode 100644 internal/command/test_cleanup.go create mode 100644 internal/command/testdata/test/backend-with-skip-cleanup/false/main.tf create mode 
100644 internal/command/testdata/test/backend-with-skip-cleanup/false/main.tftest.hcl create mode 100644 internal/command/testdata/test/backend-with-skip-cleanup/true/main.tf create mode 100644 internal/command/testdata/test/backend-with-skip-cleanup/true/main.tftest.hcl create mode 100644 internal/command/testdata/test/cleanup/main.tf create mode 100644 internal/command/testdata/test/cleanup/main.tftest.hcl create mode 100644 internal/command/testdata/test/non-existent-backend-type/main.tf create mode 100644 internal/command/testdata/test/non-existent-backend-type/main.tftest.hcl create mode 100644 internal/command/testdata/test/removed-backend-type/main.tf create mode 100644 internal/command/testdata/test/removed-backend-type/main.tftest.hcl create mode 100644 internal/command/testdata/test/reused-backend-config-child-modules/child-module/main.tf create mode 100644 internal/command/testdata/test/reused-backend-config-child-modules/main.tf create mode 100644 internal/command/testdata/test/reused-backend-config-child-modules/main.tftest.hcl create mode 100644 internal/command/testdata/test/reused-backend-config/main.tf create mode 100644 internal/command/testdata/test/reused-backend-config/main.tftest.hcl create mode 100644 internal/command/testdata/test/skip_cleanup/main.tf create mode 100644 internal/command/testdata/test/skip_cleanup/main.tftest.hcl create mode 100644 internal/command/testdata/test/skip_cleanup_simple/main.tf create mode 100644 internal/command/testdata/test/skip_cleanup_simple/main.tftest.hcl create mode 100644 internal/command/testdata/test/skip_cleanup_with_run_deps/main.tf create mode 100644 internal/command/testdata/test/skip_cleanup_with_run_deps/main.tftest.hcl create mode 100644 internal/command/testdata/test/skip_file_cleanup/main.tf create mode 100644 internal/command/testdata/test/skip_file_cleanup/main.tftest.hcl create mode 100644 internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tf create mode 100644 internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tftest.hcl create mode 100644 internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tf create mode 100644 internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tftest.hcl create mode 100644 internal/command/testdata/test/valid-use-local-backend/with-prior-state/terraform.tfstate create mode 100644 internal/configs/testdata/invalid-test-files/backend_block_in_plan_run.tftest.hcl create mode 100644 internal/configs/testdata/invalid-test-files/backend_block_in_second_apply_run.tftest.hcl create mode 100644 internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_run.tftest.hcl create mode 100644 internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_test.tftest.hcl create mode 100644 internal/configs/testdata/invalid-test-files/non_state_storage_backend_in_test.tftest.hcl create mode 100644 internal/configs/testdata/invalid-test-files/skip_cleanup_after_backend.tftest.hcl create mode 100644 internal/configs/testdata/valid-modules/with-tests-backend/main.tf create mode 100644 internal/configs/testdata/valid-modules/with-tests-backend/test_case_one.tftest.hcl create mode 100644 internal/configs/testdata/valid-modules/with-tests-backend/test_case_two.tftest.hcl create mode 100644 internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/main.tf create mode 100644 internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_one.tftest.hcl create 
mode 100644 internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_two.tftest.hcl create mode 100644 internal/moduletest/graph/node_test_run_cleanup.go delete mode 100644 internal/moduletest/graph/transform_context.go create mode 100644 internal/moduletest/opts.go create mode 100644 internal/moduletest/states/manifest.go create mode 100644 internal/moduletest/states/states.go diff --git a/.changes/footer-with-experiments.md b/.changes/footer-with-experiments.md index 488ad3ade4..4ba366912e 100644 --- a/.changes/footer-with-experiments.md +++ b/.changes/footer-with-experiments.md @@ -3,6 +3,10 @@ EXPERIMENTS: Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases. - The experimental "deferred actions" feature, enabled by passing the `-allow-deferral` option to `terraform plan`, permits `count` and `for_each` arguments in `module`, `resource`, and `data` blocks to have unknown values and allows providers to react more flexibly to unknown values. +- `terraform test cleanup`: The experimental `test cleanup` command. During test operations in experimental builds of Terraform, a manifest file and state files for each failed cleanup operation are saved within the local `.terraform` directory. The `test cleanup` command attempts to clean up those leftover local state files automatically, without requiring manual intervention. +- `terraform test`: `backend` blocks and `skip_cleanup` attributes: + - Test authors can now specify `backend` blocks within `run` blocks in Terraform Test files. Run blocks with `backend` blocks will load state from the specified backend instead of starting from empty state on every execution. This allows test authors to keep long-running test infrastructure alive between test runs, saving time during regular test operations. + - Test authors can now specify `skip_cleanup` attributes within test files and within run blocks. The `skip_cleanup` attribute tells `terraform test` not to clean up state files produced by run blocks with this attribute set to true. The state files for affected run blocks will be written to disk within the `.terraform` directory, where they can then be cleaned up manually using the (also experimental) `terraform test cleanup` command.
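A minimal sketch of how these two features read in a test file (the run block names, backend path, and module source here are illustrative only, not taken from this changeset; per the validation rules added in this change, a `backend` block must appear in the first apply run for a given state, implicitly skips cleanup for that state, and cannot share its state with a later `skip_cleanup` run):

```hcl
# main.tftest.hcl (hypothetical example)

run "setup" {
  # State for this run is loaded from and written back to this backend,
  # instead of starting empty on every execution.
  backend "local" {
    path = "setup.tfstate"
  }
}

run "verify" {
  # Runs against a separate child-module state, so it does not conflict
  # with the backend-managed state above.
  module {
    source = "./verify"
  }

  # State produced by this run is retained on disk under .terraform and
  # can be removed later with `terraform test cleanup`.
  skip_cleanup = true
}
```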
## Previous Releases diff --git a/commands.go b/commands.go index b202d34460..7942484504 100644 --- a/commands.go +++ b/commands.go @@ -456,6 +456,12 @@ func initCommands( Meta: meta, }, nil } + + Commands["test cleanup"] = func() (cli.Command, error) { + return &command.TestCleanupCommand{ + Meta: meta, + }, nil + } } PrimaryCommands = []string{ diff --git a/internal/backend/local/test.go b/internal/backend/local/test.go index 79f63a9d52..d8307b65fd 100644 --- a/internal/backend/local/test.go +++ b/internal/backend/local/test.go @@ -11,12 +11,14 @@ import ( "path/filepath" "slices" + "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/backend/backendrun" "github.com/hashicorp/terraform/internal/command/junit" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/moduletest" "github.com/hashicorp/terraform/internal/moduletest/graph" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -24,6 +26,15 @@ import ( type TestSuiteRunner struct { Config *configs.Config + // BackendFactory is used to enable initializing multiple backend types, + // depending on which backends are used in a test suite. + // + // Note: This is currently necessary because the source of the init functions, + // the backend/init package, experiences import cycles if used in other test-related + // packages. We set this field on a TestSuiteRunner when making runners in the + // command package, which is the main place where backend/init has previously been used. + BackendFactory func(string) backend.InitFn + TestingDirectory string // Global variables comes from the main configuration directory, @@ -60,6 +71,14 @@ type TestSuiteRunner struct { Concurrency int DeferralAllowed bool + + CommandMode moduletest.CommandMode + + // Repair is used to indicate whether the test cleanup command should run in + // "repair" mode. In this mode, the cleanup command will only remove state + // files that are a result of failed destroy operations, leaving any + // state due to skip_cleanup in place. + Repair bool } func (runner *TestSuiteRunner) Stop() { @@ -74,7 +93,7 @@ func (runner *TestSuiteRunner) Cancel() { runner.Cancelled = true } -func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { +func (runner *TestSuiteRunner) Test(experimentsAllowed bool) (moduletest.Status, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics if runner.Concurrency < 1 { @@ -87,6 +106,14 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { return moduletest.Error, diags } + manifest, err := teststates.LoadManifest(".", experimentsAllowed) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to open state manifest", + fmt.Sprintf("The test state manifest file could not be opened: %s.", err))) + } + runner.View.Abstract(suite) // We have two sets of variables that are available to different test files. 
@@ -104,38 +131,24 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { if runner.Cancelled { return moduletest.Error, diags } - file := suite.Files[name] - - currentGlobalVariables := runner.GlobalVariables - if filepath.Dir(file.Name) == runner.TestingDirectory { - // If the file is in the test directory, we'll use the union of the - // global variables and the global test variables. - currentGlobalVariables = testDirectoryGlobalVariables - } - - evalCtx := graph.NewEvalContext(graph.EvalContextOpts{ - Config: runner.Config, - CancelCtx: runner.CancelledCtx, - StopCtx: runner.StoppedCtx, - Verbose: runner.Verbose, - Render: runner.View, - UnparsedVariables: currentGlobalVariables, - Concurrency: runner.Concurrency, - DeferralAllowed: runner.DeferralAllowed, - }) - fileRunner := &TestFileRunner{ - Suite: runner, - EvalContext: evalCtx, + Suite: runner, + TestDirectoryGlobalVariables: testDirectoryGlobalVariables, + Manifest: manifest, } - runner.View.File(file, moduletest.Starting) fileRunner.Test(file) runner.View.File(file, moduletest.Complete) suite.Status = suite.Status.Merge(file.Status) } + if err := manifest.Save(experimentsAllowed); err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to save state manifest", + fmt.Sprintf("The test state manifest file could not be saved: %s.", err))) + } runner.View.Conclusion(suite) if runner.JUnit != nil { @@ -155,6 +168,8 @@ func (runner *TestSuiteRunner) collectTests() (*moduletest.Suite, tfdiags.Diagno var diags tfdiags.Diagnostics suite := &moduletest.Suite{ + Status: moduletest.Pending, + CommandMode: runner.CommandMode, Files: func() map[string]*moduletest.File { files := make(map[string]*moduletest.File) @@ -219,8 +234,9 @@ func (runner *TestSuiteRunner) collectTests() (*moduletest.Suite, tfdiags.Diagno type TestFileRunner struct { // Suite contains all the helpful metadata about the test that we need // during the execution of a file. - Suite *TestSuiteRunner - EvalContext *graph.EvalContext + Suite *TestSuiteRunner + TestDirectoryGlobalVariables map[string]backendrun.UnparsedVariableValue + Manifest *teststates.TestManifest } func (runner *TestFileRunner) Test(file *moduletest.File) { @@ -230,6 +246,25 @@ func (runner *TestFileRunner) Test(file *moduletest.File) { // checking anything about them. file.Diagnostics = file.Diagnostics.Append(file.Config.Validate(runner.Suite.Config)) + states, stateDiags := runner.Manifest.LoadStates(file, runner.Suite.BackendFactory) + file.Diagnostics = file.Diagnostics.Append(stateDiags) + if stateDiags.HasErrors() { + file.Status = moduletest.Error + } + + if runner.Suite.CommandMode != moduletest.CleanupMode { + // then we can't have any state files pending cleanup + for _, state := range states { + if state.Manifest.Reason != teststates.StateReasonNone { + file.Diagnostics = file.Diagnostics.Append(tfdiags.Sourceless( + tfdiags.Error, + "State manifest not empty", + fmt.Sprintf("The state manifest for %s should be empty before running tests. This could be due to a previous test run not cleaning up after itself. Please ensure that all state files are cleaned up before running tests.", file.Name))) + file.Status = moduletest.Error + } + } + } + // We'll execute the tests in the file. First, mark the overall status as // being skipped. This will ensure that if we've cancelled and the files not // going to do anything it'll be marked as skipped. 
@@ -238,13 +273,39 @@ func (runner *TestFileRunner) Test(file *moduletest.File) { // If we have zero run blocks then we'll just mark the file as passed. file.Status = file.Status.Merge(moduletest.Pass) return + } else if runner.Suite.CommandMode == moduletest.CleanupMode { + // In cleanup mode, we don't actually execute the run blocks so we'll + // start with the assumption they have all passed. + file.Status = file.Status.Merge(moduletest.Pass) + } + + currentGlobalVariables := runner.Suite.GlobalVariables + if filepath.Dir(file.Name) == runner.Suite.TestingDirectory { + // If the file is in the test directory, we'll use the union of the + // global variables and the global test variables. + currentGlobalVariables = runner.TestDirectoryGlobalVariables } + evalCtx := graph.NewEvalContext(graph.EvalContextOpts{ + Config: runner.Suite.Config, + CancelCtx: runner.Suite.CancelledCtx, + StopCtx: runner.Suite.StoppedCtx, + Verbose: runner.Suite.Verbose, + Render: runner.Suite.View, + UnparsedVariables: currentGlobalVariables, + FileStates: states, + Concurrency: runner.Suite.Concurrency, + DeferralAllowed: runner.Suite.DeferralAllowed, + Mode: runner.Suite.CommandMode, + Repair: runner.Suite.Repair, + }) + // Build the graph for the file. b := graph.TestGraphBuilder{ Config: runner.Suite.Config, File: file, ContextOpts: runner.Suite.Opts, + CommandMode: runner.Suite.CommandMode, } g, diags := b.Build() file.Diagnostics = file.Diagnostics.Append(diags) @@ -253,13 +314,37 @@ } // walk and execute the graph - diags = diags.Append(graph.Walk(g, runner.EvalContext)) + diags = diags.Append(graph.Walk(g, evalCtx)) + + // Save any dangling state files. We'll check all the states we have in + // memory, and if any are skipped or errored it means we might want to run + // the cleanup command in the future. This means we need to save the other + // state files as dependencies in case they are needed during the cleanup. + + saveDependencies := false + for _, state := range states { + if state.Manifest.Reason == teststates.StateReasonSkip || state.Manifest.Reason == teststates.StateReasonError { + saveDependencies = true // at least one state file has resources left over + break + } + } + if saveDependencies { + for _, state := range states { + if state.Manifest.Reason == teststates.StateReasonNone { + // Any states that have no reason to be saved will be updated + // to the dependency reason, which tells the manifest to + // save those state files as well. + state.Manifest.Reason = teststates.StateReasonDep + } + } + } + diags = diags.Append(runner.Manifest.SaveStates(file, states)) // If the graph walk was terminated, we don't want to add the diagnostics. // The error the user receives will just be: // Failure! 0 passed, 1 failed.
// exit status 1 - if runner.EvalContext.Cancelled() { + if evalCtx.Cancelled() { file.UpdateStatus(moduletest.Error) log.Printf("[TRACE] TestFileRunner: graph walk terminated for %s", file.Name) return diff --git a/internal/cloud/test.go b/internal/cloud/test.go index 96703b777f..8d07fee453 100644 --- a/internal/cloud/test.go +++ b/internal/cloud/test.go @@ -121,7 +121,7 @@ func (runner *TestSuiteRunner) Cancel() { runner.Cancelled = true } -func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { +func (runner *TestSuiteRunner) Test(_ bool) (moduletest.Status, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics configDirectory, err := filepath.Abs(runner.ConfigDirectory) diff --git a/internal/cloud/test_test.go b/internal/cloud/test_test.go index e33cec14df..35175152ae 100644 --- a/internal/cloud/test_test.go +++ b/internal/cloud/test_test.go @@ -82,7 +82,7 @@ func TestTest(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } @@ -168,7 +168,7 @@ func TestTest_Parallelism(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } @@ -238,7 +238,7 @@ func TestTest_JSON(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } @@ -335,7 +335,7 @@ func TestTest_Verbose(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } @@ -498,7 +498,7 @@ func TestTest_Cancel(t *testing.T) { var diags tfdiags.Diagnostics go func() { defer done() - _, diags = runner.Test() + _, diags = runner.Test(false) }() stop() // immediately cancel @@ -621,7 +621,7 @@ func TestTest_DelayedCancel(t *testing.T) { var diags tfdiags.Diagnostics go func() { defer done() - _, diags = runner.Test() + _, diags = runner.Test(false) }() // Wait for finish! @@ -743,7 +743,7 @@ func TestTest_ForceCancel(t *testing.T) { var diags tfdiags.Diagnostics go func() { defer done() - _, diags = runner.Test() + _, diags = runner.Test(false) }() stop() @@ -893,7 +893,7 @@ func TestTest_LongRunningTest(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } @@ -977,7 +977,7 @@ func TestTest_LongRunningTestJSON(t *testing.T) { clientOverride: client, } - _, diags := runner.Test() + _, diags := runner.Test(false) if len(diags) > 0 { t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings()) } diff --git a/internal/command/arguments/test.go b/internal/command/arguments/test.go index 08b652db1f..3eba4076f8 100644 --- a/internal/command/arguments/test.go +++ b/internal/command/arguments/test.go @@ -51,6 +51,9 @@ type Test struct { // DeferralAllowed enables deferrals during test operations. This matches // the same-named flag in the Operation struct. DeferralAllowed bool + + // These flags are only relevant to the "test cleanup" command. 
+ Repair bool } func ParseTest(args []string) (*Test, tfdiags.Diagnostics) { @@ -70,6 +73,7 @@ func ParseTest(args []string) (*Test, tfdiags.Diagnostics) { cmdFlags.IntVar(&test.OperationParallelism, "parallelism", DefaultParallelism, "parallelism") cmdFlags.IntVar(&test.RunParallelism, "run-parallelism", DefaultParallelism, "run-parallelism") cmdFlags.BoolVar(&test.DeferralAllowed, "allow-deferral", false, "allow-deferral") + cmdFlags.BoolVar(&test.Repair, "repair", false, "repair") // TODO: Finalise the name of this flag. cmdFlags.StringVar(&test.CloudRunSource, "cloud-run", "", "cloud-run") diff --git a/internal/command/test.go b/internal/command/test.go index 68538aa308..6ae62ea980 100644 --- a/internal/command/test.go +++ b/internal/command/test.go @@ -5,18 +5,28 @@ package command import ( "context" + "fmt" + "maps" "path/filepath" + "slices" + "sort" "strings" "time" + "github.com/hashicorp/hcl/v2" + + "github.com/hashicorp/terraform/internal/backend/backendrun" + backendInit "github.com/hashicorp/terraform/internal/backend/init" "github.com/hashicorp/terraform/internal/backend/local" "github.com/hashicorp/terraform/internal/cloud" "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/jsonformat" "github.com/hashicorp/terraform/internal/command/junit" "github.com/hashicorp/terraform/internal/command/views" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/logging" "github.com/hashicorp/terraform/internal/moduletest" + "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -89,91 +99,17 @@ func (c *TestCommand) Synopsis() string { } func (c *TestCommand) Run(rawArgs []string) int { - var diags tfdiags.Diagnostics - - common, rawArgs := arguments.ParseView(rawArgs) - c.View.Configure(common) - - // Since we build the colorizer for the cloud runner outside the views - // package we need to propagate our no-color setting manually. Once the - // cloud package is fully migrated over to the new streams IO we should be - // able to remove this. - c.Meta.color = !common.NoColor - c.Meta.Color = c.Meta.color - - args, diags := arguments.ParseTest(rawArgs) + preparation, diags := c.setupTestExecution(moduletest.NormalMode, "test", rawArgs) if diags.HasErrors() { - c.View.Diagnostics(diags) - c.View.HelpPrompt("test") - return 1 - } - c.Meta.parallelism = args.OperationParallelism - - view := views.NewTest(args.ViewType, c.View) - - // EXPERIMENTAL: maybe enable deferred actions - if !c.AllowExperimentalFeatures && args.DeferralAllowed { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Failed to parse command-line flags", - "The -allow-deferral flag is only valid in experimental builds of Terraform.", - )) - view.Diagnostics(nil, nil, diags) return 1 } - // The specified testing directory must be a relative path, and it must - // point to a directory that is a descendant of the configuration directory. 
- if !filepath.IsLocal(args.TestDirectory) { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Invalid testing directory", - "The testing directory must be a relative path pointing to a directory local to the configuration directory.")) - - view.Diagnostics(nil, nil, diags) - return 1 - } - - config, configDiags := c.loadConfigWithTests(".", args.TestDirectory) - diags = diags.Append(configDiags) - if configDiags.HasErrors() { - view.Diagnostics(nil, nil, diags) - return 1 - } - - // Users can also specify variables via the command line, so we'll parse - // all that here. - var items []arguments.FlagNameValue - for _, variable := range args.Vars.All() { - items = append(items, arguments.FlagNameValue{ - Name: variable.Name, - Value: variable.Value, - }) - } - c.variableArgs = arguments.FlagNameValueSlice{Items: &items} - - // Collect variables for "terraform test" - testVariables, variableDiags := c.collectVariableValuesForTests(args.TestDirectory) - diags = diags.Append(variableDiags) - - variables, variableDiags := c.collectVariableValues() - diags = diags.Append(variableDiags) - if variableDiags.HasErrors() { - view.Diagnostics(nil, nil, diags) - return 1 - } - - opts, err := c.contextOpts() - if err != nil { - diags = diags.Append(err) - view.Diagnostics(nil, nil, diags) - return 1 - } - - // Print out all the diagnostics we have from the setup. These will just be - // warnings, and we want them out of the way before we start the actual - // testing. - view.Diagnostics(nil, nil, diags) + args := preparation.Args + view := preparation.View + config := preparation.Config + variables := preparation.Variables + testVariables := preparation.TestVariables + opts := preparation.Opts // We have two levels of interrupt here. A 'stop' and a 'cancel'. A 'stop' // is a soft request to stop. We'll finish the current test, do the tidy up, @@ -222,7 +158,8 @@ func (c *TestCommand) Run(rawArgs []string) int { } } else { localRunner := &local.TestSuiteRunner{ - Config: config, + BackendFactory: backendInit.Backend, + Config: config, // The GlobalVariables are loaded from the // main configuration directory // The GlobalTestVariables are loaded from the @@ -260,7 +197,7 @@ func (c *TestCommand) Run(rawArgs []string) int { defer stop() defer cancel() - status, testDiags = runner.Test() + status, testDiags = runner.Test(c.AllowExperimentalFeatures) }() // Wait for the operation to complete, or for an interrupt to occur. @@ -318,3 +255,173 @@ func (c *TestCommand) Run(rawArgs []string) int { } return 0 } + +type TestRunnerSetup struct { + Args *arguments.Test + View views.Test + Config *configs.Config + Variables map[string]backendrun.UnparsedVariableValue + TestVariables map[string]backendrun.UnparsedVariableValue + Opts *terraform.ContextOpts +} + +func (m *Meta) setupTestExecution(mode moduletest.CommandMode, command string, rawArgs []string) (preparation TestRunnerSetup, diags tfdiags.Diagnostics) { + common, rawArgs := arguments.ParseView(rawArgs) + m.View.Configure(common) + + var moreDiags tfdiags.Diagnostics + + // Since we build the colorizer for the cloud runner outside the views + // package we need to propagate our no-color setting manually. Once the + // cloud package is fully migrated over to the new streams IO we should be + // able to remove this. 
+ m.color = !common.NoColor + m.Color = m.color + + preparation.Args, moreDiags = arguments.ParseTest(rawArgs) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + m.View.Diagnostics(diags) + m.View.HelpPrompt(command) + return + } + if preparation.Args.Repair && mode != moduletest.CleanupMode { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid command mode", + "The -repair flag is only valid for the 'test cleanup' command.")) + m.View.Diagnostics(diags) + return preparation, diags + } + + m.parallelism = preparation.Args.OperationParallelism + + view := views.NewTest(preparation.Args.ViewType, m.View) + preparation.View = view + + // EXPERIMENTAL: maybe enable deferred actions + if !m.AllowExperimentalFeatures && preparation.Args.DeferralAllowed { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to parse command-line flags", + "The -allow-deferral flag is only valid in experimental builds of Terraform.", + )) + view.Diagnostics(nil, nil, diags) + return + } + + // The specified testing directory must be a relative path, and it must + // point to a directory that is a descendant of the configuration directory. + if !filepath.IsLocal(preparation.Args.TestDirectory) { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid testing directory", + "The testing directory must be a relative path pointing to a directory local to the configuration directory.")) + + view.Diagnostics(nil, nil, diags) + return + } + + preparation.Config, moreDiags = m.loadConfigWithTests(".", preparation.Args.TestDirectory) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + view.Diagnostics(nil, nil, diags) + return + } + + // Per file, ensure backends: + // * aren't reused + // * are valid types + var backendDiags tfdiags.Diagnostics + for _, tf := range preparation.Config.Module.Tests { + bucketHashes := make(map[int]string) + // Use an ordered list of backends, so that errors are raised on the second + // and subsequent times that a backend config is used in a file. + for _, bc := range orderBackendsByDeclarationLine(tf.BackendConfigs) { + f := backendInit.Backend(bc.Backend.Type) + if f == nil { + detail := fmt.Sprintf("There is no backend type named %q.", bc.Backend.Type) + if msg, removed := backendInit.RemovedBackends[bc.Backend.Type]; removed { + detail = msg + } + backendDiags = backendDiags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unsupported backend type", + Detail: detail, + Subject: &bc.Backend.TypeRange, + }) + continue + } + + b := f() + schema := b.ConfigSchema() + hash := bc.Backend.Hash(schema) + + if runName, exists := bucketHashes[hash]; exists { + // This backend's been encountered before + backendDiags = backendDiags.Append( + &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Repeat use of the same backend block", + Detail: fmt.Sprintf("The run %q contains a backend configuration that's already been used in run %q. Sharing the same backend configuration between separate runs will result in conflicting state updates.", bc.Run.Name, runName), + Subject: bc.Backend.TypeRange.Ptr(), + }, + ) + continue + } + bucketHashes[hash] = bc.Run.Name + } + } + diags = diags.Append(backendDiags) + if backendDiags.HasErrors() { + view.Diagnostics(nil, nil, diags) + return + } + + // Users can also specify variables via the command line, so we'll parse + // all that here.
+ var items []arguments.FlagNameValue + for _, variable := range preparation.Args.Vars.All() { + items = append(items, arguments.FlagNameValue{ + Name: variable.Name, + Value: variable.Value, + }) + } + m.variableArgs = arguments.FlagNameValueSlice{Items: &items} + + // Collect variables for "terraform test" + preparation.TestVariables, moreDiags = m.collectVariableValuesForTests(preparation.Args.TestDirectory) + diags = diags.Append(moreDiags) + + preparation.Variables, moreDiags = m.collectVariableValues() + diags = diags.Append(moreDiags) + if diags.HasErrors() { + view.Diagnostics(nil, nil, diags) + return + } + + opts, err := m.contextOpts() + if err != nil { + diags = diags.Append(err) + view.Diagnostics(nil, nil, diags) + return + } + preparation.Opts = opts + + // Print out all the diagnostics we have from the setup. These will just be + // warnings, and we want them out of the way before we start the actual + // testing. + view.Diagnostics(nil, nil, diags) + return +} + +// orderBackendsByDeclarationLine takes a map of state keys to backend configs and returns a list of +// those backend configs, sorted by the line their declaration range starts on. This allows identification +// of the second and subsequent times that a backend configuration is used in the same file. +func orderBackendsByDeclarationLine(backendConfigs map[string]configs.RunBlockBackend) []configs.RunBlockBackend { + bcs := slices.Collect(maps.Values(backendConfigs)) + sort.Slice(bcs, func(i, j int) bool { + return bcs[i].Run.DeclRange.Start.Line < bcs[j].Run.DeclRange.Start.Line + }) + return bcs +} diff --git a/internal/command/test_cleanup.go b/internal/command/test_cleanup.go new file mode 100644 index 0000000000..ed68f96931 --- /dev/null +++ b/internal/command/test_cleanup.go @@ -0,0 +1,145 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: BUSL-1.1 + +package command + +import ( + "context" + "strings" + "time" + + backendInit "github.com/hashicorp/terraform/internal/backend/init" + "github.com/hashicorp/terraform/internal/backend/local" + "github.com/hashicorp/terraform/internal/logging" + "github.com/hashicorp/terraform/internal/moduletest" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +// TestCleanupCommand is a command that cleans up left-over resources created +// during Terraform test runs. It runs the test command in cleanup mode. +type TestCleanupCommand struct { + Meta +} + +func (c *TestCleanupCommand) Help() string { + helpText := ` +Usage: terraform [global options] test cleanup [options] + + Cleans up left-over resources in states that were created during Terraform test runs. + + By default, this command ignores the skip_cleanup attributes in the manifest + file. Use the -repair flag to override this behavior, which will ensure that + resources that were intentionally left over are exempt from cleanup. + +Options: + + -repair Only clean up state left behind by failed destroy + operations, leaving state retained via skip_cleanup in place. + + -no-color If specified, output won't contain any color. + + -verbose Print detailed output during the cleanup process.
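+ + For example, to clean up only the state left behind by failed destroy + operations, while leaving state retained via skip_cleanup in place: + + terraform test cleanup -repair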
+` + return strings.TrimSpace(helpText) +} + +func (c *TestCleanupCommand) Synopsis() string { + return "Clean up left-over resources created during Terraform test runs" +} + +func (c *TestCleanupCommand) Run(rawArgs []string) int { + setup, diags := c.setupTestExecution(moduletest.CleanupMode, "test cleanup", rawArgs) + if diags.HasErrors() { + return 1 + } + + args := setup.Args + view := setup.View + config := setup.Config + variables := setup.Variables + testVariables := setup.TestVariables + opts := setup.Opts + + // We have two levels of interrupt here. A 'stop' and a 'cancel'. A 'stop' + // is a soft request to stop. We'll finish the current test, do the tidy up, + // but then skip all remaining tests and run blocks. A 'cancel' is a hard + // request to stop now. We'll cancel the current operation immediately + // even if it's a delete operation, and we won't clean up any infrastructure + // if we're halfway through a test. We'll print details explaining what was + // stopped so the user can do their best to recover from it. + + runningCtx, done := context.WithCancel(context.Background()) + stopCtx, stop := context.WithCancel(runningCtx) + cancelCtx, cancel := context.WithCancel(context.Background()) + + runner := &local.TestSuiteRunner{ + BackendFactory: backendInit.Backend, + Config: config, + // The GlobalVariables are loaded from the + // main configuration directory + // The GlobalTestVariables are loaded from the + // test directory + GlobalVariables: variables, + GlobalTestVariables: testVariables, + TestingDirectory: args.TestDirectory, + Opts: opts, + View: view, + Stopped: false, + Cancelled: false, + StoppedCtx: stopCtx, + CancelledCtx: cancelCtx, + Filter: args.Filter, + Verbose: args.Verbose, + Repair: args.Repair, + CommandMode: moduletest.CleanupMode, + } + + var testDiags tfdiags.Diagnostics + + go func() { + defer logging.PanicHandler() + defer done() + defer stop() + defer cancel() + + _, testDiags = runner.Test(c.Meta.AllowExperimentalFeatures) + }() + + // Wait for the operation to complete, or for an interrupt to occur. + select { + case <-c.ShutdownCh: + // Nice request to be cancelled. + + view.Interrupted() + runner.Stop() + stop() + + select { + case <-c.ShutdownCh: + // The user pressed it again, now we have to get it to stop as + // fast as possible. + + view.FatalInterrupt() + runner.Cancel() + cancel() + + waitTime := 5 * time.Second + + // We'll wait 5 seconds for this operation to finish now, regardless + // of whether it finishes successfully or not. + select { + case <-runningCtx.Done(): + case <-time.After(waitTime): + } + + case <-runningCtx.Done(): + // The application finished nicely after the request was stopped. + } + case <-runningCtx.Done(): + // tests finished normally with no interrupts. 
+ } + + view.Diagnostics(nil, nil, testDiags) + + return 0 +} diff --git a/internal/command/test_test.go b/internal/command/test_test.go index e7b25e7926..8926c782c5 100644 --- a/internal/command/test_test.go +++ b/internal/command/test_test.go @@ -7,6 +7,7 @@ import ( "bytes" "context" "encoding/json" + "errors" "fmt" "io" "os" @@ -14,11 +15,13 @@ import ( "path/filepath" "regexp" "runtime" + "sort" "strings" "testing" "time" "github.com/google/go-cmp/cmp" + "github.com/google/go-cmp/cmp/cmpopts" "github.com/hashicorp/cli" "github.com/zclconf/go-cty/cty" @@ -27,10 +30,14 @@ import ( "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" + "github.com/hashicorp/terraform/internal/getproviders" "github.com/hashicorp/terraform/internal/initwd" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/registry" + "github.com/hashicorp/terraform/internal/states/statemgr" "github.com/hashicorp/terraform/internal/terminal" + "github.com/hashicorp/terraform/internal/tfdiags" ) func TestTest_Runs(t *testing.T) { @@ -578,6 +585,8 @@ func TestTest_Interrupt(t *testing.T) { } func TestTest_DestroyFail(t *testing.T) { + // Testing that when a cleanup fails, we leave behind state files of the failed + // resources, and that the test command fails with a non-zero exit code. td := t.TempDir() testCopyDir(t, testFixturePath(path.Join("test", "destroy_fail")), td) t.Chdir(td) @@ -608,9 +617,10 @@ func TestTest_DestroyFail(t *testing.T) { c := &TestCommand{ Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - ShutdownCh: interrupt, + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ShutdownCh: interrupt, + AllowExperimentalFeatures: true, }, } @@ -669,11 +679,430 @@ main.tftest.hcl/single, and they need to be cleaned up manually: t.Errorf("expected output to be \n%s\n\nbut got \n%s\n\n diff:\n%s\n", cleanupMessage, output.Stdout(), diff) } - // This time the test command shouldn't have cleaned up the resource because - // the destroy failed. 
if provider.ResourceCount() != 4 { t.Errorf("should not have deleted all resources on completion but only has %v", provider.ResourceString()) } + + expectedStates := map[string][]string{ + "main.": {"test_resource.another", "test_resource.resource"}, + "main.double": {"test_resource.another", "test_resource.resource"}, + } + if diff := cmp.Diff(expectedStates, statesFromManifest(t, td)); diff != "" { + t.Fatalf("unexpected states: %s", diff) + } + + t.Run("cleanup failed state", func(t *testing.T) { + interrupt := make(chan struct{}) + provider.Interrupt = interrupt + provider.Provider.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse { + return providers.PlanResourceChangeResponse{ + PlannedState: req.ProposedNewState, + } + } + provider.Provider.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse { + return providers.ApplyResourceChangeResponse{ + NewState: req.PlannedState, + } + } + view, done = testView(t) + + c := &TestCleanupCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ShutdownCh: interrupt, + AllowExperimentalFeatures: true, + }, + } + + c.Run([]string{"-no-color"}) + output := done(t) + + actualCleanup := output.Stdout() + expectedCleanup := `main.tftest.hcl... in progress +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Success! +` + if diff := cmp.Diff(expectedCleanup, actualCleanup); diff != "" { + t.Fatalf("unexpected cleanup output: expected %s\n, got %s\n, diff: %s", expectedCleanup, actualCleanup, diff) + } + + expectedStates := map[string][]string{} + + if diff := cmp.Diff(expectedStates, statesFromManifest(t, td)); diff != "" { + t.Fatalf("unexpected states after cleanup: %s", diff) + } + }) +} + +func TestTest_Cleanup(t *testing.T) { + // executeTestCmd consolidates running the test command, which should generate some state files and a manifest. + // It also makes assertions about the results. + executeTestCmd := func(provider *testing_command.TestProvider, providerSource *getproviders.MockSource) (td string) { + td = t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", "cleanup")), td) + t.Chdir(td) + + view, done := testView(t) + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + init := &InitCommand{Meta: meta} + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code %d but got %d: %s", 0, code, output.All()) + } + interrupt := make(chan struct{}) + provider.Interrupt = interrupt + view, done = testView(t) + + c := &TestCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ShutdownCh: interrupt, + AllowExperimentalFeatures: true, + }, + } + + c.Run([]string{"-no-color"}) + output := done(t) + + message := `main.tftest.hcl... in progress + run "test"... pass + run "test_two"... pass + run "test_three"... pass + run "test_four"... pass +main.tftest.hcl... tearing down +main.tftest.hcl... fail + +Failure! 4 passed, 0 failed. +` + + outputErr := `Terraform encountered an error destroying resources created while executing +main.tftest.hcl/test_three.
+ +Error: Failed to destroy resource + +destroy_fail is set to true + +Terraform left the following resources in state after executing +main.tftest.hcl/test_three, and they need to be cleaned up manually: + - test_resource.resource +` + if diff := cmp.Diff(outputErr, output.Stderr()); diff != "" { + t.Errorf("expected err to be %s\n\nbut got %s\n\n diff:%s\n", outputErr, output.Stderr(), diff) + } + if diff := cmp.Diff(message, output.Stdout()); diff != "" { + t.Errorf("expected output to be %s\n\nbut got %s\n\n diff:%s\n", message, output.Stdout(), diff) + } + + if provider.ResourceCount() != 2 { + t.Errorf("should have 2 resources on completion but only has %v", provider.ResourceString()) + } + + expectedStates := map[string][]string{ + "main.": {"test_resource.resource"}, + "main.state_three": {"test_resource.resource"}, + } + actual := removeOutputs(statesFromManifest(t, td)) + + if diff := cmp.Diff(expectedStates, actual); diff != "" { + t.Fatalf("unexpected states: %s", diff) + } + + return + } + + t.Run("cleanup all left-over state", func(t *testing.T) { + provider := testing_command.NewProvider(nil) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + // Run the test command to create the state + td := executeTestCmd(provider, providerSource) + interrupt := make(chan struct{}) + provider.Interrupt = interrupt + provider.Provider.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse { + return providers.PlanResourceChangeResponse{ + PlannedState: req.ProposedNewState, + } + } + provider.Provider.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse { + var diags tfdiags.Diagnostics + // Simulate an error during apply, unless it is a destroy operation + if !req.PlannedState.IsNull() { + diags = diags.Append(fmt.Errorf("apply error")) + } + return providers.ApplyResourceChangeResponse{ + NewState: req.PlannedState, + Diagnostics: diags, + } + } + view, done := testView(t) + + c := &TestCleanupCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ShutdownCh: interrupt, + AllowExperimentalFeatures: true, + }, + } + + c.Run([]string{"-no-color"}) + output := done(t) + + expectedCleanup := `main.tftest.hcl... in progress +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Success! 
+` + if diff := cmp.Diff(expectedCleanup, output.Stdout()); diff != "" { + t.Errorf("unexpected cleanup output: expected %s\n, got %s\n, diff: %s", expectedCleanup, output.Stdout(), diff) + } + + expectedStates := map[string][]string{} + actualStates := removeOutputs(statesFromManifest(t, td)) + + if diff := cmp.Diff(expectedStates, actualStates); diff != "" { + t.Fatalf("unexpected states after cleanup: %s", diff) + } + }) + + t.Run("cleanup failed state only (-repair)", func(t *testing.T) { + provider := testing_command.NewProvider(nil) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + // Run the test command to create the state + td := executeTestCmd(provider, providerSource) + + interrupt := make(chan struct{}) + provider.Interrupt = interrupt + provider.Provider.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse { + return providers.PlanResourceChangeResponse{ + PlannedState: req.ProposedNewState, + } + } + provider.Provider.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse { + return providers.ApplyResourceChangeResponse{ + NewState: req.PlannedState, + } + } + view, done := testView(t) + + c := &TestCleanupCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + ShutdownCh: interrupt, + AllowExperimentalFeatures: true, + }, + } + + c.Run([]string{"-no-color", "-repair"}) + output := done(t) + + expectedCleanup := `main.tftest.hcl... in progress +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Success! +` + if diff := cmp.Diff(expectedCleanup, output.Stdout()); diff != "" { + t.Fatalf("unexpected cleanup output: expected %s\n, got %s\n, diff: %s", expectedCleanup, output.Stdout(), diff) + } + + expectedStates := map[string][]string{ + "main.": {"test_resource.resource"}, + } + actual := removeOutputs(statesFromManifest(t, td)) + + if diff := cmp.Diff(expectedStates, actual); diff != "" { + t.Fatalf("unexpected states after cleanup: %s", diff) + } + }) +} + +func TestTest_CleanupActuallyCleansUp(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", "skip_cleanup_simple")), td) + t.Chdir(td) + + provider := testing_command.NewProvider(nil) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + init := &InitCommand{ + Meta: meta, + } + + output := done(t) + + if code := init.Run(nil); code != 0 { + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } + + code := c.Run([]string{"-no-color"}) + output = done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Run the cleanup command. + + // Reset the streams for the next command. 
+ streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + cleanup := &TestCleanupCommand{ + Meta: meta, + } + + code = cleanup.Run([]string{"-no-color"}) + output = done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Running the test again should now work, because we cleaned everything + // up. + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c = &TestCommand{ + Meta: meta, + } + + code = c.Run([]string{"-no-color"}) + output = done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d: %s", code, output.All()) + } +} + +func TestTest_SkipCleanup_ConsecutiveTestsFail(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", "skip_cleanup_simple")), td) + t.Chdir(td) + + provider := testing_command.NewProvider(nil) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + init := &InitCommand{ + Meta: meta, + } + + output := done(t) + + if code := init.Run(nil); code != 0 { + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } + + code := c.Run([]string{"-no-color"}) + output = done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Running the test again should fail because of the skipped cleanup. + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c = &TestCommand{ + Meta: meta, + } + + code = c.Run([]string{"-no-color"}) + output = done(t) + + if code == 0 { + t.Errorf("expected a non-zero status code but got %d", code) + } + + expectedOut := "main.tftest.hcl... in progress\nmain.tftest.hcl... tearing down\nmain.tftest.hcl... fail\n\nFailure! 0 passed, 0 failed.\n" + expectedErr := "\nError: State manifest not empty\n\nThe state manifest for main.tftest.hcl should be empty before running tests.\nThis could be due to a previous test run not cleaning up after itself. Please\nensure that all state files are cleaned up before running tests.\n" + + if diff := cmp.Diff(expectedOut, output.Stdout()); len(diff) > 0 { + t.Error(diff) + } + if diff := cmp.Diff(expectedErr, output.Stderr()); len(diff) > 0 { + t.Error(diff) + } } func TestTest_SharedState_Order(t *testing.T) { @@ -2502,13 +2931,12 @@
} } -func TestTest_OnlyExternalModules(t *testing.T) { +func TestTest_SkipCleanup(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "only_modules")), td) + testCopyDir(t, testFixturePath(path.Join("test", "skip_cleanup")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) - providerSource, close := newMockProviderSource(t, map[string][]string{ "test": {"1.0.0"}, }) @@ -2519,19 +2947,21 @@ func TestTest_OnlyExternalModules(t *testing.T) { ui := new(cli.MockUi) meta := Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - Ui: ui, - View: view, - Streams: streams, - ProviderSource: providerSource, + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, } init := &InitCommand{ Meta: meta, } + output := done(t) + if code := init.Run(nil); code != 0 { - output := done(t) t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) } @@ -2545,102 +2975,196 @@ func TestTest_OnlyExternalModules(t *testing.T) { } code := c.Run([]string{"-no-color"}) - output := done(t) + output = done(t) if code != 0 { t.Errorf("expected status code 0 but got %d", code) } - expected := `main.tftest.hcl... in progress - run "first"... pass - run "second"... pass + t.Run("skipped resources should not be deleted", func(t *testing.T) { + + expected := ` +Warning: Duplicate "skip_cleanup" block + + on main.tftest.hcl line 15, in run "test_three": + 15: skip_cleanup = true + +The run "test_three" has a skip_cleanup attribute set, but shares state with +an earlier run "test_two" that also has skip_cleanup set. The later run takes +precedence, and this attribute is ignored for the earlier run. +main.tftest.hcl... in progress + run "test"... pass + run "test_two"... pass + run "test_three"... pass + run "test_four"... pass + run "test_five"... pass main.tftest.hcl... tearing down main.tftest.hcl... pass -Success! 2 passed, 0 failed. +Success! 5 passed, 0 failed. 
` - actual := output.Stdout() + actual := output.All() + if !strings.Contains(actual, expected) { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, cmp.Diff(expected, actual)) + } - if !strings.Contains(actual, expected) { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s", expected, actual) - } + if provider.ResourceCount() != 1 { + t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } + val := provider.Store.Get(provider.ResourceString()) + + if val.GetAttr("value").AsString() != "test_three" { + t.Errorf("expected resource to have value 'test_three' but got %s", val.GetAttr("value").AsString()) + } + }) + + t.Run("state should be persisted", func(t *testing.T) { + expectedStates := map[string][]string{ + "main.": {"test_resource.resource"}, + } + actualStates := removeOutputs(statesFromManifest(t, td)) + + if diff := cmp.Diff(expectedStates, actualStates); diff != "" { + t.Fatalf("unexpected states: %s", diff) + } + }) } -func TestTest_PartialUpdates(t *testing.T) { +func TestTest_SkipCleanupWithRunDependencies(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "partial_updates")), td) + testCopyDir(t, testFixturePath(path.Join("test", "skip_cleanup_with_run_deps")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) - view, done := testView(t) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + init := &InitCommand{ + Meta: meta, + } + + output := done(t) + + if code := init.Run(nil); code != 0 { + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) c := &TestCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - }, + Meta: meta, } code := c.Run([]string{"-no-color"}) - output := done(t) + output = done(t) if code != 0 { t.Errorf("expected status code 0 but got %d", code) } - expected := `main.tftest.hcl... in progress - run "first"... pass + t.Run("skipped resources should not be deleted", func(t *testing.T) { -Warning: Resource targeting is in effect + expected := `main.tftest.hcl... in progress + run "test"... pass + run "test_two"... pass + run "test_three"... pass +main.tftest.hcl... tearing down +main.tftest.hcl... pass -You are creating a plan with the -target option, which means that the result -of this plan may not represent all of the changes requested by the current -configuration. +Success! 3 passed, 0 failed. +` -The -target option is not for routine use, and is provided only for -exceptional situations such as recovering from errors or mistakes, or when -Terraform specifically suggests to use it as part of an error message. 
+ actual := output.All() + if !strings.Contains(actual, expected) { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, cmp.Diff(expected, actual)) + } -Warning: Applied changes may be incomplete + if provider.ResourceCount() != 1 { + t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } -The plan was created with the -target option in effect, so some changes -requested in the configuration may have been ignored and the output values -may not be fully updated. Run the following command to verify that no other -changes are pending: - terraform plan + val := provider.Store.Get(provider.ResourceString()) -Note that the -target option is not suitable for routine use, and is provided -only for exceptional situations such as recovering from errors or mistakes, -or when Terraform specifically suggests to use it as part of an error -message. + if val.GetAttr("value").AsString() != "test_two" { + t.Errorf("expected resource to have value 'test_two' but got %s", val.GetAttr("value").AsString()) + } + }) - run "second"... pass + // we want to check that we leave behind the state that was skipped + // and the states that it depends on + t.Run("state should be persisted", func(t *testing.T) { + expectedStates := map[string][]string{ + "main.": {"output.id", "output.unused"}, + "main.state": {"test_resource.resource", "output.id", "output.unused"}, + } + actualStates := statesFromManifest(t, td) + + if diff := cmp.Diff(expectedStates, actualStates, equalIgnoreOrder()); diff != "" { + t.Fatalf("unexpected states: %s", diff) + } + }) + + t.Run("cleanup all left-over state", func(t *testing.T) { + view, done := testView(t) + c := &TestCleanupCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + View: view, + + Ui: ui, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + }, + } + + c.Run([]string{"-no-color"}) + output := done(t) + + expectedCleanup := `main.tftest.hcl... in progress main.tftest.hcl... tearing down main.tftest.hcl... pass -Success! 2 passed, 0 failed. +Success! 
` + if diff := cmp.Diff(expectedCleanup, output.Stdout()); diff != "" { + t.Errorf("unexpected cleanup output: expected %s\n, got %s\n, diff: %s", expectedCleanup, output.Stdout(), diff) + } + if err := output.Stderr(); len(err) != 0 { + t.Errorf("unexpected error: %s", err) + } - actual := output.All() - - if diff := cmp.Diff(actual, expected); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) - } + expectedStates := map[string][]string{} + actualStates := removeOutputs(statesFromManifest(t, td)) - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } + if diff := cmp.Diff(expectedStates, actualStates); diff != "" { + t.Fatalf("unexpected states after cleanup: %s", diff) + } + }) } -// There should not be warnings in clean-up -func TestTest_InvalidWarningsInCleanup(t *testing.T) { +func TestTest_SkipCleanup_JSON(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "invalid-cleanup-warnings")), td) + testCopyDir(t, testFixturePath(path.Join("test", "skip_cleanup")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) @@ -2654,19 +3178,21 @@ func TestTest_InvalidWarningsInCleanup(t *testing.T) { ui := new(cli.MockUi) meta := Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - Ui: ui, - View: view, - Streams: streams, - ProviderSource: providerSource, + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, } init := &InitCommand{ Meta: meta, } + output := done(t) + if code := init.Run(nil); code != 0 { - output := done(t) t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) } @@ -2679,106 +3205,283 @@ func TestTest_InvalidWarningsInCleanup(t *testing.T) { Meta: meta, } - code := c.Run([]string{"-no-color"}) - output := done(t) + code := c.Run([]string{"-no-color", "-json"}) + output = done(t) if code != 0 { t.Errorf("expected status code 0 but got %d", code) } - expected := `main.tftest.hcl... in progress - run "test"... pass + var messages []string + for ix, line := range strings.Split(output.All(), "\n") { + if len(line) == 0 { + // Skip empty lines. + continue + } -Warning: Value for undeclared variable + if ix == 0 { + // skip the first one, it's version information + continue + } - on main.tftest.hcl line 6, in run "test": - 6: validation = "Hello, world!" + var obj map[string]interface{} -The module under test does not declare a variable named "validation", but it -is declared in run block "test". + if err := json.Unmarshal([]byte(line), &obj); err != nil { + t.Errorf("failed to unmarshal returned line: %s", line) + continue + } -main.tftest.hcl... tearing down -main.tftest.hcl... pass + // Remove the timestamp as it changes every time. + delete(obj, "@timestamp") -Success! 1 passed, 0 failed. -` + if obj["type"].(string) == "test_run" { + // Then we need to delete the `elapsed` field from within the run + // as it'll cause flaky tests. 
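+			// For example, a mid-run line has the (illustrative) shape:
+			//   {"type":"test_run","test_run":{"progress":"running","run":"test","elapsed":1500}}
+			// where elapsed tracks wall-clock time, so comparing it against a
+			// fixed expectation would be non-deterministic.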
- actual := output.All() + run := obj["test_run"].(map[string]interface{}) + if run["progress"].(string) != "complete" { + delete(run, "elapsed") + } + } - if diff := cmp.Diff(actual, expected); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) - } + message, err := json.Marshal(obj) + if err != nil { + t.Errorf("failed to remarshal returned line: %s", line) + continue + } - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + messages = append(messages, string(message)) } + + t.Run("skipped resources should not be deleted", func(t *testing.T) { + + expected := []string{ + `{"@level":"warn","@message":"Warning: Duplicate \"skip_cleanup\" block","@module":"terraform.ui","diagnostic":{"detail":"The run \"test_three\" has a skip_cleanup attribute set, but shares state with an earlier run \"test_two\" that also has skip_cleanup set. The later run takes precedence, and this attribute is ignored for the earlier run.","range":{"end":{"byte":163,"column":15,"line":15},"filename":"main.tftest.hcl","start":{"byte":151,"column":3,"line":15}},"severity":"warning","snippet":{"code":" skip_cleanup = true","context":"run \"test_three\"","highlight_end_offset":14,"highlight_start_offset":2,"start_line":15,"values":[]},"summary":"Duplicate \"skip_cleanup\" block"},"type":"diagnostic"}`, + `{"@level":"info","@message":"Found 1 file and 5 run blocks","@module":"terraform.ui","test_abstract":{"main.tftest.hcl":["test","test_two","test_three","test_four","test_five"]},"type":"test_abstract"}`, + `{"@level":"info","@message":"main.tftest.hcl... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"starting"},"type":"test_file"}`, + `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_two\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_two","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test_two"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_two\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_two","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test_two","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_three\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_three","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test_three"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_three\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_three","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test_three","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_four\"... 
in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_four","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test_four"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_four\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_four","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test_four","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_five\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_five","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test_five"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test_five\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_five","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test_five","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":"main.tftest.hcl... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"teardown"},"type":"test_file"}`, + `{"@level":"info","@message":" \"test_three\"... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test_three","test_run":{"path":"main.tftest.hcl","progress":"teardown","run":"test_three"},"type":"test_run"}`, + `{"@level":"info","@message":"main.tftest.hcl... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"complete","status":"pass"},"type":"test_file"}`, + `{"@level":"info","@message":"Success! 5 passed, 0 failed.","@module":"terraform.ui","test_summary":{"errored":0,"failed":0,"passed":5,"skipped":0,"status":"pass"},"type":"test_summary"}`, + } + + trimmedActual := strings.Join(messages, "\n") + trimmedExpected := strings.Join(expected, "\n") + if diff := cmp.Diff(trimmedExpected, trimmedActual); diff != "" { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", trimmedExpected, trimmedActual, diff) + } + + if provider.ResourceCount() != 1 { + t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } + + val := provider.Store.Get(provider.ResourceString()) + + if val.GetAttr("value").AsString() != "test_three" { + t.Errorf("expected resource to have value 'test_three' but got %s", val.GetAttr("value").AsString()) + } + }) } -func TestTest_BadReferences(t *testing.T) { +func TestTest_SkipCleanup_FileLevelFlag(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "bad-references")), td) + testCopyDir(t, testFixturePath(path.Join("test", "skip_file_cleanup")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) - view, done := testView(t) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + init := &InitCommand{ + Meta: meta, + } + + output := done(t) + + if code := init.Run(nil); code != 0 { + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. 
+ streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) c := &TestCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - }, + Meta: meta, } code := c.Run([]string{"-no-color"}) - output := done(t) + output = done(t) - if code == 0 { + if code != 0 { t.Errorf("expected status code 0 but got %d", code) } - expectedOut := `main.tftest.hcl... in progress - run "setup"... pass - run "test"... fail - run "finalise"... skip + t.Run("skipped resources should not be deleted", func(t *testing.T) { + + expected := `main.tftest.hcl... in progress + run "test"... pass + run "test_two"... pass + run "test_three"... pass + run "test_four"... pass + run "test_five"... pass main.tftest.hcl... tearing down -main.tftest.hcl... fail -providers.tftest.hcl... in progress - run "test"... skip -providers.tftest.hcl... tearing down -providers.tftest.hcl... fail +main.tftest.hcl... pass -Failure! 1 passed, 1 failed, 2 skipped. +Success! 5 passed, 0 failed. ` - actualOut := output.Stdout() - if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff) - } - expectedErr := ` -Error: Reference to unavailable variable + actual := output.All() + if !strings.Contains(actual, expected) { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s", expected, actual) + } - on main.tftest.hcl line 15, in run "test": - 15: input_one = var.notreal + if provider.ResourceCount() != 1 { + t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } -The input variable "notreal" does not exist within this test file. + val := provider.Store.Get(provider.ResourceString()) -Error: Reference to unknown run block + if val.GetAttr("value").AsString() != "test_four" { + t.Errorf("expected resource to have value 'test_four' but got %s", val.GetAttr("value").AsString()) + } + }) - on main.tftest.hcl line 16, in run "test": - 16: input_two = run.madeup.response + t.Run("state should be persisted with valid reason", func(t *testing.T) { + manifest, err := teststates.LoadManifest(td, true) + if err != nil { + t.Fatal(err) + } -The run block "madeup" does not exist within this test file. + expectedStates := map[string][]string{ + "main.": {"test_resource.resource"}, + } + actualStates := make(map[string][]string) + + var reason teststates.StateReason + // Verify the states in the manifest + for fileName, file := range manifest.Files { + for name, state := range file.States { + sm := statemgr.NewFilesystem(manifest.StateFilePath(state.ID)) + if err := sm.RefreshState(); err != nil { + t.Fatalf("error when reading state file: %s", err) + } + reason = state.Reason + state := sm.State() -Error: Reference to unavailable variable + // If the state is nil, then the test cleaned up the state + if state == nil { + t.Fatalf("state is nil") + } - on providers.tftest.hcl line 3, in provider "test": - 3: resource_prefix = var.default + var resources []string + for _, module := range state.Modules { + for _, resource := range module.Resources { + resources = append(resources, resource.Addr.String()) + } + } + sort.Strings(resources) + actualStates[strings.TrimSuffix(fileName, ".tftest.hcl")+"."+name] = resources + } + } -The input variable "default" does not exist within this test file. 
+ if diff := cmp.Diff(expectedStates, actualStates); diff != "" { + t.Fatalf("unexpected states: %s", diff) + } + if reason != teststates.StateReasonSkip { + t.Fatalf("expected reason to be skip but got %s", reason) + } + }) +} + +func TestTest_OnlyExternalModules(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", "only_modules")), td) + t.Chdir(td) + + provider := testing_command.NewProvider(nil) + + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + } + + init := &InitCommand{ + Meta: meta, + } + + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } + + code := c.Run([]string{"-no-color"}) + output := done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d", code) + } + + expected := `main.tftest.hcl... in progress + run "first"... pass + run "second"... pass +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Success! 2 passed, 0 failed. ` - actualErr := output.Stderr() - if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff) + + actual := output.Stdout() + + if !strings.Contains(actual, expected) { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s", expected, actual) } if provider.ResourceCount() > 0 { @@ -2786,9 +3489,9 @@ The input variable "default" does not exist within this test file. } } -func TestTest_UndefinedVariables(t *testing.T) { +func TestTest_PartialUpdates(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "variables_undefined_in_config")), td) + testCopyDir(t, testFixturePath(path.Join("test", "partial_updates")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) @@ -2804,34 +3507,47 @@ func TestTest_UndefinedVariables(t *testing.T) { code := c.Run([]string{"-no-color"}) output := done(t) - if code == 0 { + if code != 0 { t.Errorf("expected status code 0 but got %d", code) } - expectedOut := `main.tftest.hcl... in progress - run "test"... fail -main.tftest.hcl... tearing down -main.tftest.hcl... fail + expected := `main.tftest.hcl... in progress + run "first"... pass -Failure! 0 passed, 1 failed. -` - actualOut := output.Stdout() - if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff) - } +Warning: Resource targeting is in effect - expectedErr := ` -Error: Reference to undeclared input variable +You are creating a plan with the -target option, which means that the result +of this plan may not represent all of the changes requested by the current +configuration. 
- on main.tf line 2, in resource "test_resource" "foo": - 2: value = var.input +The -target option is not for routine use, and is provided only for +exceptional situations such as recovering from errors or mistakes, or when +Terraform specifically suggests to use it as part of an error message. -An input variable with the name "input" has not been declared. This variable -can be declared with a variable "input" {} block. +Warning: Applied changes may be incomplete + +The plan was created with the -target option in effect, so some changes +requested in the configuration may have been ignored and the output values +may not be fully updated. Run the following command to verify that no other +changes are pending: + terraform plan + +Note that the -target option is not suitable for routine use, and is provided +only for exceptional situations such as recovering from errors or mistakes, +or when Terraform specifically suggests to use it as part of an error +message. + + run "second"... pass +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Success! 2 passed, 0 failed. ` - actualErr := output.Stderr() - if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff) + + actual := output.All() + + if diff := cmp.Diff(actual, expected); len(diff) > 0 { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) } if provider.ResourceCount() > 0 { @@ -2839,19 +3555,46 @@ can be declared with a variable "input" {} block. } } -func TestTest_VariablesInProviders(t *testing.T) { +// There should not be warnings in clean-up +func TestTest_InvalidWarningsInCleanup(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "provider_vars")), td) + testCopyDir(t, testFixturePath(path.Join("test", "invalid-cleanup-warnings")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) - view, done := testView(t) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + } + + init := &InitCommand{ + Meta: meta, + } + + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) c := &TestCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - }, + Meta: meta, } code := c.Run([]string{"-no-color"}) @@ -2863,12 +3606,23 @@ func TestTest_VariablesInProviders(t *testing.T) { expected := `main.tftest.hcl... in progress run "test"... pass + +Warning: Value for undeclared variable + + on main.tftest.hcl line 6, in run "test": + 6: validation = "Hello, world!" + +The module under test does not declare a variable named "validation", but it +is declared in run block "test". + main.tftest.hcl... tearing down main.tftest.hcl... pass Success! 1 passed, 0 failed. 
` + actual := output.All() + if diff := cmp.Diff(actual, expected); len(diff) > 0 { t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) } @@ -2878,9 +3632,9 @@ Success! 1 passed, 0 failed. } } -func TestTest_ExpectedFailuresDuringPlanning(t *testing.T) { +func TestTest_BadReferences(t *testing.T) { td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "expected_failures_during_planning")), td) + testCopyDir(t, testFixturePath(path.Join("test", "bad-references")), td) t.Chdir(td) provider := testing_command.NewProvider(nil) @@ -2900,36 +3654,200 @@ func TestTest_ExpectedFailuresDuringPlanning(t *testing.T) { t.Errorf("expected status code 0 but got %d", code) } - expectedOut := `check.tftest.hcl... in progress - run "check_passes"... pass -check.tftest.hcl... tearing down -check.tftest.hcl... pass -input.tftest.hcl... in progress - run "input_failure"... fail + expectedOut := `main.tftest.hcl... in progress + run "setup"... pass + run "test"... fail + run "finalise"... skip +main.tftest.hcl... tearing down +main.tftest.hcl... fail +providers.tftest.hcl... in progress + run "test"... skip +providers.tftest.hcl... tearing down +providers.tftest.hcl... fail -Warning: Expected failure while planning +Failure! 1 passed, 1 failed, 2 skipped. +` + actualOut := output.Stdout() + if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff) + } -A custom condition within var.input failed during the planning stage and -prevented the requested apply operation. While this was an expected failure, -the apply operation could not be executed and so the overall test case will -be marked as a failure and the original diagnostic included in the test -report. + expectedErr := ` +Error: Reference to unavailable variable - run "no_run"... skip -input.tftest.hcl... tearing down -input.tftest.hcl... fail -output.tftest.hcl... in progress - run "output_failure"... fail + on main.tftest.hcl line 15, in run "test": + 15: input_one = var.notreal -Warning: Expected failure while planning +The input variable "notreal" does not exist within this test file. - on output.tftest.hcl line 13, in run "output_failure": - 13: output.output, +Error: Reference to unknown run block -A custom condition within output.output failed during the planning stage and -prevented the requested apply operation. While this was an expected failure, -the apply operation could not be executed and so the overall test case will -be marked as a failure and the original diagnostic included in the test + on main.tftest.hcl line 16, in run "test": + 16: input_two = run.madeup.response + +The run block "madeup" does not exist within this test file. + +Error: Reference to unavailable variable + + on providers.tftest.hcl line 3, in provider "test": + 3: resource_prefix = var.default + +The input variable "default" does not exist within this test file. 
+`
+	actualErr := output.Stderr()
+	if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff)
+	}
+
+	if provider.ResourceCount() > 0 {
+		t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString())
+	}
+}
+
+func TestTest_UndefinedVariables(t *testing.T) {
+	td := t.TempDir()
+	testCopyDir(t, testFixturePath(path.Join("test", "variables_undefined_in_config")), td)
+	t.Chdir(td)
+
+	provider := testing_command.NewProvider(nil)
+	view, done := testView(t)
+
+	c := &TestCommand{
+		Meta: Meta{
+			testingOverrides: metaOverridesForProvider(provider.Provider),
+			View:             view,
+		},
+	}
+
+	code := c.Run([]string{"-no-color"})
+	output := done(t)
+
+	if code == 0 {
+		t.Errorf("expected a non-zero status code but got %d", code)
+	}
+
+	expectedOut := `main.tftest.hcl... in progress
+  run "test"... fail
+main.tftest.hcl... tearing down
+main.tftest.hcl... fail
+
+Failure! 0 passed, 1 failed.
+`
+	actualOut := output.Stdout()
+	if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff)
+	}
+
+	expectedErr := `
+Error: Reference to undeclared input variable
+
+  on main.tf line 2, in resource "test_resource" "foo":
+   2:   value = var.input
+
+An input variable with the name "input" has not been declared. This variable
+can be declared with a variable "input" {} block.
+`
+	actualErr := output.Stderr()
+	if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff)
+	}
+
+	if provider.ResourceCount() > 0 {
+		t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString())
+	}
+}
+
+func TestTest_VariablesInProviders(t *testing.T) {
+	td := t.TempDir()
+	testCopyDir(t, testFixturePath(path.Join("test", "provider_vars")), td)
+	t.Chdir(td)
+
+	provider := testing_command.NewProvider(nil)
+	view, done := testView(t)
+
+	c := &TestCommand{
+		Meta: Meta{
+			testingOverrides: metaOverridesForProvider(provider.Provider),
+			View:             view,
+		},
+	}
+
+	code := c.Run([]string{"-no-color"})
+	output := done(t)
+
+	if code != 0 {
+		t.Errorf("expected status code 0 but got %d", code)
+	}
+
+	expected := `main.tftest.hcl... in progress
+  run "test"... pass
+main.tftest.hcl... tearing down
+main.tftest.hcl... pass
+
+Success! 1 passed, 0 failed.
+`
+	actual := output.All()
+	if diff := cmp.Diff(actual, expected); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff)
+	}
+
+	if provider.ResourceCount() > 0 {
+		t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString())
+	}
+}
+
+func TestTest_ExpectedFailuresDuringPlanning(t *testing.T) {
+	td := t.TempDir()
+	testCopyDir(t, testFixturePath(path.Join("test", "expected_failures_during_planning")), td)
+	t.Chdir(td)
+
+	provider := testing_command.NewProvider(nil)
+	view, done := testView(t)
+
+	c := &TestCommand{
+		Meta: Meta{
+			testingOverrides: metaOverridesForProvider(provider.Provider),
+			View:             view,
+		},
+	}
+
+	code := c.Run([]string{"-no-color"})
+	output := done(t)
+
+	if code == 0 {
+		t.Errorf("expected a non-zero status code but got %d", code)
+	}
+
+	expectedOut := `check.tftest.hcl... in progress
+  run "check_passes"... pass
+check.tftest.hcl... 
tearing down +check.tftest.hcl... pass +input.tftest.hcl... in progress + run "input_failure"... fail + +Warning: Expected failure while planning + +A custom condition within var.input failed during the planning stage and +prevented the requested apply operation. While this was an expected failure, +the apply operation could not be executed and so the overall test case will +be marked as a failure and the original diagnostic included in the test +report. + + run "no_run"... skip +input.tftest.hcl... tearing down +input.tftest.hcl... fail +output.tftest.hcl... in progress + run "output_failure"... fail + +Warning: Expected failure while planning + + on output.tftest.hcl line 13, in run "output_failure": + 13: output.output, + +A custom condition within output.output failed during the planning stage and +prevented the requested apply operation. While this was an expected failure, +the apply operation could not be executed and so the overall test case will +be marked as a failure and the original diagnostic included in the test report. output.tftest.hcl... tearing down @@ -3463,50 +4381,482 @@ func TestTest_LongRunningTestJSON(t *testing.T) { // Then we need to delete the `elapsed` field from within the run // as it'll cause flaky tests. - run := obj["test_run"].(map[string]interface{}) - if run["progress"].(string) != "complete" { - delete(run, "elapsed") - } - } + run := obj["test_run"].(map[string]interface{}) + if run["progress"].(string) != "complete" { + delete(run, "elapsed") + } + } + + message, err := json.Marshal(obj) + if err != nil { + t.Errorf("failed to remarshal returned line: %s", line) + continue + } + + messages = append(messages, string(message)) + } + + expected := []string{ + `{"@level":"info","@message":"Found 1 file and 1 run block","@module":"terraform.ui","test_abstract":{"main.tftest.hcl":["test"]},"type":"test_abstract"}`, + `{"@level":"info","@message":"main.tftest.hcl... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"starting"},"type":"test_file"}`, + `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"running","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"running","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test","status":"pass"},"type":"test_run"}`, + `{"@level":"info","@message":"main.tftest.hcl... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"teardown"},"type":"test_file"}`, + `{"@level":"info","@message":" \"test\"... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"teardown","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":" \"test\"... 
tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"teardown","run":"test"},"type":"test_run"}`, + `{"@level":"info","@message":"main.tftest.hcl... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"complete","status":"pass"},"type":"test_file"}`, + `{"@level":"info","@message":"Success! 1 passed, 0 failed.","@module":"terraform.ui","test_summary":{"errored":0,"failed":0,"passed":1,"skipped":0,"status":"pass"},"type":"test_summary"}`, + } + + if code != 0 { + t.Errorf("expected return code %d but got %d", 0, code) + } + + if diff := cmp.Diff(expected, messages); len(diff) > 0 { + t.Errorf("unexpected output\n\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", strings.Join(expected, "\n"), strings.Join(messages, "\n"), diff) + } +} + +func TestTest_InvalidOverrides(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", "invalid-overrides")), td) + t.Chdir(td) + + provider := testing_command.NewProvider(nil) + + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + } + + init := &InitCommand{ + Meta: meta, + } + + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } + + code := c.Run([]string{"-no-color"}) + output := done(t) + + if code != 0 { + t.Errorf("expected status code 0 but got %d", code) + } + + expected := `main.tftest.hcl... in progress + run "setup"... pass + +Warning: Invalid override target + + on main.tftest.hcl line 39, in run "setup": + 39: target = test_resource.absent_five + +The override target test_resource.absent_five does not exist within the +configuration under test. This could indicate a typo in the target address or +an unnecessary override. + + run "test"... pass + +Warning: Invalid override target + + on main.tftest.hcl line 45, in run "test": + 45: target = module.setup.test_resource.absent_six + +The override target module.setup.test_resource.absent_six does not exist +within the configuration under test. This could indicate a typo in the target +address or an unnecessary override. + +main.tftest.hcl... tearing down +main.tftest.hcl... pass + +Warning: Invalid override target + + on main.tftest.hcl line 4, in mock_provider "test": + 4: target = test_resource.absent_one + +The override target test_resource.absent_one does not exist within the +configuration under test. This could indicate a typo in the target address or +an unnecessary override. + +(and 3 more similar warnings elsewhere) + +Success! 2 passed, 0 failed. 
+`
+
+	actual := output.All()
+
+	if diff := cmp.Diff(actual, expected); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff)
+	}
+
+	if provider.ResourceCount() > 0 {
+		t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString())
+	}
+}
+
+func TestTest_InvalidConfig(t *testing.T) {
+	td := t.TempDir()
+	testCopyDir(t, testFixturePath(path.Join("test", "invalid_config")), td)
+	t.Chdir(td)
+
+	provider := testing_command.NewProvider(nil)
+
+	providerSource, close := newMockProviderSource(t, map[string][]string{
+		"test": {"1.0.0"},
+	})
+	defer close()
+
+	streams, done := terminal.StreamsForTesting(t)
+	view := views.NewView(streams)
+	ui := new(cli.MockUi)
+
+	meta := Meta{
+		Ui:             ui,
+		View:           view,
+		Streams:        streams,
+		ProviderSource: providerSource,
+	}
+
+	init := &InitCommand{
+		Meta: meta,
+	}
+
+	if code := init.Run(nil); code != 0 {
+		output := done(t)
+		t.Fatalf("expected status code 0 but got %d: %s", code, output.All())
+	}
+
+	// Reset the streams for the next command.
+	streams, done = terminal.StreamsForTesting(t)
+	meta.Streams = streams
+	meta.View = views.NewView(streams)
+
+	c := &TestCommand{
+		Meta: meta,
+	}
+
+	code := c.Run([]string{"-no-color"})
+	output := done(t)
+
+	if code != 1 {
+		t.Errorf("expected status code 1 but got %d", code)
+	}
+
+	expectedOut := `main.tftest.hcl... in progress
+  run "test"... fail
+main.tftest.hcl... tearing down
+main.tftest.hcl... fail
+
+Failure! 0 passed, 1 failed.
+`
+	expectedErr := `
+Error: Failed to load plugin schemas
+
+Error while loading schemas for plugin components: Failed to obtain provider
+schema: Could not load the schema for provider
+registry.terraform.io/hashicorp/test: failed to instantiate provider
+"registry.terraform.io/hashicorp/test" to obtain schema: fork/exec
+.terraform/providers/registry.terraform.io/hashicorp/test/1.0.0/%s/terraform-provider-test_1.0.0:
+permission denied..
+`
+	expectedErr = fmt.Sprintf(expectedErr, runtime.GOOS+"_"+runtime.GOARCH)
+	out := output.Stdout()
+	err := output.Stderr()
+
+	if diff := cmp.Diff(out, expectedOut); len(diff) > 0 {
+		t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, out, diff)
+	}
+	if diff := cmp.Diff(err, expectedErr); len(diff) > 0 {
+		t.Errorf("error didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, err, diff)
+	}
+
+	if provider.ResourceCount() > 0 {
+		t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString())
+	}
+}
+
+// TestTest_ReusedBackendConfiguration tests validation of how backends are declared in test files:
+// * it's not valid to re-use the same backend config (i.e. the same state file); see the illustration below
+// * it's not valid to use a removed backend type
+// * it's not valid to use a non-existent backend type
+//
+// Backend validation performed in the command package is dependent on the internal/backend/init package,
+// which cannot be imported in configuration parsing packages without creating an import cycle.
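+//
+// For illustration, a hypothetical test file that would trip the re-use rule
+// declares the same backend configuration in two run blocks:
+//
+//	run "test_1" {
+//	  backend "local" {
+//	    path = "test_1.tfstate"
+//	  }
+//	}
+//
+//	run "test_2" {
+//	  backend "local" {
+//	    path = "test_1.tfstate" # same config as "test_1", so validation rejects it
+//	  }
+//	}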
+func TestTest_ReusedBackendConfiguration(t *testing.T) {
+	testCases := map[string]struct {
+		dirName   string
+		expectErr string
+	}{
+		"validation detects when backend config is reused by runs using different user-supplied state key values": {
+			dirName: "reused-backend-config",
+			expectErr: `
+Error: Repeat use of the same backend block
+
+  on main.tftest.hcl line 12, in run "test_2":
+  12:   backend "local" {
+
+The run "test_2" contains a backend configuration that's already been used in
+run "test_1". Sharing the same backend configuration between separate runs
+will result in conflicting state updates.
+`,
+		},
+		"validation detects when backend config is reused by runs using different implicit state keys (corresponding to root and a child module)": {
+			dirName: "reused-backend-config-child-modules",
+			expectErr: `
+Error: Repeat use of the same backend block
+
+  on main.tftest.hcl line 19, in run "test_2":
+  19:   backend "local" {
+
+The run "test_2" contains a backend configuration that's already been used in
+run "test_1". Sharing the same backend configuration between separate runs
+will result in conflicting state updates.
+`,
+		},
+		"validation detects when a removed backend type is used": {
+			dirName: "removed-backend-type",
+			expectErr: `
+Error: Unsupported backend type
+
+  on main.tftest.hcl line 7, in run "test_removed_backend":
+   7:   backend "etcd" {
+
+The "etcd" backend is not supported in Terraform v1.3 or later.
+`,
+		},
+		"validation detects when a non-existent backend type is used": {
+			dirName: "non-existent-backend-type",
+			expectErr: `
+Error: Unsupported backend type
+
+  on main.tftest.hcl line 7, in run "test_invalid_backend":
+   7:   backend "foobar" {
+
+There is no backend type named "foobar".
+`,
+		},
+	}
+
+	for tn, tc := range testCases {
+		t.Run(tn, func(t *testing.T) {
+			td := t.TempDir()
+			testCopyDir(t, testFixturePath(path.Join("test", tc.dirName)), td)
+			t.Chdir(td)
+
+			provider := testing_command.NewProvider(nil)
+
+			providerSource, close := newMockProviderSource(t, map[string][]string{
+				"test": {"1.0.0"},
+			})
+			defer close()
+
+			streams, done := terminal.StreamsForTesting(t)
+			view := views.NewView(streams)
+			ui := new(cli.MockUi)
+
+			meta := Meta{
+				testingOverrides:          metaOverridesForProvider(provider.Provider),
+				Ui:                        ui,
+				View:                      view,
+				Streams:                   streams,
+				ProviderSource:            providerSource,
+				AllowExperimentalFeatures: true,
+			}
+
+			init := &InitCommand{
+				Meta: meta,
+			}
+
+			output := done(t)
+
+			if code := init.Run(nil); code != 0 {
+				t.Fatalf("expected status code 0 but got %d: %s", code, output.All())
+			}
+
+			// Reset the streams for the next command.
+ streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } + + code := c.Run([]string{"-no-color"}) + output = done(t) + + // Assertions + if code != 1 { + t.Errorf("expected status code 1 but got %d", code) + } + + if diff := cmp.Diff(output.All(), tc.expectErr); len(diff) > 0 { + t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", tc.expectErr, output.All(), diff) + } + + if provider.ResourceCount() > 0 { + t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } + }) + } +} + +// When there is no starting state, state is created by the run containing the backend block +func TestTest_UseOfBackends_stateCreatedByBackend(t *testing.T) { + dirName := "valid-use-local-backend/no-prior-state" + + resourceId := "12345" + expectedState := `test_resource.foobar: + ID = 12345 + provider = provider["registry.terraform.io/hashicorp/test"] + destroy_fail = false + value = value-from-run-that-controls-backend + +Outputs: + +supplied_input_value = value-from-run-that-controls-backend +test_resource_id = 12345` + + // SETUP + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", dirName)), td) + t.Chdir(td) + localStatePath := filepath.Join(td, DefaultStateFilename) + + provider := testing_command.NewProvider(nil) + + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + // INIT + init := &InitCommand{ + Meta: meta, + } + + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } + + // TEST + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) + + c := &TestCommand{ + Meta: meta, + } - message, err := json.Marshal(obj) - if err != nil { - t.Errorf("failed to remarshal returned line: %s", line) - continue - } + code := c.Run([]string{"-no-color"}) + output := done(t) - messages = append(messages, string(message)) + // ASSERTIONS + if code != 0 { + t.Errorf("expected status code 0 but got %d", code) + } + stdErr := output.Stderr() + if len(stdErr) > 0 { + t.Fatalf("unexpected error output:\n%s", stdErr) } - expected := []string{ - `{"@level":"info","@message":"Found 1 file and 1 run block","@module":"terraform.ui","test_abstract":{"main.tftest.hcl":["test"]},"type":"test_abstract"}`, - `{"@level":"info","@message":"main.tftest.hcl... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"starting"},"type":"test_file"}`, - `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"starting","run":"test"},"type":"test_run"}`, - `{"@level":"info","@message":" \"test\"... in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"running","run":"test"},"type":"test_run"}`, - `{"@level":"info","@message":" \"test\"... 
in progress","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"running","run":"test"},"type":"test_run"}`, - `{"@level":"info","@message":" \"test\"... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"complete","run":"test","status":"pass"},"type":"test_run"}`, - `{"@level":"info","@message":"main.tftest.hcl... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"teardown"},"type":"test_file"}`, - `{"@level":"info","@message":" \"test\"... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"teardown","run":"test"},"type":"test_run"}`, - `{"@level":"info","@message":" \"test\"... tearing down","@module":"terraform.ui","@testfile":"main.tftest.hcl","@testrun":"test","test_run":{"path":"main.tftest.hcl","progress":"teardown","run":"test"},"type":"test_run"}`, - `{"@level":"info","@message":"main.tftest.hcl... pass","@module":"terraform.ui","@testfile":"main.tftest.hcl","test_file":{"path":"main.tftest.hcl","progress":"complete","status":"pass"},"type":"test_file"}`, - `{"@level":"info","@message":"Success! 1 passed, 0 failed.","@module":"terraform.ui","test_summary":{"errored":0,"failed":0,"passed":1,"skipped":0,"status":"pass"},"type":"test_summary"}`, + // State is stored according to the backend block + actualState := testStateRead(t, localStatePath) + if diff := cmp.Diff(actualState.String(), expectedState); len(diff) > 0 { + t.Fatalf("state didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedState, actualState, diff) } - if code != 0 { - t.Errorf("expected return code %d but got %d", 0, code) + if provider.ResourceCount() != 1 { + t.Fatalf("should have deleted all resources on completion except test_resource.a. 
Instead state contained %d resources: %v", provider.ResourceCount(), provider.ResourceString())
+	}
+
+	val := provider.Store.Get(resourceId)
+
+	if val.GetAttr("id").AsString() != resourceId {
+		t.Errorf("expected resource to have id %q but got %s", resourceId, val.GetAttr("id").AsString())
+	}
+	if val.GetAttr("value").AsString() != "value-from-run-that-controls-backend" {
+		t.Errorf("expected resource to have value 'value-from-run-that-controls-backend' but got %s", val.GetAttr("value").AsString())
+	}
+}
+
+// When there is pre-existing state, the state is used by the run containing the backend block
+func TestTest_UseOfBackends_priorStateUsedByBackend(t *testing.T) {
+	dirName := "valid-use-local-backend/with-prior-state"
+	resourceId := "53d69028-477d-7ba0-83c3-ff3807e3756f" // This value needs to match the state file in the test fixtures
+	expectedState := fmt.Sprintf(`test_resource.foobar:
+  ID = %s
+  provider = provider["registry.terraform.io/hashicorp/test"]
+  destroy_fail = false
+  value = value-from-run-that-controls-backend
+
+Outputs:
+
+supplied_input_value = value-from-run-that-controls-backend
+test_resource_id = %s`, resourceId, resourceId)
+
+	// SETUP
 	td := t.TempDir()
-	testCopyDir(t, testFixturePath(path.Join("test", "invalid-overrides")), td)
+	testCopyDir(t, testFixturePath(path.Join("test", dirName)), td)
 	t.Chdir(td)
-
-	provider := testing_command.NewProvider(nil)
+	localStatePath := filepath.Join(td, DefaultStateFilename)
+
+	// The resource store stands in for the remote object: it's assembled here
+	// to match what the prior state file in the test fixtures records, so the
+	// test can reuse that object rather than recreate it.
+	resourceStore := &testing_command.ResourceStore{
+		Data: map[string]cty.Value{
+			resourceId: cty.ObjectVal(map[string]cty.Value{
+				"id":                   cty.StringVal(resourceId),
+				"interrupt_count":      cty.NullVal(cty.Number),
+				"value":                cty.StringVal("value-from-run-that-controls-backend"),
+				"write_only":           cty.NullVal(cty.String),
+				"create_wait_seconds":  cty.NullVal(cty.Number),
+				"destroy_fail":         cty.False,
+				"destroy_wait_seconds": cty.NullVal(cty.Number),
+				"defer":                cty.NullVal(cty.Bool),
+			})},
+	}
+	provider := testing_command.NewProvider(resourceStore)
 
 	providerSource, close := newMockProviderSource(t, map[string][]string{
 		"test": {"1.0.0"},
@@ -3518,13 +4868,15 @@ func TestTest_InvalidOverrides(t *testing.T) {
 	ui := new(cli.MockUi)
 
 	meta := Meta{
-		testingOverrides: metaOverridesForProvider(provider.Provider),
-		Ui:               ui,
-		View:             view,
-		Streams:          streams,
-		ProviderSource:   providerSource,
+		testingOverrides:          metaOverridesForProvider(provider.Provider),
+		Ui:                        ui,
+		View:                      view,
+		Streams:                   streams,
+		ProviderSource:            providerSource,
+		AllowExperimentalFeatures: true,
 	}
 
+	// INIT
 	init := &InitCommand{
 		Meta: meta,
 	}
 
@@ -3534,6 +4886,7 @@ func TestTest_InvalidOverrides(t *testing.T) {
 		t.Fatalf("expected status code 0 but got %d: %s", code, output.All())
 	}
 
+	// TEST
 	// Reset the streams for the next command.
 	streams, done = terminal.StreamsForTesting(t)
 	meta.Streams = streams
@@ -3546,67 +4899,245 @@ func TestTest_InvalidOverrides(t *testing.T) {
 	code := c.Run([]string{"-no-color"})
 	output := done(t)
 
+	// ASSERTIONS
 	if code != 0 {
 		t.Errorf("expected status code 0 but got %d", code)
 	}
+	stdErr := output.Stderr()
+	if len(stdErr) > 0 {
+		t.Fatalf("unexpected error output:\n%s", stdErr)
+	}
 
-	expected := `main.tftest.hcl... in progress
-  run "setup"... pass
+	// State is stored according to the backend block
+	actualState := testStateRead(t, localStatePath)
+	if diff := cmp.Diff(actualState.String(), expectedState); len(diff) > 0 {
+		t.Fatalf("state didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedState, actualState, diff)
+	}
 
-Warning: Invalid override target
+	if provider.ResourceCount() != 1 {
+		t.Fatalf("should have deleted all resources on completion except test_resource.foobar. Instead state contained %d resources: %v", provider.ResourceCount(), provider.ResourceString())
+	}
 
-  on main.tftest.hcl line 39, in run "setup":
-  39:   target = test_resource.absent_five
+	val := provider.Store.Get(resourceId)
 
-The override target test_resource.absent_five does not exist within the
-configuration under test. This could indicate a typo in the target address or
-an unnecessary override.
+	// If the ID hasn't changed then we've used the pre-existing state, instead of remaking the resource
+	if val.GetAttr("id").AsString() != resourceId {
+		t.Errorf("expected resource to have id %q but got %s", resourceId, val.GetAttr("id").AsString())
+	}
+	if val.GetAttr("value").AsString() != "value-from-run-that-controls-backend" {
+		t.Errorf("expected resource to have value 'value-from-run-that-controls-backend' but got %s", val.GetAttr("value").AsString())
+	}
+}
 
-  run "test"... pass
+// Tests whether a state artifact is made for a run block with a backend or not.
+//
+// Artifacts are made when the cleanup operation errors.
+func TestTest_UseOfBackends_whenStateArtifactsAreMade(t *testing.T) {
+	cases := map[string]struct {
+		forceError          bool
+		expectedCode        int
+		expectStateManifest bool
+	}{
+		"no artifact made when there are no cleanup errors when processing a run block with a backend": {
+			forceError:          false,
+			expectedCode:        0,
+			expectStateManifest: false,
+		},
+		"artifact made when a cleanup error is forced when processing a run block with a backend": {
+			forceError:          true,
+			expectedCode:        1,
+			expectStateManifest: true,
+		},
+	}
 
-Warning: Invalid override target
+	for tn, tc := range cases {
+		t.Run(tn, func(t *testing.T) {
 
-  on main.tftest.hcl line 45, in run "test":
-  45:   target = module.setup.test_resource.absent_six
+			// SETUP
+			td := t.TempDir()
+			testCopyDir(t, testFixturePath(path.Join("test", "valid-use-local-backend/no-prior-state")), td)
+			t.Chdir(td)
 
-The override target module.setup.test_resource.absent_six does not exist
-within the configuration under test. This could indicate a typo in the target
-address or an unnecessary override.
+ provider := testing_command.NewProvider(nil) + erroringInvocationNum := 3 + applyResourceChangeCount := 0 + if tc.forceError { + oldFunc := provider.Provider.ApplyResourceChangeFn + newFunc := func(req providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse { + applyResourceChangeCount++ + if applyResourceChangeCount < erroringInvocationNum { + return oldFunc(req) + } + // Given the config in the test fixture used, the 3rd call to this function (erroringInvocationNum) is during cleanup + // Return error to force error diagnostics during cleanup + var diags tfdiags.Diagnostics + return providers.ApplyResourceChangeResponse{ + Diagnostics: diags.Append(errors.New("error forced by mock provider")), + } + } + provider.Provider.ApplyResourceChangeFn = newFunc + } -main.tftest.hcl... tearing down -main.tftest.hcl... pass + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() -Warning: Invalid override target + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) - on main.tftest.hcl line 4, in mock_provider "test": - 4: target = test_resource.absent_one + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } -The override target test_resource.absent_one does not exist within the -configuration under test. This could indicate a typo in the target address or -an unnecessary override. + // INIT + init := &InitCommand{ + Meta: meta, + } -(and 3 more similar warnings elsewhere) + if code := init.Run(nil); code != 0 { + output := done(t) + t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) + } -Success! 2 passed, 0 failed. -` + // TEST + // Reset the streams for the next command. + streams, done = terminal.StreamsForTesting(t) + meta.Streams = streams + meta.View = views.NewView(streams) - actual := output.All() + c := &TestCommand{ + Meta: meta, + } - if diff := cmp.Diff(actual, expected); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) + code := c.Run([]string{"-no-color"}) + + // ASSERTIONS + + if tc.forceError && (applyResourceChangeCount != erroringInvocationNum) { + t.Fatalf(`Test did not force error as expected. This is because a magic number in the test setup is coupled to the config.
+The apply resource change function was invoked %d times but we trigger an error on invocation number %d`, applyResourceChangeCount, erroringInvocationNum) + } + + if code != tc.expectedCode { + output := done(t) + t.Errorf("expected status code %d but got %d: %s", tc.expectedCode, code, output.All()) + } + + // A state artifact is only written to .terraform/test when cleanup errored, + // so check the manifest against this case's expectation + manifest, err := teststates.LoadManifest(td, true) + if err != nil { + t.Fatal(err) + } + foundIds := []string{} + for _, file := range manifest.Files { + for _, state := range file.States { + foundIds = append(foundIds, state.ID) + } + } + if len(foundIds) > 0 && !tc.expectStateManifest { + t.Fatalf("found %d state files in .terraform/test when none were expected", len(foundIds)) + } + if len(foundIds) == 0 && tc.expectStateManifest { + t.Fatalf("found 0 state files in .terraform/test when they were expected") + } + }) } +} - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) +func TestTest_UseOfBackends_validatesUseOfSkipCleanup(t *testing.T) { + cases := map[string]struct { + testDir string + expectCode int + expectErr bool + }{ + "cannot set skip_cleanup=false alongside a backend block": { + testDir: "backend-with-skip-cleanup/false", + expectCode: 1, + expectErr: true, + }, + "can set skip_cleanup=true alongside a backend block": { + testDir: "backend-with-skip-cleanup/true", + expectCode: 0, + expectErr: false, + }, + } + + for tn, tc := range cases { + t.Run(tn, func(t *testing.T) { + // SETUP + td := t.TempDir() + testCopyDir(t, testFixturePath(path.Join("test", tc.testDir)), td) + t.Chdir(td) + + provider := testing_command.NewProvider(nil) + providerSource, close := newMockProviderSource(t, map[string][]string{ + "test": {"1.0.0"}, + }) + defer close() + + streams, done := terminal.StreamsForTesting(t) + view := views.NewView(streams) + ui := new(cli.MockUi) + + meta := Meta{ + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, + } + + // INIT + init := &InitCommand{ + Meta: meta, + } + + code := init.Run([]string{"-no-color"}) + output := done(t) + + // ASSERTIONS + if code != tc.expectCode { + t.Errorf("expected status code %d but got %d", tc.expectCode, code) + } + stdErr := output.Stderr() + if len(stdErr) == 0 && tc.expectErr { + t.Fatal("expected error output but got none") + } + if len(stdErr) != 0 && !tc.expectErr { + t.Fatalf("did not expect error output but got: %s", stdErr) + } + + if provider.ResourceCount() > 0 { + t.Fatalf("should have deleted all resources on completion but left %v", provider.ResourceString()) + } + + }) } } -func TestTest_InvalidConfig(t *testing.T) { +func TestTest_UseOfBackends_failureDuringApply(t *testing.T) { + // SETUP td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "invalid_config")), td) + testCopyDir(t, testFixturePath(path.Join("test", "valid-use-local-backend/no-prior-state")), td) t.Chdir(td) + localStatePath := filepath.Join(td, DefaultStateFilename) provider := testing_command.NewProvider(nil) + // Force a failure during apply + provider.Provider.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse { + resp := providers.ApplyResourceChangeResponse{} + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("forced error"))
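+ // Every apply call fails here, so the test_resource is never created; only the + // config's outputs end up in the backend-managed state, as asserted below.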
+ return resp + } providerSource, close := newMockProviderSource(t, map[string][]string{ "test": {"1.0.0"}, @@ -3618,21 +5149,25 @@ func TestTest_InvalidConfig(t *testing.T) { ui := new(cli.MockUi) meta := Meta{ - Ui: ui, - View: view, - Streams: streams, - ProviderSource: providerSource, + testingOverrides: metaOverridesForProvider(provider.Provider), + Ui: ui, + View: view, + Streams: streams, + ProviderSource: providerSource, + AllowExperimentalFeatures: true, } + // INIT init := &InitCommand{ Meta: meta, } + output := done(t) if code := init.Run(nil); code != 0 { - output := done(t) t.Fatalf("expected status code 0 but got %d: %s", code, output.All()) } + // TEST // Reset the streams for the next command. streams, done = terminal.StreamsForTesting(t) meta.Streams = streams @@ -3643,42 +5178,40 @@ func TestTest_InvalidConfig(t *testing.T) { } code := c.Run([]string{"-no-color"}) - output := done(t) + output = done(t) + // ASSERTIONS if code != 1 { - t.Errorf("expected status code ! but got %d", code) + t.Errorf("expected status code 1 but got %d", code) + } + stdErr := output.Stderr() + if len(stdErr) == 0 { + t.Fatal("expected error output but got none") } - expectedOut := `main.tftest.hcl... in progress - run "test"... fail -main.tftest.hcl... tearing down -main.tftest.hcl... fail - -Failure! 0 passed, 1 failed. -` - expectedErr := ` -Error: Failed to load plugin schemas + // Resource was not provisioned + if provider.ResourceCount() > 0 { + t.Fatalf("no resources should have been provisioned successfully but got %v", provider.ResourceString()) + } -Error while loading schemas for plugin components: Failed to obtain provider -schema: Could not load the schema for provider -registry.terraform.io/hashicorp/test: failed to instantiate provider -"registry.terraform.io/hashicorp/test" to obtain schema: fork/exec -.terraform/providers/registry.terraform.io/hashicorp/test/1.0.0/%s/terraform-provider-test_1.0.0: -permission denied.. -` - expectedErr = fmt.Sprintf(expectedErr, runtime.GOOS+"_"+runtime.GOARCH) - out := output.Stdout() - err := output.Stderr() + // When there is a failure to apply changes to the test_resource, the resulting state saved via the backend + // only includes the output and lacks any information about the test_resource + actualBackendState := testStateRead(t, localStatePath) + expectedBackendState := ` +Outputs: - if diff := cmp.Diff(out, expectedOut); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, out, diff) - } - if diff := cmp.Diff(err, expectedErr); len(diff) > 0 { - t.Errorf("error didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, err, diff) +supplied_input_value = value-from-run-that-controls-backend +test_resource_id = 12345` + if diff := cmp.Diff(actualBackendState.String(), expectedBackendState); len(diff) > 0 { + t.Fatalf("state didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedBackendState, actualBackendState, diff) } - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) + expectedStates := map[string][]string{} // empty + actualStates := statesFromManifest(t, td) + + // No state artifacts are made: Verify the states in the manifest + if diff := cmp.Diff(expectedStates, removeOutputs(actualStates)); diff != "" { + t.Fatalf("unexpected states: %s", diff) } } @@ -3841,7 +5374,6 @@ The run block "missing" does not exist within this test file. 
} func TestTest_JUnitOutput(t *testing.T) { - tcs := map[string]struct { path string code int @@ -4057,3 +5589,68 @@ func testModuleInline(t *testing.T, sources map[string]string) (*configs.Config, cleanup() } } + +// statesFromManifest loads the test state manifest from td and returns, keyed by +// "<test file>.<state name>", the sorted resource and output addresses left in +// each persisted state. +func statesFromManifest(t *testing.T, td string) map[string][]string { + manifest, err := teststates.LoadManifest(td, true) + if err != nil { + t.Fatal(err) + } + + states := make(map[string][]string) + + // Collect the addresses left behind in each state recorded in the manifest + for fileName, file := range manifest.Files { + for name, state := range file.States { + sm := statemgr.NewFilesystem(manifest.StateFilePath(state.ID)) + if err := sm.RefreshState(); err != nil { + t.Fatalf("error when reading state file: %s", err) + } + state := sm.State() + + // If the state is nil, then the test cleaned up the state + if state == nil || state.Empty() { + continue + } + + var resources []string + for _, module := range state.Modules { + for _, resource := range module.Resources { + resources = append(resources, resource.Addr.String()) + } + } + for _, output := range state.RootOutputValues { + resources = append(resources, output.Addr.String()) + } + if len(resources) == 0 { + continue + } + sort.Strings(resources) + states[strings.TrimSuffix(fileName, ".tftest.hcl")+"."+name] = resources + } + } + + return states +} + +// equalIgnoreOrder returns a cmp.Option that compares string slices while +// ignoring element order. +func equalIgnoreOrder() cmp.Option { + less := func(a, b string) bool { return a < b } + return cmpopts.SortSlices(less) +} + +// removeOutputs strips "output." addresses from each state's entry, dropping +// any states that are left empty. +func removeOutputs(states map[string][]string) map[string][]string { + for k, v := range states { + filtered := make([]string, 0, len(v)) + for _, s := range v { + if !strings.HasPrefix(s, "output.") { + filtered = append(filtered, s) + } + } + if len(filtered) == 0 { + delete(states, k) + continue + } + states[k] = filtered + } + + return states +} diff --git a/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tf b/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tf new file mode 100644 index 0000000000..e1c99eb945 --- /dev/null +++ b/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tf @@ -0,0 +1,4 @@ +resource "test_resource" "a" { + id = "12345" + value = "foobar" +} diff --git a/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tftest.hcl b/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tftest.hcl new file mode 100644 index 0000000000..3b9c4587d3 --- /dev/null +++ b/internal/command/testdata/test/backend-with-skip-cleanup/false/main.tftest.hcl @@ -0,0 +1,4 @@ +run "test" { + backend "local" {} + skip_cleanup = false +} diff --git a/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tf b/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tf new file mode 100644 index 0000000000..e1c99eb945 --- /dev/null +++ b/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tf @@ -0,0 +1,4 @@ +resource "test_resource" "a" { + id = "12345" + value = "foobar" +} diff --git a/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tftest.hcl b/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tftest.hcl new file mode 100644 index 0000000000..a2e371717b --- /dev/null +++ b/internal/command/testdata/test/backend-with-skip-cleanup/true/main.tftest.hcl @@ -0,0 +1,4 @@ +run "test" { + backend "local" {} + skip_cleanup = true +} diff --git a/internal/command/testdata/test/cleanup/main.tf b/internal/command/testdata/test/cleanup/main.tf new file mode 100644 index 0000000000..87e31918dc --- /dev/null +++
b/internal/command/testdata/test/cleanup/main.tf @@ -0,0 +1,17 @@ +variable "id" { + type = string +} + +variable "destroy_fail" { + type = bool + default = false +} + +resource "test_resource" "resource" { + value = var.id + destroy_fail = var.destroy_fail +} + +output "id" { + value = test_resource.resource.id +} diff --git a/internal/command/testdata/test/cleanup/main.tftest.hcl b/internal/command/testdata/test/cleanup/main.tftest.hcl new file mode 100644 index 0000000000..d1816814e0 --- /dev/null +++ b/internal/command/testdata/test/cleanup/main.tftest.hcl @@ -0,0 +1,26 @@ +run "test" { + variables { + id = "test" + } +} + +run "test_two" { + skip_cleanup = true # This will leave behind the state + variables { + id = "test_two" + } +} + +run "test_three" { + state_key = "state_three" + variables { + id = "test_three" + destroy_fail = true # This will fail to destroy and leave behind the state + } +} + +run "test_four" { + variables { + id = "test_four" + } +} \ No newline at end of file diff --git a/internal/command/testdata/test/destroy_fail/main.tf b/internal/command/testdata/test/destroy_fail/main.tf index 4ee97ff639..1bacc920d4 100644 --- a/internal/command/testdata/test/destroy_fail/main.tf +++ b/internal/command/testdata/test/destroy_fail/main.tf @@ -7,4 +7,4 @@ resource "test_resource" "resource" { resource "test_resource" "another" { value = "Hello, world!" destroy_fail = true -} \ No newline at end of file +} diff --git a/internal/command/testdata/test/non-existent-backend-type/main.tf b/internal/command/testdata/test/non-existent-backend-type/main.tf new file mode 100644 index 0000000000..6c4306eacb --- /dev/null +++ b/internal/command/testdata/test/non-existent-backend-type/main.tf @@ -0,0 +1,10 @@ + +variable "input" { + type = string +} + +resource "test_resource" "a" { + value = var.input +} + +resource "test_resource" "c" {} diff --git a/internal/command/testdata/test/non-existent-backend-type/main.tftest.hcl b/internal/command/testdata/test/non-existent-backend-type/main.tftest.hcl new file mode 100644 index 0000000000..06de286b5e --- /dev/null +++ b/internal/command/testdata/test/non-existent-backend-type/main.tftest.hcl @@ -0,0 +1,9 @@ +# The "foobar" backend does not exist and isn't a removed backend either +run "test_invalid_backend" { + variables { + input = "foobar" + } + + backend "foobar" { + } +} diff --git a/internal/command/testdata/test/removed-backend-type/main.tf b/internal/command/testdata/test/removed-backend-type/main.tf new file mode 100644 index 0000000000..6c4306eacb --- /dev/null +++ b/internal/command/testdata/test/removed-backend-type/main.tf @@ -0,0 +1,10 @@ + +variable "input" { + type = string +} + +resource "test_resource" "a" { + value = var.input +} + +resource "test_resource" "c" {} diff --git a/internal/command/testdata/test/removed-backend-type/main.tftest.hcl b/internal/command/testdata/test/removed-backend-type/main.tftest.hcl new file mode 100644 index 0000000000..b713d0ee5d --- /dev/null +++ b/internal/command/testdata/test/removed-backend-type/main.tftest.hcl @@ -0,0 +1,9 @@ +# The "etcd" backend was removed from Terraform in v1.3 +run "test_removed_backend" { + variables { + input = "foobar" + } + + backend "etcd" { + } +} diff --git a/internal/command/testdata/test/reused-backend-config-child-modules/child-module/main.tf b/internal/command/testdata/test/reused-backend-config-child-modules/child-module/main.tf new file mode 100644 index 0000000000..6c4306eacb --- /dev/null +++
b/internal/command/testdata/test/reused-backend-config-child-modules/child-module/main.tf @@ -0,0 +1,10 @@ + +variable "input" { + type = string +} + +resource "test_resource" "a" { + value = var.input +} + +resource "test_resource" "c" {} diff --git a/internal/command/testdata/test/reused-backend-config-child-modules/main.tf b/internal/command/testdata/test/reused-backend-config-child-modules/main.tf new file mode 100644 index 0000000000..40e0c4fc8d --- /dev/null +++ b/internal/command/testdata/test/reused-backend-config-child-modules/main.tf @@ -0,0 +1,9 @@ + +variable "input" { + type = string +} + +module "foobar" { + source = "./child-module" + input = "foobar" +} diff --git a/internal/command/testdata/test/reused-backend-config-child-modules/main.tftest.hcl b/internal/command/testdata/test/reused-backend-config-child-modules/main.tftest.hcl new file mode 100644 index 0000000000..e7b0a7d699 --- /dev/null +++ b/internal/command/testdata/test/reused-backend-config-child-modules/main.tftest.hcl @@ -0,0 +1,22 @@ +# The "state/terraform.tfstate" local backend is used with the implicit internal state "./child-module" +run "test_1" { + module { + source = "./child-module" + } + + variables { + input = "foobar" + } + + backend "local" { + path = "state/terraform.tfstate" + } +} + +# The "state/terraform.tfstate" local backend is used with the implicit internal state "" (empty string == root module under test) +run "test_2" { + + backend "local" { + path = "state/terraform.tfstate" + } +} diff --git a/internal/command/testdata/test/reused-backend-config/main.tf b/internal/command/testdata/test/reused-backend-config/main.tf new file mode 100644 index 0000000000..6c4306eacb --- /dev/null +++ b/internal/command/testdata/test/reused-backend-config/main.tf @@ -0,0 +1,10 @@ + +variable "input" { + type = string +} + +resource "test_resource" "a" { + value = var.input +} + +resource "test_resource" "c" {} diff --git a/internal/command/testdata/test/reused-backend-config/main.tftest.hcl b/internal/command/testdata/test/reused-backend-config/main.tftest.hcl new file mode 100644 index 0000000000..c6c7164caa --- /dev/null +++ b/internal/command/testdata/test/reused-backend-config/main.tftest.hcl @@ -0,0 +1,15 @@ +# The "state/terraform.tfstate" local backend is used with the user-supplied internal state "foobar-1" +run "test_1" { + state_key = "foobar-1" + backend "local" { + path = "state/terraform.tfstate" + } +} + +# The "state/terraform.tfstate" local backend is used with the user-supplied internal state "foobar-2" +run "test_2" { + state_key = "foobar-2" + backend "local" { + path = "state/terraform.tfstate" + } +} diff --git a/internal/command/testdata/test/skip_cleanup/main.tf b/internal/command/testdata/test/skip_cleanup/main.tf new file mode 100644 index 0000000000..19cb9eb05a --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup/main.tf @@ -0,0 +1,11 @@ +variable "id" { + type = string +} + +resource "test_resource" "resource" { + value = var.id +} + +output "id" { + value = test_resource.resource.id +} diff --git a/internal/command/testdata/test/skip_cleanup/main.tftest.hcl b/internal/command/testdata/test/skip_cleanup/main.tftest.hcl new file mode 100644 index 0000000000..80c33c27ad --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup/main.tftest.hcl @@ -0,0 +1,31 @@ +run "test" { + variables { + id = "test" + } +} + +run "test_two" { + skip_cleanup = true + variables { + id = "test_two" + } +} + +run "test_three" { + skip_cleanup = true + variables { + id = "test_three" 
+ } +} + +run "test_four" { + variables { + id = "test_four" + } +} + +run "test_five" { + variables { + id = "test_five" + } +} \ No newline at end of file diff --git a/internal/command/testdata/test/skip_cleanup_simple/main.tf b/internal/command/testdata/test/skip_cleanup_simple/main.tf new file mode 100644 index 0000000000..19cb9eb05a --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup_simple/main.tf @@ -0,0 +1,11 @@ +variable "id" { + type = string +} + +resource "test_resource" "resource" { + value = var.id +} + +output "id" { + value = test_resource.resource.id +} diff --git a/internal/command/testdata/test/skip_cleanup_simple/main.tftest.hcl b/internal/command/testdata/test/skip_cleanup_simple/main.tftest.hcl new file mode 100644 index 0000000000..be1659a88a --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup_simple/main.tftest.hcl @@ -0,0 +1,7 @@ +run "test" { + skip_cleanup = true + + variables { + id = "foo" + } +} diff --git a/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tf b/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tf new file mode 100644 index 0000000000..df92f1eb42 --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tf @@ -0,0 +1,20 @@ +variable "id" { + type = string +} + +variable "unused" { + type = string + default = "unused" +} + +resource "test_resource" "resource" { + value = var.id +} + +output "id" { + value = test_resource.resource.id +} + +output "unused" { + value = var.unused +} \ No newline at end of file diff --git a/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tftest.hcl b/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tftest.hcl new file mode 100644 index 0000000000..1687b4df6d --- /dev/null +++ b/internal/command/testdata/test/skip_cleanup_with_run_deps/main.tftest.hcl @@ -0,0 +1,23 @@ +run "test" { + variables { + id = "test" + unused = "unused" + } +} + +run "test_two" { + state_key = "state" + skip_cleanup = true + variables { + id = "test_two" + // The output state data for this dependency will also be left behind, but the actual + // resource will have been destroyed by the cleanup step of test_three.
+ unused = run.test.unused + } +} + +run "test_three" { + variables { + id = "test_three" + } +} \ No newline at end of file diff --git a/internal/command/testdata/test/skip_file_cleanup/main.tf b/internal/command/testdata/test/skip_file_cleanup/main.tf new file mode 100644 index 0000000000..19cb9eb05a --- /dev/null +++ b/internal/command/testdata/test/skip_file_cleanup/main.tf @@ -0,0 +1,11 @@ +variable "id" { + type = string +} + +resource "test_resource" "resource" { + value = var.id +} + +output "id" { + value = test_resource.resource.id +} diff --git a/internal/command/testdata/test/skip_file_cleanup/main.tftest.hcl b/internal/command/testdata/test/skip_file_cleanup/main.tftest.hcl new file mode 100644 index 0000000000..acc59dcc9b --- /dev/null +++ b/internal/command/testdata/test/skip_file_cleanup/main.tftest.hcl @@ -0,0 +1,34 @@ +test { + skip_cleanup = true +} + +run "test" { + variables { + id = "test" + } +} + +run "test_two" { + variables { + id = "test_two" + } +} + +run "test_three" { + variables { + id = "test_three" + } +} + +run "test_four" { + variables { + id = "test_four" + } +} + +run "test_five" { + skip_cleanup = false # This will be cleaned up, and test_four will not + variables { + id = "test_five" + } +} \ No newline at end of file diff --git a/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tf b/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tf new file mode 100644 index 0000000000..1d3408a413 --- /dev/null +++ b/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tf @@ -0,0 +1,18 @@ +variable "input" { + type = string +} + +resource "test_resource" "foobar" { + id = "12345" + # Set deterministic ID because this fixture is for testing what happens when there's no prior state + # i.e. 
this id will otherwise keep changing per test + value = var.input +} + +output "test_resource_id" { + value = test_resource.foobar.id +} + +output "supplied_input_value" { + value = var.input +} diff --git a/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tftest.hcl b/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tftest.hcl new file mode 100644 index 0000000000..fe9eb4fc79 --- /dev/null +++ b/internal/command/testdata/test/valid-use-local-backend/no-prior-state/main.tftest.hcl @@ -0,0 +1,15 @@ +run "setup_pet_name" { + backend "local" { + // Use default path + } + + variables { + input = "value-from-run-that-controls-backend" + } +} + +run "edit_input" { + variables { + input = "this-value-should-not-enter-state" + } +} diff --git a/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tf b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tf new file mode 100644 index 0000000000..d06dcb6a4c --- /dev/null +++ b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tf @@ -0,0 +1,18 @@ +variable "input" { + type = string +} + +resource "test_resource" "foobar" { + # No ID set here + # We should be able to assert about its value as it will be loaded from state + # by the backend block in the run block + value = var.input +} + +output "test_resource_id" { + value = test_resource.foobar.id +} + +output "supplied_input_value" { + value = var.input +} diff --git a/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tftest.hcl b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tftest.hcl new file mode 100644 index 0000000000..fe9eb4fc79 --- /dev/null +++ b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/main.tftest.hcl @@ -0,0 +1,15 @@ +run "setup_pet_name" { + backend "local" { + // Use default path + } + + variables { + input = "value-from-run-that-controls-backend" + } +} + +run "edit_input" { + variables { + input = "this-value-should-not-enter-state" + } +} diff --git a/internal/command/testdata/test/valid-use-local-backend/with-prior-state/terraform.tfstate b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/terraform.tfstate new file mode 100644 index 0000000000..bc5577bac9 --- /dev/null +++ b/internal/command/testdata/test/valid-use-local-backend/with-prior-state/terraform.tfstate @@ -0,0 +1,41 @@ +{ + "version": 4, + "terraform_version": "1.13.0", + "serial": 1, + "lineage": "c1f962ec-7cf6-281e-1eb8-eed10c450e16", + "outputs": { + "input": { + "value": "value-from-run-that-controls-backend", + "type": "string" + }, + "test_resource_id": { + "value": "53d69028-477d-7ba0-83c3-ff3807e3756f", + "type": "string" + } + }, + "resources": [ + { + "mode": "managed", + "type": "test_resource", + "name": "foobar", + "provider": "provider[\"registry.terraform.io/hashicorp/test\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "create_wait_seconds": null, + "destroy_fail": false, + "destroy_wait_seconds": null, + "id": "53d69028-477d-7ba0-83c3-ff3807e3756f", + "interrupt_count": null, + "value": null, + "write_only": null + }, + "sensitive_attributes": [], + "identity_schema_version": 0 + } + ] + } + ], + "check_results": null +} \ No newline at end of file diff --git a/internal/command/views/test.go b/internal/command/views/test.go index 77540aded0..0feb2bab78 100644 --- a/internal/command/views/test.go +++ b/internal/command/views/test.go @@ -71,7 +71,7 @@ 
type Test interface { // addition, this function prints additional details about the current // operation alongside the current state as the state will be missing newly // created resources that also need to be handled manually. - FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, states map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc) + FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, states map[string]*states.State, created []*plans.ResourceInstanceChangeSrc) // TFCStatusUpdate prints a reassuring update, letting users know the latest // status of their ongoing remote test run. @@ -136,11 +136,15 @@ func (t *TestHuman) Conclusion(suite *moduletest.Suite) { t.view.streams.Print(t.view.colorize.Color("[red]Failure![reset]")) } - t.view.streams.Printf(" %d passed, %d failed", counts[moduletest.Pass], counts[moduletest.Fail]+counts[moduletest.Error]) - if counts[moduletest.Skip] > 0 { - t.view.streams.Printf(", %d skipped.\n", counts[moduletest.Skip]) + if suite.CommandMode != moduletest.CleanupMode { + t.view.streams.Printf(" %d passed, %d failed", counts[moduletest.Pass], counts[moduletest.Fail]+counts[moduletest.Error]) + if counts[moduletest.Skip] > 0 { + t.view.streams.Printf(", %d skipped.\n", counts[moduletest.Skip]) + } else { + t.view.streams.Println(".") + } } else { - t.view.streams.Println(".") + t.view.streams.Println() } } @@ -276,7 +280,8 @@ func (t *TestHuman) DestroySummary(diags tfdiags.Diagnostics, run *moduletest.Ru } t.Diagnostics(run, file, diags) - if state.HasManagedResourceInstanceObjects() { + skipCleanup := run != nil && run.Config.SkipCleanup + if state.HasManagedResourceInstanceObjects() && !skipCleanup { // FIXME: This message says "resources" but this is actually a list // of resource instance objects. t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform left the following resources in state after executing %s, and they need to be cleaned up manually:\n", identifier), t.view.errorColumns())) @@ -302,12 +307,12 @@ func (t *TestHuman) FatalInterrupt() { t.view.streams.Eprintln(format.WordWrap(fatalInterrupt, t.view.errorColumns())) } -func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc) { +func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[string]*states.State, created []*plans.ResourceInstanceChangeSrc) { t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform was interrupted while executing %s, and may not have performed the expected cleanup operations.\n", file.Name), t.view.errorColumns())) // Print out the main state first, this is the state that isn't associated // with a run block. - if state, exists := existingStates[nil]; exists && !state.Empty() { + if state, exists := existingStates[configs.TestMainStateIdentifier]; exists && !state.Empty() { t.view.streams.Eprint(format.WordWrap("\nTerraform has already created the following resources from the module under test:\n", t.view.errorColumns())) for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) { if resource.DeposedKey != states.NotDeposed { @@ -318,14 +323,12 @@ func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest. } } - // Then print out the other states in order. 
- for _, run := range file.Runs { - state, exists := existingStates[run] - if !exists || state.Empty() { + for key, state := range existingStates { + if key == configs.TestMainStateIdentifier || state.Empty() { continue } - t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform has already created the following resources for %q from %q:\n", run.Name, run.Config.Module.Source), t.view.errorColumns())) + t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform has already created the following resources for %q:\n", key), t.view.errorColumns())) for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) { if resource.DeposedKey != states.NotDeposed { t.view.streams.Eprintf(" - %s (%s)\n", resource.ResourceInstance, resource.DeposedKey) @@ -445,11 +448,15 @@ func (t *TestJSON) Conclusion(suite *moduletest.Suite) { message.WriteString("Failure!") } - message.WriteString(fmt.Sprintf(" %d passed, %d failed", summary.Passed, summary.Failed+summary.Errored)) - if summary.Skipped > 0 { - message.WriteString(fmt.Sprintf(", %d skipped.", summary.Skipped)) - } else { - message.WriteString(".") + if suite.CommandMode != moduletest.CleanupMode { + // don't print test summaries during cleanup mode. + + message.WriteString(fmt.Sprintf(" %d passed, %d failed", summary.Passed, summary.Failed+summary.Errored)) + if summary.Skipped > 0 { + message.WriteString(fmt.Sprintf(", %d skipped.", summary.Skipped)) + } else { + message.WriteString(".") + } } } @@ -604,7 +611,8 @@ func (t *TestJSON) Run(run *moduletest.Run, file *moduletest.File, progress modu } func (t *TestJSON) DestroySummary(diags tfdiags.Diagnostics, run *moduletest.Run, file *moduletest.File, state *states.State) { - if state.HasManagedResourceInstanceObjects() { + skipCleanup := run != nil && run.Config.SkipCleanup + if state.HasManagedResourceInstanceObjects() && !skipCleanup { cleanup := json.TestFileCleanup{} for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) { cleanup.FailedResources = append(cleanup.FailedResources, json.TestFailedResource{ @@ -652,13 +660,13 @@ func (t *TestJSON) FatalInterrupt() { t.view.Log(fatalInterrupt) } -func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc) { +func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[string]*states.State, created []*plans.ResourceInstanceChangeSrc) { message := json.TestFatalInterrupt{ States: make(map[string][]json.TestFailedResource), } - for run, state := range existingStates { + for key, state := range existingStates { if state.Empty() { continue } @@ -671,10 +679,10 @@ func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.F }) } - if run == nil { + if key == configs.TestMainStateIdentifier { message.State = resources } else { - message.States[run.Name] = resources + message.States[key] = resources } } diff --git a/internal/command/views/test_test.go b/internal/command/views/test_test.go index 2d75449f7b..5f4a5d27eb 100644 --- a/internal/command/views/test_test.go +++ b/internal/command/views/test_test.go @@ -480,7 +480,7 @@ func TestTestHuman_Run(t *testing.T) { StdErr string }{ "pass": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, Progress: moduletest.Complete, StdOut: " run 
\"run_block\"... pass\n", }, @@ -502,19 +502,19 @@ some warning happened during this test }, "pending": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pending}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pending}, Progress: moduletest.Complete, StdOut: " run \"run_block\"... pending\n", }, "skip": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Skip}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Skip}, Progress: moduletest.Complete, StdOut: " run \"run_block\"... skip\n", }, "fail": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Fail}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Fail}, Progress: moduletest.Complete, StdOut: " run \"run_block\"... fail\n", }, @@ -542,7 +542,7 @@ other details }, "error": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Error}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Error}, Progress: moduletest.Complete, StdOut: " run \"run_block\"... fail\n", }, @@ -725,15 +725,15 @@ resource "test_resource" "creating" { // These next three tests should print nothing, as we only report on // progress complete. "progress_starting": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, Progress: moduletest.Starting, }, "progress_running": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, Progress: moduletest.Running, }, "progress_teardown": { - Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, Progress: moduletest.TearDown, }, } @@ -822,7 +822,7 @@ this time it is very bad diags: tfdiags.Diagnostics{ tfdiags.Sourceless(tfdiags.Error, "first error", "this time it is very bad"), }, - run: &moduletest.Run{Name: "run_block"}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}}, file: &moduletest.File{Name: "main.tftest.hcl"}, state: states.NewState(), stderr: `Terraform encountered an error destroying resources created while executing @@ -994,13 +994,13 @@ main.tftest.hcl, and they need to be cleaned up manually: func TestTestHuman_FatalInterruptSummary(t *testing.T) { tcs := map[string]struct { - states map[*moduletest.Run]*states.State + states map[string]*states.State run *moduletest.Run created []*plans.ResourceInstanceChangeSrc want string }{ "no_state_only_plan": { - states: make(map[*moduletest.Run]*states.State), + states: make(map[string]*states.State), run: &moduletest.Run{ Config: &configs.TestRun{}, Name: "run_block", @@ -1048,8 +1048,8 @@ Terraform was in the process of creating the following resources for `, }, "file_state_no_plan": { - states: map[*moduletest.Run]*states.State{ - nil: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -1091,15 +1091,8 @@ test: `, }, "run_states_no_plan": { - states: map[*moduletest.Run]*states.State{ - &moduletest.Run{ - Name: "setup_block", - Config: &configs.TestRun{ - Module: &configs.TestRunModuleCall{ - Source: 
addrs.ModuleSourceLocal("../setup"), - }, - }, - }: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + "../setup": states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -1134,22 +1127,14 @@ test: Terraform was interrupted while executing main.tftest.hcl, and may not have performed the expected cleanup operations. -Terraform has already created the following resources for "setup_block" from -"../setup": +Terraform has already created the following resources for "../setup": - test_instance.one - test_instance.two `, }, "all_states_with_plan": { - states: map[*moduletest.Run]*states.State{ - &moduletest.Run{ - Name: "setup_block", - Config: &configs.TestRun{ - Module: &configs.TestRunModuleCall{ - Source: addrs.ModuleSourceLocal("../setup"), - }, - }, - }: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + "../setup": states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -1178,7 +1163,7 @@ Terraform has already created the following resources for "setup_block" from &states.ResourceInstanceObjectSrc{}, addrs.AbsProviderConfig{}) }), - nil: states.BuildState(func(state *states.SyncState) { + configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -1253,8 +1238,7 @@ test: - test_instance.one - test_instance.two -Terraform has already created the following resources for "setup_block" from -"../setup": +Terraform has already created the following resources for "../setup": - test_instance.setup_one - test_instance.setup_two @@ -1272,15 +1256,6 @@ Terraform was in the process of creating the following resources for file := &moduletest.File{ Name: "main.tftest.hcl", - Runs: func() []*moduletest.Run { - var runs []*moduletest.Run - for run := range tc.states { - if run != nil { - runs = append(runs, run) - } - } - return runs - }(), } view.FatalInterruptSummary(tc.run, file, tc.states, tc.created) @@ -1973,7 +1948,7 @@ func TestTestJSON_DestroySummary(t *testing.T) { }, "state_from_run": { file: &moduletest.File{Name: "main.tftest.hcl"}, - run: &moduletest.Run{Name: "run_block"}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}}, state: states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.Resource{ @@ -2380,7 +2355,7 @@ func TestTestJSON_Run(t *testing.T) { want []map[string]interface{} }{ "starting": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, progress: moduletest.Starting, want: []map[string]interface{}{ { @@ -2401,7 +2376,7 @@ func TestTestJSON_Run(t *testing.T) { }, "running": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, progress: moduletest.Running, elapsed: 2024, want: []map[string]interface{}{ @@ -2423,7 +2398,7 @@ func TestTestJSON_Run(t *testing.T) { }, "teardown": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, progress: moduletest.TearDown, want: []map[string]interface{}{ { @@ -2444,7 +2419,7 @@ func 
TestTestJSON_Run(t *testing.T) { }, "pass": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass}, progress: moduletest.Complete, want: []map[string]interface{}{ { @@ -2503,7 +2478,7 @@ func TestTestJSON_Run(t *testing.T) { }, "pending": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Pending}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pending}, progress: moduletest.Complete, want: []map[string]interface{}{ { @@ -2524,7 +2499,7 @@ func TestTestJSON_Run(t *testing.T) { }, "skip": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Skip}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Skip}, progress: moduletest.Complete, want: []map[string]interface{}{ { @@ -2545,7 +2520,7 @@ func TestTestJSON_Run(t *testing.T) { }, "fail": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Fail}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Fail}, progress: moduletest.Complete, want: []map[string]interface{}{ { @@ -2620,7 +2595,7 @@ func TestTestJSON_Run(t *testing.T) { }, "error": { - run: &moduletest.Run{Name: "run_block", Status: moduletest.Error}, + run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Error}, progress: moduletest.Complete, want: []map[string]interface{}{ { @@ -2973,12 +2948,12 @@ func TestTestJSON_Run(t *testing.T) { func TestTestJSON_FatalInterruptSummary(t *testing.T) { tcs := map[string]struct { - states map[*moduletest.Run]*states.State + states map[string]*states.State changes []*plans.ResourceInstanceChangeSrc want []map[string]interface{} }{ "no_state_only_plan": { - states: make(map[*moduletest.Run]*states.State), + states: make(map[string]*states.State), changes: []*plans.ResourceInstanceChangeSrc{ { Addr: addrs.AbsResourceInstance{ @@ -3029,8 +3004,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { }, }, "file_state_no_plan": { - states: map[*moduletest.Run]*states.State{ - nil: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -3083,8 +3058,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { }, }, "run_states_no_plan": { - states: map[*moduletest.Run]*states.State{ - &moduletest.Run{Name: "setup_block"}: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + "../setup": states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -3124,7 +3099,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { "@testrun": "run_block", "test_interrupt": map[string]interface{}{ "states": map[string]interface{}{ - "setup_block": []interface{}{ + "../setup": []interface{}{ map[string]interface{}{ "instance": "test_instance.one", }, @@ -3139,8 +3114,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { }, }, "all_states_with_plan": { - states: map[*moduletest.Run]*states.State{ - &moduletest.Run{Name: "setup_block"}: states.BuildState(func(state *states.SyncState) { + states: map[string]*states.State{ + "../setup": states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( 
addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -3169,7 +3144,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { &states.ResourceInstanceObjectSrc{}, addrs.AbsProviderConfig{}) }), - nil: states.BuildState(func(state *states.SyncState) { + configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) { state.SetResourceInstanceCurrent( addrs.AbsResourceInstance{ Module: addrs.RootModuleInstance, @@ -3248,7 +3223,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) { }, }, "states": map[string]interface{}{ - "setup_block": []interface{}{ + "../setup": []interface{}{ map[string]interface{}{ "instance": "test_instance.setup_one", }, diff --git a/internal/command/workdir/dir.go b/internal/command/workdir/dir.go index c8b9fdd05e..4d534acbc9 100644 --- a/internal/command/workdir/dir.go +++ b/internal/command/workdir/dir.go @@ -14,7 +14,7 @@ import ( // "Working directory" is unfortunately a slight misnomer, because non-default // options can potentially stretch the definition such that multiple working // directories end up appearing to share a data directory, or other similar -// anomolies, but we continue to use this terminology both for historical +// anomalies, but we continue to use this terminology both for historical // reasons and because it reflects the common case without any special // overrides. // @@ -135,6 +135,12 @@ func (d *Dir) DataDir() string { return d.dataDir } +// TestDataDir returns the path where the receiver keeps settings +// and artifacts related to Terraform tests. +func (d *Dir) TestDataDir() string { + return filepath.Join(d.dataDir, "test") +} + // ensureDataDir creates the data directory and all of the necessary parent // directories that lead to it, if they don't already exist. // diff --git a/internal/configs/backend.go b/internal/configs/backend.go index 8e2381a3f0..31eca8e0f5 100644 --- a/internal/configs/backend.go +++ b/internal/configs/backend.go @@ -10,8 +10,9 @@ import ( "github.com/zclconf/go-cty/cty" ) -// Backend represents a "backend" block inside a "terraform" block in a module -// or file. +// Backend represents a "backend" block. +// This could be inside a "terraform" block in a module +// or file, or in a "run" block in a .tftest.hcl file. type Backend struct { Type string Config hcl.Body diff --git a/internal/configs/parser_config.go b/internal/configs/parser_config.go index 0b1aeac702..31668a5c63 100644 --- a/internal/configs/parser_config.go +++ b/internal/configs/parser_config.go @@ -43,7 +43,7 @@ func (p *Parser) LoadTestFile(path string) (*TestFile, hcl.Diagnostics) { return nil, diags } - test, testDiags := loadTestFile(body) + test, testDiags := loadTestFile(body, p.allowExperiments) diags = append(diags, testDiags...)
return test, diags } diff --git a/internal/configs/parser_config_dir_test.go b/internal/configs/parser_config_dir_test.go index 88a28a5ac5..4c6e4ec827 100644 --- a/internal/configs/parser_config_dir_test.go +++ b/internal/configs/parser_config_dir_test.go @@ -126,6 +126,8 @@ func TestParserLoadConfigDirSuccess(t *testing.T) { func TestParserLoadConfigDirWithTests(t *testing.T) { directories := []string{ "testdata/valid-modules/with-tests", + "testdata/valid-modules/with-tests-backend", + "testdata/valid-modules/with-tests-same-backend-across-files", "testdata/valid-modules/with-tests-expect-failures", "testdata/valid-modules/with-tests-nested", "testdata/valid-modules/with-tests-very-nested", @@ -142,6 +144,7 @@ func TestParserLoadConfigDirWithTests(t *testing.T) { } parser := NewParser(nil) + parser.AllowLanguageExperiments(true) mod, diags := parser.LoadConfigDir(directory, MatchTestFiles(testDirectory)) if len(diags) > 0 { // We don't want any warnings or errors. t.Errorf("unexpected diagnostics") @@ -300,6 +303,24 @@ func TestParserLoadTestFiles_Invalid(t *testing.T) { "duplicate_file_config.tftest.hcl:3,1-5: Multiple \"test\" blocks; This test file already has a \"test\" block defined at duplicate_file_config.tftest.hcl:1,1-5.", "duplicate_file_config.tftest.hcl:5,1-5: Multiple \"test\" blocks; This test file already has a \"test\" block defined at duplicate_file_config.tftest.hcl:1,1-5.", }, + "duplicate_backend_blocks_in_test": { + "duplicate_backend_blocks_in_test.tftest.hcl:15,3-18: Duplicate backend blocks; The run \"test\" already uses an internal state file that's loaded by a backend in the run \"setup\". Please ensure that a backend block is only in the first apply run block for a given internal state file.", + }, + "duplicate_backend_blocks_in_run": { + "duplicate_backend_blocks_in_run.tftest.hcl:6,3-18: Duplicate backend blocks; A backend block has already been defined inside the run \"setup\" at duplicate_backend_blocks_in_run.tftest.hcl:3,3-18.", + }, + "backend_block_in_plan_run": { + "backend_block_in_plan_run.tftest.hcl:6,3-18: Invalid backend block; A backend block can only be used in the first apply run block for a given internal state file. It cannot be included in a block to run a plan command.", + }, + "backend_block_in_second_apply_run": { + "backend_block_in_second_apply_run.tftest.hcl:10,3-18: Invalid backend block; The run \"test_2\" cannot load in state using a backend block, because internal state has already been created by an apply command in run \"test_1\". Backend blocks can only be present in the first apply command for a given internal state.", + }, + "non_state_storage_backend_in_test": { + "non_state_storage_backend_in_test.tftest.hcl:4,3-19: Invalid backend block; The \"remote\" backend type cannot be used in the backend block in run \"test\" at non_state_storage_backend_in_test.tftest.hcl:4,3-19. Only state storage backends can be used in a test run.", + }, + "skip_cleanup_after_backend": { + "skip_cleanup_after_backend.tftest.hcl:13,3-15: Duplicate \"skip_cleanup\" block; The run \"skip_cleanup\" has a skip_cleanup attribute set, but shares state with an earlier run \"backend\" that has a backend defined. 
The later run takes precedence, but the backend will still be used to manage this state.", + }, } for name, expected := range tcs { @@ -312,6 +333,7 @@ func TestParserLoadTestFiles_Invalid(t *testing.T) { parser := testParser(map[string]string{ fmt.Sprintf("%s.tftest.hcl", name): string(src), }) + parser.AllowLanguageExperiments(true) _, actual := parser.LoadTestFile(fmt.Sprintf("%s.tftest.hcl", name)) assertExactDiagnostics(t, actual, expected) diff --git a/internal/configs/test_file.go b/internal/configs/test_file.go index 9d1c7147b3..36fcba8103 100644 --- a/internal/configs/test_file.go +++ b/internal/configs/test_file.go @@ -72,6 +72,11 @@ type TestFile struct { // test. Providers map[string]*Provider + // BackendConfigs is a map of state keys to structs that contain backend + // configuration. This should be used to set the state for a given state key + // at the start of a test command. + BackendConfigs map[string]RunBlockBackend + // Overrides contains any specific overrides that should be applied for this // test outside any mock providers. Overrides addrs.Map[addrs.Targetable, *Override] @@ -90,6 +95,9 @@ type TestFileConfig struct { // Parallel: Indicates if test runs should be executed in parallel. Parallel bool + // SkipCleanup: Indicates if the test runs should skip the cleanup phase. + SkipCleanup bool + DeclRange hcl.Range } @@ -170,6 +178,12 @@ type TestRun struct { // will be executed in parallel with other test runs. Parallel bool + // Backend is the backend block declared within this run block, if any. It + // controls where the state for the run's state key is loaded from and stored. + Backend *Backend + + // SkipCleanup: Indicates if the test run should skip the cleanup phase. + SkipCleanup bool + SkipCleanupRange *hcl.Range + NameDeclRange hcl.Range VariablesDeclRange hcl.Range DeclRange hcl.Range @@ -338,11 +352,24 @@ type TestRunOptions struct { DeclRange hcl.Range } -func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { +// RunBlockBackend records a backend block and which run block it was parsed +// from. +type RunBlockBackend struct { + Backend *Backend + + // Run is the TestRun containing the backend block for this Backend. + // This is used in diagnostics to help avoid duplicate backends for a given + // internal state file or duplicated use of the same backend for multiple + // internal states. + Run *TestRun +} + +func loadTestFile(body hcl.Body, experimentsAllowed bool) (*TestFile, hcl.Diagnostics) { var diags hcl.Diagnostics tf := &TestFile{ VariableDefinitions: make(map[string]*Variable), Providers: make(map[string]*Provider), + BackendConfigs: make(map[string]RunBlockBackend), Overrides: addrs.MakeMap[addrs.Targetable, *Override](), } @@ -354,7 +381,7 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { diags = append(diags, contentDiags...) var cDiags hcl.Diagnostics - tf.Config, cDiags = decodeFileConfigBlock(configContent) + tf.Config, cDiags = decodeFileConfigBlock(configContent, experimentsAllowed) diags = append(diags, cDiags...) if diags.HasErrors() { return nil, diags @@ -364,11 +391,14 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { diags = append(diags, contentDiags...) runBlockNames := make(map[string]hcl.Range) + skipCleanups := make(map[string]string) for _, block := range content.Blocks { switch block.Type { case "run": + // Record how many runs were decoded before this one; used below to find + // earlier runs that share this run's state key. + nextRunIndex := len(tf.Runs) + + run, runDiags := decodeTestRunBlock(block, tf, experimentsAllowed) diags = append(diags, runDiags...)
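+ // Runs that fail to decode are not added to tf.Runs.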
if !runDiags.HasErrors() { tf.Runs = append(tf.Runs, run) @@ -379,11 +409,71 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { Severity: hcl.DiagError, Summary: "Duplicate \"run\" block names", Detail: fmt.Sprintf("This test file already has a run named %s block defined at %s.", run.Name, rng), - Subject: block.DefRange.Ptr(), + Subject: run.NameDeclRange.Ptr(), }) - continue + } else { + runBlockNames[run.Name] = run.DeclRange + } + + if run.SkipCleanup && run.SkipCleanupRange != nil { + if backend, found := tf.BackendConfigs[run.StateKey]; found { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Duplicate \"skip_cleanup\" block", + Detail: fmt.Sprintf("The run %q has a skip_cleanup attribute set, but shares state with an earlier run %q that has a backend defined. The later run takes precedence, but the backend will still be used to manage this state.", run.Name, backend.Run.Name), + Subject: run.SkipCleanupRange, + }) + } else { + if _, found := skipCleanups[run.StateKey]; found { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Duplicate \"skip_cleanup\" block", + Detail: fmt.Sprintf("The run %q has a skip_cleanup attribute set, but shares state with an earlier run %q that also has skip_cleanup set. The later run takes precedence, and this attribute is ignored for the earlier run.", run.Name, skipCleanups[run.StateKey]), + Subject: run.SkipCleanupRange, + }) + } + skipCleanups[run.StateKey] = run.Name + } + } + + if run.Backend != nil { + if existing, exists := tf.BackendConfigs[run.StateKey]; exists { + // then we definitely have two run blocks with the same + // state key trying to load backends + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Duplicate backend blocks", + Detail: fmt.Sprintf("The run %q already uses an internal state file that's loaded by a backend in the run %q. Please ensure that a backend block is only in the first apply run block for a given internal state file.", run.Name, existing.Run.Name), + Subject: run.Backend.DeclRange.Ptr(), + }) + continue + } else { + // Record the backend block in the test file, under the related state key + tf.BackendConfigs[run.StateKey] = RunBlockBackend{ + Backend: run.Backend, + Run: run, + } + } + + for ix := range nextRunIndex { + previousRun := tf.Runs[ix] + + if previousRun.StateKey != run.StateKey { + continue + } + + if previousRun.Command == ApplyTestCommand { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid backend block", + Detail: fmt.Sprintf("The run %q cannot load in state using a backend block, because internal state has already been created by an apply command in run %q. Backend blocks can only be present in the first apply command for a given internal state.", run.Name, previousRun.Name), + Subject: run.Backend.DeclRange.Ptr(), + }) + break + } + } + } - runBlockNames[run.Name] = run.DeclRange case "variable": variable, variableDiags := decodeVariableBlock(block, false) @@ -527,7 +617,7 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { return tf, diags } -func decodeFileConfigBlock(fileContent *hcl.BodyContent) (*TestFileConfig, hcl.Diagnostics) { +func decodeFileConfigBlock(fileContent *hcl.BodyContent, experimentsAllowed bool) (*TestFileConfig, hcl.Diagnostics) { var diags hcl.Diagnostics // The "test" block is optional, so we just return a nil config if it doesn't exist. 
@@ -561,10 +651,24 @@ func decodeFileConfigBlock(fileContent *hcl.BodyContent) (*TestFileConfig, hcl.D diags = append(diags, rawDiags...) } + if attr, exists := content.Attributes["skip_cleanup"]; exists { + if !experimentsAllowed { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid attribute", + Detail: "The skip_cleanup attribute is only available in experimental builds of Terraform.", + Subject: attr.NameRange.Ptr(), + }) + } + + rawDiags := gohcl.DecodeExpression(attr.Expr, nil, &ret.SkipCleanup) + diags = append(diags, rawDiags...) + } + return ret, diags } -func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnostics) { +func decodeTestRunBlock(block *hcl.Block, file *TestFile, experimentsAllowed bool) (*TestRun, hcl.Diagnostics) { var diags hcl.Diagnostics content, contentDiags := block.Body.Content(testRunBlockSchema) @@ -577,6 +681,7 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos NameDeclRange: block.LabelRanges[0], DeclRange: block.DefRange, Parallel: file.Config != nil && file.Config.Parallel, + SkipCleanup: file.Config != nil && file.Config.SkipCleanup, } if !hclsyntax.ValidIdentifier(r.Name) { @@ -588,6 +693,7 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos }) } + var backendRange *hcl.Range // Stored for validation once all blocks/attrs processed for _, block := range content.Blocks { switch block.Type { case "assert": @@ -697,6 +803,45 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos } r.Overrides.Put(subject, override) } + case "backend": + if !experimentsAllowed { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid block", + Detail: "The backend block is only available within run blocks in experimental builds of Terraform.", + Subject: block.DefRange.Ptr(), + }) + } + + backend, backendDiags := decodeBackendBlock(block) + diags = append(diags, backendDiags...) + + if backend.Type == "remote" { + // Enhanced backends like "remote" cannot be used; tests only support state storage backends + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid backend block", + Detail: fmt.Sprintf("The \"remote\" backend type cannot be used in the backend block in run %q at %s. Only state storage backends can be used in a test run.", r.Name, block.DefRange), + Subject: block.DefRange.Ptr(), + }) + continue + } + + if r.Backend != nil { + // We've already encountered a backend for this run block + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Duplicate backend blocks", + Detail: fmt.Sprintf("A backend block has already been defined inside the run %q at %s.", r.Name, backendRange), + Subject: block.DefRange.Ptr(), + }) + continue + } + + r.Backend = backend + backendRange = &block.DefRange + // Using a backend implies skipping cleanup for that run + r.SkipCleanup = true } } @@ -760,6 +905,42 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos diags = append(diags, rawDiags...) } + if r.Command != ApplyTestCommand && r.Backend != nil { + // Backend blocks must be used in the first _apply_ run block for a given internal state file. + // So, they cannot be present in a plan run block. + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid backend block", + Detail: "A backend block can only be used in the first apply run block for a given internal state file. 
It cannot be included in a block to run a plan command.", + Subject: backendRange.Ptr(), + }) + } + + if attr, exists := content.Attributes["skip_cleanup"]; exists { + if !experimentsAllowed { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid attribute", + Detail: "The skip_cleanup attribute is only available in experimental builds of Terraform.", + Subject: attr.NameRange.Ptr(), + }) + } + + rawDiags := gohcl.DecodeExpression(attr.Expr, nil, &r.SkipCleanup) + diags = append(diags, rawDiags...) + r.SkipCleanupRange = attr.NameRange.Ptr() + } + + if r.SkipCleanupRange != nil && !r.SkipCleanup && r.Backend != nil { + // Stop user attempting to clean up long-lived resources + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Cannot use `skip_cleanup=false` in a run block that contains a backend block", + Detail: "Backend blocks are used in tests to allow reuse of long-lived resources. Due to this, cleanup behavior is implicitly skipped and backend blocks are incompatible with setting `skip_cleanup=false`", + Subject: r.SkipCleanupRange, + }) + } + return &r, diags } @@ -963,6 +1144,7 @@ var testFileSchema = &hcl.BodySchema{ var testFileConfigBlockSchema = &hcl.BodySchema{ Attributes: []hcl.AttributeSchema{ {Name: "parallel"}, + {Name: "skip_cleanup"}, }, } @@ -973,6 +1155,7 @@ var testRunBlockSchema = &hcl.BodySchema{ {Name: "expect_failures"}, {Name: "state_key"}, {Name: "parallel"}, + {Name: "skip_cleanup"}, }, Blocks: []hcl.BlockHeaderSchema{ { @@ -996,6 +1179,10 @@ var testRunBlockSchema = &hcl.BodySchema{ { Type: "override_module", }, + { + Type: "backend", + LabelNames: []string{"name"}, + }, }, } diff --git a/internal/configs/testdata/invalid-test-files/backend_block_in_plan_run.tftest.hcl b/internal/configs/testdata/invalid-test-files/backend_block_in_plan_run.tftest.hcl new file mode 100644 index 0000000000..9ecadb11ce --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/backend_block_in_plan_run.tftest.hcl @@ -0,0 +1,13 @@ +# This backend block is used in a plan run block +# Backend blocks are expected to be used in the first apply run block +# for a given state key +run "setup" { + command = plan + backend "local" { + path = "/tests/other-state" + } +} + +run "test" { + command = apply +} diff --git a/internal/configs/testdata/invalid-test-files/backend_block_in_second_apply_run.tftest.hcl b/internal/configs/testdata/invalid-test-files/backend_block_in_second_apply_run.tftest.hcl new file mode 100644 index 0000000000..1eb0a1c4b9 --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/backend_block_in_second_apply_run.tftest.hcl @@ -0,0 +1,13 @@ +run "test_1" { + command = apply +} + +# This run block uses the same internal state as test_1, +# so the backend block is attempting to load state +# when there is already non-empty internal state. 
+run "test_2" { + command = apply + backend "local" { + path = "/tests/other-state" + } +} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_run.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_run.tftest.hcl new file mode 100644 index 0000000000..ad1208ca35 --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_run.tftest.hcl @@ -0,0 +1,12 @@ +# There cannot be two backend blocks in a single run block +run "setup" { + backend "local" { + path = "/tests/state/terraform.tfstate" + } + backend "local" { + path = "/tests/other-state/terraform.tfstate" + } +} + +run "test" { +} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_test.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_test.tftest.hcl new file mode 100644 index 0000000000..cb304c996c --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/duplicate_backend_blocks_in_test.tftest.hcl @@ -0,0 +1,18 @@ +run "setup" { + command = apply + + backend "local" { + path = "/tests/state/terraform.tfstate" + } +} + +# "test" uses the same internal state file as "setup", which has already loaded state from a backend block +# and is an apply run block. +# The backend block can only occur once in a given set of run blocks that share state. +run "test" { + command = apply + + backend "local" { + path = "/tests/state/terraform.tfstate" + } +} diff --git a/internal/configs/testdata/invalid-test-files/non_state_storage_backend_in_test.tftest.hcl b/internal/configs/testdata/invalid-test-files/non_state_storage_backend_in_test.tftest.hcl new file mode 100644 index 0000000000..f69bcf77d5 --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/non_state_storage_backend_in_test.tftest.hcl @@ -0,0 +1,7 @@ +run "test" { + command = apply + + backend "remote" { + organization = "example_corp" + } +} diff --git a/internal/configs/testdata/invalid-test-files/skip_cleanup_after_backend.tftest.hcl b/internal/configs/testdata/invalid-test-files/skip_cleanup_after_backend.tftest.hcl new file mode 100644 index 0000000000..ddbd87f5dc --- /dev/null +++ b/internal/configs/testdata/invalid-test-files/skip_cleanup_after_backend.tftest.hcl @@ -0,0 +1,14 @@ +run "backend" { + command = apply + + backend "local" { + path = "/tests/state/terraform.tfstate" + } +} + +run "skip_cleanup" { + command = apply + + # Should warn us about the skip_cleanup option being set. 
+ skip_cleanup = true +} diff --git a/internal/configs/testdata/valid-modules/with-tests-backend/main.tf b/internal/configs/testdata/valid-modules/with-tests-backend/main.tf new file mode 100644 index 0000000000..b84d4f3c41 --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-backend/main.tf @@ -0,0 +1,11 @@ + +variable "input" { + type = string +} + + +resource "foo_resource" "a" { + value = var.input +} + +resource "bar_resource" "c" {} diff --git a/internal/configs/testdata/valid-modules/with-tests-backend/test_case_one.tftest.hcl b/internal/configs/testdata/valid-modules/with-tests-backend/test_case_one.tftest.hcl new file mode 100644 index 0000000000..26b114caf3 --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-backend/test_case_one.tftest.hcl @@ -0,0 +1,22 @@ +variables { + input = "default" +} + +# The backend in "load_state" is used to set an internal state without an explicit key +run "load_state" { + backend "local" { + path = "state/terraform.tfstate" + } +} + +# "test_run" uses the same internal state as "load_state" +run "test_run" { + variables { + input = "custom" + } + + assert { + condition = foo_resource.a.value == "custom" + error_message = "invalid value" + } +} diff --git a/internal/configs/testdata/valid-modules/with-tests-backend/test_case_two.tftest.hcl b/internal/configs/testdata/valid-modules/with-tests-backend/test_case_two.tftest.hcl new file mode 100644 index 0000000000..236ad19bdc --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-backend/test_case_two.tftest.hcl @@ -0,0 +1,15 @@ +# The foobar-1 local backend is used with the user-supplied internal state "foobar-1" +run "test_1" { + state_key = "foobar-1" + backend "local" { + path = "state/foobar-1.tfstate" + } +} + +# The foobar-2 local backend is used with the user-supplied internal state "foobar-2" +run "test_2" { + state_key = "foobar-2" + backend "local" { + path = "state/foobar-2.tfstate" + } +} diff --git a/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/main.tf b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/main.tf new file mode 100644 index 0000000000..7bb1380e65 --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/main.tf @@ -0,0 +1,7 @@ +resource "aws_instance" "web" { + ami = "ami-1234" + security_groups = [ + "foo", + "bar", + ] +} diff --git a/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_one.tftest.hcl b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_one.tftest.hcl new file mode 100644 index 0000000000..6eac0fb2bf --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_one.tftest.hcl @@ -0,0 +1,34 @@ +# These run blocks either: +# 1) don't set an explicit state_key value and test the working directory, +# so would have the same internal state file as run blocks in the other test file. +# 2) do set an explicit state_key, which matches run blocks in the other test file. +# +# test_file_two.tftest.hcl has the same content as test_file_one.tftest.hcl, +# with renamed run blocks. 
+run "file_1_load_state" { + backend "local" { + path = "state/terraform.tfstate" + } +} + +run "file_1_test" { + assert { + condition = aws_instance.web.ami == "ami-1234" + error_message = "AMI should be ami-1234" + } +} + +run "file_1_load_state_state_key" { + state_key = "foobar" + backend "local" { + path = "state/terraform.tfstate" + } +} + +run "file_1_test_state_key" { + state_key = "foobar" + assert { + condition = aws_instance.web.ami == "ami-1234" + error_message = "AMI should be ami-1234" + } +} diff --git a/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_two.tftest.hcl b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_two.tftest.hcl new file mode 100644 index 0000000000..ef01e8946c --- /dev/null +++ b/internal/configs/testdata/valid-modules/with-tests-same-backend-across-files/test_file_two.tftest.hcl @@ -0,0 +1,34 @@ +# These run blocks either: +# 1) don't set an explicit state_key value and test the working directory, +# so would have the same internal state file as run blocks in the other test file. +# 2) do set an explicit state_key, which matches run blocks in the other test file. +# +# test_file_two.tftest.hcl as the same content as test_file_one.tftest.hcl, +# with renamed run blocks. +run "file_2_load_state" { + backend "local" { + path = "state/terraform.tfstate" + } +} + +run "file_2_test" { + assert { + condition = aws_instance.web.ami == "ami-1234" + error_message = "AMI should be ami-1234" + } +} + +run "file_2_load_state_state_key" { + state_key = "foobar" + backend "local" { + path = "state/terraform.tfstate" + } +} + +run "file_2_test_state_key" { + state_key = "foobar" + assert { + condition = aws_instance.web.ami == "ami-1234" + error_message = "AMI should be ami-1234" + } +} diff --git a/internal/moduletest/graph/apply.go b/internal/moduletest/graph/apply.go index 9efcf559f6..4c2e308e2c 100644 --- a/internal/moduletest/graph/apply.go +++ b/internal/moduletest/graph/apply.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/lang" "github.com/hashicorp/terraform/internal/moduletest" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" @@ -19,6 +20,9 @@ import ( "github.com/hashicorp/terraform/internal/tfdiags" ) +// testApply defines how to execute a run block representing an apply command +// +// See also: (n *NodeTestRun).testPlan func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) { file, run := n.File(), n.run config := run.ModuleConfig @@ -26,18 +30,18 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue // FilterVariablesToModule only returns warnings, so we don't check the // returned diags for errors. 
- setVariables, testOnlyVariables, setVariableDiags := n.FilterVariablesToModule(variables) + setVariables, testOnlyVariables, setVariableDiags := FilterVariablesToModule(run.ModuleConfig, variables) run.Diagnostics = run.Diagnostics.Append(setVariableDiags) // ignore diags because validate has covered it tfCtx, _ := terraform.NewContext(n.opts.ContextOpts) // execute the terraform plan operation - _, plan, planDiags := n.plan(ctx, tfCtx, setVariables, providers, mocks, waiter) + _, plan, planDiags := plan(ctx, tfCtx, file.Config, run.Config, run.ModuleConfig, setVariables, providers, mocks, waiter) // Any error during the planning prevents our apply from // continuing which is an error. - planDiags = run.ExplainExpectedFailures(planDiags) + planDiags = moduletest.ExplainExpectedFailures(run.Config, planDiags) run.Diagnostics = run.Diagnostics.Append(planDiags) if planDiags.HasErrors() { run.Status = moduletest.Error @@ -59,18 +63,17 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue run.Diagnostics = filteredDiags // execute the apply operation - applyScope, updated, applyDiags := n.apply(tfCtx, plan, moduletest.Running, variables, providers, waiter) + applyScope, updated, applyDiags := apply(tfCtx, run.Config, run.ModuleConfig, plan, moduletest.Running, variables, providers, waiter) // Remove expected diagnostics, and add diagnostics in case anything that should have failed didn't. // We'll also update the run status based on the presence of errors or missing expected failures. - failOrErr := n.checkForMissingExpectedFailures(ctx, run, applyDiags) - if failOrErr { + status, applyDiags := checkForMissingExpectedFailures(ctx, run.Config, applyDiags) + run.Diagnostics = run.Diagnostics.Append(applyDiags) + run.Status = run.Status.Merge(status) + if status == moduletest.Error { // Even though the apply operation failed, the graph may have done // partial updates and the returned state should reflect this. - ctx.SetFileState(key, &TestFileState{ - Run: run, - State: updated, - }) + ctx.SetFileState(key, run, updated, teststates.StateReasonNone) return } @@ -103,8 +106,8 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue // of the run. We also pass in all the // previous contexts so this run block can refer to outputs from // previous run blocks. - newStatus, outputVals, moreDiags := ctx.EvaluateRun(run, applyScope, testOnlyVariables) - run.Status = newStatus + newStatus, outputVals, moreDiags := ctx.EvaluateRun(run.Config, run.ModuleConfig.Module, applyScope, testOnlyVariables) + run.Status = run.Status.Merge(newStatus) run.Diagnostics = run.Diagnostics.Append(moreDiags) run.Outputs = outputVals @@ -112,19 +115,13 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue // actually updated by this change. We want to use the run that // most recently updated the tracked state as the cleanup // configuration. 
- ctx.SetFileState(key, &TestFileState{ - Run: run, - State: updated, - }) + ctx.SetFileState(key, run, updated, teststates.StateReasonNone) } -func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress moduletest.Progress, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, waiter *operationWaiter) (*lang.Scope, *states.State, tfdiags.Diagnostics) { - run := n.run - file := n.File() - log.Printf("[TRACE] TestFileRunner: called apply for %s/%s", file.Name, run.Name) +func apply(tfCtx *terraform.Context, run *configs.TestRun, module *configs.Config, plan *plans.Plan, progress moduletest.Progress, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, waiter *operationWaiter) (*lang.Scope, *states.State, tfdiags.Diagnostics) { + log.Printf("[TRACE] TestFileRunner: called apply for %s", run.Name) var diags tfdiags.Diagnostics - config := run.ModuleConfig // If things get cancelled while we are executing the apply operation below // we want to print out all the objects that we were creating so the user @@ -148,7 +145,7 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress // We only need to pass ephemeral variables to the apply operation, as the // plan has already been evaluated with the full set of variables. ephemeralVariables := make(terraform.InputValues) - for k, v := range config.Root.Module.Variables { + for k, v := range module.Root.Module.Variables { if v.EphemeralSet { if value, ok := variables[k]; ok { ephemeralVariables[k] = value @@ -162,9 +159,9 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress } waiter.update(tfCtx, progress, created) - log.Printf("[DEBUG] TestFileRunner: starting apply for %s/%s", file.Name, run.Name) - updated, newScope, applyDiags := tfCtx.ApplyAndEval(plan, config, applyOpts) - log.Printf("[DEBUG] TestFileRunner: completed apply for %s/%s", file.Name, run.Name) + log.Printf("[DEBUG] TestFileRunner: starting apply for %s", run.Name) + updated, newScope, applyDiags := tfCtx.ApplyAndEval(plan, module, applyOpts) + log.Printf("[DEBUG] TestFileRunner: completed apply for %s", run.Name) diags = diags.Append(applyDiags) return newScope, updated, diags @@ -172,31 +169,31 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress // checkForMissingExpectedFailures checks for missing expected failures in the diagnostics. // It updates the run status based on the presence of errors or missing expected failures. -func (n *NodeTestRun) checkForMissingExpectedFailures(ctx *EvalContext, run *moduletest.Run, diags tfdiags.Diagnostics) (failOrErr bool) { +func checkForMissingExpectedFailures(ctx *EvalContext, config *configs.TestRun, originals tfdiags.Diagnostics) (moduletest.Status, tfdiags.Diagnostics) { // Retrieve and append diagnostics that are either unrelated to expected failures // or report missing expected failures. - unexpectedDiags := run.ValidateExpectedFailures(diags) - - if ctx.Verbose() { - // in verbose mode, we still add all the original diagnostics for - // display even if they are expected. - run.Diagnostics = run.Diagnostics.Append(diags) - } else { - run.Diagnostics = run.Diagnostics.Append(unexpectedDiags) - } + unexpectedDiags := moduletest.ValidateExpectedFailures(config, originals) + status := moduletest.Pass for _, diag := range unexpectedDiags { // // If any diagnostic indicates a missing expected failure, set the run status to fail. 
if ok := moduletest.DiagnosticFromMissingExpectedFailure(diag); ok { - run.Status = run.Status.Merge(moduletest.Fail) + status = status.Merge(moduletest.Fail) continue } // upgrade the run status to error if there still are other errors in the diagnostics if diag.Severity() == tfdiags.Error { - run.Status = run.Status.Merge(moduletest.Error) + status = status.Merge(moduletest.Error) break } } - return run.Status > moduletest.Pass + + if ctx.Verbose() { + // in verbose mode, we still add all the original diagnostics for + // display even if they are expected. + return status, originals + } else { + return status, unexpectedDiags + } } diff --git a/internal/moduletest/graph/eval_context.go b/internal/moduletest/graph/eval_context.go index 3096ac9e9b..0b72ec38f0 100644 --- a/internal/moduletest/graph/eval_context.go +++ b/internal/moduletest/graph/eval_context.go @@ -15,6 +15,7 @@ import ( "github.com/zclconf/go-cty/cty/convert" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/backend/backendrun" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs" @@ -22,19 +23,13 @@ import ( "github.com/hashicorp/terraform/internal/lang" "github.com/hashicorp/terraform/internal/lang/langrefs" "github.com/hashicorp/terraform/internal/moduletest" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" ) -// TestFileState is a helper struct that just maps a run block to the state that -// was produced by the execution of that run block. -type TestFileState struct { - Run *moduletest.Run - State *states.State -} - // EvalContext is a container for context relating to the evaluation of a // particular .tftest.hcl file. // This context is used to track the various values that are available to the @@ -60,11 +55,9 @@ type EvalContext struct { providersLock sync.Mutex // FileStates is a mapping of module keys to it's last applied state - // file. - // - // This is used to clean up the infrastructure created during the test after - // the test has finished. - FileStates map[string]*TestFileState + // file. This is tracked and returned to log state files of ongoing test + // operations. + FileStates map[string]*teststates.TestRunState stateLock sync.Mutex // cancelContext and stopContext can be used to terminate the evaluation of the @@ -75,24 +68,41 @@ type EvalContext struct { cancelFunc context.CancelFunc stopContext context.Context stopFunc context.CancelFunc + config *configs.Config + renderer views.Test + verbose bool - config *configs.Config - renderer views.Test - verbose bool + // mode and repair affect the behaviour of the cleanup process of the graph. + // + // in cleanup mode, the tests themselves are skipped and the cleanup nodes + // are executed immediately. Normally, the skip_cleanup attributes are + // ignored in cleanup mode, with all states being destroyed completely. + // + // in repair mode, the skip_cleanup attributes are still respected. This + // means only states that were left behind due to an error will be + // destroyed. + mode moduletest.CommandMode deferralAllowed bool evalSem terraform.Semaphore + + // repair is true if the test suite is being run in cleanup repair mode. + // It is only set when in test cleanup mode. 
+ repair bool } type EvalContextOpts struct { Verbose bool + Repair bool Render views.Test CancelCtx context.Context StopCtx context.Context UnparsedVariables map[string]backendrun.UnparsedVariableValue Config *configs.Config + FileStates map[string]*teststates.TestRunState Concurrency int DeferralAllowed bool + Mode moduletest.CommandMode } // NewEvalContext constructs a new graph evaluation context for use in @@ -112,15 +122,17 @@ func NewEvalContext(opts EvalContextOpts) *EvalContext { providers: make(map[addrs.RootProviderConfig]providers.Interface), providerStatus: make(map[addrs.RootProviderConfig]moduletest.Status), providersLock: sync.Mutex{}, - FileStates: make(map[string]*TestFileState), + FileStates: opts.FileStates, stateLock: sync.Mutex{}, cancelContext: cancelCtx, cancelFunc: cancel, stopContext: stopCtx, stopFunc: stop, + config: opts.Config, verbose: opts.Verbose, + repair: opts.Repair, renderer: opts.Render, - config: opts.Config, + mode: opts.Mode, deferralAllowed: opts.DeferralAllowed, evalSem: terraform.NewSemaphore(opts.Concurrency), } @@ -253,19 +265,14 @@ func (ec *EvalContext) HclContext(references []*addrs.Reference) (*hcl.EvalConte // already available in resultScope in case there are additional input // variables that were defined only for use in the test suite. Any variable // not defined in extraVariableVals will be evaluated through resultScope instead. -func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, extraVariableVals terraform.InputValues) (moduletest.Status, cty.Value, tfdiags.Diagnostics) { +func (ec *EvalContext) EvaluateRun(run *configs.TestRun, module *configs.Module, resultScope *lang.Scope, extraVariableVals terraform.InputValues) (moduletest.Status, cty.Value, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - if run.ModuleConfig == nil { - // This should never happen, but if it does, we can't evaluate the run - return moduletest.Error, cty.NilVal, tfdiags.Diagnostics{} - } - mod := run.ModuleConfig.Module // We need a derived evaluation scope that also supports referring to // the prior run output values using the "run.NAME" syntax. evalData := &evaluationData{ ctx: ec, - module: mod, + module: module, current: resultScope.Data, extraVars: extraVariableVals, } @@ -279,14 +286,14 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, ExternalFuncs: resultScope.ExternalFuncs, } - log.Printf("[TRACE] EvalContext.Evaluate for %s", run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate for %s", run.Name) // We're going to assume the run has passed, and then if anything fails this // value will be updated. - status := run.Status.Merge(moduletest.Pass) + status := moduletest.Pass // Now validate all the assertions within this run block. 
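+ // A rule whose condition evaluates to false marks the run as failed, + // while a rule that cannot be evaluated at all marks the run as errored.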
- for i, rule := range run.Config.CheckRules { + for i, rule := range run.CheckRules { var ruleDiags tfdiags.Diagnostics refs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, rule.Condition) @@ -304,9 +311,9 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, hclCtx, moreDiags := scope.EvalContext(refs) ruleDiags = ruleDiags.Append(moreDiags) if moreDiags.HasErrors() { - // if we can't evaluate the context properly, we can't evaulate the rule + // if we can't evaluate the context properly, we can't evaluate the rule // we add the diagnostics to the main diags and continue to the next rule - log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, could not evalaute the context, so cannot evaluate it", i, run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, could not evaluate the context, so cannot evaluate it", i, run.Name) status = status.Merge(moduletest.Error) diags = diags.Append(ruleDiags) continue @@ -320,7 +327,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, diags = diags.Append(ruleDiags) if ruleDiags.HasErrors() { - log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, so cannot evaluate it", i, run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, so cannot evaluate it", i, run.Name) status = status.Merge(moduletest.Error) continue } @@ -335,7 +342,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, Expression: rule.Condition, EvalContext: hclCtx, }) - log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has null condition result", i, run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has null condition result", i, run.Name) continue } @@ -349,7 +356,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, Expression: rule.Condition, EvalContext: hclCtx, }) - log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has unknown condition result", i, run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has unknown condition result", i, run.Name) continue } @@ -364,7 +371,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, Expression: rule.Condition, EvalContext: hclCtx, }) - log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has non-boolean condition result", i, run.Addr()) + log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has non-boolean condition result", i, run.Name) continue } @@ -373,7 +380,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, runVal, _ = runVal.Unmark() if runVal.False() { - log.Printf("[TRACE] EvalContext.Evaluate: test assertion failed for %s assertion %d", run.Addr(), i) + log.Printf("[TRACE] EvalContext.Evaluate: test assertion failed for %s assertion %d", run.Name, i) status = status.Merge(moduletest.Fail) diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, @@ -389,16 +396,16 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, }) continue } else { - log.Printf("[TRACE] EvalContext.Evaluate: test assertion succeeded for %s assertion %d", run.Addr(), i) + log.Printf("[TRACE] EvalContext.Evaluate: test assertion succeeded for %s assertion %d", run.Name, i) } } // Our result includes an object representing all of the output values // from the module we've just tested, which will then be available in 
// any subsequent test cases in the same test suite. - outputVals := make(map[string]cty.Value, len(mod.Outputs)) - runRng := tfdiags.SourceRangeFromHCL(run.Config.DeclRange) - for _, oc := range mod.Outputs { + outputVals := make(map[string]cty.Value, len(module.Outputs)) + runRng := tfdiags.SourceRangeFromHCL(run.DeclRange) + for _, oc := range module.Outputs { addr := oc.Addr() v, moreDiags := scope.Data.GetOutput(addr, runRng) diags = diags.Append(moreDiags) @@ -561,19 +568,82 @@ func diagsForEphemeralResources(refs []*addrs.Reference) (diags tfdiags.Diagnost return diags } -func (ec *EvalContext) SetFileState(key string, state *TestFileState) { +func (ec *EvalContext) SetFileState(key string, run *moduletest.Run, state *states.State, reason teststates.StateReason) { + ec.stateLock.Lock() + defer ec.stateLock.Unlock() + + current := ec.getState(key) + + // Whatever happens we're going to record the latest state for this key. + current.State = state + current.Manifest.Reason = reason + + if run.Config.SkipCleanup { + // if skip cleanup is set on the run block, we're going to track this + // run as the one to act on during cleanup, regardless of what else + // might be true. + current.Run = run + + // we'll mark the state as one to restore rather than destroy if (a) + // we're not in cleanup mode (in cleanup mode everything should be + // destroyed) or (b) we are in cleanup mode but with the repair flag, + // which means that only errored states should be destroyed. + current.RestoreState = ec.mode != moduletest.CleanupMode || ec.repair + } else if !current.RestoreState { + // otherwise, only set the new run block if we haven't been told the + // earlier run block is more relevant. + current.Run = run + } +} + +// GetState retrieves the current state for the specified key, exactly as it +// is stored within the current cache. +func (ec *EvalContext) GetState(key string) *teststates.TestRunState { ec.stateLock.Lock() defer ec.stateLock.Unlock() - ec.FileStates[key] = &TestFileState{ - Run: state.Run, - State: state.State, + return ec.getState(key) +} + +func (ec *EvalContext) getState(key string) *teststates.TestRunState { + current := ec.FileStates[key] + if current == nil { + // this shouldn't happen, all the states must be initialised prior to + // the evaluation context being created. + // + // panic here, where the origin of the bug is, instead of returning a + // nil state that would panic later. + panic("null state found in test execution") } + return current } -func (ec *EvalContext) GetFileState(key string) *TestFileState { +// LoadState returns the correct state for the specified run block. This differs +// from GetState in that it will load the state from any remote backend +// specified within the run block rather than simply retrieve the cached state +// (which might be empty for a run block with a backend if it hasn't executed +// yet). +func (ec *EvalContext) LoadState(run *configs.TestRun) (*states.State, error) { ec.stateLock.Lock() defer ec.stateLock.Unlock() - return ec.FileStates[key] + + current := ec.getState(run.StateKey) + + if run.Backend != nil { + // Then we'll load the state from the backend instead of just using + // whatever was in the cache. 
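+ // The states map, including any backend attached to this key, was + // initialised before the evaluation context was created, so here we just + // refresh the backend's state manager and read the persisted state.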
+ + stmgr, err := current.Backend.StateMgr(backend.DefaultStateName) + if err != nil { + return nil, err + } + + if err := stmgr.RefreshState(); err != nil { + return nil, err + } + + return stmgr.State(), nil + } + + return current.State, nil } // ReferencesCompleted returns true if all the listed references were actually diff --git a/internal/moduletest/graph/eval_context_test.go b/internal/moduletest/graph/eval_context_test.go index 2580b97125..8929ee3cda 100644 --- a/internal/moduletest/graph/eval_context_test.go +++ b/internal/moduletest/graph/eval_context_test.go @@ -746,7 +746,7 @@ func TestEvalContext_Evaluate(t *testing.T) { run.Outputs = test.priorOutputs[run.Name] testCtx.runBlocks[run.Name] = run } - gotStatus, gotOutputs, diags := testCtx.EvaluateRun(run, planScope, test.testOnlyVars) + gotStatus, gotOutputs, diags := testCtx.EvaluateRun(run.Config, run.ModuleConfig.Module, planScope, test.testOnlyVars) if got, want := gotStatus, test.expectedStatus; got != want { t.Errorf("wrong status %q; want %q", got, want) diff --git a/internal/moduletest/graph/node_state_cleanup.go b/internal/moduletest/graph/node_state_cleanup.go index 68201e8a43..973f11cf59 100644 --- a/internal/moduletest/graph/node_state_cleanup.go +++ b/internal/moduletest/graph/node_state_cleanup.go @@ -9,8 +9,10 @@ import ( "time" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/moduletest" "github.com/hashicorp/terraform/internal/moduletest/mocking" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/terraform" @@ -21,6 +23,9 @@ var ( _ GraphNodeExecutable = (*NodeStateCleanup)(nil) ) +// NodeStateCleanup is responsible for cleaning up the state of resources +// defined in the state file. It uses the provided stateKey to identify the +// specific state to clean up and opts for additional configuration options. type NodeStateCleanup struct { stateKey string opts *graphOptions @@ -31,12 +36,9 @@ func (n *NodeStateCleanup) Name() string { } // Execute destroys the resources created in the state file. -// This function should never return non-fatal error diagnostics, as that would -// prevent further cleanup from happening. Instead, the diagnostics -// will be rendered directly. func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) { file := n.opts.File - state := evalCtx.GetFileState(n.stateKey) + state := evalCtx.GetState(n.stateKey) log.Printf("[TRACE] TestStateManager: cleaning up state for %s", file.Name) if evalCtx.Cancelled() { @@ -45,22 +47,13 @@ func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) { return } - empty := true - if !state.State.Empty() { - for _, module := range state.State.Modules { - for _, resource := range module.Resources { - if resource.Addr.Resource.Mode == addrs.ManagedResourceMode { - empty = false - break - } - } - } - } - - if empty { + if emptyState(state.State) { // The state can be empty for a run block that just executed a plan // command, or a run block that only read data sources. We'll just - // skip empty run blocks. + // skip empty run blocks. We do reset the state reason to None for this + // just to make sure we are indicating externally this state file + // doesn't need to be saved. 
+ evalCtx.SetFileState(n.stateKey, state.Run, state.State, teststates.StateReasonNone) return } @@ -76,44 +69,98 @@ func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) { diags := tfdiags.Diagnostics{tfdiags.Sourceless(tfdiags.Error, "Inconsistent state", fmt.Sprintf("Found inconsistent state while cleaning up %s. This is a bug in Terraform - please report it", file.Name))} file.UpdateStatus(moduletest.Error) evalCtx.Renderer().DestroySummary(diags, nil, file, state.State) - - // intentionally return nil to allow further cleanup return } - runNode := &NodeTestRun{run: state.Run, opts: n.opts} updated := state.State startTime := time.Now().UTC() - waiter := NewOperationWaiter(nil, evalCtx, runNode, moduletest.Running, startTime.UnixMilli()) + waiter := NewOperationWaiter(nil, evalCtx, file, state.Run, moduletest.Running, startTime.UnixMilli()) var destroyDiags tfdiags.Diagnostics + evalCtx.Renderer().Run(state.Run, file, moduletest.TearDown, 0) cancelled := waiter.Run(func() { - updated, destroyDiags = n.destroy(evalCtx, runNode, waiter) + if state.RestoreState { + updated, destroyDiags = n.restore(evalCtx, file.Config, state.Run.Config, state.Run.ModuleConfig, updated, waiter) + } else { + updated, destroyDiags = n.destroy(evalCtx, file.Config, state.Run.Config, state.Run.ModuleConfig, updated, waiter) + updated.RootOutputValues = state.State.RootOutputValues // we're going to preserve the output values in case we need to tidy up + } }) if cancelled { destroyDiags = destroyDiags.Append(tfdiags.Sourceless(tfdiags.Error, "Test interrupted", "The test operation could not be completed due to an interrupt signal. Please read the remaining diagnostics carefully for any sign of failed state cleanup or dangling resources.")) } - if !updated.Empty() { - // Then we failed to adequately clean up the state, so mark success - // as false. + switch { + case destroyDiags.HasErrors(): + file.UpdateStatus(moduletest.Error) + evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonError) + case state.Run.Config.Backend != nil: + evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonNone) + case state.RestoreState: + evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonSkip) + case !emptyState(updated): file.UpdateStatus(moduletest.Error) + evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonError) + default: + evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonNone) } evalCtx.Renderer().DestroySummary(destroyDiags, state.Run, file, updated) } -func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) { - file := n.opts.File - fileState := ctx.GetFileState(n.stateKey) - state := fileState.State - run := runNode.run - log.Printf("[TRACE] TestFileRunner: called destroy for %s/%s", file.Name, run.Name) +func (n *NodeStateCleanup) restore(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config, state *states.State, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) { + log.Printf("[TRACE] TestFileRunner: called restore for %s", run.Name) - if state.Empty() { - // Nothing to do! 
- return state, nil + variables, diags := GetVariables(ctx, run, module, false) + if diags.HasErrors() { + return state, diags } - variables, diags := runNode.GetVariables(ctx, false) + // we ignore the diagnostics from here, because we will have reported them + // during the initial execution of the run block and we would not have + // executed the run block if there were any errors. + providers, mocks, _ := getProviders(ctx, file, run, module) + + // During the destroy operation, we don't add warnings from this operation. + // Anything that would have been reported here was already reported during + // the original plan, and a successful destroy operation is the only thing + // we care about. + setVariables, _, _ := FilterVariablesToModule(module, variables) + + planOpts := &terraform.PlanOpts{ + Mode: plans.NormalMode, + SetVariables: setVariables, + Overrides: mocking.PackageOverrides(run, file, mocks), + ExternalProviders: providers, + SkipRefresh: true, + OverridePreventDestroy: true, + DeferralAllowed: ctx.deferralAllowed, + } + + tfCtx, _ := terraform.NewContext(n.opts.ContextOpts) + + waiter.update(tfCtx, moduletest.TearDown, nil) + plan, planDiags := tfCtx.Plan(module, state, planOpts) + diags = diags.Append(planDiags) + if diags.HasErrors() || plan.Errored { + return state, diags + } + + if !plan.Complete { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "Incomplete restore plan", + fmt.Sprintf("The restore plan for %s was reported as incomplete."+ + " This means some of the cleanup operations were deferred due to unknown values, please check the rest of the output to see which resources could not be reverted.", run.Name))) + } + + _, updated, applyDiags := apply(tfCtx, run, module, plan, moduletest.TearDown, variables, providers, waiter) + diags = diags.Append(applyDiags) + return updated, diags +} + +func (n *NodeStateCleanup) destroy(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config, state *states.State, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) { + log.Printf("[TRACE] TestFileRunner: called destroy for %s", run.Name) + + variables, diags := GetVariables(ctx, run, module, false) if diags.HasErrors() { return state, diags } @@ -121,18 +168,18 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite // we ignore the diagnostics from here, because we will have reported them // during the initial execution of the run block and we would not have // executed the run block if there were any errors. - providers, mocks, _ := runNode.getProviders(ctx) + providers, mocks, _ := getProviders(ctx, file, run, module) // During the destroy operation, we don't add warnings from this operation. // Anything that would have been reported here was already reported during // the original plan, and a successful destroy operation is the only thing // we care about. 
- setVariables, _, _ := runNode.FilterVariablesToModule(variables) + setVariables, _, _ := FilterVariablesToModule(module, variables) planOpts := &terraform.PlanOpts{ Mode: plans.DestroyMode, SetVariables: setVariables, - Overrides: mocking.PackageOverrides(run.Config, file.Config, mocks), + Overrides: mocking.PackageOverrides(run, file, mocks), ExternalProviders: providers, SkipRefresh: true, OverridePreventDestroy: true, @@ -140,10 +187,9 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite } tfCtx, _ := terraform.NewContext(n.opts.ContextOpts) - ctx.Renderer().Run(run, file, moduletest.TearDown, 0) waiter.update(tfCtx, moduletest.TearDown, nil) - plan, planDiags := tfCtx.Plan(run.ModuleConfig, state, planOpts) + plan, planDiags := tfCtx.Plan(module, state, planOpts) diags = diags.Append(planDiags) if diags.HasErrors() || plan.Errored { return state, diags @@ -153,11 +199,25 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite diags = diags.Append(tfdiags.Sourceless( tfdiags.Warning, "Incomplete destroy plan", - fmt.Sprintf("The destroy plan for %s/%s was reported as incomplete."+ - " This means some of the cleanup operations were deferred due to unknown values, please check the rest of the output to see which resources could not be destroyed.", file.Name, run.Name))) + fmt.Sprintf("The destroy plan for %s was reported as incomplete."+ + " This means some of the cleanup operations were deferred due to unknown values, please check the rest of the output to see which resources could not be destroyed.", run.Name))) } - _, updated, applyDiags := runNode.apply(tfCtx, plan, moduletest.TearDown, variables, providers, waiter) + _, updated, applyDiags := apply(tfCtx, run, module, plan, moduletest.TearDown, variables, providers, waiter) diags = diags.Append(applyDiags) return updated, diags } + +func emptyState(state *states.State) bool { + if state.Empty() { + return true + } + for _, module := range state.Modules { + for _, resource := range module.Resources { + if resource.Addr.Resource.Mode == addrs.ManagedResourceMode { + return false + } + } + } + return true +} diff --git a/internal/moduletest/graph/node_test_run.go b/internal/moduletest/graph/node_test_run.go index 7929b3e484..3845b65d84 100644 --- a/internal/moduletest/graph/node_test_run.go +++ b/internal/moduletest/graph/node_test_run.go @@ -48,7 +48,7 @@ func (n *NodeTestRun) Referenceable() addrs.Referenceable { } func (n *NodeTestRun) References() []*addrs.Reference { - references, _ := n.run.GetReferences() + references, _ := moduletest.GetRunReferences(n.run.Config) for _, run := range n.priorRuns { // we'll also draw an implicit reference to all prior runs to make sure @@ -59,6 +59,27 @@ func (n *NodeTestRun) References() []*addrs.Reference { }) } + for name, variable := range n.run.ModuleConfig.Module.Variables { + + // because we also draw implicit references back to any variables + // defined in the test file with the same name as actual variables, + // we'll count these as references as well. + + if _, ok := n.run.Config.Variables[name]; ok { + + // BUT, if the variable is defined within the list of variables + // within the run block then we don't want to draw an implicit + // reference as the data comes from that expression. 
+ + continue + } + + references = append(references, &addrs.Reference{ + Subject: addrs.InputVariable{Name: name}, + SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange), + }) + } + return references } @@ -106,7 +127,7 @@ func (n *NodeTestRun) Execute(evalCtx *EvalContext) { // Before the terraform operation is started, the operation updates the // waiter with the cleanup context on cancellation, as well as the // progress status. - waiter := NewOperationWaiter(nil, evalCtx, n, moduletest.Running, startTime.UnixMilli()) + waiter := NewOperationWaiter(nil, evalCtx, file, run, moduletest.Running, startTime.UnixMilli()) cancelled := waiter.Run(func() { defer logging.PanicHandler() n.execute(evalCtx, waiter) @@ -128,7 +149,7 @@ func (n *NodeTestRun) execute(ctx *EvalContext, waiter *operationWaiter) { file, run := n.File(), n.run ctx.Renderer().Run(run, file, moduletest.Starting, 0) - providers, mocks, providerDiags := n.getProviders(ctx) + providers, mocks, providerDiags := getProviders(ctx, file.Config, run.Config, run.ModuleConfig) if !ctx.ProvidersCompleted(providers) { run.Status = moduletest.Skip return @@ -145,7 +166,7 @@ func (n *NodeTestRun) execute(ctx *EvalContext, waiter *operationWaiter) { return } - variables, variableDiags := n.GetVariables(ctx, true) + variables, variableDiags := GetVariables(ctx, run.Config, run.ModuleConfig, true) run.Diagnostics = run.Diagnostics.Append(variableDiags) if variableDiags.HasErrors() { run.Status = moduletest.Error @@ -181,19 +202,17 @@ func (n *NodeTestRun) testValidate(providers map[addrs.RootProviderConfig]provid } } -func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConfig]providers.Interface, map[addrs.RootProviderConfig]*configs.MockData, tfdiags.Diagnostics) { - run := n.run - +func getProviders(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config) (map[addrs.RootProviderConfig]providers.Interface, map[addrs.RootProviderConfig]*configs.MockData, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - if len(run.Config.Providers) > 0 { + if len(run.Providers) > 0 { // Then we'll only provide the specific providers asked for by the run // block. 
- providers := make(map[addrs.RootProviderConfig]providers.Interface, len(run.Config.Providers)) + providers := make(map[addrs.RootProviderConfig]providers.Interface, len(run.Providers)) mocks := make(map[addrs.RootProviderConfig]*configs.MockData) - for _, ref := range run.Config.Providers { + for _, ref := range run.Providers { testAddr := addrs.RootProviderConfig{ Provider: ctx.ProviderForConfigAddr(ref.InParent.Addr()), @@ -201,7 +220,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf } moduleAddr := addrs.RootProviderConfig{ - Provider: run.ModuleConfig.ProviderForConfigAddr(ref.InChild.Addr()), + Provider: module.ProviderForConfigAddr(ref.InChild.Addr()), Alias: ref.InChild.Alias, } @@ -218,7 +237,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf if provider, ok := ctx.GetProvider(testAddr); ok { providers[moduleAddr] = provider - config := n.File().Config.Providers[ref.InParent.String()] + config := file.Providers[ref.InParent.String()] if config.Mock { mocks[moduleAddr] = config.MockData } @@ -241,7 +260,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf providers := make(map[addrs.RootProviderConfig]providers.Interface) mocks := make(map[addrs.RootProviderConfig]*configs.MockData) - for addr := range requiredProviders(run.ModuleConfig) { + for addr := range requiredProviders(module) { if provider, ok := ctx.GetProvider(addr); ok { providers[addr] = provider @@ -249,7 +268,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf if len(addr.Alias) > 0 { local = fmt.Sprintf("%s.%s", local, addr.Alias) } - config := n.File().Config.Providers[local] + config := file.Providers[local] if config.Mock { mocks[addr] = config.MockData } diff --git a/internal/moduletest/graph/node_test_run_cleanup.go b/internal/moduletest/graph/node_test_run_cleanup.go new file mode 100644 index 0000000000..c6c7c29579 --- /dev/null +++ b/internal/moduletest/graph/node_test_run_cleanup.go @@ -0,0 +1,105 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: BUSL-1.1 + +package graph + +import ( + "fmt" + "log" + + "github.com/hashicorp/hcl/v2" + "github.com/zclconf/go-cty/cty" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/lang/marks" + "github.com/hashicorp/terraform/internal/moduletest" + teststates "github.com/hashicorp/terraform/internal/moduletest/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +var ( + _ GraphNodeExecutable = (*NodeTestRunCleanup)(nil) + _ GraphNodeReferenceable = (*NodeTestRunCleanup)(nil) + _ GraphNodeReferencer = (*NodeTestRunCleanup)(nil) +) + +type NodeTestRunCleanup struct { + run *moduletest.Run + priorRuns map[string]*moduletest.Run + opts *graphOptions +} + +func (n *NodeTestRunCleanup) Name() string { + return fmt.Sprintf("%s.%s (cleanup)", n.opts.File.Name, n.run.Addr().String()) +} + +func (n *NodeTestRunCleanup) References() []*addrs.Reference { + references, _ := moduletest.GetRunReferences(n.run.Config) + + for _, run := range n.priorRuns { + // we'll also draw an implicit reference to all prior runs to make sure + // they execute first + references = append(references, &addrs.Reference{ + Subject: run.Addr(), + SourceRange: tfdiags.SourceRangeFromHCL(n.run.Config.DeclRange), + }) + } + + for name, variable := range n.run.ModuleConfig.Module.Variables { + + // because we also draw implicit references back to any variables + // defined in the test file with the same name as actual variables, + // we'll count these as references as well. + + if _, ok := n.run.Config.Variables[name]; ok { + + // BUT, if the variable is defined within the list of variables + // within the run block then we don't want to draw an implicit + // reference as the data comes from that expression. 
+ + continue + } + + references = append(references, &addrs.Reference{ + Subject: addrs.InputVariable{Name: name}, + SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange), + }) + } + + return references +} + +func (n *NodeTestRunCleanup) Referenceable() addrs.Referenceable { + return n.run.Addr() +} + +func (n *NodeTestRunCleanup) Execute(ctx *EvalContext) { + log.Printf("[TRACE] TestFileRunner: executing run block %s/%s", n.opts.File.Name, n.run.Name) + + n.run.Status = moduletest.Pass + + state, err := ctx.LoadState(n.run.Config) + if err != nil { + n.run.Status = moduletest.Fail + n.run.Diagnostics = n.run.Diagnostics.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to load state", + Detail: fmt.Sprintf("Could not retrieve state for run %s: %s.", n.run.Name, err), + Subject: n.run.Config.Backend.DeclRange.Ptr(), + }) + return + } + + outputs := make(map[string]cty.Value) + for name, output := range state.RootOutputValues { + if output.Sensitive { + outputs[name] = output.Value.Mark(marks.Sensitive) + continue + } + outputs[name] = output.Value + } + n.run.Outputs = cty.ObjectVal(outputs) + + ctx.SetFileState(n.run.Config.StateKey, n.run, state, teststates.StateReasonNone) + ctx.AddRunBlock(n.run) +} diff --git a/internal/moduletest/graph/plan.go b/internal/moduletest/graph/plan.go index 6ef0125277..29bb441eb7 100644 --- a/internal/moduletest/graph/plan.go +++ b/internal/moduletest/graph/plan.go @@ -8,6 +8,8 @@ import ( "log" "path/filepath" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/lang" @@ -19,23 +21,26 @@ import ( "github.com/hashicorp/terraform/internal/tfdiags" ) +// testPlan defines how to execute a run block representing a plan command +// +// See also: (n *NodeTestRun).testApply func (n *NodeTestRun) testPlan(ctx *EvalContext, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) { file, run := n.File(), n.run config := run.ModuleConfig // FilterVariablesToModule only returns warnings, so we don't check the // returned diags for errors. - setVariables, testOnlyVariables, setVariableDiags := n.FilterVariablesToModule(variables) + setVariables, testOnlyVariables, setVariableDiags := FilterVariablesToModule(run.ModuleConfig, variables) run.Diagnostics = run.Diagnostics.Append(setVariableDiags) // ignore diags because validate has covered it tfCtx, _ := terraform.NewContext(n.opts.ContextOpts) // execute the terraform plan operation - planScope, plan, originalDiags := n.plan(ctx, tfCtx, setVariables, providers, mocks, waiter) + planScope, plan, originalDiags := plan(ctx, tfCtx, file.Config, run.Config, run.ModuleConfig, setVariables, providers, mocks, waiter) // We exclude the diagnostics that are expected to fail from the plan // diagnostics, and if an expected failure is not found, we add a new error diagnostic. - planDiags := run.ValidateExpectedFailures(originalDiags) + planDiags := moduletest.ValidateExpectedFailures(run.Config, originalDiags) if ctx.Verbose() { // in verbose mode, we still add all the original diagnostics for @@ -79,32 +84,43 @@ func (n *NodeTestRun) testPlan(ctx *EvalContext, variables terraform.InputValues // of the run. We also pass in all the // previous contexts so this run block can refer to outputs from // previous run blocks. 
- newStatus, outputVals, moreDiags := ctx.EvaluateRun(run, planScope, testOnlyVariables) - run.Status = newStatus + status, outputVals, moreDiags := ctx.EvaluateRun(run.Config, run.ModuleConfig.Module, planScope, testOnlyVariables) + run.Status = run.Status.Merge(status) run.Diagnostics = run.Diagnostics.Append(moreDiags) run.Outputs = outputVals } -func (n *NodeTestRun) plan(ctx *EvalContext, tfCtx *terraform.Context, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) (*lang.Scope, *plans.Plan, tfdiags.Diagnostics) { - file, run := n.File(), n.run - config := run.ModuleConfig - log.Printf("[TRACE] TestFileRunner: called plan for %s/%s", file.Name, run.Name) +func plan(ctx *EvalContext, tfCtx *terraform.Context, file *configs.TestFile, run *configs.TestRun, module *configs.Config, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) (*lang.Scope, *plans.Plan, tfdiags.Diagnostics) { + log.Printf("[TRACE] TestFileRunner: called plan for %s", run.Name) var diags tfdiags.Diagnostics - targets, targetDiags := run.GetTargets() + targets, targetDiags := moduletest.GetRunTargets(run) diags = diags.Append(targetDiags) - replaces, replaceDiags := run.GetReplaces() + replaces, replaceDiags := moduletest.GetRunReplaces(run) diags = diags.Append(replaceDiags) + references, referenceDiags := moduletest.GetRunReferences(run) + diags = diags.Append(referenceDiags) + + state, err := ctx.LoadState(run) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to load state", + Detail: fmt.Sprintf("Could not retrieve state for run %s: %s.", run.Name, err), + Subject: run.Backend.DeclRange.Ptr(), + }) + } + if diags.HasErrors() { return nil, nil, diags } planOpts := &terraform.PlanOpts{ Mode: func() plans.Mode { - switch run.Config.Options.Mode { + switch run.Options.Mode { case configs.RefreshOnlyTestMode: return plans.RefreshOnlyMode default: @@ -113,20 +129,19 @@ func (n *NodeTestRun) plan(ctx *EvalContext, tfCtx *terraform.Context, variables }(), Targets: targets, ForceReplace: replaces, - SkipRefresh: !run.Config.Options.Refresh, + SkipRefresh: !run.Options.Refresh, SetVariables: variables, - ExternalReferences: n.References(), + ExternalReferences: references, ExternalProviders: providers, - Overrides: mocking.PackageOverrides(run.Config, file.Config, mocks), + Overrides: mocking.PackageOverrides(run, file, mocks), DeferralAllowed: ctx.deferralAllowed, } waiter.update(tfCtx, moduletest.Running, nil) - log.Printf("[DEBUG] TestFileRunner: starting plan for %s/%s", file.Name, run.Name) - state := ctx.GetFileState(run.Config.StateKey).State - plan, planScope, planDiags := tfCtx.PlanAndEval(config, state, planOpts) - log.Printf("[DEBUG] TestFileRunner: completed plan for %s/%s", file.Name, run.Name) + log.Printf("[DEBUG] TestFileRunner: starting plan for %s", run.Name) + plan, scope, planDiags := tfCtx.PlanAndEval(module, state, planOpts) + log.Printf("[DEBUG] TestFileRunner: completed plan for %s", run.Name) diags = diags.Append(planDiags) - return planScope, plan, diags + return scope, plan, diags } diff --git a/internal/moduletest/graph/test_graph_builder.go b/internal/moduletest/graph/test_graph_builder.go index 425d59f008..9923f9aa8b 100644 --- a/internal/moduletest/graph/test_graph_builder.go +++ 
b/internal/moduletest/graph/test_graph_builder.go @@ -26,6 +26,7 @@ type TestGraphBuilder struct { Config *configs.Config File *moduletest.File ContextOpts *terraform.ContextOpts + CommandMode moduletest.CommandMode } type graphOptions struct { @@ -49,11 +50,11 @@ func (b *TestGraphBuilder) Steps() []terraform.GraphTransformer { ContextOpts: b.ContextOpts, } steps := []terraform.GraphTransformer{ - &TestRunTransformer{opts}, + &TestRunTransformer{opts: opts, mode: b.CommandMode}, &TestVariablesTransformer{File: b.File}, terraform.DynamicTransformer(validateRunConfigs), terraform.DynamicTransformer(func(g *terraform.Graph) error { - cleanup := &TeardownSubgraph{opts: opts, parent: g} + cleanup := &TeardownSubgraph{opts: opts, parent: g, mode: b.CommandMode} g.Add(cleanup) // ensure that the teardown node runs after all the run nodes @@ -70,7 +71,6 @@ func (b *TestGraphBuilder) Steps() []terraform.GraphTransformer { File: b.File, Providers: opts.ContextOpts.Providers, }, - &EvalContextTransformer{File: b.File}, &ReferenceTransformer{}, &CloseTestGraphTransformer{}, &terraform.TransitiveReductionTransformer{}, @@ -90,16 +90,6 @@ func validateRunConfigs(g *terraform.Graph) error { return nil } -// dynamicNode is a helper node which can be added to the graph to execute -// a dynamic function at some desired point in the graph. -type dynamicNode struct { - eval func(*EvalContext) -} - -func (n *dynamicNode) Execute(evalCtx *EvalContext) { - n.eval(evalCtx) -} - func Walk(g *terraform.Graph, ctx *EvalContext) tfdiags.Diagnostics { walkFn := func(v dag.Vertex) tfdiags.Diagnostics { if ctx.Cancelled() { diff --git a/internal/moduletest/graph/transform_context.go b/internal/moduletest/graph/transform_context.go deleted file mode 100644 index 8e3dace071..0000000000 --- a/internal/moduletest/graph/transform_context.go +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package graph - -import ( - "github.com/hashicorp/terraform/internal/dag" - "github.com/hashicorp/terraform/internal/moduletest" - "github.com/hashicorp/terraform/internal/states" - "github.com/hashicorp/terraform/internal/terraform" -) - -var _ terraform.GraphTransformer = (*EvalContextTransformer)(nil) - -// EvalContextTransformer should be the first node to execute in the graph, and -// it initialises the run blocks and state files in the evaluation context. -type EvalContextTransformer struct { - File *moduletest.File -} - -func (e *EvalContextTransformer) Transform(graph *terraform.Graph) error { - node := &dynamicNode{ - eval: func(ctx *EvalContext) { - for _, run := range e.File.Runs { - // initialise all the state keys before the graph starts - // properly - key := run.Config.StateKey - if state := ctx.GetFileState(key); state == nil { - ctx.SetFileState(key, &TestFileState{ - Run: nil, - State: states.NewState(), - }) - } - } - }, - } - - graph.Add(node) - for v := range graph.VerticesSeq() { - if v == node { - continue - } - graph.Connect(dag.BasicEdge(v, node)) - } - - return nil -} diff --git a/internal/moduletest/graph/transform_providers.go b/internal/moduletest/graph/transform_providers.go index 7f0faea443..34a1ddd028 100644 --- a/internal/moduletest/graph/transform_providers.go +++ b/internal/moduletest/graph/transform_providers.go @@ -85,6 +85,9 @@ func (t *TestProvidersTransformer) Transform(g *terraform.Graph) error { configure: configure, close: close, } + + // make sure the provider is only closed after the provider starts. 
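+	// (dag.BasicEdge(close, configure) makes the close node depend on the
+	// configure node, so the graph walk is guaranteed to configure a
+	// provider before its close node can run.)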
+ g.Connect(dag.BasicEdge(close, configure)) } for vertex := range g.VerticesSeq() { diff --git a/internal/moduletest/graph/transform_state_cleanup.go b/internal/moduletest/graph/transform_state_cleanup.go index d7bca4e22b..ec0f909f98 100644 --- a/internal/moduletest/graph/transform_state_cleanup.go +++ b/internal/moduletest/graph/transform_state_cleanup.go @@ -26,18 +26,30 @@ type Subgrapher interface { type TeardownSubgraph struct { opts *graphOptions parent *terraform.Graph + mode moduletest.CommandMode } func (b *TeardownSubgraph) Execute(ctx *EvalContext) { ctx.Renderer().File(b.opts.File, moduletest.TearDown) - // work out the transitive state dependencies for each run node in the parent graph runRefMap := make(map[addrs.Run][]string) - for runNode := range dag.SelectSeq[*NodeTestRun](b.parent.VerticesSeq()) { - refs := b.parent.Ancestors(runNode) - for _, ref := range refs { - if ref, ok := ref.(*NodeTestRun); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey { - runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey) + + if b.mode == moduletest.CleanupMode { + for runNode := range dag.SelectSeq[*NodeTestRunCleanup](b.parent.VerticesSeq()) { + refs := b.parent.Ancestors(runNode) + for _, ref := range refs { + if ref, ok := ref.(*NodeTestRunCleanup); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey { + runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey) + } + } + } + } else { + for runNode := range dag.SelectSeq[*NodeTestRun](b.parent.VerticesSeq()) { + refs := b.parent.Ancestors(runNode) + for _, ref := range refs { + if ref, ok := ref.(*NodeTestRun); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey { + runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey) + } } } } diff --git a/internal/moduletest/graph/transform_test_run.go b/internal/moduletest/graph/transform_test_run.go index dadb661683..299155adf4 100644 --- a/internal/moduletest/graph/transform_test_run.go +++ b/internal/moduletest/graph/transform_test_run.go @@ -12,27 +12,49 @@ import ( // and the variables defined in each run block, to the graph. type TestRunTransformer struct { opts *graphOptions + mode moduletest.CommandMode } func (t *TestRunTransformer) Transform(g *terraform.Graph) error { - // Create and add nodes for each run - for _, run := range t.opts.File.Runs { - priorRuns := make(map[string]*moduletest.Run) - for ix := run.Index - 1; ix >= 0; ix-- { - // If either node isn't parallel, we should draw an edge between - // them. Also, if they share the same state key we should also draw - // an edge between them regardless of the parallelisation. - if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey { - priorRuns[target.Name] = target + + switch t.mode { + case moduletest.CleanupMode: + for _, run := range t.opts.File.Runs { + priorRuns := make(map[string]*moduletest.Run) + for ix := run.Index - 1; ix >= 0; ix-- { + // If either node isn't parallel, we should draw an edge between + // them. Also, if they share the same state key we should also draw + // an edge between them regardless of the parallelisation. 
+ if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey { + priorRuns[target.Name] = target + } } + + g.Add(&NodeTestRunCleanup{ + run: run, + opts: t.opts, + priorRuns: priorRuns, + }) } - g.Add(&NodeTestRun{ - run: run, - opts: t.opts, - priorRuns: priorRuns, - }) - } + default: + for _, run := range t.opts.File.Runs { + priorRuns := make(map[string]*moduletest.Run) + for ix := run.Index - 1; ix >= 0; ix-- { + // If either node isn't parallel, we should draw an edge between + // them. Also, if they share the same state key we should also draw + // an edge between them regardless of the parallelisation. + if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey { + priorRuns[target.Name] = target + } + } + g.Add(&NodeTestRun{ + run: run, + opts: t.opts, + priorRuns: priorRuns, + }) + } + } return nil } diff --git a/internal/moduletest/graph/variables.go b/internal/moduletest/graph/variables.go index 6e3b8282bd..8dafd5751c 100644 --- a/internal/moduletest/graph/variables.go +++ b/internal/moduletest/graph/variables.go @@ -10,7 +10,9 @@ import ( "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/lang/langrefs" + "github.com/hashicorp/terraform/internal/moduletest" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -25,9 +27,8 @@ import ( // more variables than are required by the config. FilterVariablesToConfig // should be called before trying to use these variables within a Terraform // plan, apply, or destroy operation. -func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terraform.InputValues, tfdiags.Diagnostics) { +func GetVariables(ctx *EvalContext, run *configs.TestRun, module *configs.Config, includeWarnings bool) (terraform.InputValues, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - run := n.run // relevantVariables contains the variables that are of interest to this // run block. This is a combination of the variables declared within the // configuration for this run block, and the variables referenced by the @@ -36,14 +37,15 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr // First, we'll check to see which variables the run block assertions // reference. - for _, reference := range n.References() { + references, _ := moduletest.GetRunReferences(run) + for _, reference := range references { if addr, ok := reference.Subject.(addrs.InputVariable); ok { relevantVariables[addr.Name] = reference } } // And check to see which variables the run block configuration references. - for name := range run.ModuleConfig.Module.Variables { + for name := range module.Module.Variables { relevantVariables[name] = nil } @@ -53,7 +55,7 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr // First, let's step through the expressions within the run block and work // them out. - for name, expr := range run.Config.Variables { + for name, expr := range run.Variables { refs, refDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr) diags = append(diags, refDiags...) 
if refDiags.HasErrors() { @@ -99,7 +101,7 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr // use a default fallback value to let Terraform attempt to apply defaults // if they exist. - for name, variable := range run.ModuleConfig.Module.Variables { + for name, variable := range module.Module.Variables { if _, exists := values[name]; exists { // Then we've provided a variable for this explicitly. It's all // good. @@ -197,11 +199,11 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr // // This function can only return warnings, and the callers can rely on this so // please check the callers of this function if you add any error diagnostics. -func (n *NodeTestRun) FilterVariablesToModule(values terraform.InputValues) (moduleVars, testOnlyVars terraform.InputValues, diags tfdiags.Diagnostics) { +func FilterVariablesToModule(config *configs.Config, values terraform.InputValues) (moduleVars, testOnlyVars terraform.InputValues, diags tfdiags.Diagnostics) { moduleVars = make(terraform.InputValues) testOnlyVars = make(terraform.InputValues) for name, value := range values { - _, exists := n.run.ModuleConfig.Module.Variables[name] + _, exists := config.Module.Variables[name] if !exists { // If it's not in the configuration then it's a test-only variable. testOnlyVars[name] = value diff --git a/internal/moduletest/graph/wait.go b/internal/moduletest/graph/wait.go index 8610bcb95c..e1e9c8e4e7 100644 --- a/internal/moduletest/graph/wait.go +++ b/internal/moduletest/graph/wait.go @@ -47,13 +47,13 @@ func (a *atomicProgress[T]) Store(progress T) { } // NewOperationWaiter creates a new operation waiter. -func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTestRun, +func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, file *moduletest.File, run *moduletest.Run, progress moduletest.Progress, start int64) *operationWaiter { identifier := "validate" - if n.File() != nil { - identifier = n.File().Name - if n.run != nil { - identifier = fmt.Sprintf("%s/%s", identifier, n.run.Name) + if file != nil { + identifier = file.Name + if run != nil { + identifier = fmt.Sprintf("%s/%s", identifier, run.Name) } } @@ -62,8 +62,8 @@ func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTes return &operationWaiter{ ctx: ctx, - run: n.run, - file: n.File(), + run: run, + file: file, progress: p, start: start, identifier: identifier, @@ -73,7 +73,7 @@ func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTes } // Run executes the given function in a goroutine and waits for it to finish. -// If the function finishes, it returns false. If the function is cancelled or +// If the function finishes successfully, it returns false. If the function is cancelled or // interrupted, it returns true. func (w *operationWaiter) Run(fn func()) bool { runningCtx, doneRunning := context.WithCancel(context.Background()) @@ -134,14 +134,13 @@ func (w *operationWaiter) updateProgress() { // handleCancelled is called when the test execution is hard cancelled. 
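+// It collects the current file states, keyed by state key (with the main
+// state under configs.TestMainStateIdentifier), so the renderer can
+// summarise any resources that may have been left behind.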
func (w *operationWaiter) handleCancelled() bool { log.Printf("[DEBUG] TestFileRunner: test execution cancelled during %s", w.identifier) - states := make(map[*moduletest.Run]*states.State) - mainKey := configs.TestMainStateIdentifier - states[nil] = w.evalCtx.GetFileState(mainKey).State + states := make(map[string]*states.State) + states[configs.TestMainStateIdentifier] = w.evalCtx.GetState(configs.TestMainStateIdentifier).State for key, module := range w.evalCtx.FileStates { - if key == mainKey { + if key == configs.TestMainStateIdentifier { continue } - states[module.Run] = module.State + states[key] = module.State } w.renderer.FatalInterruptSummary(w.run, w.file, states, w.created) diff --git a/internal/moduletest/opts.go b/internal/moduletest/opts.go new file mode 100644 index 0000000000..37dd43e664 --- /dev/null +++ b/internal/moduletest/opts.go @@ -0,0 +1,85 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: BUSL-1.1 + +package moduletest + +import ( + "github.com/hashicorp/hcl/v2" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/lang/langrefs" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +func GetRunTargets(config *configs.TestRun) ([]addrs.Targetable, tfdiags.Diagnostics) { + var diagnostics tfdiags.Diagnostics + var targets []addrs.Targetable + + for _, target := range config.Options.Target { + addr, diags := addrs.ParseTarget(target) + diagnostics = diagnostics.Append(diags) + if addr != nil { + targets = append(targets, addr.Subject) + } + } + + return targets, diagnostics +} + +func GetRunReplaces(config *configs.TestRun) ([]addrs.AbsResourceInstance, tfdiags.Diagnostics) { + var diagnostics tfdiags.Diagnostics + var replaces []addrs.AbsResourceInstance + + for _, replace := range config.Options.Replace { + addr, diags := addrs.ParseAbsResourceInstance(replace) + diagnostics = diagnostics.Append(diags) + if diags.HasErrors() { + continue + } + + if addr.Resource.Resource.Mode != addrs.ManagedResourceMode { + diagnostics = diagnostics.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Can only target managed resources for forced replacements.", + Detail: addr.String(), + Subject: replace.SourceRange().Ptr(), + }) + continue + } + + replaces = append(replaces, addr) + } + + return replaces, diagnostics +} + +func GetRunReferences(config *configs.TestRun) ([]*addrs.Reference, tfdiags.Diagnostics) { + var diagnostics tfdiags.Diagnostics + var references []*addrs.Reference + + for _, rule := range config.CheckRules { + for _, variable := range rule.Condition.Variables() { + reference, diags := addrs.ParseRefFromTestingScope(variable) + diagnostics = diagnostics.Append(diags) + if reference != nil { + references = append(references, reference) + } + } + for _, variable := range rule.ErrorMessage.Variables() { + reference, diags := addrs.ParseRefFromTestingScope(variable) + diagnostics = diagnostics.Append(diags) + if reference != nil { + references = append(references, reference) + } + } + } + + for _, expr := range config.Variables { + moreRefs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr) + diagnostics = diagnostics.Append(moreDiags) + references = append(references, moreRefs...) 
+ } + + return references, diagnostics +} diff --git a/internal/moduletest/run.go b/internal/moduletest/run.go index 216960e67f..646c1a2940 100644 --- a/internal/moduletest/run.go +++ b/internal/moduletest/run.go @@ -13,7 +13,6 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/lang/langrefs" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" @@ -94,106 +93,6 @@ func (run *Run) Addr() addrs.Run { return addrs.Run{Name: run.Name} } -func (run *Run) GetTargets() ([]addrs.Targetable, tfdiags.Diagnostics) { - var diagnostics tfdiags.Diagnostics - var targets []addrs.Targetable - - for _, target := range run.Config.Options.Target { - addr, diags := addrs.ParseTarget(target) - diagnostics = diagnostics.Append(diags) - if addr != nil { - targets = append(targets, addr.Subject) - } - } - - return targets, diagnostics -} - -func (run *Run) GetReplaces() ([]addrs.AbsResourceInstance, tfdiags.Diagnostics) { - var diagnostics tfdiags.Diagnostics - var replaces []addrs.AbsResourceInstance - - for _, replace := range run.Config.Options.Replace { - addr, diags := addrs.ParseAbsResourceInstance(replace) - diagnostics = diagnostics.Append(diags) - if diags.HasErrors() { - continue - } - - if addr.Resource.Resource.Mode != addrs.ManagedResourceMode { - diagnostics = diagnostics.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Can only target managed resources for forced replacements.", - Detail: addr.String(), - Subject: replace.SourceRange().Ptr(), - }) - continue - } - - replaces = append(replaces, addr) - } - - return replaces, diagnostics -} - -func (run *Run) GetReferences() ([]*addrs.Reference, tfdiags.Diagnostics) { - var diagnostics tfdiags.Diagnostics - var references []*addrs.Reference - - for _, rule := range run.Config.CheckRules { - for _, variable := range rule.Condition.Variables() { - reference, diags := addrs.ParseRefFromTestingScope(variable) - diagnostics = diagnostics.Append(diags) - if reference != nil { - references = append(references, reference) - } - } - for _, variable := range rule.ErrorMessage.Variables() { - reference, diags := addrs.ParseRefFromTestingScope(variable) - diagnostics = diagnostics.Append(diags) - if reference != nil { - references = append(references, reference) - } - } - } - - for _, expr := range run.Config.Variables { - moreRefs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr) - diagnostics = diagnostics.Append(moreDiags) - references = append(references, moreRefs...) - } - - for name, variable := range run.ModuleConfig.Module.Variables { - - // because we also draw implicit references back to any variables - // defined in the test file with the same name as actual variables, then - // we'll count these as references as well. - - if _, ok := run.Config.Variables[name]; ok { - - // BUT, if the variable is defined within the list of variables - // within the run block then we don't want to draw an implicit - // reference as the data comes from that expression. 
- - continue - } - - references = append(references, &addrs.Reference{ - Subject: addrs.InputVariable{Name: name}, - SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange), - }) - } - - return references, diagnostics -} - -// GetModuleConfigID returns the identifier for the module configuration that -// this run is testing. This is used to uniquely identify the module -// configuration in the test state. -func (run *Run) GetModuleConfigID() string { - return run.ModuleConfig.Module.SourceDir -} - // ExplainExpectedFailures is similar to ValidateExpectedFailures except it // looks for any diagnostics produced by custom conditions and are included in // the expected failures and adds an additional explanation that clarifies the @@ -203,14 +102,14 @@ func (run *Run) GetModuleConfigID() string { // an expected failure during the planning stage will still result in the // overall test failing as the plan failed and we couldn't even execute the // apply stage. -func (run *Run) ExplainExpectedFailures(originals tfdiags.Diagnostics) tfdiags.Diagnostics { +func ExplainExpectedFailures(config *configs.TestRun, originals tfdiags.Diagnostics) tfdiags.Diagnostics { // We're going to capture all the checkable objects that are referenced // from the expected failures. expectedFailures := addrs.MakeMap[addrs.Referenceable, bool]() sourceRanges := addrs.MakeMap[addrs.Referenceable, tfdiags.SourceRange]() - for _, traversal := range run.Config.ExpectFailures { + for _, traversal := range config.ExpectFailures { // Ignore the diagnostics returned from the reference parsing, these // references will have been checked earlier in the process by the // validate stage so we don't need to do that again here. @@ -330,14 +229,14 @@ func (run *Run) ExplainExpectedFailures(originals tfdiags.Diagnostics) tfdiags.D // diagnostics were generated by custom conditions. Terraform adds the // addrs.CheckRule that generated each diagnostic to the diagnostic itself so we // can tell which diagnostics can be expected. -func (run *Run) ValidateExpectedFailures(originals tfdiags.Diagnostics) tfdiags.Diagnostics { +func ValidateExpectedFailures(config *configs.TestRun, originals tfdiags.Diagnostics) tfdiags.Diagnostics { // We're going to capture all the checkable objects that are referenced // from the expected failures. expectedFailures := addrs.MakeMap[addrs.Referenceable, bool]() sourceRanges := addrs.MakeMap[addrs.Referenceable, tfdiags.SourceRange]() - for _, traversal := range run.Config.ExpectFailures { + for _, traversal := range config.ExpectFailures { // Ignore the diagnostics returned from the reference parsing, these // references will have been checked earlier in the process by the // validate stage so we don't need to do that again here. diff --git a/internal/moduletest/run_test.go b/internal/moduletest/run_test.go index 86b9debb1c..b118152a9e 100644 --- a/internal/moduletest/run_test.go +++ b/internal/moduletest/run_test.go @@ -766,7 +766,7 @@ func TestRun_ValidateExpectedFailures(t *testing.T) { }, } - out := run.ValidateExpectedFailures(tc.Input) + out := ValidateExpectedFailures(run.Config, tc.Input) ix := 0 for ; ix < len(tc.Output); ix++ { expected := tc.Output[ix] diff --git a/internal/moduletest/states/manifest.go b/internal/moduletest/states/manifest.go new file mode 100644 index 0000000000..8dc0630d4d --- /dev/null +++ b/internal/moduletest/states/manifest.go @@ -0,0 +1,558 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: BUSL-1.1 + +package states + +import ( + "encoding/json" + "fmt" + "io" + "log" + "math/rand/v2" + "os" + "path/filepath" + + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hcldec" + + "github.com/hashicorp/terraform/internal/backend" + "github.com/hashicorp/terraform/internal/command/workdir" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/moduletest" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/states/statemgr" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +const alphanumeric = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" + +type StateReason string + +const ( + StateReasonNone StateReason = "" + StateReasonSkip StateReason = "skip_cleanup" + StateReasonDep StateReason = "dependency" + StateReasonError StateReason = "error" +) + +// TestManifest represents the structure of the manifest file that keeps track +// of the state files left-over during test runs. +type TestManifest struct { + Version int `json:"version"` + Files map[string]*TestFileManifest `json:"files"` + + dataDir string // Directory where all test-related data is stored + ids map[string]bool +} + +// TestFileManifest represents a single file with its states keyed by the state +// key. +type TestFileManifest struct { + States map[string]*TestRunManifest `json:"states"` // Map of state keys to their manifests. +} + +// TestRunManifest represents an individual test run state. +type TestRunManifest struct { + // ID of the state file, used for identification. This will be empty if the + // state was written to a real backend and not stored locally. + ID string `json:"id,omitempty"` + + // Reason for the state being left over + Reason StateReason `json:"reason,omitempty"` +} + +// LoadManifest loads the test manifest from the specified root directory. +func LoadManifest(rootDir string, experimentsAllowed bool) (*TestManifest, error) { + if !experimentsAllowed { + // Just return an empty manifest file every time when experiments are + // disabled. + return &TestManifest{ + Version: 0, + Files: make(map[string]*TestFileManifest), + dataDir: workdir.NewDir(rootDir).TestDataDir(), + ids: make(map[string]bool), + }, nil + } + + wd := workdir.NewDir(rootDir) + + manifest := &TestManifest{ + Version: 0, + Files: make(map[string]*TestFileManifest), + dataDir: wd.TestDataDir(), + ids: make(map[string]bool), + } + + // Create directory if it doesn't exist + if err := manifest.ensureDataDir(); err != nil { + return nil, err + } + + data, err := os.OpenFile(manifest.filePath(), os.O_CREATE|os.O_RDONLY, 0644) + if err != nil { + return nil, err + } + defer data.Close() + + if err := json.NewDecoder(data).Decode(manifest); err != nil && err != io.EOF { + return nil, err + } + + for _, fileManifest := range manifest.Files { + for _, runManifest := range fileManifest.States { + // keep a cache of all known ids + manifest.ids[runManifest.ID] = true + } + } + + return manifest, nil +} + +// Save saves the current state of the manifest to the data directory. +func (manifest *TestManifest) Save(experimentsAllowed bool) error { + if !experimentsAllowed { + // just don't save the manifest file when experiments are disabled. + return nil + } + + data, err := json.MarshalIndent(manifest, "", " ") + if err != nil { + return err + } + + return os.WriteFile(manifest.filePath(), data, 0644) +} + +// LoadStates loads the states for the specified file. 
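+//
+// As a rough illustration (the file name, state key, and ID below are
+// hypothetical), the manifest this reads from is a JSON file shaped like:
+//
+//	{
+//	  "version": 0,
+//	  "files": {
+//	    "main.tftest.hcl": {
+//	      "states": {
+//	        "main": {"id": "a1B2c3D4", "reason": "skip_cleanup"}
+//	      }
+//	    }
+//	  }
+//	}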
+func (manifest *TestManifest) LoadStates(file *moduletest.File, factory func(string) backend.InitFn) (map[string]*TestRunState, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + + allStates := make(map[string]*TestRunState) + + var existingStates map[string]*TestRunManifest + if fm, exists := manifest.Files[file.Name]; exists { + existingStates = fm.States + } + + for _, run := range file.Runs { + key := run.Config.StateKey + if existing, exists := allStates[key]; exists { + + if run.Config.Backend != nil { + f := factory(run.Config.Backend.Type) + if f == nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unknown backend type", + Detail: fmt.Sprintf("Backend type %q is not a recognised backend.", run.Config.Backend.Type), + Subject: run.Config.Backend.DeclRange.Ptr(), + }) + continue + } + + be, err := getBackendInstance(run.Config.StateKey, run.Config.Backend, f) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid backend configuration", + Detail: fmt.Sprintf("Backend configuration was invalid: %s.", err), + Subject: run.Config.Backend.DeclRange.Ptr(), + }) + continue + } + + // Save the backend for this state when we find it, even if the + // state was initialised first. + existing.Backend = be + } + + continue + } + + var backend backend.Backend + if run.Config.Backend != nil { + // Then we have to load the state from the backend instead of + // locally or creating a new one. + + f := factory(run.Config.Backend.Type) + if f == nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unknown backend type", + Detail: fmt.Sprintf("Backend type %q is not a recognised backend.", run.Config.Backend.Type), + Subject: run.Config.Backend.DeclRange.Ptr(), + }) + continue + } + + be, err := getBackendInstance(run.Config.StateKey, run.Config.Backend, f) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid backend configuration", + Detail: fmt.Sprintf("Backend configuration was invalid: %s.", err), + Subject: run.Config.Backend.DeclRange.Ptr(), + }) + continue + } + + backend = be + } + + if existing := existingStates[key]; existing != nil { + + var state *states.State + if len(existing.ID) > 0 { + s, err := manifest.loadState(existing) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to load state", + fmt.Sprintf("Failed to load state from manifest file for %s: %s", run.Name, err))) + continue + } + state = s + } else { + state = states.NewState() + } + + allStates[key] = &TestRunState{ + Run: run, + Manifest: &TestRunManifest{ // copy this, so we can edit without affecting the original + ID: existing.ID, + Reason: existing.Reason, + }, + State: state, + Backend: backend, + } + } else { + var id string + if backend == nil { + id = manifest.generateID() + } + + allStates[key] = &TestRunState{ + Run: run, + Manifest: &TestRunManifest{ + ID: id, + Reason: StateReasonNone, + }, + State: states.NewState(), + Backend: backend, + } + } + } + + for key := range existingStates { + if _, exists := allStates[key]; !exists { + stateKey := key + if stateKey == configs.TestMainStateIdentifier { + stateKey = "for the module under test" + } + + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "Orphaned state", + fmt.Sprintf("The state key %s is stored in the state manifest indicating a failed cleanup operation, but the state key is not claimed by any run blocks within the current test file. 
Either restore a run block that manages the specified state, or manually clean up this state file.", stateKey))) + } + } + + return allStates, diags +} + +func (manifest *TestManifest) loadState(state *TestRunManifest) (*states.State, error) { + stateFile := statemgr.NewFilesystem(manifest.StateFilePath(state.ID)) + if err := stateFile.RefreshState(); err != nil { + return nil, fmt.Errorf("error loading state from file %s: %w", manifest.StateFilePath(state.ID), err) + } + return stateFile.State(), nil +} + +// SaveStates saves the states for the specified file to the manifest. +func (manifest *TestManifest) SaveStates(file *moduletest.File, states map[string]*TestRunState) tfdiags.Diagnostics { + var diags tfdiags.Diagnostics + + if existingStates, exists := manifest.Files[file.Name]; exists { + + // If we have existing states, we're doing update or delete operations + // rather than just adding new states. + + for key, existingState := range existingStates.States { + + // First, check all the existing states against the states being + // saved. + + if state, exists := states[key]; exists { + + // If we have a new state, then overwrite the existing one + // assuming that it has a reason to be saved. + + if state.Backend != nil { + // If we have a backend, regardless of the reason, then + // we'll save the state to the backend. + + stmgr, err := state.Backend.StateMgr(backend.DefaultStateName) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + if err := stmgr.WriteState(state.State); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + // But, still keep the manifest file itself up-to-date. + + if state.Manifest.Reason != StateReasonNone { + existingStates.States[key] = state.Manifest + } else { + delete(existingStates.States, key) + } + + } else if state.Manifest.Reason != StateReasonNone { + if err := manifest.writeState(state); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + existingStates.States[key] = state.Manifest + continue + } else { + + // If it has no reason to be saved, then it means we managed to + // clean everything up properly. So we'll delete the + // existing state file and remove any mention of it. + + if err := manifest.deleteState(existingState); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to delete state", + Detail: fmt.Sprintf("Failed to delete state file for key %s: %s.", key, err), + }) + continue + } + delete(existingStates.States, key) // remove the state from the manifest file + } + } + + // Otherwise, we just leave the state file as is. We don't want to + // remove it prematurely, as users might still need it to tidy + // something up. + + } + + // Now that we've updated / removed any pre-existing states, we should + // also write any states that are brand new and weren't in the existing + // state.
+ + for key, state := range states { + if _, exists := existingStates.States[key]; exists { + // we've already handled everything in the existing state + continue + } + + if state.Backend != nil { + + stmgr, err := state.Backend.StateMgr(backend.DefaultStateName) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + if err := stmgr.WriteState(state.State); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + if state.Manifest.Reason != StateReasonNone { + existingStates.States[key] = state.Manifest + } + } else if state.Manifest.Reason != StateReasonNone { + if err := manifest.writeState(state); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + existingStates.States[key] = state.Manifest + } + } + + if len(existingStates.States) == 0 { + // if we now have tidied everything up, remove record of this from + // the manifest. + delete(manifest.Files, file.Name) + } + + } else { + + // We're just writing entirely new states, so we can just create a new + // TestFileManifest and add it to the manifest. + + newStates := make(map[string]*TestRunManifest) + for key, state := range states { + if state.Backend != nil { + + stmgr, err := state.Backend.StateMgr(backend.DefaultStateName) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + if err := stmgr.WriteState(state.State); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + + if state.Manifest.Reason != StateReasonNone { + newStates[key] = state.Manifest + } + } else if state.Manifest.Reason != StateReasonNone { + if err := manifest.writeState(state); err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Failed to write state", + Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err), + }) + continue + } + newStates[key] = state.Manifest + } + } + + if len(newStates) > 0 { + + // only add this into the manifest if we actually wrote any + // new states + + manifest.Files[file.Name] = &TestFileManifest{ + States: newStates, + } + } + } + + return diags +} + +func (manifest *TestManifest) writeState(state *TestRunState) error { + stateFile := statemgr.NewFilesystem(manifest.StateFilePath(state.Manifest.ID)) + if err := stateFile.WriteState(state.State); err != nil { + return fmt.Errorf("error writing state to file %s: %w", manifest.StateFilePath(state.Manifest.ID), err) + } + return nil +} + +func (manifest *TestManifest) deleteState(runManifest *TestRunManifest) error { + target := manifest.StateFilePath(runManifest.ID) + if err := os.Remove(target); err != nil { + if os.IsNotExist(err) { + // If the file doesn't exist, we can ignore this error. 
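+			// It may simply have been cleaned up manually already, which is
+			// what the orphaned state warning asks users to do for abandoned
+			// state files.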
+			return nil + } + return fmt.Errorf("error deleting state file %s: %w", target, err) + } + return nil +} + +func (manifest *TestManifest) generateID() string { + const maxAttempts = 10 + + for ix := 0; ix < maxAttempts; ix++ { + var b [8]byte + for i := range b { + n := rand.IntN(len(alphanumeric)) + b[i] = alphanumeric[n] + } + + id := string(b[:]) + if _, exists := manifest.ids[id]; exists { + continue // generate another one + } + + manifest.ids[id] = true + return id + } + + panic("failed to generate a unique id after 10 attempts") +} + +func (manifest *TestManifest) ensureDataDir() error { + if _, err := os.Stat(manifest.dataDir); os.IsNotExist(err) { + return os.MkdirAll(manifest.dataDir, 0755) + } + return nil +} + +// filePath returns the path to the manifest file. +func (manifest *TestManifest) filePath() string { + return filepath.Join(manifest.dataDir, "manifest.json") +} + +// StateFilePath returns the path to the state file for a given ID. +// +// Visible for testing purposes. +func (manifest *TestManifest) StateFilePath(id string) string { + return filepath.Join(manifest.dataDir, fmt.Sprintf("%s.tfstate", id)) +} + +// getBackendInstance uses the config for a given run block's backend block to create and return a configured +// instance of that backend type. +func getBackendInstance(stateKey string, config *configs.Backend, f backend.InitFn) (backend.Backend, error) { + b := f() + log.Printf("[TRACE] getBackendInstance: instantiated backend of type %T", b) + + schema := b.ConfigSchema() + decSpec := schema.NoneRequired().DecoderSpec() + configVal, hclDiags := hcldec.Decode(config.Config, decSpec, nil) + if hclDiags.HasErrors() { + return nil, fmt.Errorf("error decoding backend configuration for state key %s: %v", stateKey, hclDiags.Errs()) + } + + if !configVal.IsWhollyKnown() { + return nil, fmt.Errorf("unknown values within backend definition for state key %s", stateKey) + } + + newVal, validateDiags := b.PrepareConfig(configVal) + validateDiags = validateDiags.InConfigBody(config.Config, "") + if validateDiags.HasErrors() { + return nil, validateDiags.Err() + } + + configureDiags := b.Configure(newVal) + configureDiags = configureDiags.InConfigBody(config.Config, "") + if configureDiags.HasErrors() { + return nil, configureDiags.Err() + } + + return b, nil +} diff --git a/internal/moduletest/states/states.go b/internal/moduletest/states/states.go new file mode 100644 index 0000000000..1668b57e98 --- /dev/null +++ b/internal/moduletest/states/states.go @@ -0,0 +1,29 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: BUSL-1.1 + +package states + +import ( + "github.com/hashicorp/terraform/internal/backend" + "github.com/hashicorp/terraform/internal/moduletest" + "github.com/hashicorp/terraform/internal/states" +) + +type TestRunState struct { + // Run and RestoreState represent the run block to use to either destroy + // or restore the state to. If RestoreState is false, then the state will + // be destroyed; if true, it will be restored to the config of the relevant + // run block. + Run *moduletest.Run + RestoreState bool + + // Manifest is the underlying state manifest for this state. + Manifest *TestRunManifest + + // State is the actual state. + State *states.State + + // Backend is the backend where this state should be saved upon test + // completion.
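+	//
+	// Backend is nil when no backend block is configured for this state key;
+	// in that case the state is written to a local state file tracked by the
+	// manifest instead.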
+ Backend backend.Backend +} diff --git a/internal/moduletest/suite.go b/internal/moduletest/suite.go index b7a165d4a7..92f00535a5 100644 --- a/internal/moduletest/suite.go +++ b/internal/moduletest/suite.go @@ -3,16 +3,30 @@ package moduletest -import "github.com/hashicorp/terraform/internal/tfdiags" +import ( + "github.com/hashicorp/terraform/internal/tfdiags" +) + +type CommandMode int + +const ( + // NormalMode is the default mode for running terraform test. + NormalMode CommandMode = iota + // CleanupMode is used when running terraform test cleanup. + // In this mode, the graph will be built with the intention of cleaning up + // the state, rather than applying changes. + CleanupMode +) type Suite struct { - Status Status + Status Status + CommandMode CommandMode Files map[string]*File } type TestSuiteRunner interface { - Test() (Status, tfdiags.Diagnostics) + Test(experimentsAllowed bool) (Status, tfdiags.Diagnostics) Stop() Cancel() diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index cf1ea9a196..87d00f2d8b 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -979,8 +979,8 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, externalProviderConfigs = opts.ExternalProviders } - if opts != nil && opts.OverridePreventDestroy && opts.Mode != plans.DestroyMode { - panic("you can only set OverridePreventDestroy during destroy operations.") + if opts != nil && opts.OverridePreventDestroy && opts.Mode == plans.RefreshOnlyMode { + panic("you can't set OverridePreventDestroy during refresh operations.") } switch mode := opts.Mode; mode { @@ -1010,6 +1010,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, GenerateConfigPath: opts.GenerateConfigPath, SkipGraphValidation: c.graphOpts.SkipGraphValidation, queryPlan: opts.Query, + overridePreventDestroy: opts.OverridePreventDestroy, }).Build(addrs.RootModuleInstance) return graph, walkPlan, diags case plans.RefreshOnlyMode: diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index 36c90211c3..69776cff54 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -152,10 +152,6 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { panic("invalid plan operation: " + b.Operation.String()) } - if b.overridePreventDestroy && b.Operation != walkPlanDestroy { - panic("overridePreventDestroy can only be set during walkPlanDestroy operations") - } - steps := []GraphTransformer{ // Creates all the resources represented in the config &ConfigTransformer{ @@ -322,6 +318,7 @@ func (b *PlanGraphBuilder) initPlan() { } b.ConcreteResource = func(a *NodeAbstractResource) dag.Vertex { + a.overridePreventDestroy = b.overridePreventDestroy return &nodeExpandPlannableResource{ NodeAbstractResource: a, skipRefresh: b.skipRefresh, @@ -332,6 +329,7 @@ func (b *PlanGraphBuilder) initPlan() { } b.ConcreteResourceOrphan = func(a *NodeAbstractResourceInstance) dag.Vertex { + a.overridePreventDestroy = b.overridePreventDestroy return &NodePlannableResourceInstanceOrphan{ NodeAbstractResourceInstance: a, skipRefresh: b.skipRefresh, @@ -342,6 +340,7 @@ func (b *PlanGraphBuilder) initPlan() { } b.ConcreteResourceInstanceDeposed = func(a *NodeAbstractResourceInstance, key states.DeposedKey) dag.Vertex { + a.overridePreventDestroy = b.overridePreventDestroy return &NodePlanDeposedResourceInstanceObject{ NodeAbstractResourceInstance: a, DeposedKey: 
key, diff --git a/internal/terraform/node_resource_abstract.go b/internal/terraform/node_resource_abstract.go index e331ffe0c0..f4787cb30e 100644 --- a/internal/terraform/node_resource_abstract.go +++ b/internal/terraform/node_resource_abstract.go @@ -87,6 +87,11 @@ type NodeAbstractResource struct { generateConfigPath string forceCreateBeforeDestroy bool + + // overridePreventDestroy is set during test cleanup operations to allow + // tests to clean up any created infrastructure regardless of this setting + // in the configuration. + overridePreventDestroy bool } var ( diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index c6f1930103..e15a5593c3 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -46,11 +46,6 @@ type NodeAbstractResourceInstance struct { preDestroyRefresh bool - // overridePreventDestroy is set during test cleanup operations to allow - // tests to clean up any created infrastructure regardless of this setting - // in the configuration. - overridePreventDestroy bool - // During import (or query) we may generate configuration for a resource, which needs // to be stored in the final change. generatedConfigHCL string