Implement controlling destroy functionality within Terraform Test (#37359)

* Add ability to parse backend blocks present in a test file's run blocks, validate configuration (#36541)

* Add ability to parse backend blocks from a run block

* Add validation to avoid multiple backend blocks across run blocks that use the same internal state file. Update tests.

* Add validation to avoid multiple backend blocks within a single run block. Update tests.

* Remove use of quotes in diagnostic messages

* Add validation to avoid backend blocks being used in plan run blocks. Update tests.

* Correct local backend blocks in new test fixtures

* Add test to show that different test files can use the same backend block for the same state key.

* Add validation to enforce state-storage backend types are used

* Remove TODO comment

We only need to consider one file at a time when checking if a state_key already has a backend associated with it; parallelism in `terraform test` is scoped down to individual files.
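The per-file check described above can be sketched roughly as follows. This is an illustrative, hypothetical helper (the type and function names are not the actual Terraform internals): within a single test file, the first run that attaches a backend to a state key claims it, and any later run reusing that key is flagged.

```go
package main

import "fmt"

// runBackend is a hypothetical pairing of a run block's name with the
// state key its backend block is attached to.
type runBackend struct {
	runName  string
	stateKey string
}

// duplicateStateKeys returns, per state key, the names of runs after the
// first that attach a backend to a state key that already has one. Only
// runs within a single test file need to be considered, since parallelism
// in `terraform test` is scoped to individual files.
func duplicateStateKeys(runs []runBackend) map[string][]string {
	seen := map[string]string{} // state key -> first run declaring a backend
	dups := map[string][]string{}
	for _, r := range runs {
		if _, ok := seen[r.stateKey]; ok {
			dups[r.stateKey] = append(dups[r.stateKey], r.runName)
			continue
		}
		seen[r.stateKey] = r.runName
	}
	return dups
}

func main() {
	runs := []runBackend{
		{"setup", "main"},
		{"verify", "main"}, // second backend attached to the "main" state key
		{"child", "module.child"},
	}
	fmt.Println(duplicateStateKeys(runs))
}
```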

* Add validation to assert that the backend block must be in the first apply command for an internal state

* Consolidate backend block validation inside a single if statement

* Add initial version of validation that ensures a backend isn't re-used within a file

* Explicitly set the state_key at the point of parsing the config

TODO: What should be done with method (moduletest.Run).GetStateKey?

* Update test fixture now that reusing backend configs has been made invalid

* Add automated test showing validation of reused configuration blocks

* Skip test due to flakiness, minor change to test config naming

* Update test so it tolerates the non-deterministic order in which run blocks are evaluated

* Remove unnecessary value assignment to r.StateKey

* Replace use of GetStateKey() with accessing the state key that's now set during test config parsing

* Fix bug so that run blocks using child modules get the correct state key set at parsing time

* Update acceptance test to also cover scenario where root and child module state keys are in use

* Update test name

* Add newline to regex

* Ensure consistent place where repeat backend error is raised from

* Write leftover test state(s) to file (#36614)

* Add additional validation that the backend used in a run is a supported type (#36648)

* Prevent test run when leftover state data is present (#36685)

* `test`: Set the initial state for state files from a backend, allow the run that defines a backend to write state to the backend (#36646)

* Allow use of backend block to set initial state for a state key

* Note about alternative place to keep 'backend factories'

* Allow the run block defining the backend to write state to it

* Fix rebase

* Change to accessing backend init functions via ContextOpts

* Add tests demonstrating how runs containing backend blocks use and update persisted state

* Fix test fixture

* Address test failure due to trouble opening the state file

This problem doesn't happen on MacOS, so I assume it is due to the Linux environment of GitHub runners.

* Fix issue with paths properly

I hope

* Fix defect in test assertion

* Pivot back to approach introduced in 4afc3d7

* Let failing tests write to persistent state, add test case covering that.

I split the acceptance tests into happy/unhappy paths for this, which required some of the helper functions' declarations to be raised up to package-level.

* Change how we update internal state files, so that information about the associated backend is never lost

* Fix UpdateStateFile

* Ensure that the states map set by TestStateTransformer associates a backend with the correct run.

* Misc spelling fixes in comments and a log

* Replace state get/set functions with existing helpers (#36747)

* Replace state get/set functions with existing helpers

* Compare to string representation of state

* Compare to string representation of state

* Terraform Test: Allow skipping cleanup of entire test file or individual run blocks (#36729)

* Add validation to enforce skip_cleanup=false cannot be used with backend blocks (#36857)

* Integrate use of backend blocks in tests with skip_cleanup feature (#36848)

* Fix nil pointer error, update test to not be table-driven

* Make using a backend block implicitly set skip_cleanup to true

* Stop state artefacts being created when a backend is in use and no cleanup errors have occurred

* Return diagnostics so calling code knows if cleanup experienced issues or not

* Update tests to show that when cleanup fails a state artefact is created

* Add comment about why diag not returned

* Bug fix - actually pull in the state from the state manager!

* Split and simplify (?) tests to show the backend block can create and/or reuse prior state

* Update test to use new fixtures, assert about state artefact. Fix nil pointer

* Update test fixture in use, add guardrail for flakiness of forced error during cleanup

* Refactor so resource ID set in only one place

* Add documentation for using a `backend` block during `test` (#36832)

* Add backend as a documented block in a run block

* Add documentation about backend blocks in run blocks.

* Make the relationship between backends and state keys more clear, other improvements

* More test documentation (#36838)

* Terraform Test: cleanup command (#36847)

* Allow cleanup of states that depend on prior runs outputs (#36902)

* terraform test: refactor graph edge calculation

* create fake run block nodes during cleanup operation

* tidy up TODOs

* fix tests

* remove old changes

* Update internal/moduletest/graph/node_state_cleanup.go

Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com>

* Improve diagnostics around skip_cleanup conflicts (#37385)

* Improve diagnostics around skip_cleanup conflicts

* remove unused dynamic node

* terraform test: refactor manifest file for simplicity (#37412)

* test: refactor apply and plan functions so no run block is needed

* terraform test: write and load state manifest files

* Terraform Test: Allow skipping cleanup of entire test file or individual run blocks (#36729)

* terraform test: add support for skip_cleanup attr

* terraform test: add cleanup command

* terraform test: add backend blocks

* pause

* fix tests

* remove commented code

* terraform test: make controlling destroy functionality experimental (#37419)

* address comments

* Update internal/moduletest/graph/node_state_cleanup.go

Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com>

---------

Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com>

* add experimental changelog entries

---------

Co-authored-by: Sarah French <15078782+SarahFrench@users.noreply.github.com>
Co-authored-by: Samsondeen <40821565+dsa0x@users.noreply.github.com>
Co-authored-by: Samsondeen Dare <samsondeen.dare@hashicorp.com>
Liam Cervante 7 months ago committed by GitHub
parent e315a07c71
commit 551ba2e525

@@ -3,6 +3,10 @@ EXPERIMENTS:
Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.
- The experimental "deferred actions" feature, enabled by passing the `-allow-deferral` option to `terraform plan`, permits `count` and `for_each` arguments in `module`, `resource`, and `data` blocks to have unknown values and allows providers to react more flexibly to unknown values.
- `terraform test cleanup`: The experimental `test cleanup` command. In experimental builds of Terraform, a manifest file and state files for each failed cleanup operation during test operations are saved within the `.terraform` local directory. The `test cleanup` command will attempt to clean up the local state files left behind automatically, without requiring manual intervention.
- `terraform test`: `backend` blocks and `skip_cleanup` attributes:
- Test authors can now specify `backend` blocks within `run` blocks in Terraform Test files. Run blocks with `backend` blocks will load state from the specified backend instead of starting from empty state on every execution. This allows test authors to keep long-running test infrastructure alive between test operations, saving time during regular test operations.
- Test authors can now specify `skip_cleanup` attributes within test files and within run blocks. The `skip_cleanup` attribute tells `terraform test` not to clean up state files produced by run blocks with this attribute set to true. The state files for affected run blocks will be written to disk within the `.terraform` directory, where they can then be cleaned up manually using the also experimental `terraform test cleanup` command.
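As a sketch of how these two experimental features look in a test file (file name, run names, and the backend's attribute values are illustrative; the `backend` block follows the ordinary Terraform backend schema for its type):

```hcl
# tests/persistent.tftest.hcl (illustrative file name)

run "setup_long_lived_infra" {
  command = apply

  # Load and persist state in this backend instead of starting from
  # empty state on every execution. Using a backend block implicitly
  # sets skip_cleanup = true for this run.
  backend "s3" {
    bucket = "my-test-states" # illustrative values
    key    = "persistent/setup"
    region = "us-east-1"
  }
}

run "keep_this_runs_state" {
  command = apply

  # Don't destroy this run's resources; its state file is written to
  # disk under .terraform, where the experimental `terraform test
  # cleanup` command can remove it later.
  skip_cleanup = true
}
```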
## Previous Releases

@@ -456,6 +456,12 @@ func initCommands(
Meta: meta,
}, nil
}
Commands["test cleanup"] = func() (cli.Command, error) {
return &command.TestCleanupCommand{
Meta: meta,
}, nil
}
}
PrimaryCommands = []string{

@@ -11,12 +11,14 @@ import (
"path/filepath"
"slices"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/backend/backendrun"
"github.com/hashicorp/terraform/internal/command/junit"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/moduletest/graph"
teststates "github.com/hashicorp/terraform/internal/moduletest/states"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
@@ -24,6 +26,15 @@ import (
type TestSuiteRunner struct {
Config *configs.Config
// BackendFactory is used to enable initializing multiple backend types,
// depending on which backends are used in a test suite.
//
// Note: This is currently necessary because the source of the init functions,
// the backend/init package, experiences import cycles if used in other test-related
// packages. We set this field on a TestSuiteRunner when making runners in the
// command package, which is the main place where backend/init has previously been used.
BackendFactory func(string) backend.InitFn
TestingDirectory string
// Global variables comes from the main configuration directory,
@@ -60,6 +71,14 @@ type TestSuiteRunner struct {
Concurrency int
DeferralAllowed bool
CommandMode moduletest.CommandMode
// Repair is used to indicate whether the test cleanup command should run in
// "repair" mode. In this mode, the cleanup command will only remove state
// files that are a result of failed destroy operations, leaving any
// state due to skip_cleanup in place.
Repair bool
}
func (runner *TestSuiteRunner) Stop() {
@@ -74,7 +93,7 @@ func (runner *TestSuiteRunner) Cancel() {
runner.Cancelled = true
}
func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) {
func (runner *TestSuiteRunner) Test(experimentsAllowed bool) (moduletest.Status, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
if runner.Concurrency < 1 {
@@ -87,6 +106,14 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) {
return moduletest.Error, diags
}
manifest, err := teststates.LoadManifest(".", experimentsAllowed)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to open state manifest",
fmt.Sprintf("The test state manifest file could not be opened: %s.", err)))
}
runner.View.Abstract(suite)
// We have two sets of variables that are available to different test files.
@@ -104,38 +131,24 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) {
if runner.Cancelled {
return moduletest.Error, diags
}
file := suite.Files[name]
currentGlobalVariables := runner.GlobalVariables
if filepath.Dir(file.Name) == runner.TestingDirectory {
// If the file is in the test directory, we'll use the union of the
// global variables and the global test variables.
currentGlobalVariables = testDirectoryGlobalVariables
}
evalCtx := graph.NewEvalContext(graph.EvalContextOpts{
Config: runner.Config,
CancelCtx: runner.CancelledCtx,
StopCtx: runner.StoppedCtx,
Verbose: runner.Verbose,
Render: runner.View,
UnparsedVariables: currentGlobalVariables,
Concurrency: runner.Concurrency,
DeferralAllowed: runner.DeferralAllowed,
})
fileRunner := &TestFileRunner{
Suite: runner,
EvalContext: evalCtx,
Suite: runner,
TestDirectoryGlobalVariables: testDirectoryGlobalVariables,
Manifest: manifest,
}
runner.View.File(file, moduletest.Starting)
fileRunner.Test(file)
runner.View.File(file, moduletest.Complete)
suite.Status = suite.Status.Merge(file.Status)
}
if err := manifest.Save(experimentsAllowed); err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to save state manifest",
fmt.Sprintf("The test state manifest file could not be saved: %s.", err)))
}
runner.View.Conclusion(suite)
if runner.JUnit != nil {
@@ -155,6 +168,8 @@ func (runner *TestSuiteRunner) collectTests() (*moduletest.Suite, tfdiags.Diagno
var diags tfdiags.Diagnostics
suite := &moduletest.Suite{
Status: moduletest.Pending,
CommandMode: runner.CommandMode,
Files: func() map[string]*moduletest.File {
files := make(map[string]*moduletest.File)
@@ -219,8 +234,9 @@ func (runner *TestSuiteRunner) collectTests() (*moduletest.Suite, tfdiags.Diagno
type TestFileRunner struct {
// Suite contains all the helpful metadata about the test that we need
// during the execution of a file.
Suite *TestSuiteRunner
EvalContext *graph.EvalContext
Suite *TestSuiteRunner
TestDirectoryGlobalVariables map[string]backendrun.UnparsedVariableValue
Manifest *teststates.TestManifest
}
func (runner *TestFileRunner) Test(file *moduletest.File) {
@@ -230,6 +246,25 @@ func (runner *TestFileRunner) Test(file *moduletest.File) {
// checking anything about them.
file.Diagnostics = file.Diagnostics.Append(file.Config.Validate(runner.Suite.Config))
states, stateDiags := runner.Manifest.LoadStates(file, runner.Suite.BackendFactory)
file.Diagnostics = file.Diagnostics.Append(stateDiags)
if stateDiags.HasErrors() {
file.Status = moduletest.Error
}
if runner.Suite.CommandMode != moduletest.CleanupMode {
// then we can't have any state files pending cleanup
for _, state := range states {
if state.Manifest.Reason != teststates.StateReasonNone {
file.Diagnostics = file.Diagnostics.Append(tfdiags.Sourceless(
tfdiags.Error,
"State manifest not empty",
fmt.Sprintf("The state manifest for %s should be empty before running tests. This could be due to a previous test run not cleaning up after itself. Please ensure that all state files are cleaned up before running tests.", file.Name)))
file.Status = moduletest.Error
}
}
}
// We'll execute the tests in the file. First, mark the overall status as
// being skipped. This will ensure that if we've cancelled and the files not
// going to do anything it'll be marked as skipped.
@@ -238,13 +273,39 @@ func (runner *TestFileRunner) Test(file *moduletest.File) {
// If we have zero run blocks then we'll just mark the file as passed.
file.Status = file.Status.Merge(moduletest.Pass)
return
} else if runner.Suite.CommandMode == moduletest.CleanupMode {
// In cleanup mode, we don't actually execute the run blocks so we'll
// start with the assumption they have all passed.
file.Status = file.Status.Merge(moduletest.Pass)
}
currentGlobalVariables := runner.Suite.GlobalVariables
if filepath.Dir(file.Name) == runner.Suite.TestingDirectory {
// If the file is in the test directory, we'll use the union of the
// global variables and the global test variables.
currentGlobalVariables = runner.TestDirectoryGlobalVariables
}
evalCtx := graph.NewEvalContext(graph.EvalContextOpts{
Config: runner.Suite.Config,
CancelCtx: runner.Suite.CancelledCtx,
StopCtx: runner.Suite.StoppedCtx,
Verbose: runner.Suite.Verbose,
Render: runner.Suite.View,
UnparsedVariables: currentGlobalVariables,
FileStates: states,
Concurrency: runner.Suite.Concurrency,
DeferralAllowed: runner.Suite.DeferralAllowed,
Mode: runner.Suite.CommandMode,
Repair: runner.Suite.Repair,
})
// Build the graph for the file.
b := graph.TestGraphBuilder{
Config: runner.Suite.Config,
File: file,
ContextOpts: runner.Suite.Opts,
CommandMode: runner.Suite.CommandMode,
}
g, diags := b.Build()
file.Diagnostics = file.Diagnostics.Append(diags)
@@ -253,13 +314,37 @@ func (runner *TestFileRunner) Test(file *moduletest.File) {
}
// walk and execute the graph
diags = diags.Append(graph.Walk(g, runner.EvalContext))
diags = diags.Append(graph.Walk(g, evalCtx))
// save any dangling state files. we'll check all the states we have in
// memory, and if any are skipped or errored it means we might want to do
// a cleanup command in the future. this means we need to save the other
// state files as dependencies in case they are needed during the cleanup.
saveDependencies := false
for _, state := range states {
if state.Manifest.Reason == teststates.StateReasonSkip || state.Manifest.Reason == teststates.StateReasonError {
saveDependencies = true // at least one state file does have resources left over
break
}
}
if saveDependencies {
for _, state := range states {
if state.Manifest.Reason == teststates.StateReasonNone {
// any states that have no reason to be saved, will be updated
// to the dependency reason and this will tell the manifest to
// save those state files as well.
state.Manifest.Reason = teststates.StateReasonDep
}
}
}
diags = diags.Append(runner.Manifest.SaveStates(file, states))
// If the graph walk was terminated, we don't want to add the diagnostics.
// The error the user receives will just be:
// Failure! 0 passed, 1 failed.
// exit status 1
if runner.EvalContext.Cancelled() {
if evalCtx.Cancelled() {
file.UpdateStatus(moduletest.Error)
log.Printf("[TRACE] TestFileRunner: graph walk terminated for %s", file.Name)
return

@@ -121,7 +121,7 @@ func (runner *TestSuiteRunner) Cancel() {
runner.Cancelled = true
}
func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) {
func (runner *TestSuiteRunner) Test(_ bool) (moduletest.Status, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
configDirectory, err := filepath.Abs(runner.ConfigDirectory)

@@ -82,7 +82,7 @@ func TestTest(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}
@@ -168,7 +168,7 @@ func TestTest_Parallelism(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}
@@ -238,7 +238,7 @@ func TestTest_JSON(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}
@@ -335,7 +335,7 @@ func TestTest_Verbose(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}
@@ -498,7 +498,7 @@ func TestTest_Cancel(t *testing.T) {
var diags tfdiags.Diagnostics
go func() {
defer done()
_, diags = runner.Test()
_, diags = runner.Test(false)
}()
stop() // immediately cancel
@@ -621,7 +621,7 @@ func TestTest_DelayedCancel(t *testing.T) {
var diags tfdiags.Diagnostics
go func() {
defer done()
_, diags = runner.Test()
_, diags = runner.Test(false)
}()
// Wait for finish!
@@ -743,7 +743,7 @@ func TestTest_ForceCancel(t *testing.T) {
var diags tfdiags.Diagnostics
go func() {
defer done()
_, diags = runner.Test()
_, diags = runner.Test(false)
}()
stop()
@@ -893,7 +893,7 @@ func TestTest_LongRunningTest(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}
@@ -977,7 +977,7 @@ func TestTest_LongRunningTestJSON(t *testing.T) {
clientOverride: client,
}
_, diags := runner.Test()
_, diags := runner.Test(false)
if len(diags) > 0 {
t.Errorf("found diags and expected none: %s", diags.ErrWithWarnings())
}

@@ -51,6 +51,9 @@ type Test struct {
// DeferralAllowed enables deferrals during test operations. This matches
// the same-named flag in the Operation struct.
DeferralAllowed bool
// These flags are only relevant to the "test cleanup" command.
Repair bool
}
func ParseTest(args []string) (*Test, tfdiags.Diagnostics) {
@@ -70,6 +73,7 @@ func ParseTest(args []string) (*Test, tfdiags.Diagnostics) {
cmdFlags.IntVar(&test.OperationParallelism, "parallelism", DefaultParallelism, "parallelism")
cmdFlags.IntVar(&test.RunParallelism, "run-parallelism", DefaultParallelism, "run-parallelism")
cmdFlags.BoolVar(&test.DeferralAllowed, "allow-deferral", false, "allow-deferral")
cmdFlags.BoolVar(&test.Repair, "repair", false, "repair")
// TODO: Finalise the name of this flag.
cmdFlags.StringVar(&test.CloudRunSource, "cloud-run", "", "cloud-run")

@@ -5,18 +5,28 @@ package command
import (
"context"
"fmt"
"maps"
"path/filepath"
"slices"
"sort"
"strings"
"time"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/internal/backend/backendrun"
backendInit "github.com/hashicorp/terraform/internal/backend/init"
"github.com/hashicorp/terraform/internal/backend/local"
"github.com/hashicorp/terraform/internal/cloud"
"github.com/hashicorp/terraform/internal/command/arguments"
"github.com/hashicorp/terraform/internal/command/jsonformat"
"github.com/hashicorp/terraform/internal/command/junit"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/logging"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
@@ -89,91 +99,17 @@ func (c *TestCommand) Synopsis() string {
}
func (c *TestCommand) Run(rawArgs []string) int {
var diags tfdiags.Diagnostics
common, rawArgs := arguments.ParseView(rawArgs)
c.View.Configure(common)
// Since we build the colorizer for the cloud runner outside the views
// package we need to propagate our no-color setting manually. Once the
// cloud package is fully migrated over to the new streams IO we should be
// able to remove this.
c.Meta.color = !common.NoColor
c.Meta.Color = c.Meta.color
args, diags := arguments.ParseTest(rawArgs)
preparation, diags := c.setupTestExecution(moduletest.NormalMode, "test", rawArgs)
if diags.HasErrors() {
c.View.Diagnostics(diags)
c.View.HelpPrompt("test")
return 1
}
c.Meta.parallelism = args.OperationParallelism
view := views.NewTest(args.ViewType, c.View)
// EXPERIMENTAL: maybe enable deferred actions
if !c.AllowExperimentalFeatures && args.DeferralAllowed {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to parse command-line flags",
"The -allow-deferral flag is only valid in experimental builds of Terraform.",
))
view.Diagnostics(nil, nil, diags)
return 1
}
// The specified testing directory must be a relative path, and it must
// point to a directory that is a descendant of the configuration directory.
if !filepath.IsLocal(args.TestDirectory) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid testing directory",
"The testing directory must be a relative path pointing to a directory local to the configuration directory."))
view.Diagnostics(nil, nil, diags)
return 1
}
config, configDiags := c.loadConfigWithTests(".", args.TestDirectory)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
view.Diagnostics(nil, nil, diags)
return 1
}
// Users can also specify variables via the command line, so we'll parse
// all that here.
var items []arguments.FlagNameValue
for _, variable := range args.Vars.All() {
items = append(items, arguments.FlagNameValue{
Name: variable.Name,
Value: variable.Value,
})
}
c.variableArgs = arguments.FlagNameValueSlice{Items: &items}
// Collect variables for "terraform test"
testVariables, variableDiags := c.collectVariableValuesForTests(args.TestDirectory)
diags = diags.Append(variableDiags)
variables, variableDiags := c.collectVariableValues()
diags = diags.Append(variableDiags)
if variableDiags.HasErrors() {
view.Diagnostics(nil, nil, diags)
return 1
}
opts, err := c.contextOpts()
if err != nil {
diags = diags.Append(err)
view.Diagnostics(nil, nil, diags)
return 1
}
// Print out all the diagnostics we have from the setup. These will just be
// warnings, and we want them out of the way before we start the actual
// testing.
view.Diagnostics(nil, nil, diags)
args := preparation.Args
view := preparation.View
config := preparation.Config
variables := preparation.Variables
testVariables := preparation.TestVariables
opts := preparation.Opts
// We have two levels of interrupt here. A 'stop' and a 'cancel'. A 'stop'
// is a soft request to stop. We'll finish the current test, do the tidy up,
@@ -222,7 +158,8 @@ func (c *TestCommand) Run(rawArgs []string) int {
}
} else {
localRunner := &local.TestSuiteRunner{
Config: config,
BackendFactory: backendInit.Backend,
Config: config,
// The GlobalVariables are loaded from the
// main configuration directory
// The GlobalTestVariables are loaded from the
@@ -260,7 +197,7 @@ func (c *TestCommand) Run(rawArgs []string) int {
defer stop()
defer cancel()
status, testDiags = runner.Test()
status, testDiags = runner.Test(c.AllowExperimentalFeatures)
}()
// Wait for the operation to complete, or for an interrupt to occur.
@@ -318,3 +255,173 @@ func (c *TestCommand) Run(rawArgs []string) int {
}
return 0
}
type TestRunnerSetup struct {
Args *arguments.Test
View views.Test
Config *configs.Config
Variables map[string]backendrun.UnparsedVariableValue
TestVariables map[string]backendrun.UnparsedVariableValue
Opts *terraform.ContextOpts
}
func (m *Meta) setupTestExecution(mode moduletest.CommandMode, command string, rawArgs []string) (preparation TestRunnerSetup, diags tfdiags.Diagnostics) {
common, rawArgs := arguments.ParseView(rawArgs)
m.View.Configure(common)
var moreDiags tfdiags.Diagnostics
// Since we build the colorizer for the cloud runner outside the views
// package we need to propagate our no-color setting manually. Once the
// cloud package is fully migrated over to the new streams IO we should be
// able to remove this.
m.color = !common.NoColor
m.Color = m.color
preparation.Args, moreDiags = arguments.ParseTest(rawArgs)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
m.View.Diagnostics(diags)
m.View.HelpPrompt(command)
return
}
if preparation.Args.Repair && mode != moduletest.CleanupMode {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid command mode",
"The -repair flag is only valid for the 'test cleanup' command."))
m.View.Diagnostics(diags)
return preparation, diags
}
m.parallelism = preparation.Args.OperationParallelism
view := views.NewTest(preparation.Args.ViewType, m.View)
preparation.View = view
// EXPERIMENTAL: maybe enable deferred actions
if !m.AllowExperimentalFeatures && preparation.Args.DeferralAllowed {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to parse command-line flags",
"The -allow-deferral flag is only valid in experimental builds of Terraform.",
))
view.Diagnostics(nil, nil, diags)
return
}
// The specified testing directory must be a relative path, and it must
// point to a directory that is a descendant of the configuration directory.
if !filepath.IsLocal(preparation.Args.TestDirectory) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid testing directory",
"The testing directory must be a relative path pointing to a directory local to the configuration directory."))
view.Diagnostics(nil, nil, diags)
return
}
preparation.Config, moreDiags = m.loadConfigWithTests(".", preparation.Args.TestDirectory)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
view.Diagnostics(nil, nil, diags)
return
}
// Per file, ensure backends:
// * aren't reused
// * are valid types
var backendDiags tfdiags.Diagnostics
for _, tf := range preparation.Config.Module.Tests {
bucketHashes := make(map[int]string)
// Use an ordered list of backends, so that errors are raised by 2nd+ time
// that a backend config is used in a file.
for _, bc := range orderBackendsByDeclarationLine(tf.BackendConfigs) {
f := backendInit.Backend(bc.Backend.Type)
if f == nil {
detail := fmt.Sprintf("There is no backend type named %q.", bc.Backend.Type)
if msg, removed := backendInit.RemovedBackends[bc.Backend.Type]; removed {
detail = msg
}
backendDiags = backendDiags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported backend type",
Detail: detail,
Subject: &bc.Backend.TypeRange,
})
continue
}
b := f()
schema := b.ConfigSchema()
hash := bc.Backend.Hash(schema)
if runName, exists := bucketHashes[hash]; exists {
// This backend's been encountered before
backendDiags = backendDiags.Append(
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Repeat use of the same backend block",
Detail: fmt.Sprintf("The run %q contains a backend configuration that's already been used in run %q. Sharing the same backend configuration between separate runs will result in conflicting state updates.", bc.Run.Name, runName),
Subject: bc.Backend.TypeRange.Ptr(),
},
)
continue
}
bucketHashes[bc.Backend.Hash(schema)] = bc.Run.Name
}
}
diags = diags.Append(backendDiags)
if backendDiags.HasErrors() {
view.Diagnostics(nil, nil, diags)
return
}
// Users can also specify variables via the command line, so we'll parse
// all that here.
var items []arguments.FlagNameValue
for _, variable := range preparation.Args.Vars.All() {
items = append(items, arguments.FlagNameValue{
Name: variable.Name,
Value: variable.Value,
})
}
m.variableArgs = arguments.FlagNameValueSlice{Items: &items}
// Collect variables for "terraform test"
preparation.TestVariables, moreDiags = m.collectVariableValuesForTests(preparation.Args.TestDirectory)
diags = diags.Append(moreDiags)
preparation.Variables, moreDiags = m.collectVariableValues()
diags = diags.Append(moreDiags)
if diags.HasErrors() {
view.Diagnostics(nil, nil, diags)
return
}
opts, err := m.contextOpts()
if err != nil {
diags = diags.Append(err)
view.Diagnostics(nil, nil, diags)
return
}
preparation.Opts = opts
// Print out all the diagnostics we have from the setup. These will just be
// warnings, and we want them out of the way before we start the actual
// testing.
view.Diagnostics(nil, nil, diags)
return
}
// orderBackendsByDeclarationLine takes in a map of state keys to backend configs and returns a list of
// those backend configs, sorted by the line their declaration range starts on. This allows identification
// of the 2nd+ time that a backend configuration is used in the same file.
func orderBackendsByDeclarationLine(backendConfigs map[string]configs.RunBlockBackend) []configs.RunBlockBackend {
bcs := slices.Collect(maps.Values(backendConfigs))
sort.Slice(bcs, func(i, j int) bool {
return bcs[i].Run.DeclRange.Start.Line < bcs[j].Run.DeclRange.Start.Line
})
return bcs
}

@@ -0,0 +1,145 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package command
import (
"context"
"strings"
"time"
backendInit "github.com/hashicorp/terraform/internal/backend/init"
"github.com/hashicorp/terraform/internal/backend/local"
"github.com/hashicorp/terraform/internal/logging"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// TestCleanupCommand is a command that cleans up left-over resources created
// during Terraform test runs. It basically runs the test command in cleanup mode.
type TestCleanupCommand struct {
Meta
}
func (c *TestCleanupCommand) Help() string {
helpText := `
Usage: terraform [global options] test cleanup [options]
Cleans up left-over resources in states that were created during Terraform test runs.
By default, this command ignores the skip_cleanup attributes in the manifest
file. Use the -repair flag to override this behavior, which will ensure that
resources that were intentionally left-over are exempt from cleanup.
Options:
-repair Overrides the skip_cleanup attribute in the manifest
file and attempts to clean up all resources.
-no-color If specified, output won't contain any color.
-verbose Print detailed output during the cleanup process.
`
return strings.TrimSpace(helpText)
}
func (c *TestCleanupCommand) Synopsis() string {
return "Clean up left-over resources created during Terraform test runs"
}
func (c *TestCleanupCommand) Run(rawArgs []string) int {
setup, diags := c.setupTestExecution(moduletest.CleanupMode, "test cleanup", rawArgs)
if diags.HasErrors() {
return 1
}
args := setup.Args
view := setup.View
config := setup.Config
variables := setup.Variables
testVariables := setup.TestVariables
opts := setup.Opts
// We have two levels of interrupt here. A 'stop' and a 'cancel'. A 'stop'
// is a soft request to stop. We'll finish the current test, do the tidy up,
// but then skip all remaining tests and run blocks. A 'cancel' is a hard
// request to stop now. We'll cancel the current operation immediately
// even if it's a delete operation, and we won't clean up any infrastructure
// if we're halfway through a test. We'll print details explaining what was
// stopped so the user can do their best to recover from it.
runningCtx, done := context.WithCancel(context.Background())
stopCtx, stop := context.WithCancel(runningCtx)
cancelCtx, cancel := context.WithCancel(context.Background())
runner := &local.TestSuiteRunner{
BackendFactory: backendInit.Backend,
Config: config,
// The GlobalVariables are loaded from the
// main configuration directory
// The GlobalTestVariables are loaded from the
// test directory
GlobalVariables: variables,
GlobalTestVariables: testVariables,
TestingDirectory: args.TestDirectory,
Opts: opts,
View: view,
Stopped: false,
Cancelled: false,
StoppedCtx: stopCtx,
CancelledCtx: cancelCtx,
Filter: args.Filter,
Verbose: args.Verbose,
Repair: args.Repair,
CommandMode: moduletest.CleanupMode,
}
var testDiags tfdiags.Diagnostics
go func() {
defer logging.PanicHandler()
defer done()
defer stop()
defer cancel()
_, testDiags = runner.Test(c.Meta.AllowExperimentalFeatures)
}()
// Wait for the operation to complete, or for an interrupt to occur.
select {
case <-c.ShutdownCh:
// Nice request to be cancelled.
view.Interrupted()
runner.Stop()
stop()
select {
case <-c.ShutdownCh:
// The user pressed it again, now we have to get it to stop as
// fast as possible.
view.FatalInterrupt()
runner.Cancel()
cancel()
waitTime := 5 * time.Second
// We'll wait 5 seconds for this operation to finish now, regardless
// of whether it finishes successfully or not.
select {
case <-runningCtx.Done():
case <-time.After(waitTime):
}
case <-runningCtx.Done():
// The application finished nicely after the request was stopped.
}
case <-runningCtx.Done():
// tests finished normally with no interrupts.
}
view.Diagnostics(nil, nil, testDiags)
return 0
}

File diff suppressed because it is too large

@ -0,0 +1,4 @@
resource "test_resource" "a" {
id = "12345"
value = "foobar"
}

@ -0,0 +1,4 @@
run "test" {
backend "local" {}
skip_cleanup = false
}

@ -0,0 +1,4 @@
resource "test_resource" "a" {
id = "12345"
value = "foobar"
}

@ -0,0 +1,4 @@
run "test" {
backend "local" {}
skip_cleanup = true
}

@ -0,0 +1,17 @@
variable "id" {
type = string
}
variable "destroy_fail" {
type = bool
default = false
}
resource "test_resource" "resource" {
value = var.id
destroy_fail = var.destroy_fail
}
output "id" {
value = test_resource.resource.id
}

@ -0,0 +1,26 @@
run "test" {
variables {
id = "test"
}
}
run "test_two" {
skip_cleanup = true # This will leave behind the state
variables {
id = "test_two"
}
}
run "test_three" {
state_key = "state_three"
variables {
id = "test_three"
destroy_fail = true // This will fail to destroy and leave behind the state
}
}
run "test_four" {
variables {
id = "test_four"
}
}

@ -7,4 +7,4 @@ resource "test_resource" "resource" {
resource "test_resource" "another" {
value = "Hello, world!"
destroy_fail = true
}
}

@ -0,0 +1,10 @@
variable "input" {
type = string
}
resource "test_resource" "a" {
value = var.input
}
resource "test_resource" "c" {}

@ -0,0 +1,9 @@
# The "foobar" backend does not exist and isn't a removed backend either
run "test_invalid_backend" {
variables {
input = "foobar"
}
backend "foobar" {
}
}

@ -0,0 +1,10 @@
variable "input" {
type = string
}
resource "test_resource" "a" {
value = var.input
}
resource "test_resource" "c" {}

@ -0,0 +1,9 @@
# The "etcd" backend was removed in Terraform 1.3 and is absent from later versions
run "test_removed_backend" {
variables {
input = "foobar"
}
backend "etcd" {
}
}

@ -0,0 +1,10 @@
variable "input" {
type = string
}
resource "test_resource" "a" {
value = var.input
}
resource "test_resource" "c" {}

@ -0,0 +1,9 @@
variable "input" {
type = string
}
module "foobar" {
source = "./child-module"
input = "foobar"
}

@ -0,0 +1,22 @@
# The "state/terraform.tfstate" local backend is used with the implicit internal state "./child-module"
run "test_1" {
module {
source = "./child-module"
}
variables {
input = "foobar"
}
backend "local" {
path = "state/terraform.tfstate"
}
}
# The "state/terraform.tfstate" local backend is used with the implicit internal state "" (empty string == root module under test)
run "test_2" {
backend "local" {
path = "state/terraform.tfstate"
}
}

@ -0,0 +1,10 @@
variable "input" {
type = string
}
resource "test_resource" "a" {
value = var.input
}
resource "test_resource" "c" {}

@ -0,0 +1,15 @@
# The "state/terraform.tfstate" local backend is used with the user-supplied internal state "foobar-1"
run "test_1" {
state_key = "foobar-1"
backend "local" {
path = "state/terraform.tfstate"
}
}
# The "state/terraform.tfstate" local backend is used with the user-supplied internal state "foobar-2"
run "test_2" {
state_key = "foobar-2"
backend "local" {
path = "state/terraform.tfstate"
}
}

@ -0,0 +1,11 @@
variable "id" {
type = string
}
resource "test_resource" "resource" {
value = var.id
}
output "id" {
value = test_resource.resource.id
}

@ -0,0 +1,31 @@
run "test" {
variables {
id = "test"
}
}
run "test_two" {
skip_cleanup = true
variables {
id = "test_two"
}
}
run "test_three" {
skip_cleanup = true
variables {
id = "test_three"
}
}
run "test_four" {
variables {
id = "test_four"
}
}
run "test_five" {
variables {
id = "test_five"
}
}

@ -0,0 +1,11 @@
variable "id" {
type = string
}
resource "test_resource" "resource" {
value = var.id
}
output "id" {
value = test_resource.resource.id
}

@ -0,0 +1,7 @@
run "test" {
skip_cleanup = true
variables {
id = "foo"
}
}

@ -0,0 +1,20 @@
variable "id" {
type = string
}
variable "unused" {
type = string
default = "unused"
}
resource "test_resource" "resource" {
value = var.id
}
output "id" {
value = test_resource.resource.id
}
output "unused" {
value = var.unused
}

@ -0,0 +1,23 @@
run "test" {
variables {
id = "test"
unused = "unused"
}
}
run "test_two" {
state_key = "state"
skip_cleanup = true
variables {
id = "test_two"
// The output state data for this dependency will also be left behind, but the actual
// resource will have been destroyed by the cleanup step of test_three.
unused = run.test.unused
}
}
run "test_three" {
variables {
id = "test_three"
}
}

@ -0,0 +1,11 @@
variable "id" {
type = string
}
resource "test_resource" "resource" {
value = var.id
}
output "id" {
value = test_resource.resource.id
}

@ -0,0 +1,34 @@
test {
skip_cleanup = true
}
run "test" {
variables {
id = "test"
}
}
run "test_two" {
variables {
id = "test_two"
}
}
run "test_three" {
variables {
id = "test_three"
}
}
run "test_four" {
variables {
id = "test_four"
}
}
run "test_five" {
skip_cleanup = false # This will be cleaned up, and test_four will not
variables {
id = "test_five"
}
}

@ -0,0 +1,18 @@
variable "input" {
type = string
}
resource "test_resource" "foobar" {
id = "12345"
# Use a deterministic ID because this fixture tests behavior when there's no prior state
# i.e. the ID would otherwise change on every test run
value = var.input
}
output "test_resource_id" {
value = test_resource.foobar.id
}
output "supplied_input_value" {
value = var.input
}

@ -0,0 +1,15 @@
run "setup_pet_name" {
backend "local" {
// Use default path
}
variables {
input = "value-from-run-that-controls-backend"
}
}
run "edit_input" {
variables {
input = "this-value-should-not-enter-state"
}
}

@ -0,0 +1,18 @@
variable "input" {
type = string
}
resource "test_resource" "foobar" {
# No ID set here
# We should be able to assert about its value as it will be loaded from state
# by the backend block in the run block
value = var.input
}
output "test_resource_id" {
value = test_resource.foobar.id
}
output "supplied_input_value" {
value = var.input
}

@ -0,0 +1,15 @@
run "setup_pet_name" {
backend "local" {
// Use default path
}
variables {
input = "value-from-run-that-controls-backend"
}
}
run "edit_input" {
variables {
input = "this-value-should-not-enter-state"
}
}

@ -0,0 +1,41 @@
{
"version": 4,
"terraform_version": "1.13.0",
"serial": 1,
"lineage": "c1f962ec-7cf6-281e-1eb8-eed10c450e16",
"outputs": {
"input": {
"value": "value-from-run-that-controls-backend",
"type": "string"
},
"test_resource_id": {
"value": "53d69028-477d-7ba0-83c3-ff3807e3756f",
"type": "string"
}
},
"resources": [
{
"mode": "managed",
"type": "test_resource",
"name": "foobar",
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"create_wait_seconds": null,
"destroy_fail": false,
"destroy_wait_seconds": null,
"id": "53d69028-477d-7ba0-83c3-ff3807e3756f",
"interrupt_count": null,
"value": null,
"write_only": null
},
"sensitive_attributes": [],
"identity_schema_version": 0
}
]
}
],
"check_results": null
}

@ -71,7 +71,7 @@ type Test interface {
// addition, this function prints additional details about the current
// operation alongside the current state as the state will be missing newly
// created resources that also need to be handled manually.
FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, states map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc)
FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, states map[string]*states.State, created []*plans.ResourceInstanceChangeSrc)
// TFCStatusUpdate prints a reassuring update, letting users know the latest
// status of their ongoing remote test run.
@ -136,11 +136,15 @@ func (t *TestHuman) Conclusion(suite *moduletest.Suite) {
t.view.streams.Print(t.view.colorize.Color("[red]Failure![reset]"))
}
t.view.streams.Printf(" %d passed, %d failed", counts[moduletest.Pass], counts[moduletest.Fail]+counts[moduletest.Error])
if counts[moduletest.Skip] > 0 {
t.view.streams.Printf(", %d skipped.\n", counts[moduletest.Skip])
if suite.CommandMode != moduletest.CleanupMode {
t.view.streams.Printf(" %d passed, %d failed", counts[moduletest.Pass], counts[moduletest.Fail]+counts[moduletest.Error])
if counts[moduletest.Skip] > 0 {
t.view.streams.Printf(", %d skipped.\n", counts[moduletest.Skip])
} else {
t.view.streams.Println(".")
}
} else {
t.view.streams.Println(".")
t.view.streams.Println()
}
}
@ -276,7 +280,8 @@ func (t *TestHuman) DestroySummary(diags tfdiags.Diagnostics, run *moduletest.Ru
}
t.Diagnostics(run, file, diags)
if state.HasManagedResourceInstanceObjects() {
skipCleanup := run != nil && run.Config.SkipCleanup
if state.HasManagedResourceInstanceObjects() && !skipCleanup {
// FIXME: This message says "resources" but this is actually a list
// of resource instance objects.
t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform left the following resources in state after executing %s, and they need to be cleaned up manually:\n", identifier), t.view.errorColumns()))
@ -302,12 +307,12 @@ func (t *TestHuman) FatalInterrupt() {
t.view.streams.Eprintln(format.WordWrap(fatalInterrupt, t.view.errorColumns()))
}
func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc) {
func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[string]*states.State, created []*plans.ResourceInstanceChangeSrc) {
t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform was interrupted while executing %s, and may not have performed the expected cleanup operations.\n", file.Name), t.view.errorColumns()))
// Print out the main state first, this is the state that isn't associated
// with a run block.
if state, exists := existingStates[nil]; exists && !state.Empty() {
if state, exists := existingStates[configs.TestMainStateIdentifier]; exists && !state.Empty() {
t.view.streams.Eprint(format.WordWrap("\nTerraform has already created the following resources from the module under test:\n", t.view.errorColumns()))
for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) {
if resource.DeposedKey != states.NotDeposed {
@ -318,14 +323,12 @@ func (t *TestHuman) FatalInterruptSummary(run *moduletest.Run, file *moduletest.
}
}
// Then print out the other states in order.
for _, run := range file.Runs {
state, exists := existingStates[run]
if !exists || state.Empty() {
for key, state := range existingStates {
if key == configs.TestMainStateIdentifier || state.Empty() {
continue
}
t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform has already created the following resources for %q from %q:\n", run.Name, run.Config.Module.Source), t.view.errorColumns()))
t.view.streams.Eprint(format.WordWrap(fmt.Sprintf("\nTerraform has already created the following resources for %q:\n", key), t.view.errorColumns()))
for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) {
if resource.DeposedKey != states.NotDeposed {
t.view.streams.Eprintf(" - %s (%s)\n", resource.ResourceInstance, resource.DeposedKey)
@ -445,11 +448,15 @@ func (t *TestJSON) Conclusion(suite *moduletest.Suite) {
message.WriteString("Failure!")
}
message.WriteString(fmt.Sprintf(" %d passed, %d failed", summary.Passed, summary.Failed+summary.Errored))
if summary.Skipped > 0 {
message.WriteString(fmt.Sprintf(", %d skipped.", summary.Skipped))
} else {
message.WriteString(".")
if suite.CommandMode != moduletest.CleanupMode {
// don't print test summaries during cleanup mode.
message.WriteString(fmt.Sprintf(" %d passed, %d failed", summary.Passed, summary.Failed+summary.Errored))
if summary.Skipped > 0 {
message.WriteString(fmt.Sprintf(", %d skipped.", summary.Skipped))
} else {
message.WriteString(".")
}
}
}
@ -604,7 +611,8 @@ func (t *TestJSON) Run(run *moduletest.Run, file *moduletest.File, progress modu
}
func (t *TestJSON) DestroySummary(diags tfdiags.Diagnostics, run *moduletest.Run, file *moduletest.File, state *states.State) {
if state.HasManagedResourceInstanceObjects() {
skipCleanup := run != nil && run.Config.SkipCleanup
if state.HasManagedResourceInstanceObjects() && !skipCleanup {
cleanup := json.TestFileCleanup{}
for _, resource := range addrs.SetSortedNatural(state.AllManagedResourceInstanceObjectAddrs()) {
cleanup.FailedResources = append(cleanup.FailedResources, json.TestFailedResource{
@ -652,13 +660,13 @@ func (t *TestJSON) FatalInterrupt() {
t.view.Log(fatalInterrupt)
}
func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[*moduletest.Run]*states.State, created []*plans.ResourceInstanceChangeSrc) {
func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.File, existingStates map[string]*states.State, created []*plans.ResourceInstanceChangeSrc) {
message := json.TestFatalInterrupt{
States: make(map[string][]json.TestFailedResource),
}
for run, state := range existingStates {
for key, state := range existingStates {
if state.Empty() {
continue
}
@ -671,10 +679,10 @@ func (t *TestJSON) FatalInterruptSummary(run *moduletest.Run, file *moduletest.F
})
}
if run == nil {
if key == configs.TestMainStateIdentifier {
message.State = resources
} else {
message.States[run.Name] = resources
message.States[key] = resources
}
}

@ -480,7 +480,7 @@ func TestTestHuman_Run(t *testing.T) {
StdErr string
}{
"pass": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
Progress: moduletest.Complete,
StdOut: " run \"run_block\"... pass\n",
},
@ -502,19 +502,19 @@ some warning happened during this test
},
"pending": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pending},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pending},
Progress: moduletest.Complete,
StdOut: " run \"run_block\"... pending\n",
},
"skip": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Skip},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Skip},
Progress: moduletest.Complete,
StdOut: " run \"run_block\"... skip\n",
},
"fail": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Fail},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Fail},
Progress: moduletest.Complete,
StdOut: " run \"run_block\"... fail\n",
},
@ -542,7 +542,7 @@ other details
},
"error": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Error},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Error},
Progress: moduletest.Complete,
StdOut: " run \"run_block\"... fail\n",
},
@ -725,15 +725,15 @@ resource "test_resource" "creating" {
// These next three tests should print nothing, as we only report on
// progress complete.
"progress_starting": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
Progress: moduletest.Starting,
},
"progress_running": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
Progress: moduletest.Running,
},
"progress_teardown": {
Run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
Run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
Progress: moduletest.TearDown,
},
}
@ -822,7 +822,7 @@ this time it is very bad
diags: tfdiags.Diagnostics{
tfdiags.Sourceless(tfdiags.Error, "first error", "this time it is very bad"),
},
run: &moduletest.Run{Name: "run_block"},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}},
file: &moduletest.File{Name: "main.tftest.hcl"},
state: states.NewState(),
stderr: `Terraform encountered an error destroying resources created while executing
@ -994,13 +994,13 @@ main.tftest.hcl, and they need to be cleaned up manually:
func TestTestHuman_FatalInterruptSummary(t *testing.T) {
tcs := map[string]struct {
states map[*moduletest.Run]*states.State
states map[string]*states.State
run *moduletest.Run
created []*plans.ResourceInstanceChangeSrc
want string
}{
"no_state_only_plan": {
states: make(map[*moduletest.Run]*states.State),
states: make(map[string]*states.State),
run: &moduletest.Run{
Config: &configs.TestRun{},
Name: "run_block",
@ -1048,8 +1048,8 @@ Terraform was in the process of creating the following resources for
`,
},
"file_state_no_plan": {
states: map[*moduletest.Run]*states.State{
nil: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -1091,15 +1091,8 @@ test:
`,
},
"run_states_no_plan": {
states: map[*moduletest.Run]*states.State{
&moduletest.Run{
Name: "setup_block",
Config: &configs.TestRun{
Module: &configs.TestRunModuleCall{
Source: addrs.ModuleSourceLocal("../setup"),
},
},
}: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
"../setup": states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -1134,22 +1127,14 @@ test:
Terraform was interrupted while executing main.tftest.hcl, and may not have
performed the expected cleanup operations.
Terraform has already created the following resources for "setup_block" from
"../setup":
Terraform has already created the following resources for "../setup":
- test_instance.one
- test_instance.two
`,
},
"all_states_with_plan": {
states: map[*moduletest.Run]*states.State{
&moduletest.Run{
Name: "setup_block",
Config: &configs.TestRun{
Module: &configs.TestRunModuleCall{
Source: addrs.ModuleSourceLocal("../setup"),
},
},
}: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
"../setup": states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -1178,7 +1163,7 @@ Terraform has already created the following resources for "setup_block" from
&states.ResourceInstanceObjectSrc{},
addrs.AbsProviderConfig{})
}),
nil: states.BuildState(func(state *states.SyncState) {
configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -1253,8 +1238,7 @@ test:
- test_instance.one
- test_instance.two
Terraform has already created the following resources for "setup_block" from
"../setup":
Terraform has already created the following resources for "../setup":
- test_instance.setup_one
- test_instance.setup_two
@ -1272,15 +1256,6 @@ Terraform was in the process of creating the following resources for
file := &moduletest.File{
Name: "main.tftest.hcl",
Runs: func() []*moduletest.Run {
var runs []*moduletest.Run
for run := range tc.states {
if run != nil {
runs = append(runs, run)
}
}
return runs
}(),
}
view.FatalInterruptSummary(tc.run, file, tc.states, tc.created)
@ -1973,7 +1948,7 @@ func TestTestJSON_DestroySummary(t *testing.T) {
},
"state_from_run": {
file: &moduletest.File{Name: "main.tftest.hcl"},
run: &moduletest.Run{Name: "run_block"},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}},
state: states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.Resource{
@ -2380,7 +2355,7 @@ func TestTestJSON_Run(t *testing.T) {
want []map[string]interface{}
}{
"starting": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
progress: moduletest.Starting,
want: []map[string]interface{}{
{
@ -2401,7 +2376,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"running": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
progress: moduletest.Running,
elapsed: 2024,
want: []map[string]interface{}{
@ -2423,7 +2398,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"teardown": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
progress: moduletest.TearDown,
want: []map[string]interface{}{
{
@ -2444,7 +2419,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"pass": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Pass},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pass},
progress: moduletest.Complete,
want: []map[string]interface{}{
{
@ -2503,7 +2478,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"pending": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Pending},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Pending},
progress: moduletest.Complete,
want: []map[string]interface{}{
{
@ -2524,7 +2499,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"skip": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Skip},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Skip},
progress: moduletest.Complete,
want: []map[string]interface{}{
{
@ -2545,7 +2520,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"fail": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Fail},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Fail},
progress: moduletest.Complete,
want: []map[string]interface{}{
{
@ -2620,7 +2595,7 @@ func TestTestJSON_Run(t *testing.T) {
},
"error": {
run: &moduletest.Run{Name: "run_block", Status: moduletest.Error},
run: &moduletest.Run{Name: "run_block", Config: &configs.TestRun{}, Status: moduletest.Error},
progress: moduletest.Complete,
want: []map[string]interface{}{
{
@ -2973,12 +2948,12 @@ func TestTestJSON_Run(t *testing.T) {
func TestTestJSON_FatalInterruptSummary(t *testing.T) {
tcs := map[string]struct {
states map[*moduletest.Run]*states.State
states map[string]*states.State
changes []*plans.ResourceInstanceChangeSrc
want []map[string]interface{}
}{
"no_state_only_plan": {
states: make(map[*moduletest.Run]*states.State),
states: make(map[string]*states.State),
changes: []*plans.ResourceInstanceChangeSrc{
{
Addr: addrs.AbsResourceInstance{
@ -3029,8 +3004,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
},
},
"file_state_no_plan": {
states: map[*moduletest.Run]*states.State{
nil: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -3083,8 +3058,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
},
},
"run_states_no_plan": {
states: map[*moduletest.Run]*states.State{
&moduletest.Run{Name: "setup_block"}: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
"../setup": states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -3124,7 +3099,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
"@testrun": "run_block",
"test_interrupt": map[string]interface{}{
"states": map[string]interface{}{
"setup_block": []interface{}{
"../setup": []interface{}{
map[string]interface{}{
"instance": "test_instance.one",
},
@ -3139,8 +3114,8 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
},
},
"all_states_with_plan": {
states: map[*moduletest.Run]*states.State{
&moduletest.Run{Name: "setup_block"}: states.BuildState(func(state *states.SyncState) {
states: map[string]*states.State{
"../setup": states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -3169,7 +3144,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
&states.ResourceInstanceObjectSrc{},
addrs.AbsProviderConfig{})
}),
nil: states.BuildState(func(state *states.SyncState) {
configs.TestMainStateIdentifier: states.BuildState(func(state *states.SyncState) {
state.SetResourceInstanceCurrent(
addrs.AbsResourceInstance{
Module: addrs.RootModuleInstance,
@ -3248,7 +3223,7 @@ func TestTestJSON_FatalInterruptSummary(t *testing.T) {
},
},
"states": map[string]interface{}{
"setup_block": []interface{}{
"../setup": []interface{}{
map[string]interface{}{
"instance": "test_instance.setup_one",
},

@ -14,7 +14,7 @@ import (
// "Working directory" is unfortunately a slight misnomer, because non-default
// options can potentially stretch the definition such that multiple working
// directories end up appearing to share a data directory, or other similar
// anomolies, but we continue to use this terminology both for historical
// anomalies, but we continue to use this terminology both for historical
// reasons and because it reflects the common case without any special
// overrides.
//
@ -135,6 +135,12 @@ func (d *Dir) DataDir() string {
return d.dataDir
}
// TestDataDir returns the path where the receiver keeps settings
// and artifacts related to terraform tests.
func (d *Dir) TestDataDir() string {
return filepath.Join(d.dataDir, "test")
}
// ensureDataDir creates the data directory and all of the necessary parent
// directories that lead to it, if they don't already exist.
//

@ -10,8 +10,9 @@ import (
"github.com/zclconf/go-cty/cty"
)
// Backend represents a "backend" block inside a "terraform" block in a module
// or file.
// Backend represents a "backend" block
// This could be inside a "terraform" block in a module
// or file, or in a "run" block in a .tftest.hcl file.
type Backend struct {
Type string
Config hcl.Body

@ -43,7 +43,7 @@ func (p *Parser) LoadTestFile(path string) (*TestFile, hcl.Diagnostics) {
return nil, diags
}
test, testDiags := loadTestFile(body)
test, testDiags := loadTestFile(body, p.allowExperiments)
diags = append(diags, testDiags...)
return test, diags
}

@ -126,6 +126,8 @@ func TestParserLoadConfigDirSuccess(t *testing.T) {
func TestParserLoadConfigDirWithTests(t *testing.T) {
directories := []string{
"testdata/valid-modules/with-tests",
"testdata/valid-modules/with-tests-backend",
"testdata/valid-modules/with-tests-same-backend-across-files",
"testdata/valid-modules/with-tests-expect-failures",
"testdata/valid-modules/with-tests-nested",
"testdata/valid-modules/with-tests-very-nested",
@ -142,6 +144,7 @@ func TestParserLoadConfigDirWithTests(t *testing.T) {
}
parser := NewParser(nil)
parser.AllowLanguageExperiments(true)
mod, diags := parser.LoadConfigDir(directory, MatchTestFiles(testDirectory))
if len(diags) > 0 { // We don't want any warnings or errors.
t.Errorf("unexpected diagnostics")
@ -300,6 +303,24 @@ func TestParserLoadTestFiles_Invalid(t *testing.T) {
"duplicate_file_config.tftest.hcl:3,1-5: Multiple \"test\" blocks; This test file already has a \"test\" block defined at duplicate_file_config.tftest.hcl:1,1-5.",
"duplicate_file_config.tftest.hcl:5,1-5: Multiple \"test\" blocks; This test file already has a \"test\" block defined at duplicate_file_config.tftest.hcl:1,1-5.",
},
"duplicate_backend_blocks_in_test": {
"duplicate_backend_blocks_in_test.tftest.hcl:15,3-18: Duplicate backend blocks; The run \"test\" already uses an internal state file that's loaded by a backend in the run \"setup\". Please ensure that a backend block is only in the first apply run block for a given internal state file.",
},
"duplicate_backend_blocks_in_run": {
"duplicate_backend_blocks_in_run.tftest.hcl:6,3-18: Duplicate backend blocks; A backend block has already been defined inside the run \"setup\" at duplicate_backend_blocks_in_run.tftest.hcl:3,3-18.",
},
"backend_block_in_plan_run": {
"backend_block_in_plan_run.tftest.hcl:6,3-18: Invalid backend block; A backend block can only be used in the first apply run block for a given internal state file. It cannot be included in a block to run a plan command.",
},
"backend_block_in_second_apply_run": {
"backend_block_in_second_apply_run.tftest.hcl:10,3-18: Invalid backend block; The run \"test_2\" cannot load in state using a backend block, because internal state has already been created by an apply command in run \"test_1\". Backend blocks can only be present in the first apply command for a given internal state.",
},
"non_state_storage_backend_in_test": {
"non_state_storage_backend_in_test.tftest.hcl:4,3-19: Invalid backend block; The \"remote\" backend type cannot be used in the backend block in run \"test\" at non_state_storage_backend_in_test.tftest.hcl:4,3-19. Only state storage backends can be used in a test run.",
},
"skip_cleanup_after_backend": {
"skip_cleanup_after_backend.tftest.hcl:13,3-15: Duplicate \"skip_cleanup\" block; The run \"skip_cleanup\" has a skip_cleanup attribute set, but shares state with an earlier run \"backend\" that has a backend defined. The later run takes precedence, but the backend will still be used to manage this state.",
},
}
for name, expected := range tcs {
@ -312,6 +333,7 @@ func TestParserLoadTestFiles_Invalid(t *testing.T) {
parser := testParser(map[string]string{
fmt.Sprintf("%s.tftest.hcl", name): string(src),
})
parser.AllowLanguageExperiments(true)
_, actual := parser.LoadTestFile(fmt.Sprintf("%s.tftest.hcl", name))
assertExactDiagnostics(t, actual, expected)

@ -72,6 +72,11 @@ type TestFile struct {
// test.
Providers map[string]*Provider
// BackendConfigs is a map of state keys to structs that contain backend
// configuration. This should be used to set the state for a given state key
// at the start of a test command.
BackendConfigs map[string]RunBlockBackend
// Overrides contains any specific overrides that should be applied for this
// test outside any mock providers.
Overrides addrs.Map[addrs.Targetable, *Override]
@ -90,6 +95,9 @@ type TestFileConfig struct {
// Parallel: Indicates if test runs should be executed in parallel.
Parallel bool
// SkipCleanup: Indicates if the test runs should skip the cleanup phase.
SkipCleanup bool
DeclRange hcl.Range
}
@ -170,6 +178,12 @@ type TestRun struct {
// will be executed in parallel with other test runs.
Parallel bool
Backend *Backend
// SkipCleanup: Indicates if the test run should skip the cleanup phase.
SkipCleanup bool
SkipCleanupRange *hcl.Range
NameDeclRange hcl.Range
VariablesDeclRange hcl.Range
DeclRange hcl.Range
@ -338,11 +352,24 @@ type TestRunOptions struct {
DeclRange hcl.Range
}
func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) {
// RunBlockBackend records a backend block and which run block it was parsed
// from.
type RunBlockBackend struct {
Backend *Backend
// Run is the TestRun containing the backend block for this Backend.
// This is used in diagnostics to help avoid duplicate backends for a given
// internal state file or duplicated use of the same backend for multiple
// internal states.
Run *TestRun
}
func loadTestFile(body hcl.Body, experimentsAllowed bool) (*TestFile, hcl.Diagnostics) {
var diags hcl.Diagnostics
tf := &TestFile{
VariableDefinitions: make(map[string]*Variable),
Providers: make(map[string]*Provider),
BackendConfigs: make(map[string]RunBlockBackend),
Overrides: addrs.MakeMap[addrs.Targetable, *Override](),
}
@ -354,7 +381,7 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) {
diags = append(diags, contentDiags...)
var cDiags hcl.Diagnostics
tf.Config, cDiags = decodeFileConfigBlock(configContent)
tf.Config, cDiags = decodeFileConfigBlock(configContent, experimentsAllowed)
diags = append(diags, cDiags...)
if diags.HasErrors() {
return nil, diags
@ -364,11 +391,14 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) {
diags = append(diags, contentDiags...)
runBlockNames := make(map[string]hcl.Range)
skipCleanups := make(map[string]string)
for _, block := range content.Blocks {
switch block.Type {
case "run":
run, runDiags := decodeTestRunBlock(block, tf)
nextRunIndex := len(tf.Runs)
run, runDiags := decodeTestRunBlock(block, tf, experimentsAllowed)
diags = append(diags, runDiags...)
if !runDiags.HasErrors() {
tf.Runs = append(tf.Runs, run)
@ -379,11 +409,71 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) {
Severity: hcl.DiagError,
Summary: "Duplicate \"run\" block names",
Detail: fmt.Sprintf("This test file already has a run block named %s defined at %s.", run.Name, rng),
Subject: block.DefRange.Ptr(),
Subject: run.NameDeclRange.Ptr(),
})
continue
} else {
runBlockNames[run.Name] = run.DeclRange
}
if run.SkipCleanup && run.SkipCleanupRange != nil {
if backend, found := tf.BackendConfigs[run.StateKey]; found {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "Duplicate \"skip_cleanup\" block",
Detail: fmt.Sprintf("The run %q has a skip_cleanup attribute set, but shares state with an earlier run %q that has a backend defined. The later run takes precedence, but the backend will still be used to manage this state.", run.Name, backend.Run.Name),
Subject: run.SkipCleanupRange,
})
} else {
if _, found := skipCleanups[run.StateKey]; found {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "Duplicate \"skip_cleanup\" block",
Detail: fmt.Sprintf("The run %q has a skip_cleanup attribute set, but shares state with an earlier run %q that also has skip_cleanup set. The later run takes precedence, and this attribute is ignored for the earlier run.", run.Name, skipCleanups[run.StateKey]),
Subject: run.SkipCleanupRange,
})
}
skipCleanups[run.StateKey] = run.Name
}
}
if run.Backend != nil {
if existing, exists := tf.BackendConfigs[run.StateKey]; exists {
// then we definitely have two run blocks with the same
// state key trying to load backends
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate backend blocks",
Detail: fmt.Sprintf("The run %q already uses an internal state file that's loaded by a backend in the run %q. Please ensure that a backend block is only in the first apply run block for a given internal state file.", run.Name, existing.Run.Name),
Subject: run.Backend.DeclRange.Ptr(),
})
continue
} else {
// Record the backend block in the test file, under the related state key
tf.BackendConfigs[run.StateKey] = RunBlockBackend{
Backend: run.Backend,
Run: run,
}
}
for ix := range nextRunIndex {
previousRun := tf.Runs[ix]
if previousRun.StateKey != run.StateKey {
continue
}
if previousRun.Command == ApplyTestCommand {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid backend block",
Detail: fmt.Sprintf("The run %q cannot load in state using a backend block, because internal state has already been created by an apply command in run %q. Backend blocks can only be present in the first apply command for a given internal state.", run.Name, previousRun.Name),
Subject: run.Backend.DeclRange.Ptr(),
})
break
}
}
}
runBlockNames[run.Name] = run.DeclRange
case "variable":
variable, variableDiags := decodeVariableBlock(block, false)
@ -527,7 +617,7 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) {
return tf, diags
}
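The first-backend-wins bookkeeping above can be sketched in isolation. This is a minimal, self-contained illustration with toy types; `toyRun` and `firstBackendWins` are invented names for this sketch, not part of Terraform's configs package:

```go
package main

import "fmt"

// Toy stand-ins for the real configs types; every name below is
// illustrative and not part of Terraform's actual packages.
type toyRun struct {
	Name     string
	StateKey string
	Backend  string // non-empty when the run block declares a backend
}

// firstBackendWins records at most one backend per state key, mirroring
// the "first run block owns the backend" rule: later runs that try to
// attach another backend to the same internal state produce an error.
func firstBackendWins(runs []toyRun) (map[string]string, []string) {
	backends := make(map[string]string) // state key -> owning run name
	var errs []string
	for _, run := range runs {
		if run.Backend == "" {
			continue
		}
		if owner, exists := backends[run.StateKey]; exists {
			errs = append(errs, fmt.Sprintf("run %q duplicates the backend already set by run %q", run.Name, owner))
			continue
		}
		backends[run.StateKey] = run.Name
	}
	return backends, errs
}

func main() {
	// Two run blocks sharing the default state key, both with backends:
	// only the first may own the backend.
	_, errs := firstBackendWins([]toyRun{
		{Name: "setup", StateKey: "", Backend: "local"},
		{Name: "test", StateKey: "", Backend: "local"},
	})
	for _, e := range errs {
		fmt.Println(e) // run "test" duplicates the backend already set by run "setup"
	}
}
```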
func decodeFileConfigBlock(fileContent *hcl.BodyContent) (*TestFileConfig, hcl.Diagnostics) {
func decodeFileConfigBlock(fileContent *hcl.BodyContent, experimentsAllowed bool) (*TestFileConfig, hcl.Diagnostics) {
var diags hcl.Diagnostics
// The "test" block is optional, so we just return a nil config if it doesn't exist.
@ -561,10 +651,24 @@ func decodeFileConfigBlock(fileContent *hcl.BodyContent) (*TestFileConfig, hcl.D
diags = append(diags, rawDiags...)
}
if attr, exists := content.Attributes["skip_cleanup"]; exists {
if !experimentsAllowed {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "The skip_cleanup attribute is only available in experimental builds of Terraform.",
Subject: attr.NameRange.Ptr(),
})
}
rawDiags := gohcl.DecodeExpression(attr.Expr, nil, &ret.SkipCleanup)
diags = append(diags, rawDiags...)
}
return ret, diags
}
func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnostics) {
func decodeTestRunBlock(block *hcl.Block, file *TestFile, experimentsAllowed bool) (*TestRun, hcl.Diagnostics) {
var diags hcl.Diagnostics
content, contentDiags := block.Body.Content(testRunBlockSchema)
@ -577,6 +681,7 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos
NameDeclRange: block.LabelRanges[0],
DeclRange: block.DefRange,
Parallel: file.Config != nil && file.Config.Parallel,
SkipCleanup: file.Config != nil && file.Config.SkipCleanup,
}
if !hclsyntax.ValidIdentifier(r.Name) {
@ -588,6 +693,7 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos
})
}
var backendRange *hcl.Range // Stored for validation once all blocks/attrs processed
for _, block := range content.Blocks {
switch block.Type {
case "assert":
@ -697,6 +803,45 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos
}
r.Overrides.Put(subject, override)
}
case "backend":
if !experimentsAllowed {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid block",
Detail: "The backend block is only available within run blocks in experimental builds of Terraform.",
Subject: block.DefRange.Ptr(),
})
}
backend, backendDiags := decodeBackendBlock(block)
diags = append(diags, backendDiags...)
if backend.Type == "remote" {
// "remote" is an enhanced backend rather than a pure state-storage backend
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid backend block",
Detail: fmt.Sprintf("The \"remote\" backend type cannot be used in the backend block in run %q at %s. Only state storage backends can be used in a test run.", r.Name, block.DefRange),
Subject: block.DefRange.Ptr(),
})
continue
}
if r.Backend != nil {
// We've already encountered a backend for this run block
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate backend blocks",
Detail: fmt.Sprintf("A backend block has already been defined inside the run %q at %s.", r.Name, backendRange),
Subject: block.DefRange.Ptr(),
})
continue
}
r.Backend = backend
backendRange = &block.DefRange
// Using a backend implies skipping cleanup for that run
r.SkipCleanup = true
}
}
@ -760,6 +905,42 @@ func decodeTestRunBlock(block *hcl.Block, file *TestFile) (*TestRun, hcl.Diagnos
diags = append(diags, rawDiags...)
}
if r.Command != ApplyTestCommand && r.Backend != nil {
// Backend blocks must be used in the first _apply_ run block for a given internal state file.
// So, they cannot be present in a plan run block
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid backend block",
Detail: "A backend block can only be used in the first apply run block for a given internal state file. It cannot be included in a block to run a plan command.",
Subject: backendRange.Ptr(),
})
}
if attr, exists := content.Attributes["skip_cleanup"]; exists {
if !experimentsAllowed {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "The skip_cleanup attribute is only available in experimental builds of Terraform.",
Subject: attr.NameRange.Ptr(),
})
}
rawDiags := gohcl.DecodeExpression(attr.Expr, nil, &r.SkipCleanup)
diags = append(diags, rawDiags...)
r.SkipCleanupRange = attr.NameRange.Ptr()
}
if r.SkipCleanupRange != nil && !r.SkipCleanup && r.Backend != nil {
// Stop the user from attempting to clean up long-lived resources
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Cannot use `skip_cleanup=false` in a run block that contains a backend block",
Detail: "Backend blocks are used in tests to allow reuse of long-lived resources. Due to this, cleanup behavior is implicitly skipped and backend blocks are incompatible with setting `skip_cleanup=false`.",
Subject: r.SkipCleanupRange,
})
}
return &r, diags
}
@ -963,6 +1144,7 @@ var testFileSchema = &hcl.BodySchema{
var testFileConfigBlockSchema = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{Name: "parallel"},
{Name: "skip_cleanup"},
},
}
@ -973,6 +1155,7 @@ var testRunBlockSchema = &hcl.BodySchema{
{Name: "expect_failures"},
{Name: "state_key"},
{Name: "parallel"},
{Name: "skip_cleanup"},
},
Blocks: []hcl.BlockHeaderSchema{
{
@ -996,6 +1179,10 @@ var testRunBlockSchema = &hcl.BodySchema{
{
Type: "override_module",
},
{
Type: "backend",
LabelNames: []string{"name"},
},
},
}

@ -0,0 +1,13 @@
# This backend block is declared in a plan run block, which is invalid:
# backend blocks must appear in the first apply run block
# for a given state key
run "setup" {
command = plan
backend "local" {
path = "/tests/other-state"
}
}
run "test" {
command = apply
}
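For contrast, a sketch of the valid arrangement (illustrative paths only): the backend block sits in the first apply run block for the state, and later runs simply share that state.

```hcl
# Valid: the backend is declared in the first apply run block
# for this internal state; later runs reuse the loaded state.
run "setup" {
  command = apply

  backend "local" {
    path = "/tests/other-state"
  }
}

run "test" {
  command = apply
}
```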

@ -0,0 +1,13 @@
run "test_1" {
command = apply
}
# This run block uses the same internal state as test_1,
# so the backend block is attempting to load in state
# when there is already non-empty internal state.
run "test_2" {
command = apply
backend "local" {
path = "/tests/other-state"
}
}

@ -0,0 +1,12 @@
# There cannot be two backend blocks in a single run block
run "setup" {
backend "local" {
path = "/tests/state/terraform.tfstate"
}
backend "local" {
path = "/tests/other-state/terraform.tfstate"
}
}
run "test" {
}

@ -0,0 +1,18 @@
run "setup" {
command = apply
backend "local" {
path = "/tests/state/terraform.tfstate"
}
}
# "test" uses the same internal state file as "setup", which has already loaded state from a backend block
# and is an apply run block.
# The backend block can only occur once in a given set of run blocks that share state.
run "test" {
command = apply
backend "local" {
path = "/tests/state/terraform.tfstate"
}
}

@ -0,0 +1,7 @@
run "test" {
command = apply
backend "remote" {
organization = "example_corp"
}
}

@ -0,0 +1,14 @@
run "backend" {
command = apply
backend "local" {
path = "/tests/state/terraform.tfstate"
}
}
run "skip_cleanup" {
command = apply
# Should warn us about the skip_cleanup option being set.
skip_cleanup = true
}
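A sketch of the intended pairing (illustrative paths only): the run with the backend implies skip_cleanup for its state, so later runs sharing that state leave the attribute unset.

```hcl
# Valid: the backend implies skip_cleanup for this state, so the
# later run sharing the state does not set the attribute again.
run "backend" {
  command = apply

  backend "local" {
    path = "/tests/state/terraform.tfstate"
  }
}

run "test" {
  command = apply
}
```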

@ -0,0 +1,11 @@
variable "input" {
type = string
}
resource "foo_resource" "a" {
value = var.input
}
resource "bar_resource" "c" {}

@ -0,0 +1,22 @@
variables {
input = "default"
}
# The backend in "load_state" is used to set an internal state without an explicit key
run "load_state" {
backend "local" {
path = "state/terraform.tfstate"
}
}
# "test_run" uses the same internal state as "load_state"
run "test_run" {
variables {
input = "custom"
}
assert {
condition = foo_resource.a.value == "custom"
error_message = "invalid value"
}
}

@ -0,0 +1,15 @@
# The foobar-1 local backend is used with the user-supplied internal state "foobar-1"
run "test_1" {
state_key = "foobar-1"
backend "local" {
path = "state/foobar-1.tfstate"
}
}
# The foobar-2 local backend is used with the user-supplied internal state "foobar-2"
run "test_2" {
state_key = "foobar-2"
backend "local" {
path = "state/foobar-2.tfstate"
}
}

@ -0,0 +1,7 @@
resource "aws_instance" "web" {
ami = "ami-1234"
security_groups = [
"foo",
"bar",
]
}

@ -0,0 +1,34 @@
# These run blocks either:
# 1) don't set an explicit state_key value and test the working directory,
# so would have the same internal state file as run blocks in the other test file.
# 2) do set an explicit state_key, which matches run blocks in the other test file.
#
# test_file_two.tftest.hcl has the same content as test_file_one.tftest.hcl,
# with renamed run blocks.
run "file_1_load_state" {
backend "local" {
path = "state/terraform.tfstate"
}
}
run "file_1_test" {
assert {
condition = aws_instance.web.ami == "ami-1234"
error_message = "AMI should be ami-1234"
}
}
run "file_1_load_state_state_key" {
state_key = "foobar"
backend "local" {
path = "state/terraform.tfstate"
}
}
run "file_1_test_state_key" {
state_key = "foobar"
assert {
condition = aws_instance.web.ami == "ami-1234"
error_message = "AMI should be ami-1234"
}
}

@ -0,0 +1,34 @@
# These run blocks either:
# 1) don't set an explicit state_key value and test the working directory,
# so would have the same internal state file as run blocks in the other test file.
# 2) do set an explicit state_key, which matches run blocks in the other test file.
#
# test_file_two.tftest.hcl has the same content as test_file_one.tftest.hcl,
# with renamed run blocks.
run "file_2_load_state" {
backend "local" {
path = "state/terraform.tfstate"
}
}
run "file_2_test" {
assert {
condition = aws_instance.web.ami == "ami-1234"
error_message = "AMI should be ami-1234"
}
}
run "file_2_load_state_state_key" {
state_key = "foobar"
backend "local" {
path = "state/terraform.tfstate"
}
}
run "file_2_test_state_key" {
state_key = "foobar"
assert {
condition = aws_instance.web.ami == "ami-1234"
error_message = "AMI should be ami-1234"
}
}

@ -12,6 +12,7 @@ import (
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/lang"
"github.com/hashicorp/terraform/internal/moduletest"
teststates "github.com/hashicorp/terraform/internal/moduletest/states"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
@ -19,6 +20,9 @@ import (
"github.com/hashicorp/terraform/internal/tfdiags"
)
// testApply defines how to execute a run block representing an apply command
//
// See also: (n *NodeTestRun).testPlan
func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) {
file, run := n.File(), n.run
config := run.ModuleConfig
@ -26,18 +30,18 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue
// FilterVariablesToModule only returns warnings, so we don't check the
// returned diags for errors.
setVariables, testOnlyVariables, setVariableDiags := n.FilterVariablesToModule(variables)
setVariables, testOnlyVariables, setVariableDiags := FilterVariablesToModule(run.ModuleConfig, variables)
run.Diagnostics = run.Diagnostics.Append(setVariableDiags)
// ignore diags because validate has covered it
tfCtx, _ := terraform.NewContext(n.opts.ContextOpts)
// execute the terraform plan operation
_, plan, planDiags := n.plan(ctx, tfCtx, setVariables, providers, mocks, waiter)
_, plan, planDiags := plan(ctx, tfCtx, file.Config, run.Config, run.ModuleConfig, setVariables, providers, mocks, waiter)
// Any error during the planning prevents our apply from
// continuing which is an error.
planDiags = run.ExplainExpectedFailures(planDiags)
planDiags = moduletest.ExplainExpectedFailures(run.Config, planDiags)
run.Diagnostics = run.Diagnostics.Append(planDiags)
if planDiags.HasErrors() {
run.Status = moduletest.Error
@ -59,18 +63,17 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue
run.Diagnostics = filteredDiags
// execute the apply operation
applyScope, updated, applyDiags := n.apply(tfCtx, plan, moduletest.Running, variables, providers, waiter)
applyScope, updated, applyDiags := apply(tfCtx, run.Config, run.ModuleConfig, plan, moduletest.Running, variables, providers, waiter)
// Remove expected diagnostics, and add diagnostics in case anything that should have failed didn't.
// We'll also update the run status based on the presence of errors or missing expected failures.
failOrErr := n.checkForMissingExpectedFailures(ctx, run, applyDiags)
if failOrErr {
status, applyDiags := checkForMissingExpectedFailures(ctx, run.Config, applyDiags)
run.Diagnostics = run.Diagnostics.Append(applyDiags)
run.Status = run.Status.Merge(status)
if status == moduletest.Error {
// Even though the apply operation failed, the graph may have done
// partial updates and the returned state should reflect this.
ctx.SetFileState(key, &TestFileState{
Run: run,
State: updated,
})
ctx.SetFileState(key, run, updated, teststates.StateReasonNone)
return
}
@ -103,8 +106,8 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue
// of the run. We also pass in all the
// previous contexts so this run block can refer to outputs from
// previous run blocks.
newStatus, outputVals, moreDiags := ctx.EvaluateRun(run, applyScope, testOnlyVariables)
run.Status = newStatus
newStatus, outputVals, moreDiags := ctx.EvaluateRun(run.Config, run.ModuleConfig.Module, applyScope, testOnlyVariables)
run.Status = run.Status.Merge(newStatus)
run.Diagnostics = run.Diagnostics.Append(moreDiags)
run.Outputs = outputVals
@ -112,19 +115,13 @@ func (n *NodeTestRun) testApply(ctx *EvalContext, variables terraform.InputValue
// actually updated by this change. We want to use the run that
// most recently updated the tracked state as the cleanup
// configuration.
ctx.SetFileState(key, &TestFileState{
Run: run,
State: updated,
})
ctx.SetFileState(key, run, updated, teststates.StateReasonNone)
}
func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress moduletest.Progress, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, waiter *operationWaiter) (*lang.Scope, *states.State, tfdiags.Diagnostics) {
run := n.run
file := n.File()
log.Printf("[TRACE] TestFileRunner: called apply for %s/%s", file.Name, run.Name)
func apply(tfCtx *terraform.Context, run *configs.TestRun, module *configs.Config, plan *plans.Plan, progress moduletest.Progress, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, waiter *operationWaiter) (*lang.Scope, *states.State, tfdiags.Diagnostics) {
log.Printf("[TRACE] TestFileRunner: called apply for %s", run.Name)
var diags tfdiags.Diagnostics
config := run.ModuleConfig
// If things get cancelled while we are executing the apply operation below
// we want to print out all the objects that we were creating so the user
@ -148,7 +145,7 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress
// We only need to pass ephemeral variables to the apply operation, as the
// plan has already been evaluated with the full set of variables.
ephemeralVariables := make(terraform.InputValues)
for k, v := range config.Root.Module.Variables {
for k, v := range module.Root.Module.Variables {
if v.EphemeralSet {
if value, ok := variables[k]; ok {
ephemeralVariables[k] = value
@ -162,9 +159,9 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress
}
waiter.update(tfCtx, progress, created)
log.Printf("[DEBUG] TestFileRunner: starting apply for %s/%s", file.Name, run.Name)
updated, newScope, applyDiags := tfCtx.ApplyAndEval(plan, config, applyOpts)
log.Printf("[DEBUG] TestFileRunner: completed apply for %s/%s", file.Name, run.Name)
log.Printf("[DEBUG] TestFileRunner: starting apply for %s", run.Name)
updated, newScope, applyDiags := tfCtx.ApplyAndEval(plan, module, applyOpts)
log.Printf("[DEBUG] TestFileRunner: completed apply for %s", run.Name)
diags = diags.Append(applyDiags)
return newScope, updated, diags
@ -172,31 +169,31 @@ func (n *NodeTestRun) apply(tfCtx *terraform.Context, plan *plans.Plan, progress
// checkForMissingExpectedFailures checks for missing expected failures in the diagnostics.
// It updates the run status based on the presence of errors or missing expected failures.
func (n *NodeTestRun) checkForMissingExpectedFailures(ctx *EvalContext, run *moduletest.Run, diags tfdiags.Diagnostics) (failOrErr bool) {
func checkForMissingExpectedFailures(ctx *EvalContext, config *configs.TestRun, originals tfdiags.Diagnostics) (moduletest.Status, tfdiags.Diagnostics) {
// Retrieve and append diagnostics that are either unrelated to expected failures
// or report missing expected failures.
unexpectedDiags := run.ValidateExpectedFailures(diags)
if ctx.Verbose() {
// in verbose mode, we still add all the original diagnostics for
// display even if they are expected.
run.Diagnostics = run.Diagnostics.Append(diags)
} else {
run.Diagnostics = run.Diagnostics.Append(unexpectedDiags)
}
unexpectedDiags := moduletest.ValidateExpectedFailures(config, originals)
status := moduletest.Pass
for _, diag := range unexpectedDiags {
// If any diagnostic indicates a missing expected failure, set the run status to fail.
if ok := moduletest.DiagnosticFromMissingExpectedFailure(diag); ok {
run.Status = run.Status.Merge(moduletest.Fail)
status = status.Merge(moduletest.Fail)
continue
}
// upgrade the run status to error if there still are other errors in the diagnostics
if diag.Severity() == tfdiags.Error {
run.Status = run.Status.Merge(moduletest.Error)
status = status.Merge(moduletest.Error)
break
}
}
return run.Status > moduletest.Pass
if ctx.Verbose() {
// in verbose mode, we still add all the original diagnostics for
// display even if they are expected.
return status, originals
} else {
return status, unexpectedDiags
}
}

@ -15,6 +15,7 @@ import (
"github.com/zclconf/go-cty/cty/convert"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/backend/backendrun"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/configs"
@ -22,19 +23,13 @@ import (
"github.com/hashicorp/terraform/internal/lang"
"github.com/hashicorp/terraform/internal/lang/langrefs"
"github.com/hashicorp/terraform/internal/moduletest"
teststates "github.com/hashicorp/terraform/internal/moduletest/states"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// TestFileState is a helper struct that just maps a run block to the state that
// was produced by the execution of that run block.
type TestFileState struct {
Run *moduletest.Run
State *states.State
}
// EvalContext is a container for context relating to the evaluation of a
// particular .tftest.hcl file.
// This context is used to track the various values that are available to the
@ -60,11 +55,9 @@ type EvalContext struct {
providersLock sync.Mutex
// FileStates is a mapping of module keys to its last applied state
// file.
//
// This is used to clean up the infrastructure created during the test after
// the test has finished.
FileStates map[string]*TestFileState
// file. This is tracked and returned so that the state files of ongoing
// test operations can be logged.
FileStates map[string]*teststates.TestRunState
stateLock sync.Mutex
// cancelContext and stopContext can be used to terminate the evaluation of the
@ -75,24 +68,41 @@ type EvalContext struct {
cancelFunc context.CancelFunc
stopContext context.Context
stopFunc context.CancelFunc
config *configs.Config
renderer views.Test
verbose bool
config *configs.Config
renderer views.Test
verbose bool
// mode and repair affect the behaviour of the cleanup process of the graph.
//
// in cleanup mode, the tests will actually be skipped and the cleanup nodes
// are executed immediately. Normally, skip_cleanup attributes are ignored
// in cleanup mode, with all states being destroyed completely.
//
// in repair mode, the skip_cleanup attributes are still respected. this
// means only states that were left behind due to an error will be
// destroyed.
mode moduletest.CommandMode
deferralAllowed bool
evalSem terraform.Semaphore
// repair is true if the test suite is being run in cleanup repair mode.
// It is only set when in test cleanup mode.
repair bool
}
type EvalContextOpts struct {
Verbose bool
Repair bool
Render views.Test
CancelCtx context.Context
StopCtx context.Context
UnparsedVariables map[string]backendrun.UnparsedVariableValue
Config *configs.Config
FileStates map[string]*teststates.TestRunState
Concurrency int
DeferralAllowed bool
Mode moduletest.CommandMode
}
// NewEvalContext constructs a new graph evaluation context for use in
@ -112,15 +122,17 @@ func NewEvalContext(opts EvalContextOpts) *EvalContext {
providers: make(map[addrs.RootProviderConfig]providers.Interface),
providerStatus: make(map[addrs.RootProviderConfig]moduletest.Status),
providersLock: sync.Mutex{},
FileStates: make(map[string]*TestFileState),
FileStates: opts.FileStates,
stateLock: sync.Mutex{},
cancelContext: cancelCtx,
cancelFunc: cancel,
stopContext: stopCtx,
stopFunc: stop,
config: opts.Config,
verbose: opts.Verbose,
repair: opts.Repair,
renderer: opts.Render,
config: opts.Config,
mode: opts.Mode,
deferralAllowed: opts.DeferralAllowed,
evalSem: terraform.NewSemaphore(opts.Concurrency),
}
@ -253,19 +265,14 @@ func (ec *EvalContext) HclContext(references []*addrs.Reference) (*hcl.EvalConte
// already available in resultScope in case there are additional input
// variables that were defined only for use in the test suite. Any variable
// not defined in extraVariableVals will be evaluated through resultScope instead.
func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope, extraVariableVals terraform.InputValues) (moduletest.Status, cty.Value, tfdiags.Diagnostics) {
func (ec *EvalContext) EvaluateRun(run *configs.TestRun, module *configs.Module, resultScope *lang.Scope, extraVariableVals terraform.InputValues) (moduletest.Status, cty.Value, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
if run.ModuleConfig == nil {
// This should never happen, but if it does, we can't evaluate the run
return moduletest.Error, cty.NilVal, tfdiags.Diagnostics{}
}
mod := run.ModuleConfig.Module
// We need a derived evaluation scope that also supports referring to
// the prior run output values using the "run.NAME" syntax.
evalData := &evaluationData{
ctx: ec,
module: mod,
module: module,
current: resultScope.Data,
extraVars: extraVariableVals,
}
@ -279,14 +286,14 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
ExternalFuncs: resultScope.ExternalFuncs,
}
log.Printf("[TRACE] EvalContext.Evaluate for %s", run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate for %s", run.Name)
// We're going to assume the run has passed, and then if anything fails this
// value will be updated.
status := run.Status.Merge(moduletest.Pass)
status := moduletest.Pass
// Now validate all the assertions within this run block.
for i, rule := range run.Config.CheckRules {
for i, rule := range run.CheckRules {
var ruleDiags tfdiags.Diagnostics
refs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, rule.Condition)
@ -304,9 +311,9 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
hclCtx, moreDiags := scope.EvalContext(refs)
ruleDiags = ruleDiags.Append(moreDiags)
if moreDiags.HasErrors() {
// if we can't evaluate the context properly, we can't evaulate the rule
// if we can't evaluate the context properly, we can't evaluate the rule
// we add the diagnostics to the main diags and continue to the next rule
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, could not evalaute the context, so cannot evaluate it", i, run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, could not evaluate the context, so cannot evaluate it", i, run.Name)
status = status.Merge(moduletest.Error)
diags = diags.Append(ruleDiags)
continue
@ -320,7 +327,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
diags = diags.Append(ruleDiags)
if ruleDiags.HasErrors() {
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, so cannot evaluate it", i, run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s is invalid, so cannot evaluate it", i, run.Name)
status = status.Merge(moduletest.Error)
continue
}
@ -335,7 +342,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
Expression: rule.Condition,
EvalContext: hclCtx,
})
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has null condition result", i, run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has null condition result", i, run.Name)
continue
}
@ -349,7 +356,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
Expression: rule.Condition,
EvalContext: hclCtx,
})
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has unknown condition result", i, run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has unknown condition result", i, run.Name)
continue
}
@ -364,7 +371,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
Expression: rule.Condition,
EvalContext: hclCtx,
})
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has non-boolean condition result", i, run.Addr())
log.Printf("[TRACE] EvalContext.Evaluate: check rule %d for %s has non-boolean condition result", i, run.Name)
continue
}
@ -373,7 +380,7 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
runVal, _ = runVal.Unmark()
if runVal.False() {
log.Printf("[TRACE] EvalContext.Evaluate: test assertion failed for %s assertion %d", run.Addr(), i)
log.Printf("[TRACE] EvalContext.Evaluate: test assertion failed for %s assertion %d", run.Name, i)
status = status.Merge(moduletest.Fail)
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
@ -389,16 +396,16 @@ func (ec *EvalContext) EvaluateRun(run *moduletest.Run, resultScope *lang.Scope,
})
continue
} else {
log.Printf("[TRACE] EvalContext.Evaluate: test assertion succeeded for %s assertion %d", run.Addr(), i)
log.Printf("[TRACE] EvalContext.Evaluate: test assertion succeeded for %s assertion %d", run.Name, i)
}
}
// Our result includes an object representing all of the output values
// from the module we've just tested, which will then be available in
// any subsequent test cases in the same test suite.
outputVals := make(map[string]cty.Value, len(mod.Outputs))
runRng := tfdiags.SourceRangeFromHCL(run.Config.DeclRange)
for _, oc := range mod.Outputs {
outputVals := make(map[string]cty.Value, len(module.Outputs))
runRng := tfdiags.SourceRangeFromHCL(run.DeclRange)
for _, oc := range module.Outputs {
addr := oc.Addr()
v, moreDiags := scope.Data.GetOutput(addr, runRng)
diags = diags.Append(moreDiags)
@ -561,19 +568,82 @@ func diagsForEphemeralResources(refs []*addrs.Reference) (diags tfdiags.Diagnost
return diags
}
func (ec *EvalContext) SetFileState(key string, state *TestFileState) {
func (ec *EvalContext) SetFileState(key string, run *moduletest.Run, state *states.State, reason teststates.StateReason) {
ec.stateLock.Lock()
defer ec.stateLock.Unlock()
current := ec.getState(key)
// Whatever happens, we're going to record the latest state for this key.
current.State = state
current.Manifest.Reason = reason
if run.Config.SkipCleanup {
// if skip cleanup is set on the run block, we're going to track it
// as the thing to target regardless of what else might be true.
current.Run = run
// we'll mark the state as needing restoration to the current run
// block if (a) we're not in cleanup mode (meaning everything should
// be destroyed), or (b) we are in cleanup mode with the repair flag,
// which means that only errored states should be destroyed.
current.RestoreState = ec.mode != moduletest.CleanupMode || ec.repair
} else if !current.RestoreState {
// otherwise, only set the new run block if we haven't been told the
// earlier run block is more relevant.
current.Run = run
}
}
// GetState retrieves the current state for the specified key, exactly as it
// is stored within the current cache.
func (ec *EvalContext) GetState(key string) *teststates.TestRunState {
ec.stateLock.Lock()
defer ec.stateLock.Unlock()
ec.FileStates[key] = &TestFileState{
Run: state.Run,
State: state.State,
return ec.getState(key)
}
func (ec *EvalContext) getState(key string) *teststates.TestRunState {
current := ec.FileStates[key]
if current == nil {
// this shouldn't happen; all the states must be initialised prior to
// the evaluation context being created.
//
// panic here, at the origin of the bug, instead of returning a nil
// state that would cause a panic later.
panic("null state found in test execution")
}
return current
}
func (ec *EvalContext) GetFileState(key string) *TestFileState {
// LoadState returns the correct state for the specified run block. This differs
// from GetState in that it will load the state from any remote backend
// specified within the run block rather than simply retrieve the cached state
// (which might be empty for a run block with a backend if it hasn't executed
// yet).
func (ec *EvalContext) LoadState(run *configs.TestRun) (*states.State, error) {
ec.stateLock.Lock()
defer ec.stateLock.Unlock()
return ec.FileStates[key]
current := ec.getState(run.StateKey)
if run.Backend != nil {
// Then we'll load the state from the backend instead of just using
// whatever was in the state.
stmgr, err := current.Backend.StateMgr(backend.DefaultStateName)
if err != nil {
return nil, err
}
if err := stmgr.RefreshState(); err != nil {
return nil, err
}
return stmgr.State(), nil
}
return current.State, nil
}
// ReferencesCompleted returns true if all the listed references were actually

@ -746,7 +746,7 @@ func TestEvalContext_Evaluate(t *testing.T) {
run.Outputs = test.priorOutputs[run.Name]
testCtx.runBlocks[run.Name] = run
}
gotStatus, gotOutputs, diags := testCtx.EvaluateRun(run, planScope, test.testOnlyVars)
gotStatus, gotOutputs, diags := testCtx.EvaluateRun(run.Config, run.ModuleConfig.Module, planScope, test.testOnlyVars)
if got, want := gotStatus, test.expectedStatus; got != want {
t.Errorf("wrong status %q; want %q", got, want)

@ -9,8 +9,10 @@ import (
"time"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/moduletest/mocking"
teststates "github.com/hashicorp/terraform/internal/moduletest/states"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/terraform"
@ -21,6 +23,9 @@ var (
_ GraphNodeExecutable = (*NodeStateCleanup)(nil)
)
// NodeStateCleanup is responsible for cleaning up the state of resources
// defined in the state file. It uses stateKey to identify the specific
// state to clean up, and opts to carry additional configuration.
type NodeStateCleanup struct {
stateKey string
opts *graphOptions
@ -31,12 +36,9 @@ func (n *NodeStateCleanup) Name() string {
}
// Execute destroys the resources created in the state file.
// This function never returns error diagnostics, as doing so would
// prevent further cleanup from happening. Instead, the diagnostics
// are rendered directly.
func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) {
file := n.opts.File
state := evalCtx.GetFileState(n.stateKey)
state := evalCtx.GetState(n.stateKey)
log.Printf("[TRACE] TestStateManager: cleaning up state for %s", file.Name)
if evalCtx.Cancelled() {
@ -45,22 +47,13 @@ func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) {
return
}
empty := true
if !state.State.Empty() {
for _, module := range state.State.Modules {
for _, resource := range module.Resources {
if resource.Addr.Resource.Mode == addrs.ManagedResourceMode {
empty = false
break
}
}
}
}
if empty {
if emptyState(state.State) {
// The state can be empty for a run block that just executed a plan
// command, or a run block that only read data sources. We'll just
// skip empty run blocks.
// skip empty run blocks. We also reset the state reason to None to
// indicate externally that this state file doesn't need to be saved.
evalCtx.SetFileState(n.stateKey, state.Run, state.State, teststates.StateReasonNone)
return
}
@ -76,44 +69,98 @@ func (n *NodeStateCleanup) Execute(evalCtx *EvalContext) {
diags := tfdiags.Diagnostics{tfdiags.Sourceless(tfdiags.Error, "Inconsistent state", fmt.Sprintf("Found inconsistent state while cleaning up %s. This is a bug in Terraform - please report it", file.Name))}
file.UpdateStatus(moduletest.Error)
evalCtx.Renderer().DestroySummary(diags, nil, file, state.State)
// intentionally return without error to allow further cleanup
return
}
runNode := &NodeTestRun{run: state.Run, opts: n.opts}
updated := state.State
startTime := time.Now().UTC()
waiter := NewOperationWaiter(nil, evalCtx, runNode, moduletest.Running, startTime.UnixMilli())
waiter := NewOperationWaiter(nil, evalCtx, file, state.Run, moduletest.Running, startTime.UnixMilli())
var destroyDiags tfdiags.Diagnostics
evalCtx.Renderer().Run(state.Run, file, moduletest.TearDown, 0)
cancelled := waiter.Run(func() {
updated, destroyDiags = n.destroy(evalCtx, runNode, waiter)
if state.RestoreState {
updated, destroyDiags = n.restore(evalCtx, file.Config, state.Run.Config, state.Run.ModuleConfig, updated, waiter)
} else {
updated, destroyDiags = n.destroy(evalCtx, file.Config, state.Run.Config, state.Run.ModuleConfig, updated, waiter)
updated.RootOutputValues = state.State.RootOutputValues // we're going to preserve the output values in case we need to tidy up
}
})
if cancelled {
destroyDiags = destroyDiags.Append(tfdiags.Sourceless(tfdiags.Error, "Test interrupted", "The test operation could not be completed due to an interrupt signal. Please read the remaining diagnostics carefully for any sign of failed state cleanup or dangling resources."))
}
if !updated.Empty() {
// Then we failed to adequately clean up the state, so mark success
// as false.
switch {
case destroyDiags.HasErrors():
file.UpdateStatus(moduletest.Error)
evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonError)
case state.Run.Config.Backend != nil:
evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonNone)
case state.RestoreState:
evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonSkip)
case !emptyState(updated):
file.UpdateStatus(moduletest.Error)
evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonError)
default:
evalCtx.SetFileState(n.stateKey, state.Run, updated, teststates.StateReasonNone)
}
evalCtx.Renderer().DestroySummary(destroyDiags, state.Run, file, updated)
}
func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) {
file := n.opts.File
fileState := ctx.GetFileState(n.stateKey)
state := fileState.State
run := runNode.run
log.Printf("[TRACE] TestFileRunner: called destroy for %s/%s", file.Name, run.Name)
func (n *NodeStateCleanup) restore(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config, state *states.State, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) {
log.Printf("[TRACE] TestFileRunner: called restore for %s", run.Name)
if state.Empty() {
// Nothing to do!
return state, nil
variables, diags := GetVariables(ctx, run, module, false)
if diags.HasErrors() {
return state, diags
}
variables, diags := runNode.GetVariables(ctx, false)
// we ignore the diagnostics from here, because we will have reported them
// during the initial execution of the run block and we would not have
// executed the run block if there were any errors.
providers, mocks, _ := getProviders(ctx, file, run, module)
// During the destroy operation, we don't add warnings from this operation.
// Anything that would have been reported here was already reported during
// the original plan, and a successful destroy operation is the only thing
// we care about.
setVariables, _, _ := FilterVariablesToModule(module, variables)
planOpts := &terraform.PlanOpts{
Mode: plans.NormalMode,
SetVariables: setVariables,
Overrides: mocking.PackageOverrides(run, file, mocks),
ExternalProviders: providers,
SkipRefresh: true,
OverridePreventDestroy: true,
DeferralAllowed: ctx.deferralAllowed,
}
tfCtx, _ := terraform.NewContext(n.opts.ContextOpts)
waiter.update(tfCtx, moduletest.TearDown, nil)
plan, planDiags := tfCtx.Plan(module, state, planOpts)
diags = diags.Append(planDiags)
if diags.HasErrors() || plan.Errored {
return state, diags
}
if !plan.Complete {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Incomplete restore plan",
fmt.Sprintf("The restore plan for %s was reported as incomplete."+
" This means some of the cleanup operations were deferred due to unknown values; please check the rest of the output to see which resources could not be reverted.", run.Name)))
}
_, updated, applyDiags := apply(tfCtx, run, module, plan, moduletest.TearDown, variables, providers, waiter)
diags = diags.Append(applyDiags)
return updated, diags
}
func (n *NodeStateCleanup) destroy(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config, state *states.State, waiter *operationWaiter) (*states.State, tfdiags.Diagnostics) {
log.Printf("[TRACE] TestFileRunner: called destroy for %s", run.Name)
variables, diags := GetVariables(ctx, run, module, false)
if diags.HasErrors() {
return state, diags
}
@ -121,18 +168,18 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite
// we ignore the diagnostics from here, because we will have reported them
// during the initial execution of the run block and we would not have
// executed the run block if there were any errors.
providers, mocks, _ := runNode.getProviders(ctx)
providers, mocks, _ := getProviders(ctx, file, run, module)
// During the destroy operation, we don't add warnings from this operation.
// Anything that would have been reported here was already reported during
// the original plan, and a successful destroy operation is the only thing
// we care about.
setVariables, _, _ := runNode.FilterVariablesToModule(variables)
setVariables, _, _ := FilterVariablesToModule(module, variables)
planOpts := &terraform.PlanOpts{
Mode: plans.DestroyMode,
SetVariables: setVariables,
Overrides: mocking.PackageOverrides(run.Config, file.Config, mocks),
Overrides: mocking.PackageOverrides(run, file, mocks),
ExternalProviders: providers,
SkipRefresh: true,
OverridePreventDestroy: true,
@ -140,10 +187,9 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite
}
tfCtx, _ := terraform.NewContext(n.opts.ContextOpts)
ctx.Renderer().Run(run, file, moduletest.TearDown, 0)
waiter.update(tfCtx, moduletest.TearDown, nil)
plan, planDiags := tfCtx.Plan(run.ModuleConfig, state, planOpts)
plan, planDiags := tfCtx.Plan(module, state, planOpts)
diags = diags.Append(planDiags)
if diags.HasErrors() || plan.Errored {
return state, diags
@ -153,11 +199,25 @@ func (n *NodeStateCleanup) destroy(ctx *EvalContext, runNode *NodeTestRun, waite
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Incomplete destroy plan",
fmt.Sprintf("The destroy plan for %s/%s was reported as incomplete."+
" This means some of the cleanup operations were deferred due to unknown values, please check the rest of the output to see which resources could not be destroyed.", file.Name, run.Name)))
fmt.Sprintf("The destroy plan for %s was reported as incomplete."+
" This means some of the cleanup operations were deferred due to unknown values; please check the rest of the output to see which resources could not be destroyed.", run.Name)))
}
_, updated, applyDiags := runNode.apply(tfCtx, plan, moduletest.TearDown, variables, providers, waiter)
_, updated, applyDiags := apply(tfCtx, run, module, plan, moduletest.TearDown, variables, providers, waiter)
diags = diags.Append(applyDiags)
return updated, diags
}
func emptyState(state *states.State) bool {
if state.Empty() {
return true
}
for _, module := range state.Modules {
for _, resource := range module.Resources {
if resource.Addr.Resource.Mode == addrs.ManagedResourceMode {
return false
}
}
}
return true
}

@ -48,7 +48,7 @@ func (n *NodeTestRun) Referenceable() addrs.Referenceable {
}
func (n *NodeTestRun) References() []*addrs.Reference {
references, _ := n.run.GetReferences()
references, _ := moduletest.GetRunReferences(n.run.Config)
for _, run := range n.priorRuns {
// we'll also draw an implicit reference to all prior runs to make sure
@ -59,6 +59,27 @@ func (n *NodeTestRun) References() []*addrs.Reference {
})
}
for name, variable := range n.run.ModuleConfig.Module.Variables {
// we also draw implicit references back to any variables defined in
// the test file with the same name as the module's variables, so we
// count these as references as well.
if _, ok := n.run.Config.Variables[name]; ok {
// BUT, if the variable is defined in the run block's own variables
// list then we don't want to draw an implicit reference, as the data
// comes from that expression.
continue
}
references = append(references, &addrs.Reference{
Subject: addrs.InputVariable{Name: name},
SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange),
})
}
return references
}
@ -106,7 +127,7 @@ func (n *NodeTestRun) Execute(evalCtx *EvalContext) {
// Before the terraform operation is started, the operation updates the
// waiter with the cleanup context on cancellation, as well as the
// progress status.
waiter := NewOperationWaiter(nil, evalCtx, n, moduletest.Running, startTime.UnixMilli())
waiter := NewOperationWaiter(nil, evalCtx, file, run, moduletest.Running, startTime.UnixMilli())
cancelled := waiter.Run(func() {
defer logging.PanicHandler()
n.execute(evalCtx, waiter)
@ -128,7 +149,7 @@ func (n *NodeTestRun) execute(ctx *EvalContext, waiter *operationWaiter) {
file, run := n.File(), n.run
ctx.Renderer().Run(run, file, moduletest.Starting, 0)
providers, mocks, providerDiags := n.getProviders(ctx)
providers, mocks, providerDiags := getProviders(ctx, file.Config, run.Config, run.ModuleConfig)
if !ctx.ProvidersCompleted(providers) {
run.Status = moduletest.Skip
return
@ -145,7 +166,7 @@ func (n *NodeTestRun) execute(ctx *EvalContext, waiter *operationWaiter) {
return
}
variables, variableDiags := n.GetVariables(ctx, true)
variables, variableDiags := GetVariables(ctx, run.Config, run.ModuleConfig, true)
run.Diagnostics = run.Diagnostics.Append(variableDiags)
if variableDiags.HasErrors() {
run.Status = moduletest.Error
@ -181,19 +202,17 @@ func (n *NodeTestRun) testValidate(providers map[addrs.RootProviderConfig]provid
}
}
func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConfig]providers.Interface, map[addrs.RootProviderConfig]*configs.MockData, tfdiags.Diagnostics) {
run := n.run
func getProviders(ctx *EvalContext, file *configs.TestFile, run *configs.TestRun, module *configs.Config) (map[addrs.RootProviderConfig]providers.Interface, map[addrs.RootProviderConfig]*configs.MockData, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
if len(run.Config.Providers) > 0 {
if len(run.Providers) > 0 {
// Then we'll only provide the specific providers asked for by the run
// block.
providers := make(map[addrs.RootProviderConfig]providers.Interface, len(run.Config.Providers))
providers := make(map[addrs.RootProviderConfig]providers.Interface, len(run.Providers))
mocks := make(map[addrs.RootProviderConfig]*configs.MockData)
for _, ref := range run.Config.Providers {
for _, ref := range run.Providers {
testAddr := addrs.RootProviderConfig{
Provider: ctx.ProviderForConfigAddr(ref.InParent.Addr()),
@ -201,7 +220,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf
}
moduleAddr := addrs.RootProviderConfig{
Provider: run.ModuleConfig.ProviderForConfigAddr(ref.InChild.Addr()),
Provider: module.ProviderForConfigAddr(ref.InChild.Addr()),
Alias: ref.InChild.Alias,
}
@ -218,7 +237,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf
if provider, ok := ctx.GetProvider(testAddr); ok {
providers[moduleAddr] = provider
config := n.File().Config.Providers[ref.InParent.String()]
config := file.Providers[ref.InParent.String()]
if config.Mock {
mocks[moduleAddr] = config.MockData
}
@ -241,7 +260,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf
providers := make(map[addrs.RootProviderConfig]providers.Interface)
mocks := make(map[addrs.RootProviderConfig]*configs.MockData)
for addr := range requiredProviders(run.ModuleConfig) {
for addr := range requiredProviders(module) {
if provider, ok := ctx.GetProvider(addr); ok {
providers[addr] = provider
@ -249,7 +268,7 @@ func (n *NodeTestRun) getProviders(ctx *EvalContext) (map[addrs.RootProviderConf
if len(addr.Alias) > 0 {
local = fmt.Sprintf("%s.%s", local, addr.Alias)
}
config := n.File().Config.Providers[local]
config := file.Providers[local]
if config.Mock {
mocks[addr] = config.MockData
}

@ -0,0 +1,105 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package graph
import (
"fmt"
"log"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/lang/marks"
"github.com/hashicorp/terraform/internal/moduletest"
teststates "github.com/hashicorp/terraform/internal/moduletest/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
var (
_ GraphNodeExecutable = (*NodeTestRunCleanup)(nil)
_ GraphNodeReferenceable = (*NodeTestRunCleanup)(nil)
_ GraphNodeReferencer = (*NodeTestRunCleanup)(nil)
)
type NodeTestRunCleanup struct {
run *moduletest.Run
priorRuns map[string]*moduletest.Run
opts *graphOptions
}
func (n *NodeTestRunCleanup) Name() string {
return fmt.Sprintf("%s.%s (cleanup)", n.opts.File.Name, n.run.Addr().String())
}
func (n *NodeTestRunCleanup) References() []*addrs.Reference {
references, _ := moduletest.GetRunReferences(n.run.Config)
for _, run := range n.priorRuns {
// we'll also draw an implicit reference to all prior runs to make sure
// they execute first
references = append(references, &addrs.Reference{
Subject: run.Addr(),
SourceRange: tfdiags.SourceRangeFromHCL(n.run.Config.DeclRange),
})
}
for name, variable := range n.run.ModuleConfig.Module.Variables {
// we also draw implicit references back to any variables defined in
// the test file with the same name as the module's variables, so we
// count these as references as well.
if _, ok := n.run.Config.Variables[name]; ok {
// BUT, if the variable is defined in the run block's own variables
// list then we don't want to draw an implicit reference, as the data
// comes from that expression.
continue
}
references = append(references, &addrs.Reference{
Subject: addrs.InputVariable{Name: name},
SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange),
})
}
return references
}
func (n *NodeTestRunCleanup) Referenceable() addrs.Referenceable {
return n.run.Addr()
}
func (n *NodeTestRunCleanup) Execute(ctx *EvalContext) {
log.Printf("[TRACE] TestFileRunner: executing run block %s/%s", n.opts.File.Name, n.run.Name)
n.run.Status = moduletest.Pass
state, err := ctx.LoadState(n.run.Config)
if err != nil {
n.run.Status = moduletest.Fail
n.run.Diagnostics = n.run.Diagnostics.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to load state",
Detail: fmt.Sprintf("Could not retrieve state for run %s: %s.", n.run.Name, err),
Subject: n.run.Config.Backend.DeclRange.Ptr(),
})
return
}
outputs := make(map[string]cty.Value)
for name, output := range state.RootOutputValues {
if output.Sensitive {
outputs[name] = output.Value.Mark(marks.Sensitive)
continue
}
outputs[name] = output.Value
}
n.run.Outputs = cty.ObjectVal(outputs)
ctx.SetFileState(n.run.Config.StateKey, n.run, state, teststates.StateReasonNone)
ctx.AddRunBlock(n.run)
}

@ -8,6 +8,8 @@ import (
"log"
"path/filepath"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/lang"
@ -19,23 +21,26 @@ import (
"github.com/hashicorp/terraform/internal/tfdiags"
)
// testPlan defines how to execute a run block representing a plan command
//
// See also: (n *NodeTestRun).testApply
func (n *NodeTestRun) testPlan(ctx *EvalContext, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) {
file, run := n.File(), n.run
config := run.ModuleConfig
// FilterVariablesToModule only returns warnings, so we don't check the
// returned diags for errors.
setVariables, testOnlyVariables, setVariableDiags := n.FilterVariablesToModule(variables)
setVariables, testOnlyVariables, setVariableDiags := FilterVariablesToModule(run.ModuleConfig, variables)
run.Diagnostics = run.Diagnostics.Append(setVariableDiags)
// ignore diags because validate has covered it
tfCtx, _ := terraform.NewContext(n.opts.ContextOpts)
// execute the terraform plan operation
planScope, plan, originalDiags := n.plan(ctx, tfCtx, setVariables, providers, mocks, waiter)
planScope, plan, originalDiags := plan(ctx, tfCtx, file.Config, run.Config, run.ModuleConfig, setVariables, providers, mocks, waiter)
// We exclude the diagnostics that are expected to fail from the plan
// diagnostics, and if an expected failure is not found, we add a new error diagnostic.
planDiags := run.ValidateExpectedFailures(originalDiags)
planDiags := moduletest.ValidateExpectedFailures(run.Config, originalDiags)
if ctx.Verbose() {
// in verbose mode, we still add all the original diagnostics for
@ -79,32 +84,43 @@ func (n *NodeTestRun) testPlan(ctx *EvalContext, variables terraform.InputValues
// of the run. We also pass in all the
// previous contexts so this run block can refer to outputs from
// previous run blocks.
newStatus, outputVals, moreDiags := ctx.EvaluateRun(run, planScope, testOnlyVariables)
run.Status = newStatus
status, outputVals, moreDiags := ctx.EvaluateRun(run.Config, run.ModuleConfig.Module, planScope, testOnlyVariables)
run.Status = run.Status.Merge(status)
run.Diagnostics = run.Diagnostics.Append(moreDiags)
run.Outputs = outputVals
}
func (n *NodeTestRun) plan(ctx *EvalContext, tfCtx *terraform.Context, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) (*lang.Scope, *plans.Plan, tfdiags.Diagnostics) {
file, run := n.File(), n.run
config := run.ModuleConfig
log.Printf("[TRACE] TestFileRunner: called plan for %s/%s", file.Name, run.Name)
func plan(ctx *EvalContext, tfCtx *terraform.Context, file *configs.TestFile, run *configs.TestRun, module *configs.Config, variables terraform.InputValues, providers map[addrs.RootProviderConfig]providers.Interface, mocks map[addrs.RootProviderConfig]*configs.MockData, waiter *operationWaiter) (*lang.Scope, *plans.Plan, tfdiags.Diagnostics) {
log.Printf("[TRACE] TestFileRunner: called plan for %s", run.Name)
var diags tfdiags.Diagnostics
targets, targetDiags := run.GetTargets()
targets, targetDiags := moduletest.GetRunTargets(run)
diags = diags.Append(targetDiags)
replaces, replaceDiags := run.GetReplaces()
replaces, replaceDiags := moduletest.GetRunReplaces(run)
diags = diags.Append(replaceDiags)
references, referenceDiags := moduletest.GetRunReferences(run)
diags = diags.Append(referenceDiags)
state, err := ctx.LoadState(run)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to load state",
Detail: fmt.Sprintf("Could not retrieve state for run %s: %s.", run.Name, err),
Subject: run.Backend.DeclRange.Ptr(),
})
}
if diags.HasErrors() {
return nil, nil, diags
}
planOpts := &terraform.PlanOpts{
Mode: func() plans.Mode {
switch run.Config.Options.Mode {
switch run.Options.Mode {
case configs.RefreshOnlyTestMode:
return plans.RefreshOnlyMode
default:
@ -113,20 +129,19 @@ func (n *NodeTestRun) plan(ctx *EvalContext, tfCtx *terraform.Context, variables
}(),
Targets: targets,
ForceReplace: replaces,
SkipRefresh: !run.Config.Options.Refresh,
SkipRefresh: !run.Options.Refresh,
SetVariables: variables,
ExternalReferences: n.References(),
ExternalReferences: references,
ExternalProviders: providers,
Overrides: mocking.PackageOverrides(run.Config, file.Config, mocks),
Overrides: mocking.PackageOverrides(run, file, mocks),
DeferralAllowed: ctx.deferralAllowed,
}
waiter.update(tfCtx, moduletest.Running, nil)
log.Printf("[DEBUG] TestFileRunner: starting plan for %s/%s", file.Name, run.Name)
state := ctx.GetFileState(run.Config.StateKey).State
plan, planScope, planDiags := tfCtx.PlanAndEval(config, state, planOpts)
log.Printf("[DEBUG] TestFileRunner: completed plan for %s/%s", file.Name, run.Name)
log.Printf("[DEBUG] TestFileRunner: starting plan for %s", run.Name)
plan, scope, planDiags := tfCtx.PlanAndEval(module, state, planOpts)
log.Printf("[DEBUG] TestFileRunner: completed plan for %s", run.Name)
diags = diags.Append(planDiags)
return planScope, plan, diags
return scope, plan, diags
}

@ -26,6 +26,7 @@ type TestGraphBuilder struct {
Config *configs.Config
File *moduletest.File
ContextOpts *terraform.ContextOpts
CommandMode moduletest.CommandMode
}
type graphOptions struct {
@ -49,11 +50,11 @@ func (b *TestGraphBuilder) Steps() []terraform.GraphTransformer {
ContextOpts: b.ContextOpts,
}
steps := []terraform.GraphTransformer{
&TestRunTransformer{opts},
&TestRunTransformer{opts: opts, mode: b.CommandMode},
&TestVariablesTransformer{File: b.File},
terraform.DynamicTransformer(validateRunConfigs),
terraform.DynamicTransformer(func(g *terraform.Graph) error {
cleanup := &TeardownSubgraph{opts: opts, parent: g}
cleanup := &TeardownSubgraph{opts: opts, parent: g, mode: b.CommandMode}
g.Add(cleanup)
// ensure that the teardown node runs after all the run nodes
@ -70,7 +71,6 @@ func (b *TestGraphBuilder) Steps() []terraform.GraphTransformer {
File: b.File,
Providers: opts.ContextOpts.Providers,
},
&EvalContextTransformer{File: b.File},
&ReferenceTransformer{},
&CloseTestGraphTransformer{},
&terraform.TransitiveReductionTransformer{},
@ -90,16 +90,6 @@ func validateRunConfigs(g *terraform.Graph) error {
return nil
}
// dynamicNode is a helper node which can be added to the graph to execute
// a dynamic function at some desired point in the graph.
type dynamicNode struct {
eval func(*EvalContext)
}
func (n *dynamicNode) Execute(evalCtx *EvalContext) {
n.eval(evalCtx)
}
func Walk(g *terraform.Graph, ctx *EvalContext) tfdiags.Diagnostics {
walkFn := func(v dag.Vertex) tfdiags.Diagnostics {
if ctx.Cancelled() {

@ -1,47 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package graph
import (
"github.com/hashicorp/terraform/internal/dag"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/terraform"
)
var _ terraform.GraphTransformer = (*EvalContextTransformer)(nil)
// EvalContextTransformer should be the first node to execute in the graph, and
// it initialises the run blocks and state files in the evaluation context.
type EvalContextTransformer struct {
File *moduletest.File
}
func (e *EvalContextTransformer) Transform(graph *terraform.Graph) error {
node := &dynamicNode{
eval: func(ctx *EvalContext) {
for _, run := range e.File.Runs {
// initialise all the state keys before the graph starts
// properly
key := run.Config.StateKey
if state := ctx.GetFileState(key); state == nil {
ctx.SetFileState(key, &TestFileState{
Run: nil,
State: states.NewState(),
})
}
}
},
}
graph.Add(node)
for v := range graph.VerticesSeq() {
if v == node {
continue
}
graph.Connect(dag.BasicEdge(v, node))
}
return nil
}

@ -85,6 +85,9 @@ func (t *TestProvidersTransformer) Transform(g *terraform.Graph) error {
configure: configure,
close: close,
}
// make sure the provider is only closed after it has been configured.
g.Connect(dag.BasicEdge(close, configure))
}
for vertex := range g.VerticesSeq() {

@ -26,18 +26,30 @@ type Subgrapher interface {
type TeardownSubgraph struct {
opts *graphOptions
parent *terraform.Graph
mode moduletest.CommandMode
}
func (b *TeardownSubgraph) Execute(ctx *EvalContext) {
ctx.Renderer().File(b.opts.File, moduletest.TearDown)
// work out the transitive state dependencies for each run node in the parent graph
runRefMap := make(map[addrs.Run][]string)
for runNode := range dag.SelectSeq[*NodeTestRun](b.parent.VerticesSeq()) {
refs := b.parent.Ancestors(runNode)
for _, ref := range refs {
if ref, ok := ref.(*NodeTestRun); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey {
runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey)
if b.mode == moduletest.CleanupMode {
for runNode := range dag.SelectSeq[*NodeTestRunCleanup](b.parent.VerticesSeq()) {
refs := b.parent.Ancestors(runNode)
for _, ref := range refs {
if ref, ok := ref.(*NodeTestRunCleanup); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey {
runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey)
}
}
}
} else {
for runNode := range dag.SelectSeq[*NodeTestRun](b.parent.VerticesSeq()) {
refs := b.parent.Ancestors(runNode)
for _, ref := range refs {
if ref, ok := ref.(*NodeTestRun); ok && ref.run.Config.StateKey != runNode.run.Config.StateKey {
runRefMap[runNode.run.Addr()] = append(runRefMap[runNode.run.Addr()], ref.run.Config.StateKey)
}
}
}
}

@ -12,27 +12,49 @@ import (
// and the variables defined in each run block, to the graph.
type TestRunTransformer struct {
opts *graphOptions
mode moduletest.CommandMode
}
func (t *TestRunTransformer) Transform(g *terraform.Graph) error {
// Create and add nodes for each run
for _, run := range t.opts.File.Runs {
priorRuns := make(map[string]*moduletest.Run)
for ix := run.Index - 1; ix >= 0; ix-- {
// If either node isn't parallel, we should draw an edge between
// them. Also, if they share the same state key we should also draw
// an edge between them regardless of the parallelisation.
if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey {
priorRuns[target.Name] = target
switch t.mode {
case moduletest.CleanupMode:
for _, run := range t.opts.File.Runs {
priorRuns := make(map[string]*moduletest.Run)
for ix := run.Index - 1; ix >= 0; ix-- {
// If either node isn't parallel, we should draw an edge between
// them. Also, if they share the same state key we should also draw
// an edge between them regardless of the parallelisation.
if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey {
priorRuns[target.Name] = target
}
}
g.Add(&NodeTestRunCleanup{
run: run,
opts: t.opts,
priorRuns: priorRuns,
})
}
g.Add(&NodeTestRun{
run: run,
opts: t.opts,
priorRuns: priorRuns,
})
}
default:
for _, run := range t.opts.File.Runs {
priorRuns := make(map[string]*moduletest.Run)
for ix := run.Index - 1; ix >= 0; ix-- {
// If either node isn't parallel, we should draw an edge between
// them. Also, if they share the same state key we should also draw
// an edge between them regardless of the parallelisation.
if target := t.opts.File.Runs[ix]; !run.Config.Parallel || !target.Config.Parallel || run.Config.StateKey == target.Config.StateKey {
priorRuns[target.Name] = target
}
}
g.Add(&NodeTestRun{
run: run,
opts: t.opts,
priorRuns: priorRuns,
})
}
}
return nil
}
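The edge-drawing rule described in the comments above (serialize runs when either side is non-parallel, or when two runs share a state key) can be sketched with plain structs. This is a minimal, hypothetical model for illustration only; `runSpec` and `priorRuns` are not part of the Terraform codebase:

```go
package main

import "fmt"

// runSpec is a simplified stand-in for a test run block, keeping only the
// fields that the edge rule consults.
type runSpec struct {
	Name     string
	Parallel bool
	StateKey string
}

// priorRuns returns, for the run at index ix, the names of earlier runs it
// must wait for: any earlier run where either side is non-parallel, or any
// earlier run that shares the same state key.
func priorRuns(runs []runSpec, ix int) []string {
	var prior []string
	for j := ix - 1; j >= 0; j-- {
		target := runs[j]
		if !runs[ix].Parallel || !target.Parallel || runs[ix].StateKey == target.StateKey {
			prior = append(prior, target.Name)
		}
	}
	return prior
}

func main() {
	runs := []runSpec{
		{Name: "setup", Parallel: true, StateKey: "a"},
		{Name: "check_a", Parallel: true, StateKey: "a"},
		{Name: "check_b", Parallel: true, StateKey: "b"},
		{Name: "teardown", Parallel: false, StateKey: "b"},
	}
	fmt.Println(priorRuns(runs, 1)) // shares state key "a" with setup
	fmt.Println(priorRuns(runs, 2)) // fully parallel, distinct key: no dependencies
	fmt.Println(priorRuns(runs, 3)) // non-parallel: depends on every earlier run
}
```

A non-parallel run therefore acts as a barrier: everything before it must finish first, which is exactly the ordering both the cleanup and normal transformer branches preserve.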

@ -10,7 +10,9 @@ import (
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/lang/langrefs"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
@ -25,9 +27,8 @@ import (
// more variables than are required by the config. FilterVariablesToConfig
// should be called before trying to use these variables within a Terraform
// plan, apply, or destroy operation.
func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terraform.InputValues, tfdiags.Diagnostics) {
func GetVariables(ctx *EvalContext, run *configs.TestRun, module *configs.Config, includeWarnings bool) (terraform.InputValues, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
run := n.run
// relevantVariables contains the variables that are of interest to this
// run block. This is a combination of the variables declared within the
// configuration for this run block, and the variables referenced by the
@ -36,14 +37,15 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr
// First, we'll check to see which variables the run block assertions
// reference.
for _, reference := range n.References() {
references, _ := moduletest.GetRunReferences(run)
for _, reference := range references {
if addr, ok := reference.Subject.(addrs.InputVariable); ok {
relevantVariables[addr.Name] = reference
}
}
// And check to see which variables the run block configuration references.
for name := range run.ModuleConfig.Module.Variables {
for name := range module.Module.Variables {
relevantVariables[name] = nil
}
@ -53,7 +55,7 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr
// First, let's step through the expressions within the run block and work
// them out.
for name, expr := range run.Config.Variables {
for name, expr := range run.Variables {
refs, refDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr)
diags = append(diags, refDiags...)
if refDiags.HasErrors() {
@ -99,7 +101,7 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr
// use a default fallback value to let Terraform attempt to apply defaults
// if they exist.
for name, variable := range run.ModuleConfig.Module.Variables {
for name, variable := range module.Module.Variables {
if _, exists := values[name]; exists {
// Then we've provided a variable for this explicitly. It's all
// good.
@ -197,11 +199,11 @@ func (n *NodeTestRun) GetVariables(ctx *EvalContext, includeWarnings bool) (terr
//
// This function can only return warnings, and the callers can rely on this so
// please check the callers of this function if you add any error diagnostics.
func (n *NodeTestRun) FilterVariablesToModule(values terraform.InputValues) (moduleVars, testOnlyVars terraform.InputValues, diags tfdiags.Diagnostics) {
func FilterVariablesToModule(config *configs.Config, values terraform.InputValues) (moduleVars, testOnlyVars terraform.InputValues, diags tfdiags.Diagnostics) {
moduleVars = make(terraform.InputValues)
testOnlyVars = make(terraform.InputValues)
for name, value := range values {
_, exists := n.run.ModuleConfig.Module.Variables[name]
_, exists := config.Module.Variables[name]
if !exists {
// If it's not in the configuration then it's a test-only variable.
testOnlyVars[name] = value

@ -47,13 +47,13 @@ func (a *atomicProgress[T]) Store(progress T) {
}
// NewOperationWaiter creates a new operation waiter.
func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTestRun,
func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, file *moduletest.File, run *moduletest.Run,
progress moduletest.Progress, start int64) *operationWaiter {
identifier := "validate"
if n.File() != nil {
identifier = n.File().Name
if n.run != nil {
identifier = fmt.Sprintf("%s/%s", identifier, n.run.Name)
if file != nil {
identifier = file.Name
if run != nil {
identifier = fmt.Sprintf("%s/%s", identifier, run.Name)
}
}
@ -62,8 +62,8 @@ func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTes
return &operationWaiter{
ctx: ctx,
run: n.run,
file: n.File(),
run: run,
file: file,
progress: p,
start: start,
identifier: identifier,
@ -73,7 +73,7 @@ func NewOperationWaiter(ctx *terraform.Context, evalCtx *EvalContext, n *NodeTes
}
// Run executes the given function in a goroutine and waits for it to finish.
// If the function finishes, it returns false. If the function is cancelled or
// If the function finishes successfully, it returns false. If the function is cancelled or
// interrupted, it returns true.
func (w *operationWaiter) Run(fn func()) bool {
runningCtx, doneRunning := context.WithCancel(context.Background())
@ -134,14 +134,13 @@ func (w *operationWaiter) updateProgress() {
// handleCancelled is called when the test execution is hard cancelled.
func (w *operationWaiter) handleCancelled() bool {
log.Printf("[DEBUG] TestFileRunner: test execution cancelled during %s", w.identifier)
states := make(map[*moduletest.Run]*states.State)
mainKey := configs.TestMainStateIdentifier
states[nil] = w.evalCtx.GetFileState(mainKey).State
states := make(map[string]*states.State)
states[configs.TestMainStateIdentifier] = w.evalCtx.GetState(configs.TestMainStateIdentifier).State
for key, module := range w.evalCtx.FileStates {
if key == mainKey {
if key == configs.TestMainStateIdentifier {
continue
}
states[module.Run] = module.State
states[key] = module.State
}
w.renderer.FatalInterruptSummary(w.run, w.file, states, w.created)

@ -0,0 +1,85 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package moduletest
import (
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/lang/langrefs"
"github.com/hashicorp/terraform/internal/tfdiags"
)
func GetRunTargets(config *configs.TestRun) ([]addrs.Targetable, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var targets []addrs.Targetable
for _, target := range config.Options.Target {
addr, diags := addrs.ParseTarget(target)
diagnostics = diagnostics.Append(diags)
if addr != nil {
targets = append(targets, addr.Subject)
}
}
return targets, diagnostics
}
func GetRunReplaces(config *configs.TestRun) ([]addrs.AbsResourceInstance, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var replaces []addrs.AbsResourceInstance
for _, replace := range config.Options.Replace {
addr, diags := addrs.ParseAbsResourceInstance(replace)
diagnostics = diagnostics.Append(diags)
if diags.HasErrors() {
continue
}
if addr.Resource.Resource.Mode != addrs.ManagedResourceMode {
diagnostics = diagnostics.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Can only target managed resources for forced replacements.",
Detail: addr.String(),
Subject: replace.SourceRange().Ptr(),
})
continue
}
replaces = append(replaces, addr)
}
return replaces, diagnostics
}
func GetRunReferences(config *configs.TestRun) ([]*addrs.Reference, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var references []*addrs.Reference
for _, rule := range config.CheckRules {
for _, variable := range rule.Condition.Variables() {
reference, diags := addrs.ParseRefFromTestingScope(variable)
diagnostics = diagnostics.Append(diags)
if reference != nil {
references = append(references, reference)
}
}
for _, variable := range rule.ErrorMessage.Variables() {
reference, diags := addrs.ParseRefFromTestingScope(variable)
diagnostics = diagnostics.Append(diags)
if reference != nil {
references = append(references, reference)
}
}
}
for _, expr := range config.Variables {
moreRefs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr)
diagnostics = diagnostics.Append(moreDiags)
references = append(references, moreRefs...)
}
return references, diagnostics
}

@ -13,7 +13,6 @@ import (
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/lang/langrefs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
@ -94,106 +93,6 @@ func (run *Run) Addr() addrs.Run {
return addrs.Run{Name: run.Name}
}
func (run *Run) GetTargets() ([]addrs.Targetable, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var targets []addrs.Targetable
for _, target := range run.Config.Options.Target {
addr, diags := addrs.ParseTarget(target)
diagnostics = diagnostics.Append(diags)
if addr != nil {
targets = append(targets, addr.Subject)
}
}
return targets, diagnostics
}
func (run *Run) GetReplaces() ([]addrs.AbsResourceInstance, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var replaces []addrs.AbsResourceInstance
for _, replace := range run.Config.Options.Replace {
addr, diags := addrs.ParseAbsResourceInstance(replace)
diagnostics = diagnostics.Append(diags)
if diags.HasErrors() {
continue
}
if addr.Resource.Resource.Mode != addrs.ManagedResourceMode {
diagnostics = diagnostics.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Can only target managed resources for forced replacements.",
Detail: addr.String(),
Subject: replace.SourceRange().Ptr(),
})
continue
}
replaces = append(replaces, addr)
}
return replaces, diagnostics
}
func (run *Run) GetReferences() ([]*addrs.Reference, tfdiags.Diagnostics) {
var diagnostics tfdiags.Diagnostics
var references []*addrs.Reference
for _, rule := range run.Config.CheckRules {
for _, variable := range rule.Condition.Variables() {
reference, diags := addrs.ParseRefFromTestingScope(variable)
diagnostics = diagnostics.Append(diags)
if reference != nil {
references = append(references, reference)
}
}
for _, variable := range rule.ErrorMessage.Variables() {
reference, diags := addrs.ParseRefFromTestingScope(variable)
diagnostics = diagnostics.Append(diags)
if reference != nil {
references = append(references, reference)
}
}
}
for _, expr := range run.Config.Variables {
moreRefs, moreDiags := langrefs.ReferencesInExpr(addrs.ParseRefFromTestingScope, expr)
diagnostics = diagnostics.Append(moreDiags)
references = append(references, moreRefs...)
}
for name, variable := range run.ModuleConfig.Module.Variables {
// because we also draw implicit references back to any variables
// defined in the test file with the same name as actual variables, then
// we'll count these as references as well.
if _, ok := run.Config.Variables[name]; ok {
// BUT, if the variable is defined within the list of variables
// within the run block then we don't want to draw an implicit
// reference as the data comes from that expression.
continue
}
references = append(references, &addrs.Reference{
Subject: addrs.InputVariable{Name: name},
SourceRange: tfdiags.SourceRangeFromHCL(variable.DeclRange),
})
}
return references, diagnostics
}
// GetModuleConfigID returns the identifier for the module configuration that
// this run is testing. This is used to uniquely identify the module
// configuration in the test state.
func (run *Run) GetModuleConfigID() string {
return run.ModuleConfig.Module.SourceDir
}
// ExplainExpectedFailures is similar to ValidateExpectedFailures except it
// looks for any diagnostics produced by custom conditions and are included in
// the expected failures and adds an additional explanation that clarifies the
@ -203,14 +102,14 @@ func (run *Run) GetModuleConfigID() string {
// an expected failure during the planning stage will still result in the
// overall test failing as the plan failed and we couldn't even execute the
// apply stage.
func (run *Run) ExplainExpectedFailures(originals tfdiags.Diagnostics) tfdiags.Diagnostics {
func ExplainExpectedFailures(config *configs.TestRun, originals tfdiags.Diagnostics) tfdiags.Diagnostics {
// We're going to capture all the checkable objects that are referenced
// from the expected failures.
expectedFailures := addrs.MakeMap[addrs.Referenceable, bool]()
sourceRanges := addrs.MakeMap[addrs.Referenceable, tfdiags.SourceRange]()
for _, traversal := range run.Config.ExpectFailures {
for _, traversal := range config.ExpectFailures {
// Ignore the diagnostics returned from the reference parsing, these
// references will have been checked earlier in the process by the
// validate stage so we don't need to do that again here.
@ -330,14 +229,14 @@ func (run *Run) ExplainExpectedFailures(originals tfdiags.Diagnostics) tfdiags.D
// diagnostics were generated by custom conditions. Terraform adds the
// addrs.CheckRule that generated each diagnostic to the diagnostic itself so we
// can tell which diagnostics can be expected.
func (run *Run) ValidateExpectedFailures(originals tfdiags.Diagnostics) tfdiags.Diagnostics {
func ValidateExpectedFailures(config *configs.TestRun, originals tfdiags.Diagnostics) tfdiags.Diagnostics {
// We're going to capture all the checkable objects that are referenced
// from the expected failures.
expectedFailures := addrs.MakeMap[addrs.Referenceable, bool]()
sourceRanges := addrs.MakeMap[addrs.Referenceable, tfdiags.SourceRange]()
for _, traversal := range run.Config.ExpectFailures {
for _, traversal := range config.ExpectFailures {
// Ignore the diagnostics returned from the reference parsing, these
// references will have been checked earlier in the process by the
// validate stage so we don't need to do that again here.

@ -766,7 +766,7 @@ func TestRun_ValidateExpectedFailures(t *testing.T) {
},
}
out := run.ValidateExpectedFailures(tc.Input)
out := ValidateExpectedFailures(run.Config, tc.Input)
ix := 0
for ; ix < len(tc.Output); ix++ {
expected := tc.Output[ix]

@ -0,0 +1,558 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package states
import (
"encoding/json"
"fmt"
"io"
"log"
"math/rand/v2"
"os"
"path/filepath"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hcldec"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/command/workdir"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/states/statemgr"
"github.com/hashicorp/terraform/internal/tfdiags"
)
const alphanumeric = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
type StateReason string
const (
StateReasonNone StateReason = ""
StateReasonSkip StateReason = "skip_cleanup"
StateReasonDep StateReason = "dependency"
StateReasonError StateReason = "error"
)
// TestManifest represents the structure of the manifest file that keeps track
// of the state files left-over during test runs.
type TestManifest struct {
Version int `json:"version"`
Files map[string]*TestFileManifest `json:"files"`
dataDir string // Directory where all test-related data is stored
ids map[string]bool
}
// TestFileManifest represents a single file with its states keyed by the state
// key.
type TestFileManifest struct {
States map[string]*TestRunManifest `json:"states"` // Map of state keys to their manifests.
}
// TestRunManifest represents an individual test run state.
type TestRunManifest struct {
// ID of the state file, used to locate it in the test data directory.
// This will be empty if the state was written to a real backend and not
// stored locally.
ID string `json:"id,omitempty"`
// Reason for the state being left over
Reason StateReason `json:"reason,omitempty"`
}
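Given the `json` struct tags above, a serialized manifest would look roughly like this (the file name, state key, and id are illustrative; `id` is omitted when the state lives in a real backend):

```json
{
  "version": 0,
  "files": {
    "main.tftest.hcl": {
      "states": {
        "state_key_1": {
          "id": "aB3xK9qZ",
          "reason": "error"
        }
      }
    }
  }
}
```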
// LoadManifest loads the test manifest from the specified root directory.
func LoadManifest(rootDir string, experimentsAllowed bool) (*TestManifest, error) {
if !experimentsAllowed {
// Just return an empty manifest file every time when experiments are
// disabled.
return &TestManifest{
Version: 0,
Files: make(map[string]*TestFileManifest),
dataDir: workdir.NewDir(rootDir).TestDataDir(),
ids: make(map[string]bool),
}, nil
}
wd := workdir.NewDir(rootDir)
manifest := &TestManifest{
Version: 0,
Files: make(map[string]*TestFileManifest),
dataDir: wd.TestDataDir(),
ids: make(map[string]bool),
}
// Create directory if it doesn't exist
if err := manifest.ensureDataDir(); err != nil {
return nil, err
}
data, err := os.OpenFile(manifest.filePath(), os.O_CREATE|os.O_RDONLY, 0644)
if err != nil {
return nil, err
}
defer data.Close()
if err := json.NewDecoder(data).Decode(manifest); err != nil && err != io.EOF {
return nil, err
}
for _, fileManifest := range manifest.Files {
for _, runManifest := range fileManifest.States {
// keep a cache of all known ids
manifest.ids[runManifest.ID] = true
}
}
return manifest, nil
}
// Save saves the current state of the manifest to the data directory.
func (manifest *TestManifest) Save(experimentsAllowed bool) error {
if !experimentsAllowed {
// just don't save the manifest file when experiments are disabled.
return nil
}
data, err := json.MarshalIndent(manifest, "", " ")
if err != nil {
return err
}
return os.WriteFile(manifest.filePath(), data, 0644)
}
// LoadStates loads the states for the specified file.
func (manifest *TestManifest) LoadStates(file *moduletest.File, factory func(string) backend.InitFn) (map[string]*TestRunState, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
allStates := make(map[string]*TestRunState)
var existingStates map[string]*TestRunManifest
if fm, exists := manifest.Files[file.Name]; exists {
existingStates = fm.States
}
for _, run := range file.Runs {
key := run.Config.StateKey
if existing, exists := allStates[key]; exists {
if run.Config.Backend != nil {
f := factory(run.Config.Backend.Type)
if f == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unknown backend type",
Detail: fmt.Sprintf("Backend type %q is not a recognised backend.", run.Config.Backend.Type),
Subject: run.Config.Backend.DeclRange.Ptr(),
})
continue
}
be, err := getBackendInstance(run.Config.StateKey, run.Config.Backend, f)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid backend configuration",
Detail: fmt.Sprintf("Backend configuration was invalid: %s.", err),
Subject: run.Config.Backend.DeclRange.Ptr(),
})
continue
}
// Save the backend for this state when we find it, even if the
// state was initialised first.
existing.Backend = be
}
continue
}
var backend backend.Backend
if run.Config.Backend != nil {
// Then we have to load the state from the backend instead of
// locally or creating a new one.
f := factory(run.Config.Backend.Type)
if f == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unknown backend type",
Detail: fmt.Sprintf("Backend type %q is not a recognised backend.", run.Config.Backend.Type),
Subject: run.Config.Backend.DeclRange.Ptr(),
})
continue
}
be, err := getBackendInstance(run.Config.StateKey, run.Config.Backend, f)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid backend configuration",
Detail: fmt.Sprintf("Backend configuration was invalid: %s.", err),
Subject: run.Config.Backend.DeclRange.Ptr(),
})
continue
}
backend = be
}
if existing := existingStates[key]; existing != nil {
var state *states.State
if len(existing.ID) > 0 {
s, err := manifest.loadState(existing)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to load state",
fmt.Sprintf("Failed to load state from manifest file for %s: %s", run.Name, err)))
continue
}
state = s
} else {
state = states.NewState()
}
allStates[key] = &TestRunState{
Run: run,
Manifest: &TestRunManifest{ // copy this, so we can edit without affecting the original
ID: existing.ID,
Reason: existing.Reason,
},
State: state,
Backend: backend,
}
} else {
var id string
if backend == nil {
id = manifest.generateID()
}
allStates[key] = &TestRunState{
Run: run,
Manifest: &TestRunManifest{
ID: id,
Reason: StateReasonNone,
},
State: states.NewState(),
Backend: backend,
}
}
}
for key := range existingStates {
if _, exists := allStates[key]; !exists {
stateKey := key
if stateKey == configs.TestMainStateIdentifier {
stateKey = "for the module under test"
}
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Orphaned state",
fmt.Sprintf("The state key %s is stored in the state manifest indicating a failed cleanup operation, but the state key is not claimed by any run blocks within the current test file. Either restore a run block that manages the specified state, or manually clean up this state file.", stateKey)))
}
}
return allStates, diags
}
func (manifest *TestManifest) loadState(state *TestRunManifest) (*states.State, error) {
stateFile := statemgr.NewFilesystem(manifest.StateFilePath(state.ID))
if err := stateFile.RefreshState(); err != nil {
return nil, fmt.Errorf("error loading state from file %s: %w", manifest.StateFilePath(state.ID), err)
}
return stateFile.State(), nil
}
// SaveStates saves the states for the specified file to the manifest.
func (manifest *TestManifest) SaveStates(file *moduletest.File, states map[string]*TestRunState) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
if existingStates, exists := manifest.Files[file.Name]; exists {
// If we have existing states, we're doing update or delete operations
// rather than just adding new states.
for key, existingState := range existingStates.States {
// First, check all the existing states against the states being
// saved.
if state, exists := states[key]; exists {
// If we have a new state, then overwrite the existing one
// assuming that it has a reason to be saved.
if state.Backend != nil {
// If we have a backend, regardless of the reason, then
// we'll save the state to the backend.
stmgr, err := state.Backend.StateMgr(backend.DefaultStateName)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
if err := stmgr.WriteState(state.State); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
// But, still keep the manifest file itself up-to-date.
if state.Manifest.Reason != StateReasonNone {
existingStates.States[key] = state.Manifest
} else {
delete(existingStates.States, key)
}
} else if state.Manifest.Reason != StateReasonNone {
if err := manifest.writeState(state); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
existingStates.States[key] = state.Manifest
continue
} else {
// If no reason to be saved, then it means we managed to
// clean everything up properly. So we'll delete the
// existing state file and remove any mention of it.
if err := manifest.deleteState(existingState); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to delete state",
Detail: fmt.Sprintf("Failed to delete state file for key %s: %s.", key, err),
})
continue
}
delete(existingStates.States, key) // remove the state from the manifest file
}
}
// Otherwise, we just leave the state file as is. We don't want to
// remove it prematurely, as users might still need it to tidy
// something up.
}
// now, we've updated / removed any pre-existing states we should also
// write any states that are brand new, and weren't in the existing
// state.
for key, state := range states {
if _, exists := existingStates.States[key]; exists {
// we've already handled everything in the existing state
continue
}
if state.Backend != nil {
stmgr, err := state.Backend.StateMgr(backend.DefaultStateName)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
if err := stmgr.WriteState(state.State); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
if state.Manifest.Reason != StateReasonNone {
existingStates.States[key] = state.Manifest
}
} else if state.Manifest.Reason != StateReasonNone {
if err := manifest.writeState(state); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
existingStates.States[key] = state.Manifest
}
}
if len(existingStates.States) == 0 {
// if we now have tidied everything up, remove record of this from
// the manifest.
delete(manifest.Files, file.Name)
}
} else {
// We're just writing entirely new states, so we can just create a new
// TestFileManifest and add it to the manifest.
newStates := make(map[string]*TestRunManifest)
for key, state := range states {
if state.Backend != nil {
stmgr, err := state.Backend.StateMgr(backend.DefaultStateName)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
if err := stmgr.WriteState(state.State); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
if state.Manifest.Reason != StateReasonNone {
newStates[key] = state.Manifest
}
} else if state.Manifest.Reason != StateReasonNone {
if err := manifest.writeState(state); err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Failed to write state",
Detail: fmt.Sprintf("Failed to write state file for key %s: %s.", key, err),
})
continue
}
newStates[key] = state.Manifest
}
}
if len(newStates) > 0 {
// only add this into the manifest if we actually wrote any
// new states
manifest.Files[file.Name] = &TestFileManifest{
States: newStates,
}
}
}
return diags
}
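The branching inside `SaveStates` can be condensed into a small decision table. This is a hypothetical summary function (not part of the codebase) mapping the three inputs the logic above consults to the action taken for one state entry:

```go
package main

import "fmt"

// action summarizes what happens to one state entry: backend-managed states
// are always written to the backend; local states are only persisted when
// there is a reason (skip_cleanup, dependency, error) to keep them; fully
// cleaned-up states that were previously tracked get their file deleted.
func action(hasBackend bool, reason string, existedBefore bool) string {
	switch {
	case hasBackend && reason != "":
		return "write to backend, keep manifest entry"
	case hasBackend:
		return "write to backend, drop manifest entry"
	case reason != "":
		return "write local state file, keep manifest entry"
	case existedBefore:
		return "delete local state file, drop manifest entry"
	default:
		return "nothing to record" // cleaned up and never tracked
	}
}

func main() {
	fmt.Println(action(false, "error", true))
	fmt.Println(action(true, "", false))
	fmt.Println(action(false, "", true))
}
```

Note that the manifest entry only survives when there is a non-empty reason, which is why a fully successful cleanup eventually removes the file's record from the manifest entirely.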
func (manifest *TestManifest) writeState(state *TestRunState) error {
stateFile := statemgr.NewFilesystem(manifest.StateFilePath(state.Manifest.ID))
if err := stateFile.WriteState(state.State); err != nil {
return fmt.Errorf("error writing state to file %s: %w", manifest.StateFilePath(state.Manifest.ID), err)
}
return nil
}
func (manifest *TestManifest) deleteState(runManifest *TestRunManifest) error {
target := manifest.StateFilePath(runManifest.ID)
if err := os.Remove(target); err != nil {
if os.IsNotExist(err) {
// If the file doesn't exist, we can ignore this error.
return nil
}
return fmt.Errorf("error deleting state file %s: %w", target, err)
}
return nil
}
func (manifest *TestManifest) generateID() string {
const maxAttempts = 10
for ix := 0; ix < maxAttempts; ix++ {
var b [8]byte
for i := range b {
n := rand.IntN(len(alphanumeric))
b[i] = alphanumeric[n]
}
id := string(b[:])
if _, exists := manifest.ids[id]; exists {
continue // generate another one
}
manifest.ids[id] = true
return id
}
panic("failed to generate a unique id after 10 attempts")
}
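The retry-until-unique pattern in `generateID` above can be sketched standalone with the standard library. This version returns failure instead of panicking, but the draw-check-retry shape is the same (the helper names are illustrative):

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

const alphanumeric = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// generateID draws 8-character alphanumeric IDs until it finds one not
// already in seen, giving up after maxAttempts rather than looping forever.
// On success the new ID is recorded in seen so later calls cannot reuse it.
func generateID(seen map[string]bool, maxAttempts int) (string, bool) {
	for i := 0; i < maxAttempts; i++ {
		var b [8]byte
		for j := range b {
			b[j] = alphanumeric[rand.IntN(len(alphanumeric))]
		}
		id := string(b[:])
		if seen[id] {
			continue // collision: draw again
		}
		seen[id] = true
		return id, true
	}
	return "", false
}

func main() {
	seen := make(map[string]bool)
	id, ok := generateID(seen, 10)
	fmt.Println(ok, len(id)) // prints "true 8"
}
```

With a 62-character alphabet and 8 positions there are roughly 2.18e14 possible IDs, so collisions across a handful of retained test states are vanishingly rare and ten attempts is a generous bound.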
func (manifest *TestManifest) ensureDataDir() error {
if _, err := os.Stat(manifest.dataDir); os.IsNotExist(err) {
return os.MkdirAll(manifest.dataDir, 0755)
}
return nil
}
// filePath returns the path to the manifest file
func (manifest *TestManifest) filePath() string {
return filepath.Join(manifest.dataDir, "manifest.json")
}
// StateFilePath returns the path to the state file for a given ID.
//
// Visible for testing purposes.
func (manifest *TestManifest) StateFilePath(id string) string {
return filepath.Join(manifest.dataDir, fmt.Sprintf("%s.tfstate", id))
}
// getBackendInstance uses the config for a given run block's backend block to create and return a configured
// instance of that backend type.
func getBackendInstance(stateKey string, config *configs.Backend, f backend.InitFn) (backend.Backend, error) {
b := f()
log.Printf("[TRACE] getBackendInstance: instantiated backend of type %T", b)
schema := b.ConfigSchema()
decSpec := schema.NoneRequired().DecoderSpec()
configVal, hclDiags := hcldec.Decode(config.Config, decSpec, nil)
if hclDiags.HasErrors() {
return nil, fmt.Errorf("error decoding backend configuration for state key %s: %v", stateKey, hclDiags.Errs())
}
if !configVal.IsWhollyKnown() {
return nil, fmt.Errorf("unknown values within backend definition for state key %s", stateKey)
}
newVal, validateDiags := b.PrepareConfig(configVal)
validateDiags = validateDiags.InConfigBody(config.Config, "")
if validateDiags.HasErrors() {
return nil, validateDiags.Err()
}
configureDiags := b.Configure(newVal)
configureDiags = configureDiags.InConfigBody(config.Config, "")
if configureDiags.HasErrors() {
return nil, configureDiags.Err()
}
return b, nil
}

@ -0,0 +1,29 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package states
import (
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/moduletest"
"github.com/hashicorp/terraform/internal/states"
)
type TestRunState struct {
// Run identifies the run block used to either destroy this state or
// restore it. If RestoreState is false, the state will be destroyed;
// if true, it will be restored to match the configuration of the
// relevant run block.
Run *moduletest.Run
RestoreState bool
// Manifest is the underlying state manifest for this state.
Manifest *TestRunManifest
// State is the actual state.
State *states.State
// Backend is the backend where this state should be saved upon test
// completion.
Backend backend.Backend
}

@@ -3,16 +3,30 @@
 package moduletest

-import "github.com/hashicorp/terraform/internal/tfdiags"
+import (
+	"github.com/hashicorp/terraform/internal/tfdiags"
+)
+
+type CommandMode int
+
+const (
+	// NormalMode is the default mode for running terraform test.
+	NormalMode CommandMode = iota
+
+	// CleanupMode is used when running terraform test cleanup.
+	// In this mode, the graph will be built with the intention of cleaning up
+	// the state, rather than applying changes.
+	CleanupMode
+)

 type Suite struct {
-	Status Status
+	Status      Status
+	CommandMode CommandMode

 	Files map[string]*File
 }

 type TestSuiteRunner interface {
-	Test() (Status, tfdiags.Diagnostics)
+	Test(experimentsAllowed bool) (Status, tfdiags.Diagnostics)

 	Stop()
 	Cancel()

@@ -979,8 +979,8 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State,
 		externalProviderConfigs = opts.ExternalProviders
 	}

-	if opts != nil && opts.OverridePreventDestroy && opts.Mode != plans.DestroyMode {
-		panic("you can only set OverridePreventDestroy during destroy operations.")
+	if opts != nil && opts.OverridePreventDestroy && opts.Mode == plans.RefreshOnlyMode {
+		panic("you can't set OverridePreventDestroy during refresh operations.")
 	}
switch mode := opts.Mode; mode {
@@ -1010,6 +1010,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State,
 			GenerateConfigPath:     opts.GenerateConfigPath,
 			SkipGraphValidation:    c.graphOpts.SkipGraphValidation,
 			queryPlan:              opts.Query,
+			overridePreventDestroy: opts.OverridePreventDestroy,
 		}).Build(addrs.RootModuleInstance)

 		return graph, walkPlan, diags

 	case plans.RefreshOnlyMode:

@@ -152,10 +152,6 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer {
 		panic("invalid plan operation: " + b.Operation.String())
 	}

-	if b.overridePreventDestroy && b.Operation != walkPlanDestroy {
-		panic("overridePreventDestroy can only be set during walkPlanDestroy operations")
-	}
-
 	steps := []GraphTransformer{
 		// Creates all the resources represented in the config
 		&ConfigTransformer{
@@ -322,6 +318,7 @@ func (b *PlanGraphBuilder) initPlan() {
 	}

 	b.ConcreteResource = func(a *NodeAbstractResource) dag.Vertex {
+		a.overridePreventDestroy = b.overridePreventDestroy
 		return &nodeExpandPlannableResource{
 			NodeAbstractResource: a,
 			skipRefresh:          b.skipRefresh,
@@ -332,6 +329,7 @@ func (b *PlanGraphBuilder) initPlan() {
 	}

 	b.ConcreteResourceOrphan = func(a *NodeAbstractResourceInstance) dag.Vertex {
+		a.overridePreventDestroy = b.overridePreventDestroy
 		return &NodePlannableResourceInstanceOrphan{
 			NodeAbstractResourceInstance: a,
 			skipRefresh:                  b.skipRefresh,
@@ -342,6 +340,7 @@ func (b *PlanGraphBuilder) initPlan() {
 	}

 	b.ConcreteResourceInstanceDeposed = func(a *NodeAbstractResourceInstance, key states.DeposedKey) dag.Vertex {
+		a.overridePreventDestroy = b.overridePreventDestroy
 		return &NodePlanDeposedResourceInstanceObject{
 			NodeAbstractResourceInstance: a,
 			DeposedKey:                   key,

@@ -87,6 +87,11 @@ type NodeAbstractResource struct {
 	generateConfigPath string

 	forceCreateBeforeDestroy bool

+	// overridePreventDestroy is set during test cleanup operations to allow
+	// tests to clean up any created infrastructure regardless of this setting
+	// in the configuration.
+	overridePreventDestroy bool
 }

 var (

@@ -46,11 +46,6 @@ type NodeAbstractResourceInstance struct {
 	preDestroyRefresh bool

-	// overridePreventDestroy is set during test cleanup operations to allow
-	// tests to clean up any created infrastructure regardless of this setting
-	// in the configuration.
-	overridePreventDestroy bool
-
 	// During import (or query) we may generate configuration for a resource, which needs
 	// to be stored in the final change.
 	generatedConfigHCL string
