terraform-stacks by hashicorp/agent-skills
npx skills add https://github.com/hashicorp/agent-skills --skill terraform-stacks
Terraform Stacks simplify infrastructure provisioning and management at scale by providing a configuration layer above traditional Terraform modules. Stacks enable declarative orchestration of multiple components across environments, regions, and cloud accounts.
Stack : A complete unit of infrastructure composed of components and deployments that can be managed together.
Component : An abstraction around a Terraform module that defines infrastructure pieces. Each component specifies a source module, inputs, and providers.
Deployment : An instance of all components in a stack with specific input values. Use deployments for different environments (dev/staging/prod), regions, or cloud accounts.
Stack Language : A separate HCL-based language (not regular Terraform HCL) with distinct blocks and file extensions.
Terraform Stacks use specific file extensions:
- .tfcomponent.hcl
- .tfdeploy.hcl
- .terraform.lock.hcl (generated by the CLI)

All configuration files must be at the root level of the Stack repository. HCP Terraform processes all files in dependency order.
my-stack/
├── .terraform-version # The required Terraform version for this Stack
├── variables.tfcomponent.hcl # Variable declarations
├── providers.tfcomponent.hcl # Provider configurations
├── components.tfcomponent.hcl # Component definitions
├── outputs.tfcomponent.hcl # Stack outputs
├── deployments.tfdeploy.hcl # Deployment definitions
├── .terraform.lock.hcl # Provider lock file (generated)
└── modules/ # Local modules (optional - only if using local modules)
├── s3/
└── compute/
Note : The modules/ directory is only required when using local module sources. Components can reference modules from:
- Local paths: ./modules/vpc
- Public registry: terraform-aws-modules/vpc/aws
- Private registry: app.terraform.io/<org-name>/vpc/aws
- Git: git::https://github.com/org/repo.git//path?ref=v1.0.0

HCP Terraform processes all .tfcomponent.hcl and .tfdeploy.hcl files in dependency order.
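As a hedged sketch (module inputs are illustrative, not from a real Stack), the same component block shape accepts any of these source forms:

```hcl
# Local module source (this is what requires the modules/ directory)
component "vpc_local" {
  source    = "./modules/vpc"
  inputs    = { cidr_block = var.vpc_cidr }
  providers = { aws = provider.aws.this }
}

# Git source pinned to a tag via the ref query parameter
component "vpc_git" {
  source    = "git::https://github.com/org/repo.git//path?ref=v1.0.0"
  inputs    = { cidr_block = var.vpc_cidr }
  providers = { aws = provider.aws.this }
}
```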
Use Terraform v1.13.x or later to access the Stacks CLI plugin and to run terraform stacks CLI commands. Begin by adding a .terraform-version file to your Stack's root directory to specify the Terraform version required for your Stack. For example, the following file specifies Terraform v1.14.5:
1.14.5
Declare input variables for the Stack configuration. Variables must define a type field and do not support the validation argument.
variable "aws_region" {
type = string
description = "AWS region for deployments"
default = "us-west-1"
}
variable "identity_token" {
type = string
description = "OIDC identity token"
ephemeral = true # Does not persist to state file
}
variable "instance_count" {
type = number
nullable = false
}
Important : Use ephemeral = true for credentials and tokens (identity tokens, API keys, passwords) to prevent them from persisting in state files. Use stable for longer-lived values like license keys that need to persist across runs.
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 6.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.5.0"
}
}
Provider blocks differ from traditional Terraform:
- Provider blocks support the for_each meta-argument
- Provider arguments go inside a nested config block

Single Provider Configuration:
provider "aws" "this" {
config {
region = var.aws_region
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
Multiple Provider Configurations with for_each:
provider "aws" "configurations" {
for_each = var.regions
config {
region = each.value
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
Authentication Best Practice : Use workload identity (OIDC) as the preferred authentication method for Stacks. This approach avoids storing long-lived static credentials and instead exchanges short-lived identity tokens with your cloud provider on each run.
Configure workload identity using identity_token blocks and assume_role_with_web_identity in provider configuration. For detailed setup instructions for AWS, Azure, and GCP, see: https://developer.hashicorp.com/terraform/cloud-docs/dynamic-provider-credentials
Each Stack requires at least one component block. Add a component for each module to include in the Stack. Components reference modules from local paths, registries, or Git.
component "vpc" {
source = "app.terraform.io/my-org/vpc/aws" # Local, registry, or Git URL
version = "2.1.0" # For registry modules
inputs = {
cidr_block = var.vpc_cidr
name_prefix = var.name_prefix
}
providers = {
aws = provider.aws.this
}
}
See references/component-blocks.md for examples of dependencies, for_each, public registry modules, Git sources, and more.
Key Points:
- Reference component outputs as component.<name>.<output>, or component.<name>[key].<output> for components using for_each
- Collect outputs across instances with expressions like [for x in component.s3 : x.bucket_name]
- Inside a for_each context, reference specific instances as component.<name>[each.value].<output>
- Reference providers as provider.<type>.<alias> or provider.<type>.<alias>[each.value]

Outputs require a type argument and do not support preconditions:
output "vpc_id" {
type = string
description = "VPC ID"
value = component.vpc.vpc_id
}
output "endpoint_urls" {
type = map(string)
value = {
for region, comp in component.api : region => comp.endpoint_url
}
sensitive = false
}
Locals blocks work the same in both .tfcomponent.hcl and .tfdeploy.hcl files:
locals {
common_tags = {
Environment = var.environment
ManagedBy = "Terraform Stacks"
Project = var.project_name
}
region_config = {
for region in var.regions : region => {
name_suffix = "${var.environment}-${region}"
}
}
}
Use the removed block to safely remove components from a Stack. HCP Terraform needs the component's providers in order to remove it.
removed {
from = component.old_component
source = "./modules/old-module"
providers = {
aws = provider.aws.this
}
}
Generate JWT tokens for OIDC authentication with cloud providers:
identity_token "aws" {
audience = ["aws.workload.identity"]
}
identity_token "azure" {
audience = ["api://AzureADTokenExchange"]
}
Reference tokens in deployments using identity_token.<name>.jwt.
Access HCP Terraform variable sets within Stack deployments:
store "varset" "aws_credentials" {
id = "varset-ABC123" # Alternatively use: name = "varset_name"
source = "tfc-cloud-shared"
category = "terraform" # Alternatively use: category = "env" for environment variables
}
deployment "production" {
inputs = {
aws_access_key = store.varset.aws_credentials.AWS_ACCESS_KEY_ID
}
}
Use to centralize credentials and share variables across Stacks. See references/deployment-blocks.md for details.
Define deployment instances (minimum 1, maximum 20 per Stack):
deployment "production" {
inputs = {
aws_region = "us-west-1"
instance_count = 3
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
# Create multiple deployments for different environments
deployment "development" {
inputs = {
aws_region = "us-east-1"
instance_count = 1
name_suffix = "dev"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
To destroy a deployment : Set destroy = true, upload configuration, approve destroy run, then remove the deployment block. See references/deployment-blocks.md for details.
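The destroy sequence above can be sketched as follows (deployment name reused from the earlier example):

```hcl
# Step 1: mark the deployment for destruction, then upload the configuration
deployment "development" {
  inputs = {
    aws_region     = "us-east-1"
    instance_count = 1
  }
  destroy = true # HCP Terraform creates a destroy run for this deployment
}

# Step 2: after the destroy run is approved and applied,
# delete this deployment block entirely and upload again.
```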
Group deployments together for shared settings (HCP Terraform Premium tier feature). Free/standard tiers use default groups named {deployment-name}_default.
deployment_group "canary" {
auto_approve_checks = [deployment_auto_approve.safe_changes]
}
deployment "dev" {
inputs = { /* ... */ }
deployment_group = deployment_group.canary
}
Multiple deployments can reference the same group. See references/deployment-blocks.md for details.
Define rules to automatically approve deployment plans (HCP Terraform Premium tier feature):
deployment_auto_approve "safe_changes" {
deployment_group = deployment_group.canary
check {
condition = context.plan.changes.remove == 0
reason = "Cannot auto-approve plans with resource deletions"
}
}
Available context variables : context.plan.applyable, context.plan.changes.add/change/remove/total, context.success
Note: orchestrate blocks are deprecated. Use deployment_group and deployment_auto_approve instead.
See references/deployment-blocks.md for all context variables and patterns.
Link Stacks together by publishing outputs from one Stack and consuming them in another:
# In network Stack - publish outputs
publish_output "vpc_id_network" {
type = string
value = deployment.network.vpc_id
}
# In application Stack - consume outputs
upstream_input "network_stack" {
type = "stack"
source = "app.terraform.io/my-org/my-project/networking-stack"
}
deployment "app" {
inputs = {
vpc_id = upstream_input.network_stack.vpc_id_network
}
}
See references/linked-stacks.md for complete documentation and examples.
Note : Terraform Stacks is Generally Available (GA) as of Terraform CLI v1.13+. Stacks now count toward Resources Under Management (RUM) for HCP Terraform billing.
terraform stacks init # Download providers, modules, generate lock file
terraform stacks providers-lock # Regenerate lock file (add platforms if needed)
terraform stacks validate # Check syntax without uploading
Important : No plan or apply commands. Upload configuration triggers deployment runs automatically.
# 1. Upload configuration (triggers deployment runs)
terraform stacks configuration upload
# 2. Monitor deployments
terraform stacks deployment-run list # List runs (non-interactive)
terraform stacks deployment-group watch -deployment-group=... # Stream status updates
# 3. Approve deployments (if auto-approve not configured)
terraform stacks deployment-run approve-all-plans -deployment-run-id=...
terraform stacks deployment-group approve-all-plans -deployment-group=...
terraform stacks deployment-run cancel -deployment-run-id=... # Cancel if needed
terraform stacks configuration list # List configuration versions
terraform stacks configuration fetch -configuration-id=... # Download configuration
terraform stacks configuration watch # Monitor upload status
terraform stacks create # Create new Stack (interactive)
terraform stacks fmt # Format Stack files
terraform stacks list # Show all Stacks
terraform stacks version # Display version
terraform stacks deployment-group rerun -deployment-group=... # Rerun deployment
For programmatic monitoring in automation, CI/CD, or non-interactive environments (like AI agents), use the HCP Terraform API instead of CLI watch commands. The API provides endpoints for:
Key points:
- Fetch apply details with GET /api/v2/stack-deployment-steps/{step-id}/artifacts?name=apply-description
- Diagnostics requests require the stack_deployment_step_id query parameter
- Artifact responses may redirect, so follow redirects (e.g., curl -L)

For complete API workflow, authentication, polling best practices, and example scripts, see references/api-monitoring.md.
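A minimal sketch of fetching the apply-description artifact, assuming a hypothetical step ID and an HCP Terraform API token in the TFC_TOKEN environment variable:

```shell
# Hypothetical step ID; obtain real IDs from the deployment-run API
STEP_ID="stacks-step-ABC123"
URL="https://app.terraform.io/api/v2/stack-deployment-steps/${STEP_ID}/artifacts?name=apply-description"

# Only call the API when a token is available; -L follows artifact redirects
if [ -n "${TFC_TOKEN:-}" ]; then
  curl -sL -H "Authorization: Bearer ${TFC_TOKEN}" "${URL}"
fi
```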
Component Dependencies : Dependencies are automatically inferred when one component references another's output (e.g., subnet_ids = component.vpc.private_subnet_ids).
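To make the inference concrete, a minimal sketch (module paths and output names are assumed, not from a real Stack):

```hcl
component "vpc" {
  source    = "./modules/vpc"
  inputs    = { cidr_block = var.vpc_cidr }
  providers = { aws = provider.aws.this }
}

# Referencing component.vpc's output makes HCP Terraform plan "vpc" first
component "app" {
  source = "./modules/compute"
  inputs = {
    subnet_ids = component.vpc.private_subnet_ids
  }
  providers = { aws = provider.aws.this }
}
```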
Multi-Region Deployment : Use for_each on providers and components to deploy across multiple regions. Each region gets its own provider configuration and component instances.
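A sketch combining the for_each provider configuration shown earlier with a per-region component (module path assumed):

```hcl
component "api" {
  for_each = var.regions # e.g., a set such as ["us-east-1", "us-west-2"]
  source   = "./modules/compute"
  inputs = {
    region = each.value
  }
  providers = {
    aws = provider.aws.configurations[each.value]
  }
}
```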
Deferred Changes : Stacks support deferred changes to handle dependencies where values are only known after apply. This enables complex multi-component deployments where some resources depend on runtime values from other components (cluster endpoints, generated passwords, etc.).
For complete examples including multi-region deployments, component dependencies, deferred changes patterns, and linked Stacks, see references/examples.md.
Lock File : Commit .terraform.lock.hcl to version control.
Circular Dependencies : Refactor to break circular references or use intermediate components.
Deployment Destruction : Cannot destroy from UI. Set destroy = true in deployment block, upload configuration, and HCP Terraform creates a destroy run.
Empty Diagnostics : Add required stack_deployment_step_id query parameter to diagnostics API requests.
Module Compatibility : Test public registry modules before production use. Some modules may have compatibility issues with Stacks.
For detailed documentation, see:
- references/component-blocks.md - Complete component block reference with all arguments and syntax
- references/deployment-blocks.md - Complete deployment block reference with all configuration options
- references/linked-stacks.md - Publish outputs and upstream inputs for linking Stacks together
- references/examples.md - Complete working examples for multi-region and component dependencies
- references/api-monitoring.md - Full API workflow for programmatic monitoring and automation
- references/troubleshooting.md - Detailed troubleshooting guide for common issues and solutions