In this post, we will demonstrate two methods for deploying AWS resources across accounts using CodePipeline. The pipeline will be hosted in one account, which we will call the CI/CD account, while the resources will be deployed in another account, which we will call the target account.
Overview
The first method uses CodeBuild to execute Terraform configuration files that provision the AWS resources. The second uses CodePipeline’s Commands action to deploy a CloudFormation template that provisions them.
Method 1: CodeBuild¹ + Terraform
In this method, we will deploy the resources using Terraform through a CodeBuild project.

Our sample infrastructure will consist of the following:
CICD Account
- A CodePipeline that retrieves the source (the Terraform configuration files) from the CICD bucket, saves it in the artefact store, and triggers CodeBuild.
- A CodeBuild Project that executes the Terraform configuration files in the target account.
- A Source/CICD S3 bucket that will function both as the CodePipeline artefact store and ‘Source’ action provider. The reason for combining the two under one bucket is to simplify the sample infrastructure by removing the need for an SCM or a separate source S3 bucket.
- A Terraform State S3 bucket that will centrally store the Terraform state files.
- A CodeBuild Service Role that allows CodeBuild to access the CICD bucket and the Terraform state bucket.
- A CodePipeline Service Role that allows the pipeline to access the artefact store and the CICD bucket, and to manage the CodeBuild project.
Target Account
- The Cross-account Role that has all the required permissions for the resources deployed by Terraform. CodeBuild will assume this role.
- The AWS resources created by the pipeline, which include an EC2 instance and an EC2 instance profile.
Steps
The first step CodeBuild performs is to initialise the backend configuration by supplying the S3 backend bucket and the state file object key.
terraform init --backend-config="bucket=$TFSTATE_BUCKET_NAME" --backend-config="key=$TFSTATE_FILE_NAME"
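For these partial --backend-config flags to work, the Terraform configuration must declare an S3 backend block with the dynamic settings left out. A minimal sketch (the region value is a hypothetical placeholder; your configuration may differ):

```hcl
terraform {
  backend "s3" {
    # bucket and key are supplied at init time via --backend-config;
    # only the static settings are declared here
    region = "ap-southeast-2" # hypothetical
  }
}
```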
Then, it will assume the cross-account role. This role should have a trust relationship with the CodeBuild service role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<CICD Account>:role/<CodeBuild Service Role>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
It should also have a policy that allows access to the Terraform state S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<Terraform State Bucket>",
        "arn:aws:s3:::<Terraform State Bucket>/*"
      ]
    }
  ]
}
The Terraform State bucket should have a resource policy that allows the cross-account role to access it.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Target Account>:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<Terraform State Bucket>/<State Filename>",
        "arn:aws:s3:::<Terraform State Bucket>"
      ],
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::<Target Account>:role/<Cross-account Role>"
        }
      }
    }
  ]
}
Finally, CodeBuild will apply the Terraform configuration files.
Below is a snippet of the buildspec.yaml file that we used in our demo infrastructure.
pre_build:
  commands:
    - cd terraform
    - aws sts get-caller-identity
    - terraform init --backend-config="bucket=$TFSTATE_BUCKET_NAME" --backend-config="key=$TFSTATE_FILE_NAME"
    - terraform validate
build:
  commands:
    - CROSS_ACCT_TOKENS=$(aws sts assume-role --role-arn $TARGET_ROLE_ARN --role-session-name $TARGET_ROLE_SESSION --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
    - export AWS_ACCESS_KEY_ID=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f1)
    - export AWS_SECRET_ACCESS_KEY=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f2)
    - export AWS_SESSION_TOKEN=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f3)
    - aws sts get-caller-identity
    - if [ "$TF_ACTION" == "destroy" ]; then tf_plan_destroy="-destroy"; fi
    - terraform plan -input=false $tf_plan_destroy
    - terraform $TF_ACTION -auto-approve -input=false
The buildspec above allows the pipeline to destroy the created resources by passing the value ‘destroy’ in the TF_ACTION pipeline variable.
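To see how the token-parsing lines in the build phase work, here is a self-contained sketch using a dummy credentials string (the values are fake placeholders, not real credentials):

```shell
# Simulated `aws sts assume-role ... --output text` result: three
# whitespace-separated fields (AccessKeyId, SecretAccessKey, SessionToken).
CROSS_ACCT_TOKENS="ASIAEXAMPLEKEYID exampleSecretKey exampleSessionToken"

# Unquoted $CROSS_ACCT_TOKENS lets the shell collapse tabs/spaces into single
# spaces, so cut can split on ' ' regardless of the CLI's field separator.
export AWS_ACCESS_KEY_ID=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f1)
export AWS_SECRET_ACCESS_KEY=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f2)
export AWS_SESSION_TOKEN=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f3)

echo "$AWS_ACCESS_KEY_ID"
```

All subsequent AWS CLI and Terraform commands in the same build phase pick up these environment variables and therefore run as the cross-account role.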

The Demo Infrastructure
These Terraform configuration files create the infrastructure described above. The files must be applied in the proper sequence to build the infrastructure successfully: the CICD account resources must be created first, so apply the files under the pipeline folder before those under the target-account folder. The reason is that the target account’s cross-account role refers to the CodeBuild service role in its trust relationship, and an IAM policy cannot reference a non-existent role.
A convenience script called deploy-all.sh has been developed to automate the whole process. The script expects the following mandatory inputs: (1) the target account AWS credential profile, (2) the CICD account AWS credential profile, (3) a subnet ID in the target account, and (4) a security group ID in the target account. The remaining inputs take their default values; modify the script if you need different ones.
$ ./deploy-all.sh -t target_acct_profile -c cicd_acct_profile -n subnet-1234567652753a0cb -g sg-123456744c3fbdeed
You also have the option to clean up the resources by passing the -d option in the script. However, you must first empty the S3 buckets before starting the cleanup process.
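Emptying the buckets can itself be done with the AWS CLI. A sketch, assuming the bucket names from your deployment (the names below are placeholders):

```shell
# Delete every object so Terraform can destroy the buckets afterwards
# (bucket names and profile are hypothetical placeholders)
aws s3 rm "s3://<Source/CICD Bucket>" --recursive --profile cicd_acct_profile
aws s3 rm "s3://<Terraform State Bucket>" --recursive --profile cicd_acct_profile
```

Note that versioned buckets also require deleting object versions before the bucket itself can be removed.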
Method 2: CodePipeline + Commands Action + CloudFormation
In this method, we will deploy the resources using CloudFormation through CodePipeline’s Commands action.

Our sample infrastructure will consist of the following:
CICD Account
- A CodePipeline that retrieves the source (the CloudFormation template) from the CICD bucket, saves it in the artefact store, and triggers the deployment.
- A CodePipeline Service Role that allows the pipeline to access the artefact store and the CICD bucket, and to write logs to CloudWatch.
- A Source/CICD S3 bucket that will function both as the CodePipeline artefact store and ‘Source’ action provider. The reason for combining the two under one bucket is to simplify the sample infrastructure by removing the need for an SCM or a separate source S3 bucket.
- A Commands Action instance that will execute our shell scripts. This will eliminate the need for a CodeBuild project.
- A CloudWatch Log Group. We do not need to create this log group; CodePipeline will create it with the name “aws/codepipeline/<pipeline name>” when it runs the action.
Target Account
- A Cross-account Role that has the permissions to pass a role to CloudFormation and to execute a CloudFormation stack. This role will be assumed by the Commands action. Note that, unlike in Method 1, this cross-account role does not need permission to deploy the resources in the stack.
- A CloudFormation Execution Role that has all the permissions to deploy the resources in the stack. This role will be passed to the stack by the Commands Action instance.
- A CloudFormation Stack. Note that, unlike Method 1, the ‘state’ of the resources is maintained within the target account.
- The AWS resources created by the CloudFormation stack, which include an EC2 instance and an EC2 instance profile.
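For the PassRole arrangement to work, the CloudFormation execution role must trust the CloudFormation service. A sketch of its trust policy (standard service-principal form; adjust to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudformation.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```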
Steps
Similar to the previous method, the first step is to assume the cross-account role, which in this case has a trust relationship with the CodePipeline service role. Unlike the previous method, however, it needs access to the Source/CICD S3 bucket rather than the Terraform state bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<Source/CICD Bucket>",
        "arn:aws:s3:::<Source/CICD Bucket>/*"
      ]
    }
  ]
}
After assuming the role, the AWS CLI will be called to create the stack in the target account.
Below is the Commands section of the pipeline’s action definition in the CloudFormation template, showing the shell commands that will be executed. Notice that we pass an execution role when we create the stack.
Commands:
  - export TARGET_ROLE_ARN=#{variables.TARGET_ROLE_ARN}
  - export ACTION=#{variables.BUILD_ACTION}
  - export STACK_NAME=#{variables.BUILD_STACK_NAME}
  - export CFN_ROLE_ARN=#{variables.CFN_ROLE_ARN}
  - CROSS_ACCT_TOKENS=$(aws sts assume-role --role-arn $TARGET_ROLE_ARN --role-session-name CodePipeline --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
  - export AWS_ACCESS_KEY_ID=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f1)
  - export AWS_SECRET_ACCESS_KEY=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f2)
  - export AWS_SESSION_TOKEN=$(echo $CROSS_ACCT_TOKENS | cut -d' ' -f3)
  - aws sts get-caller-identity
  - if [ "$ACTION" == "delete-stack" ]; then aws cloudformation $ACTION --stack-name $STACK_NAME; fi
  - if [ ! "$ACTION" == "delete-stack" ]; then aws cloudformation $ACTION --stack-name $STACK_NAME --role-arn $CFN_ROLE_ARN --template-body file://artifacts/input/cloudformation/template.yaml --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM" "CAPABILITY_AUTO_EXPAND" --parameters file://artifacts/input/cloudformation/parameters.json; fi
Similar to Method 1, the above code can delete the stack by passing the value ‘delete-stack’ in the BUILD_ACTION pipeline variable.
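Pipeline variables such as BUILD_ACTION can be supplied when starting an execution. A sketch using the AWS CLI (the pipeline name is a hypothetical placeholder; pipeline-level variables require a V2 pipeline):

```shell
# Trigger a run that deletes the stack (pipeline name is hypothetical)
aws codepipeline start-pipeline-execution \
  --name cross-account-pipeline \
  --variables name=BUILD_ACTION,value=delete-stack
```

Omitting --variables would let the pipeline fall back to the variable's default value.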
The Sample Infrastructure
These CloudFormation templates create the infrastructure described above. As with the previous method, the templates must be applied in the proper sequence, starting with the CICD account and followed by the target account. Unlike Method 1, however, the source (input artefact) has to be uploaded separately to the Source bucket after the bucket has been created. The reason is that, unlike Terraform, CloudFormation cannot upload files from your local machine, so you need to issue a separate command to upload the source, using either ‘aws cloudformation package’ or ‘aws s3 cp’.
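For example, zipping the source and uploading it with ‘aws s3 cp’ might look like this (the bucket name, object key, and folder layout are placeholders; match them to what your pipeline’s Source action expects):

```shell
# Package the CloudFormation source and upload it to the Source/CICD bucket
zip -r source.zip cloudformation/
aws s3 cp source.zip "s3://<Source/CICD Bucket>/source.zip" --profile cicd
```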
Luckily, a convenience script called deploy-all.sh has also been developed to automate the whole process, including the uploading of the source. The script expects the following mandatory inputs: (1) the stack name, (2) the CICD account AWS credential profile, (3) the target account AWS credential profile, (4) a subnet ID in the target account, and (5) a security group ID in the target account. As before, the remaining inputs take their default values; modify the script if you need different ones.
./deploy-all.sh -s CfnCrossAccount -t target_acct -c cicd -n subnet-1234567652753a0cb -g sg-123456744c3fbdeed
You also have the option to clean up the resources by passing the -d option in the script, but you must first empty the S3 bucket before starting the cleanup process.
Conclusion
In this post, we showed two ways of deploying resources across accounts using CodePipeline. The first uses CodeBuild and Terraform. This approach has the advantage of a centralised state file location: the CICD account manages not only the resource configuration but also the state file.
The second approach uses CloudFormation and CodePipeline’s Commands action. You can still use CodeBuild in this approach if the shell scripts you need to execute are complex. The advantage here is that the CICD account does not need the privileges to create the resources in the target account; it only needs the capability to pass an IAM role to the CloudFormation stack.
Footnotes
1. As of this writing, Terraform does not support CodePipeline’s Commands action.