CodePipeline: How to reference nested CloudFormation Stacks from GitHub as Source
Our CloudFormation templates are stored in GitHub. In CodePipeline we use GitHub as our Source, but we cannot reference nested CloudFormation stacks when they are not stored on S3.

How can we reference CloudFormation nested stacks while using GitHub as our source in CodePipeline?

If that isn't possible, how can we upload the CloudFormation templates from GitHub to S3 between the Source stage (from GitHub) and the Deploy stage in CodePipeline?
I can think of two ways to reference nested CloudFormation stacks from a GitHub source for a CodePipeline deployment:
1. pre-commit Git hook

Add a client-side pre-commit Git hook that runs aws cloudformation package on your templates, committing the generated template (containing the S3 references) to your GitHub repository along with the changes to the source templates.

The benefit of this approach is that you can leverage the existing template-rewriting logic in aws cloudformation package without having to modify your existing CodePipeline configuration.
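As a rough sketch of option 1 — the template filename, output filename, and bucket name below are illustrative, not from the original setup — the hook could be installed like this:

```shell
# Sketch: install a pre-commit hook that packages the template before each commit.
# cfn-template.yml, packaged.yml and my-artifact-bucket are illustrative names.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
set -e
# Rewrite local nested-stack references into S3 references.
aws cloudformation package \
  --template-file cfn-template.yml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yml
# Stage the generated template alongside the source template.
git add packaged.yml
EOF
chmod +x .git/hooks/pre-commit
```

Since hooks aren't versioned, each committer would need to install this locally, which is the main drawback compared to option 2.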
2. Lambda pipeline stage

Add a Lambda-based pipeline stage that extracts the specified nested-stack template file from the GitHub Source Artifact and uploads it to the specified location in S3 that the parent stack template references.

The benefit of this approach is that the pipeline remains fully self-contained, with no extra pre-processing steps required of committers.
I've published a complete working reference example to wjordan/aws-codepipeline-nested-stack:
AWSTemplateFormatVersion: 2010-09-09
Description: Infrastructure Continuous Delivery with CodePipeline and CloudFormation, with a project containing a nested stack.
Parameters:
  ArtifactBucket:
    Type: String
    Description: Name of existing S3 bucket for storing pipeline artifacts
  StackFilename:
    Type: String
    Default: cfn-template.yml
    Description: CloudFormation stack template filename in the Git repo
  GitHubOwner:
    Type: String
    Description: GitHub repository owner
  GitHubRepo:
    Type: String
    Default: aws-codepipeline-nested-stack
    Description: GitHub repository name
  GitHubBranch:
    Type: String
    Default: master
    Description: GitHub repository branch
  GitHubToken:
    Type: String
    Description: GitHub repository OAuth token
  NestedStackFilename:
    Type: String
    Description: GitHub filename (and S3 Object Key) for nested stack template.
    Default: nested.yml
Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt [PipelineRole, Arn]
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
      - Name: Source
        Actions:
        - Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: 1
            Provider: GitHub
          Configuration:
            Owner: !Ref GitHubOwner
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          OutputArtifacts: [Name: Template]
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: S3Upload
          ActionTypeId:
            Category: Invoke
            Owner: AWS
            Provider: Lambda
            Version: 1
          InputArtifacts: [Name: Template]
          Configuration:
            FunctionName: !Ref S3UploadObject
            UserParameters: !Ref NestedStackFilename
          RunOrder: 1
        - Name: Deploy
          RunOrder: 2
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: 1
            Provider: CloudFormation
          InputArtifacts: [Name: Template]
          Configuration:
            ActionMode: REPLACE_ON_FAILURE
            RoleArn: !GetAtt [CFNRole, Arn]
            StackName: !Ref AWS::StackName
            TemplatePath: !Sub "Template::${StackFilename}"
            Capabilities: CAPABILITY_IAM
            ParameterOverrides: !Sub |
              {
                "ArtifactBucket": "${ArtifactBucket}",
                "StackFilename": "${StackFilename}",
                "GitHubOwner": "${GitHubOwner}",
                "GitHubRepo": "${GitHubRepo}",
                "GitHubBranch": "${GitHubBranch}",
                "GitHubToken": "${GitHubToken}",
                "NestedStackFilename": "${NestedStackFilename}"
              }
  CFNRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal: {Service: [cloudformation.amazonaws.com]}
        Version: '2012-10-17'
      Path: /
      ManagedPolicyArns:
      # TODO grant least privilege to only allow managing your CloudFormation stack resources
      - "arn:aws:iam::aws:policy/AdministratorAccess"
  PipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal: {Service: [codepipeline.amazonaws.com]}
        Version: '2012-10-17'
      Path: /
      Policies:
      - PolicyName: CodePipelineAccess
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Action:
            - 's3:*'
            - 'cloudformation:*'
            - 'iam:PassRole'
            - 'lambda:*'
            Effect: Allow
            Resource: '*'
  Dummy:
    Type: AWS::CloudFormation::WaitConditionHandle
  NestedStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://s3.amazonaws.com/${ArtifactBucket}/${NestedStackFilename}"
  S3UploadObject:
    Type: AWS::Lambda::Function
    Properties:
      Description: Extracts and uploads the specified InputArtifact file to S3.
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: !Sub |
          var exec = require('child_process').exec;
          var AWS = require('aws-sdk');
          var codePipeline = new AWS.CodePipeline();
          exports.handler = function(event, context, callback) {
            var job = event["CodePipeline.job"];
            var s3Download = new AWS.S3({
              credentials: job.data.artifactCredentials,
              signatureVersion: 'v4'
            });
            var s3Upload = new AWS.S3({
              signatureVersion: 'v4'
            });
            var jobId = job.id;
            function respond(e) {
              var params = {jobId: jobId};
              if (e) {
                params['failureDetails'] = {
                  message: JSON.stringify(e),
                  type: 'JobFailed',
                  externalExecutionId: context.invokeid
                };
                codePipeline.putJobFailureResult(params, (err, data) => callback(e));
              } else {
                codePipeline.putJobSuccessResult(params, (err, data) => callback(e));
              }
            }
            var filename = job.data.actionConfiguration.configuration.UserParameters;
            var location = job.data.inputArtifacts[0].location.s3Location;
            var bucket = location.bucketName;
            var key = location.objectKey;
            var tmpFile = '/tmp/file.zip';
            s3Download.getObject({Bucket: bucket, Key: key})
              .createReadStream()
              .pipe(require('fs').createWriteStream(tmpFile))
              .on('finish', ()=>{
                exec(`unzip -p ${!tmpFile} ${!filename}`, (err, stdout)=>{
                  if (err) { respond(err); return; }
                  s3Upload.putObject({Bucket: bucket, Key: filename, Body: stdout}, (err, data) => respond(err));
                });
              });
          };
      Timeout: 30
      Runtime: nodejs4.3
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal: {Service: [lambda.amazonaws.com]}
          Action: ['sts:AssumeRole']
      Path: /
      ManagedPolicyArns:
      - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      - "arn:aws:iam::aws:policy/AWSCodePipelineCustomActionAccess"
      Policies:
      - PolicyName: S3Policy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - 's3:PutObject'
            - 's3:PutObjectAcl'
            Resource: !Sub "arn:aws:s3:::${ArtifactBucket}/${NestedStackFilename}"
As an alternative to the solution with a Lambda stage, a simpler approach is to use CodeBuild and AWS SAM.

In the master CloudFormation template (let's call it main.yaml), use 'Transform: AWS::Serverless-2016-10-31':
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  NestedTemplate:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./nested-template.yaml
Note that you only need to put the relative path to the child template, not an absolute S3 URI.
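To make the rewrite concrete, here is roughly what the packaged output looks like (the object key shown is an illustrative hash, not a real value):

```yaml
# Before packaging, main.yaml refers to the child by relative path:
#   TemplateURL: ./nested-template.yaml
# After 'aws cloudformation package', transformed_main.yaml contains roughly:
Resources:
  NestedTemplate:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my_bucket/0123456789abcdef.template
```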
Add a CodeBuild stage with a buildspec.yml containing the following:
version: 0.1
phases:
  build:
    commands:
      - aws cloudformation package --template-file main.yaml --output-template-file transformed_main.yaml --s3-bucket my_bucket
artifacts:
  type: zip
  files:
    - transformed_main.yaml
The build command 'aws cloudformation package' uploads the nested template.yaml to the S3 bucket 'my_bucket' and injects the absolute S3 URI into the transformed template.
In the CloudFormation deploy stage, use 'Create change set' and 'Execute change set' to create the stack. Note that 'Create or update stack' does not work with 'Transform: AWS::Serverless-2016-10-31'.
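For reference, the same change-set flow can be driven from the CLI: 'aws cloudformation deploy' creates a change set (as required for templates with a Transform) and then executes it. A sketch, with my-stack as an illustrative stack name:

```shell
# Sketch: change-set-based deployment of the transformed template as a script.
# my-stack is an illustrative stack name, not from the original post.
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
# 'deploy' creates a change set and executes it, so it works with Transform templates.
aws cloudformation deploy \
  --template-file transformed_main.yaml \
  --stack-name my-stack \
  --capabilities CAPABILITY_IAM
EOF
chmod +x deploy.sh
```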
Here is some documentation you might find useful:
- http://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
- http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
The second document shows how to deploy a Lambda function, but referencing nested stacks works essentially the same way.