Azure DevOps (2): Building Infrastructure Resources with an Azure DevOps Pipeline
I. Introduction
The previous article showed how to use Azure DevOps to build Docker images across clouds and push them to AWS ECR. Today we continue with Azure DevOps Pipelines: we will use a Release Pipeline to automatically deploy AWS infrastructure resources with Terraform, with the goal of deploying our images to AWS ECS.
-------------------- Series navigation --------------------
1. Azure DevOps (1): Building an Application Image and Pushing It to AWS ECR with an Azure DevOps Pipeline
2. Azure DevOps (2): Building Infrastructure Resources with an Azure DevOps Pipeline
II. Main Content
1. Terraform Code
As we learned earlier when deploying Azure resources with Terraform, it pays to split the resources into common modules. Likewise, the AWS infrastructure we are deploying here is divided into several modules, such as "ECS", "Security Group", "ELB", "IAM", and "VPC".
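Judging from the module `source` paths used in main.tf (`../modules/...`), the repository presumably follows a layout like the one below. The name of the entry directory is an assumption on my part; the module directory names are taken from the `source` references:

```
AWS_ECS/
├── env/                  # assumed name of the directory holding main.tf
│   └── main.tf           # entry point: wires the modules together
└── modules/
    ├── codedeploy/       # blue/green deployment via CodeDeploy
    ├── ecs/              # ECS Fargate cluster and service
    ├── elb/              # ALB, listeners, blue/green target groups
    ├── securitygroup/    # security group rules
    └── vpc/              # VPC and public subnets
```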
main.tf is the entry point of the whole project; it contains the nested calls into each module.
The current Terraform code also integrates a CodeDeploy common module that enables blue/green deployment for ECS. After downloading the code, feel free to adapt it to your own needs.
```hcl
provider "aws" {
  region = "ap-northeast-1"
}

terraform {
  backend "s3" {
    region  = "ap-northeast-1"
    profile = "default"
  }
}

locals {
  container_name = "cnbateblogweb"
  name           = "cnbateblogwebCluster"
  service_name   = "cnbateblogwebservice"
  http_port      = ["80"]
  cidr_block     = "10.255.0.0/16"
  container_port = tonumber(module.alb.alb_target_group_blue_port)
}

module "securitygroup" {
  source                 = "../modules/securitygroup"
  enabled_security_group = true
  security_group_name    = "cnbateblogwebCluster_ecs_securitygroup"
  security_group_vpc_id  = module.vpc.vpc_id
  from_port_ingress      = 9021
  to_port_ingress        = 9021
  from_port_egress       = 0
  to_port_egress         = 0
}

# module "codedeploy" {
#   source                           = "../modules/codedeploy"
#   name                             = "example-deploy"
#   ecs_cluster_name                 = local.name
#   ecs_service_name                 = local.service_name
#   lb_listener_arns                 = [module.alb.http_alb_listener_blue_arn]
#   blue_lb_target_group_name        = module.alb.aws_lb_target_group_blue_name
#   green_lb_target_group_name       = module.alb.aws_lb_target_group_green_name
#   auto_rollback_enabled            = true
#   auto_rollback_events             = ["DEPLOYMENT_FAILURE"]
#   action_on_timeout                = "STOP_DEPLOYMENT"
#   wait_time_in_minutes             = 1440
#   termination_wait_time_in_minutes = 1440
#   test_traffic_route_listener_arns = []
#   iam_path                         = "/service-role/"
#   description                      = "This is example"
#   tags = {
#     Environment = "prod"
#   }
# }

module "ecs_fargate" {
  source           = "../modules/ecs"
  name             = local.name
  service_name     = local.service_name
  container_name   = local.container_name
  container_port   = local.container_port
  subnets          = module.vpc.public_subnet_ids
  security_groups  = [module.securitygroup.security_group_id]
  target_group_arn = module.alb.alb_target_group_blue_arn
  vpc_id           = module.vpc.vpc_id

  container_definitions = jsonencode([
    {
      name      = local.container_name
      image     = "693275195242.dkr.ecr.ap-northeast-1.amazonaws.com/cnbateblogweb:28" # "docker.io/yunqian44/cnbateblogweb:laster"
      essential = true
      environment = [
        { name : "Location", value : "Singapore" },
        { name : "ASPNETCORE_ENVIRONMENT", value : "Production" }
      ]
      portMappings = [
        {
          containerPort = local.container_port
          protocol      = "tcp"
        }
      ]
    }
  ])

  desired_count                      = 1
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 100
  deployment_controller_type         = "ECS"
  assign_public_ip                   = true
  health_check_grace_period_seconds  = 10
  platform_version                   = "LATEST"
  cpu                                = 256
  memory                             = 512
  requires_compatibilities           = ["FARGATE"]
  iam_path                           = "/service_role/"
  description                        = "This is example"
  enabled                            = true
  ecs_task_execution_role_arn        = aws_iam_role.default.arn

  tags = {
    Environment = "prod"
  }
}

resource "aws_iam_role" "default" {
  name               = "iam-rol-ecs-task-execution"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

data "aws_iam_policy" "ecs_task_execution" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_policy" "ecs_task_execution" {
  name   = aws_iam_role.default.name
  policy = data.aws_iam_policy.ecs_task_execution.policy
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.default.name
  policy_arn = aws_iam_policy.ecs_task_execution.arn
}

module "alb" {
  source  = "../modules/elb"
  name    = "elb-cnbateblogweb"
  vpc_id  = module.vpc.vpc_id
  subnets = module.vpc.public_subnet_ids

  enable_https_listener      = false
  enable_http_listener       = true
  enable_deletion_protection = false

  enabled_lb_target_group_blue  = true
  aws_lb_target_group_blue_name = "elb-cnbateblogweb-blue"
  health_check_path             = ""

  enabled_lb_target_group_green  = true
  aws_lb_target_group_green_name = "elb-cnbateblogweb-green"

  enable_http_listener_blue      = true
  http_port_blue                 = 80
  target_group_blue_port         = 9021
  enable_http_listener_rule_blue = true

  enable_http_listener_green      = true
  http_port_green                 = 8080
  target_group_green_port         = 8080
  enable_http_listener_rule_green = true
}

module "vpc" {
  source                    = "../modules/vpc"
  cidr_block                = local.cidr_block
  name                      = "ecs-fargate"
  public_subnet_cidr_blocks = [cidrsubnet(local.cidr_block, 2, 0), cidrsubnet(local.cidr_block, 2, 1)]
  public_availability_zones = data.aws_availability_zones.available.names
}

data "aws_caller_identity" "current" {}
data "aws_availability_zones" "available" {}
```
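The VPC module derives its two public subnet CIDRs with `cidrsubnet(local.cidr_block, 2, 0)` and `cidrsubnet(local.cidr_block, 2, 1)`. If you want to check what those expand to without running Terraform, the same calculation can be sketched with Python's standard `ipaddress` module (this is just an illustrative stand-in for Terraform's built-in, not part of the repository):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Rough Python equivalent of Terraform's cidrsubnet() for IPv4."""
    net = ipaddress.ip_network(prefix)
    # extending the mask by `newbits` yields 2**newbits equal subnets;
    # `netnum` selects one of them by index
    return str(list(net.subnets(prefixlen_diff=newbits))[netnum])

print(cidrsubnet("10.255.0.0/16", 2, 0))  # 10.255.0.0/18
print(cidrsubnet("10.255.0.0/16", 2, 1))  # 10.255.64.0/18
```

So the two public subnets land in 10.255.0.0/18 and 10.255.64.0/18, leaving the upper half of the /16 free for future private subnets.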
For the individual common modules, see the GitHub link at the bottom of this article.
2. Setting Up the Release Pipeline in Azure DevOps
Go back to the Azure DevOps project created in the previous article. Building the Docker image was already handled by the CI-stage pipeline; now we need to create a Release Pipeline for the deployment stage.
Open the "CnBateBlogWeb_AWS" project, select the "Releases" menu, and click "New pipeline".
When choosing a template, click "Empty job".
Rename the current stage:
Stage name: "Deploy AWS ECS"
Next, we need to add an "Artifact" — the Terraform code we wrote.
Click "+ Add" and choose "GitHub" as the source of the Terraform code:
Service: "Github_Connection44"
Source (repository): "yunqian44/AWS_ECS" (note: select the repository you forked into your own GitHub account)
Default branch: "master"
Default version: "Latest from the default branch"
Source alias: "yunqian44_AWS_ECS"
Click "Add".
Click the "1 job, 0 task" link to add tasks to the deployment stage.
If the infrastructure code is developed collaboratively and the state file needs to be preserved, we must create the S3 bucket that stores the Terraform state file before any Terraform code runs. So the first task we add is an AWS CLI step that creates that S3 bucket.
Click the "+" next to "Agent job" (circled in the screenshot), type "AWS CLI" into the task search box, and click "Add".
Fill in the parameters for the AWS CLI command that creates the S3 bucket:
Display name: "AWS CLI: Create S3 for saving terraform state file"
AWS Credentials: select the AWS service connection added manually earlier
AWS Region: "Asia Pacific (Tokyo) [ap-northeast-1]"
Command: "s3"
Subcommand: "mb"
Options and parameters: "s3://$(terraform_statefile)" (note: $(terraform_statefile) is stored as a pipeline variable)
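Under the hood, this task simply runs `aws s3 mb`. The sketch below shows the effective command as a dry run: the `run` helper prints commands instead of executing them, and the bucket name is a hypothetical value standing in for the `$(terraform_statefile)` pipeline variable:

```shell
# dry-run helper: print the command instead of executing it
run() { echo "+ $*"; }

# hypothetical value standing in for $(terraform_statefile)
TERRAFORM_STATEFILE="cnbateblogweb-tfstate"

# what the "AWS CLI" task effectively executes (region matches the task setting)
run aws s3 mb "s3://${TERRAFORM_STATEFILE}" --region ap-northeast-1
```

Remove the `run` prefix to execute the command for real; S3 bucket names are global, so pick one that is unique to you.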
Next, add the task that installs the Terraform tooling: search for "Terraform tool installer" and add it.
After checking which release we need, change the Terraform version:
Version: "0.15.5"
Next, add the Terraform initialization task.
Search for "Terraform" in the task search box and click "Add".
Set the parameters:
Display name: "Terraform: aws init"
Provider: "aws"
Command: "init"
Configuration directory: select the directory containing the Terraform code to run
AWS backend configuration:
AWS service connection: you can create a new one here
Bucket: "$(terraform_statefile)" (already set as a pipeline variable)
Key: "$(terraform_statefile_key)" (already set as a pipeline variable)
Next, add the task that generates the Terraform execution plan.
Set the parameters:
Display name: "Terraform: aws plan"
Provider: "aws"
Command: "plan"
Configuration directory: select the Terraform working directory
AWS Services connection: select the Terraform backend service connection just created
Finally, add the task that executes the Terraform deployment plan:
Display name: "Terraform: aws auto-apply"
Provider: "aws"
Command: "validate and apply"
Configuration directory: select the Terraform working directory
Additional command arguments: "-auto-approve"
AWS Services connection: select the Terraform backend service connection just created
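Taken together, the three Terraform tasks are roughly equivalent to running the standard Terraform CLI commands below. This is again a dry-run sketch: the `run` helper prints instead of executing, and the bucket/key values are hypothetical stand-ins for the `$(terraform_statefile)` and `$(terraform_statefile_key)` pipeline variables:

```shell
# dry-run helper: print each command instead of executing it
run() { echo "+ $*"; }

# "Terraform: aws init" -- backend settings are injected by the task
run terraform init \
  -backend-config="bucket=cnbateblogweb-tfstate" \
  -backend-config="key=terraform.tfstate" \
  -backend-config="region=ap-northeast-1"

# "Terraform: aws plan" -- preview the changes
run terraform plan

# "Terraform: aws auto-apply" -- apply without interactive approval
run terraform apply -auto-approve
```

This also explains why the `backend "s3"` block in main.tf carries no `bucket` or `key`: Terraform allows a partial backend configuration, with the missing values supplied at `init` time.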
Rename the Release Pipeline and click "Save".
I won't set up pipeline triggers here; that isn't the goal of today's demo.
3. Testing the Automated Infrastructure Deployment
Trigger the pipeline manually by clicking "Create release".
Click "Create".
Wait for the whole pipeline to finish; the logs show the full output and a successful deployment.
Next, log in to the AWS Console to check on the created resources: the ECS service is running.
Find the ELB that Terraform built and copy its DNS name to access the site.
Bingo!!!!! Success. Mission accomplished!!!!!
🎉🎉🎉🎉🎉!!!!
III. Conclusion
Today's walkthrough was hands-on heavy, with most of the work done inside Azure DevOps, so be sure to practice these steps yourself. I didn't go into much detail on the Terraform code, mainly because after deploying Azure resources with Terraform in earlier articles you should already have a reasonable grasp of it. Feel free to download the code, study it, and modify it.
References: Terraform official documentation, Terraform AWS provider documentation, AWS CLI documentation
AWS_ECS on GitHub: https://github.com/yunqian44/AWS_ECS
This article is from my own blog: https://allenmasters.com/post/2021/6/8/azure-devopsazure-devops-pipeline
Feel free to follow my blog: https://allenmasters.com/
Author: Allen
Copyright: when reposting, please credit the author and source prominently. If you spot any mistakes, corrections are welcome.