[AWS] CI/CD by Bitbucket Pipelines

Ref: Serverless application with CI/CD based on AWS and Bitbucket Pipelines [Django-based; useful as a reference]

Goto: https://apiit.atlassian.net/wiki/spaces/ITSM/pages/230031626/API+Development+Practices

Ref: YAML language tutorial

Ref: Deploying a Python Flask application to AWS Lambda With Serverless Framework and CircleCI [a GitLab-style pipeline]

Ref: More than 'Hello World in Lambda': Build and Deploy Python Flask APIs in AWS Lambda via CDK [on Zappa vs. CodeDeploy as competing approaches]

 

Warm-up: automatically configuring a container

Ref: Bitbucket CI/CD pipelines configuration

Click "Settings" in the right sidebar to enter the values of the variables.

The result of the CI/CD run: the image is automatically pushed to Docker Hub.
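
A sketch of what such a warm-up pipeline can look like, assuming a Dockerfile at the repo root and two secured repository variables, DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD, entered through that Settings panel (all of these names are assumptions, not taken from the referenced post):

image: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Build and push image
        services:
          - docker                  # enables the Docker daemon inside this step
        script:
          - docker build -t $DOCKER_HUB_USERNAME/my-app:latest .
          - echo $DOCKER_HUB_PASSWORD | docker login --username $DOCKER_HUB_USERNAME --password-stdin
          - docker push $DOCKER_HUB_USERNAME/my-app:latest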

 

 

Bitbucket Pipelines

1. bitbucket-pipelines.yml configuration

If the repo contains a bitbucket-pipelines.yml file, the CI/CD runner on the Bitbucket server parses this YAML file and carries out the steps defined in it.

  • Template for a Python project

The bitbucket-pipelines.yml YAML file:

#  Template python-build

#  This template allows you to validate your python code.
#  The workflow allows running tests and code linting on the default branch.

image: python:3.8

pipelines:
  default:
    - parallel:
      - step:
          name: Test
          caches:
            - pip
          script:
            - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
            - pip install pytest
            - pytest -v tests/* --junitxml=test-reports/report.xml
      - step:
          name: Lint code
          script:
            # Enforce style consistency across Python projects https://flake8.pycqa.org/en/latest/manpage.html
            - pip install flake8
            - flake8 . --extend-exclude=dist,build --show-source --statistics
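
Note that because the Test step writes a JUnit XML report into test-reports/ via --junitxml, Bitbucket Pipelines should pick it up automatically (test-reports is one of the report directories it scans by default) and surface failures in the build's test results view.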

 

  • Another template

Ref: Building a CI/CD pipeline for your own blog, using Bitbucket as the example environment

# Declaring an image tells the Bitbucket runner to use Docker container mode. Here we use the
# alpine Linux OS image that already has the latest node.js installed.
image: node:current-alpine

pipelines:
# Only run the pipeline on pushes to master (or PRs merged into master)
  branches:
    master:
      - step:
          caches:
            - node
          script:
# Alpine's package manager is `apk`; this line is the equivalent of `brew update` on macOS,
# and we install openssh while we're at it
            - apk add --update --no-cache openssh
# Alpine itself is very lightweight (only ~5 MB), but it also lacks many programs that ordinary
# OSes ship with. Here we install tar, python, g++, make, etc., because the plugins used during
# the later gatsby build may need them
            - apk update && apk add tar python g++ make bash zlib-dev libpng-dev && rm -rf /var/cache/apk/*
# Use an Alpine mirror in Japan to fetch libvips (a node.js image-processing library required by
# our gatsby-transformer-sharp plugin, since images are optimized during the gatsby build)
            - apk add --update --no-cache --repository http://ftp.tsukuba.wide.ad.jp/Linux/alpine/v3.10/main/ vips-dev
# If the folder that actually holds the blog source is not at the same level as
# `bitbucket-pipelines.yml`, don't forget to cd into it first. In this template they are at the
# same level, so this is not needed
          # - cd ./myblog
# Grant permissions; this works around the `sharp EACCES: permission denied, mkdir '/root/.npm'` error
            - npm config set user 0
            - npm config set unsafe-perm true
# Finally some familiar commands: install the packages listed in package.json
            - npm install
# Start the build without generating source maps, to reduce network transfer and speed up deployment
            - npm run buildWithouSourceMap
# Use tar to compress the files gatsby emits into public (the folder where this template puts the
# static files after gatsby build); name the archive `release.tar.gz`
            - tar zcvf release.tar.gz public
# ssh into the remote server we deploy to (which is why openssh was installed above) and delete
# all the previous files before pushing the new ones over
            - ssh your_account@your_server_ip 'rm -rf /path/to/your/blogFiles/* && exit'
# scp the archive we just built inside this container to the path on the server that holds the blog files
            - scp release.tar.gz your_account@your_server_ip:/path/to/your/blogFiles
# Log in once more: extract the archive, move everything from the public folder up into the
# directory nginx is configured to serve the blog from, then delete the archive and the
# now-empty public folder, and log out
            - ssh your_account@your_server_ip 'cd /path/to/your/blogFiles && tar zxvf release.tar.gz && mv ./public/* ./ && rm release.tar.gz && rm -r ./public && exit'

# Compressing before transfer greatly reduces scp time; with that many individual files,
# the transfer would otherwise take very long
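
One caveat the template glosses over: for the ssh/scp lines to run non-interactively, the build container needs an SSH key pair and a known_hosts entry, normally configured under Repository settings → SSH keys in Bitbucket. If the host key is handled inside the script instead, a sketch of the extra lines (your_server_ip as above):

            - mkdir -p ~/.ssh
            - ssh-keyscan -H your_server_ip >> ~/.ssh/known_hosts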

 

 

2. Deploy on AWS

Goto: https://bitbucket.org/bitbucketpipelines/workspace/projects/TAT [官方例子]

  • S3

Example: how to upload files to S3 using a Bitbucket Pipe.

image: atlassian/default-image:2

pipelines:
  default:
    - step:
        script:
          - mkdir artifact
          - echo "Pipelines is awesome!" > artifact/index.html
        artifacts:
          - artifact/*
    - step:
        deployment: production
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.7
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'ap-southeast-2'
              S3_BUCKET: 'bbci-task-s3-deploy-test'
              EXPIRES: '2018-10-01'
              LOCAL_PATH: 'artifact'
              ACL: 'public-read'
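
Note that the first step declares artifact/* as an artifact, which is what makes the generated index.html available to the deployment step. Under the hood, the aws-s3-deploy pipe is essentially a wrapper around the AWS CLI's s3 sync; for intuition, the deployment step above is roughly equivalent to the following script line run with the same three AWS_* variables set (a sketch, not the pipe's exact invocation):

          - aws s3 sync artifact s3://bbci-task-s3-deploy-test --acl public-read --expires 2018-10-01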

 

  • Lambda

Example: how to deploy a Lambda function using a Bitbucket Pipe.

pipelines:
  default:
    - step:
        name: Build and package
        script:
          - apt-get update && apt-get install -y zip
          - zip code.zip index.js
        artifacts:
          - code.zip

    - step:
        name: Update Lambda code
        script:
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
              FUNCTION_NAME: 'my-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'

        # The pipe exports the newly published Lambda version to a file.
        artifacts:
          - pipe.meta.env

    # You can optionally use AWS Lambda aliases to map the newly published Lambda
    # function version to conceptual environments.
    - step:
        name: Deploy to Test
        deployment: test
        script:
        # Read the 'function_version' from the update pipe into environment variables.
        - source pipe.meta.env
        # Point the test alias to the function.
        - pipe: atlassian/aws-lambda-deploy:0.2.1
          variables:
            AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
            AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
            AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
            FUNCTION_NAME: 'my-function'
            COMMAND: 'alias'
            ALIAS: 'test'
            VERSION: '${function_version}'

    - step:
        name: Deploy to Staging
        deployment: staging
        script:
        # Read the 'function_version' from the update pipe into environment variables.
        - source pipe.meta.env
        # Point the 'staging' alias to the function.
        - pipe: atlassian/aws-lambda-deploy:0.2.1
          variables:
            AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
            AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
            AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
            FUNCTION_NAME: 'my-function'
            COMMAND: 'alias'
            ALIAS: 'staging'
            VERSION: '${function_version}'

    - step:
        name: Deploy to Production
        deployment: production
        script:
        # Read the 'function_version' from the update pipe into environment variables.
        - source pipe.meta.env
        # Point the 'production' alias to the function.
        - pipe: atlassian/aws-lambda-deploy:0.2.1
          variables:
            AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
            AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
            AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
            FUNCTION_NAME: 'my-function'
            COMMAND: 'alias'
            ALIAS: 'production'
            VERSION: '${function_version}'
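
For intuition, the pipe's 'update' and 'alias' commands map onto plain AWS CLI calls roughly as follows (a sketch of the idea, not the pipe's exact implementation; it assumes the aliases already exist, otherwise aws lambda create-alias is needed first):

          # roughly what COMMAND: 'update' does: upload the new code and publish a numbered version
          - aws lambda update-function-code --function-name my-function --zip-file fileb://code.zip --publish
          # roughly what COMMAND: 'alias' does: point an alias at that published version
          - aws lambda update-alias --function-name my-function --name test --function-version "$function_version"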

 

 

3. Django serverless app by Zappa

Ref: Project template for Django serverless app

image: python:3.6.1

pipelines:
  tags:
    release-*:
      - step:
          caches:
            - pip
          script:
            - ./ci.sh prod1
    staging-*:
      - step:
          caches:
            - pip
          script:
            - ./ci.sh stage1
  branches:
    master:
      - step:
          caches:
            - pip
          script:
            - ./ci.sh dev1

The contents of ci.sh:

#!/bin/bash

setup () {
    echo ------- SETUP -------
    apt-get update
    apt-get install -y zip
    pip install virtualenv
    virtualenv --python=python3 env
    source env/bin/activate
    # install the app's dependencies into the virtualenv
    pip install -r requirements.txt
    return $?
}


tests() {
    echo ------- TESTS -------
    pip install -r requirements-test.txt
    tox
    return $?
}

deploy() {
    echo ------- DEPLOY -------
    echo $1
    pip install awscli
    # fetch the Zappa settings from the S3 bucket named by the $CMDB repository variable
    aws s3 cp s3://$CMDB/zappa_settings.json .
    # update the stage if it is already deployed, otherwise do the initial deploy
    zappa update $1 || zappa deploy $1
    zappa certify $1 --yes
    zappa manage $1 "migrate --noinput"
    zappa manage $1 "collectstatic --noinput"
    return $?
}

# $1 is the Zappa stage passed in from the pipeline (dev1 / stage1 / prod1);
# note the function is named `tests`, since plain `test` is a shell builtin
setup && tests && deploy "$1"
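
For the three stage names to resolve, the zappa_settings.json pulled from S3 must define dev1, stage1, and prod1. A minimal sketch of equivalent settings, shown in the YAML form that Zappa also accepts as zappa_settings.yml (the project, module, and bucket names below are assumptions):

dev1:
  project_name: mysite              # hypothetical project name
  django_settings: mysite.settings  # hypothetical Django settings module
  runtime: python3.6
  s3_bucket: zappa-deploy-bucket    # hypothetical bucket Zappa uploads packages to
stage1:
  extends: dev1
prod1:
  extends: dev1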

 

 

4. Flask serverless app by Zappa

Goto: https://superuser.blog/serverless-meets-ci-cd/

Serverless meets CI CD


 

 

 "botocore.exceptions.ProfileNotFound: The config profile (amplify-sat) could not be found"

 

There are two ways to handle it:

With AWS CodePipeline: package the code into a zip first, then deploy it via the CDK rather than with zappa deploy.

With Bitbucket:

# Configuring AWS credentials
- aws configure set default.region us-east-1
- aws configure set aws_access_key_id 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
- aws configure set aws_secret_access_key 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# Alternatively, copy checked-in config files into place:
#   - mkdir ~/.aws
#   - cp .aws/config ~/.aws/config
#   - cp .aws/credentials ~/.aws/credentials
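
Hardcoding keys in the YAML is risky; a safer sketch of the same step, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are defined as secured repository variables (as in the Pipe examples above):

- aws configure set default.region us-east-1
- aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
- aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
# also make sure zappa_settings.json does not pin "profile_name" to a profile
# (such as amplify-sat) that does not exist in the build container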

Combine with: Using environment variables (First look at BitBucket Pipelines, part 3)

 

/* implement */ 

End.
