Lagom production deployment

tutorial: https://developer.lightbend.com/guides/lagom-kubernetes-k8s-deploy-microservices/

 

一、harbor deployment (reference walkthrough):

https://blog.frognew.com/2017/06/install-harbor.html

 

#harbor compose

 

wget https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz

tar -zxvf harbor-offline-installer-v1.1.2.tgz

cd harbor/
ls
common  docker-compose.notary.yml  docker-compose.yml  harbor_1_1_0_template  harbor.cfg  harbor.v1.1.2.tar.gz  install.sh  LICENSE  NOTICE  prepare  upgrade

Edit harbor.cfg:

# or specify a domain name instead, e.g. hostname = harbor.myCompany.com
hostname = 192.168.61.11

 

二、harbor deployment (our environment)

# habor
wget https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz
tar -zxvf harbor-offline-installer-v1.1.2.tgz
cd harbor
vi harbor.cfg
# change the hostname, https settings, and password
# default user name: admin

hostname = harbor.xx.xxx.com

harbor_admin_password = Xxxxxxx
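
After editing harbor.cfg, regenerate the compose configuration and run the installer (both scripts ship in the harbor directory, as the ls output above shows):

./prepare      # regenerate the compose config from harbor.cfg
./install.sh   # install.sh runs prepare itself, then brings the containers up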

 

三、https deployment

# https (enter the domain name and IP when prompted)
cd https
# lc-tlscert is a tool that generates a self-signed certificate automatically
./lc-tlscert
mkdir -p /data/cert/
cp server.* /data/cert/
./install.sh

 

四、client

#####
# client
#####
vi /etc/hosts
# add 10.0.0.xxx harbor.xx.xxx.com

# docker client import *.crt
mkdir -p /usr/share/ca-certificates/extra
scp root@10.0.0.xxx:/root/harbor/https/server.crt /usr/share/ca-certificates/extra/
dpkg-reconfigure ca-certificates
systemctl restart docker

# before push
docker login -u admin -p xxxxx harbor.xx.xxx.com
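
After logging in, a quick tag-and-push is an easy way to verify that the client trusts the registry certificate (a sketch; it assumes a project named chirper already exists in Harbor):

docker pull busybox
docker tag busybox harbor.xx.xxx.com/chirper/busybox:smoke-test
docker push harbor.xx.xxx.com/chirper/busybox:smoke-test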
 

On the client: add Harbor's hostname to /etc/hosts, create the local certificate directory, copy the certificate over, run dpkg-reconfigure ca-certificates, then restart the local Docker daemon and log in to Docker.

# change the image pull policy in the pod spec so the latest image is always pulled from Harbor
imagePullPolicy: Always
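
Instead of editing the resource JSON by hand, the policy can also be patched in place once the StatefulSet exists; a sketch, assuming it is named friendservice with a single container:

kubectl patch statefulset friendservice --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'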

五、publish the project from the local machine to harbor

sbt -DbuildTarget=kubernetes clean docker:publish

The following needs to be changed in build.sbt:

lazy val friendImpl = project("friend-impl")
  .enablePlugins(LagomJava)
  .settings(
    version := buildVersion,
    version in Docker := buildVersion,
    dockerBaseImage := "openjdk:8-jre-alpine",
    dockerRepository := Some(BuildTarget.dockerRepository),
    dockerUpdateLatest := true,
    dockerEntrypoint ++= """-Dhttp.address="$(eval "echo $FRIENDSERVICE_BIND_IP")" -Dhttp.port="$(eval "echo $FRIENDSERVICE_BIND_PORT")" -Dakka.remote.netty.tcp.hostname="$(eval "echo $AKKA_REMOTING_HOST")" -Dakka.remote.netty.tcp.bind-hostname="$(eval "echo $AKKA_REMOTING_BIND_HOST")" -Dakka.remote.netty.tcp.port="$(eval "echo $AKKA_REMOTING_PORT")" -Dakka.remote.netty.tcp.bind-port="$(eval "echo $AKKA_REMOTING_BIND_PORT")" $(IFS=','; I=0; for NODE in $AKKA_SEED_NODES; do echo "-Dakka.cluster.seed-nodes.$I=akka.tcp://friendservice@$NODE"; I=$(expr $I + 1); done)""".split(" ").toSeq,
    dockerCommands :=
      dockerCommands.value.flatMap {
        case ExecCmd("ENTRYPOINT", args @ _*) => Seq(Cmd("ENTRYPOINT", args.mkString(" ")))
        case c @ Cmd("FROM", _) => Seq(c, ExecCmd("RUN", "/bin/sh", "-c", "apk add --no-cache bash && ln -sf /bin/bash /bin/sh"))
        case v => Seq(v)
      },
    resolvers += bintrayRepo("hajile", "maven"),
    resolvers += bintrayRepo("hseeberger", "maven"),
    libraryDependencies ++= Seq(
      lagomJavadslPersistenceCassandra,
      lagomJavadslTestKit
    )
  )
  .settings(BuildTarget.additionalSettings)
  .settings(lagomForkedTestSettings: _*)

Set dockerRepository to the publish target, e.g. harbor.xx.xxx.com/chirper.


六、kubernetes pulls the image
Note: the CA setup done above on the master node must also be done on every slave node, and the slave nodes need the same /etc/hosts entries.

edit /etc/hosts
cd lagom-java-chirper-example/deploy/kubernetes/resources/chirper

kubectl create -f  friend-impl-service.json


kubectl create -f  friend-impl-statefulset.json
If an error like the following appears:

2015/07/21 11:11:00 Get https://kubernetes.default.svc.cluster.local/api/v1/nodes: dial tcp: lookup kubernetes.default.svc.cluster.local: no such host

you can fix it with:

vim /etc/hosts        # add: 10.0.0.xxx(harbor) harbor
vim /etc/resolv.conf  # set the nameserver (114.114.114.114) so that the Harbor URL resolves
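
If the pods still fail to pull the image, the pod events usually say why; a sketch, assuming the first friend-impl pod is named friendservice-0:

kubectl get pods -w                   # watch the StatefulSet pods come up
kubectl describe pod friendservice-0  # the Events section shows pull and probe failures
kubectl logs friendservice-0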

 

七、kubernetes deployment:

ref: https://blog.frognew.com/2017/09/kubeadm-install-kubernetes-1.8.html

 

# 0. Verify the MAC address and product_uuid are unique on every node

ifconfig -a

cat /sys/class/dmi/id/product_uuid

 

# 1. Turn off swap

swapoff -a

vi /etc/fstab   # comment out the swap line
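
The swap line can also be commented out non-interactively (a sketch; double-check /etc/fstab afterwards):

sed -i '/ swap / s/^/#/' /etc/fstab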

 

vi /etc/hosts

10.0.0.56 master.dev.xx.xxx.com

10.0.0.51 slave01.dev.xx.xxx.com

10.0.0.52 slave02.dev.xx.xxx.com

10.0.0.53 slave03.dev.xx.xxx.com

 


 

# 2. Docker

apt install -y docker.io

cat << EOF > /etc/docker/daemon.json

{

  "exec-opts": ["native.cgroupdriver=cgroupfs"]

}

EOF

service docker restart
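
To confirm the cgroup driver change took effect:

docker info | grep -i cgroup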

 

# 3. Kubeadm, Kubectl, Kubelet

apt update && apt install -y apt-transport-https

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt update

apt install -y kubelet kubeadm kubectl
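
Optionally pin these packages so a routine apt upgrade cannot bump the cluster components (a common precaution, not part of the original steps):

apt-mark hold kubelet kubeadm kubectl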

 

# 4. Init. Save the join token printed at the end!!!

vi ~/.profile

export KUBECONFIG=/etc/kubernetes/admin.conf

 

./init.sh
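
init.sh is a local wrapper script; a minimal sketch, assuming flannel's default pod CIDR and the master address from /etc/hosts above:

# init.sh
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.56

# then, on each slave node, join with the token and hash printed by kubeadm init:
kubeadm join --token <token> 10.0.0.56:6443 --discovery-token-ca-cert-hash sha256:<hash>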

 

# 5. Flannel

# copy portmap to master and all slaves

cp flannel/portmap /opt/cni/bin

kubectl apply -f flannel/kube-flannel.yml

 

# 6. Dashboard

kubectl create -f dashboard/kubernetes-dashboard.yaml

kubectl create -f dashboard/kubernetes-dashboard-admin.rbac.yaml

kubectl create -f dashboard/grafana.yaml

kubectl create -f dashboard/influxdb.yaml

kubectl create -f dashboard/heapster.yaml

kubectl create -f dashboard/heapster-rbac.yaml

 

# 7. Access the dashboard

kubectl get pod --all-namespaces -o wide

Find the dashboard pod's IP. For example:

kube-system   kubernetes-dashboard-7486b894c6-4rhfn    1/1       Running   0          1h        10.244.0.3   k8s-dev-master

 

./rinetd -c 10.0.0.56 8443 10.244.0.3 8443   # <target ip> <port> <dashboard pod ip> <port>
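
Note that stock rinetd expects -c to point at a configuration file rather than an inline rule; if your build does too, the equivalent setup is:

cat << EOF > /etc/rinetd.conf
10.0.0.56 8443 10.244.0.3 8443
EOF
./rinetd -c /etc/rinetd.conf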

 

If you get "rinetd: couldn't bind to address", kill the running rinetd and start it again:

pkill rinetd

 

To clean up the cluster, run the same commands on both master and slave nodes:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

 

To view the kubernetes-dashboard-admin token: find the secret's name with the first command, then pass it to the second command:

kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-pfss5   kubernetes.io/service-account-token   3         14s

kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-pfss5
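
The two steps can be collapsed into one line:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}')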

 

八、deploy chirper-ingress

kubectl create -f /deploy/kubernetes/resources/nginx/chirper-ingress.json

 

九、set up static cassandra

Add the following to the impl's application.conf:

 

cassandra.default {
  ## list the contact points here
  contact-points = ["10.0.0.58", "23.51.143.11"]
  ## override Lagom's ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
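
Before deploying, it is worth confirming the contact points are reachable; a sketch with cqlsh, run from any machine that can reach the Cassandra nodes (9042 is the default CQL port):

cqlsh 10.0.0.58 9042 -e "DESCRIBE KEYSPACES"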

 

十、When deploying our own Scala application, it kept throwing an exception saying that an akka-remote actor system named "application" could not be located; the name configured through play.akka.actor-system was being overridden to "application".

Because ConductR is a paid product, we use DnsServiceLocator instead, so in the Loader, change extends ConductR to extends DnsServiceLocator; after that the actor system name is configured correctly.

ref:https://index.scala-lang.org/lightbend/service-locator-dns/lagom-service-locator-dns/1.0.2?target=_2.11

 

十一、Kafka configuration issues.

When configuring an external, static Kafka, you will hit the following problem:

[error] com.lightbend.lagom.internal.broker.kafka.KafkaSubscriberActor [sourceThread=myservice-akka.actor.default-dispatcher-19, akkaTimestamp=07:51:51.537UTC, akkaSource=akka.tcp://myservice@myservice-0.myservice.default.svc.cluster.local:2551/user/KafkaBackoffConsumer1-myEvents/KafkaConsumerActor1-myEvents, sourceActorSystem=myservice] - Unable to locate Kafka service named [myservice] 

The fix: it is not enough to configure only build.sbt with

lagomKafkaEnabled in ThisBuild := false
lagomKafkaAddress in ThisBuild := "10.0.0.xx:9092"

You also need to configure the service impl's application.conf: set service-name to an empty string and brokers to ip:port; the rest matches the Kafka client configuration in the Lagom docs.

lagom.broker.kafka {
  # The name of the Kafka service to look up out of the service locator.
  # If this is an empty string, then a service locator lookup will not be done,
  # and the brokers configuration will be used instead.
  service-name = ""

  # The URLs of the Kafka brokers. Separate each URL with a comma.
  # This will be ignored if the service-name configuration is non empty.
  brokers = "10.0.0.58:9092"
}

If Kafka throws a WakeupException and the consumer actor terminates (details: https://github.com/lagom/lagom/issues/705), you need to change the following in Kafka's server.properties:

advertised.listeners=PLAINTEXT://your IP:9092
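
A quick way to confirm the advertised listener is reachable from the cluster nodes (a sketch):

nc -vz 10.0.0.58 9092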

 

十二、k8s health-check probe issues

After deploying to k8s, kubectl get pods shows the newly deployed service as Running, but the container's Ready status is false. kubectl describe pod <servicename> shows:

readiness probe failed: Get http://10.108.88.40:8080/healthz: dial tcp 10.108.88.40:8080: getsockopt: connection refused

This happens because the health-check URL is wrong.

Lagom requires this to be configured through the circuit-breaker settings in application.conf:

lagom.circuit-breaker {

  # Default configuration that is used if a configuration section
  # with the circuit breaker identifier is not defined.
  default {
    # Possibility to disable a given circuit breaker.
    enabled = on

    # Number of failures before opening the circuit.
    max-failures = 10

    # Duration of time after which to consider a call a failure.
    call-timeout = 10s

    # Duration of time in open state after which to attempt to close
    # the circuit, by first entering the half-open state.
    reset-timeout = 15s
  }
}

In the Scala version's service application:

lazy val lagomServer = LagomServer.forServices(
    bindService[YourService].to(wire[YourImpl]),
    metricsServiceBinding
  )

Adding metricsServiceBinding here resolves the problem.
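
After redeploying, the probe endpoint can be checked by hand; a sketch using the pod IP from the error above:

curl http://10.108.88.40:8080/healthz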

 

十三、Traefik sets the externally exposed k8s port to 80.

 

 

Additional harbor notes: https://www.jianshu.com/p/2ebadd9a323d

docker-compose.yml mainly needs the registry container's parameters changed: under networks, add the entries shown in the screenshot of the referenced post (image not reproduced here).

4. In harbor.cfg you only need to set hostname to your own machine's IP or domain name. Harbor's default DB password is root123; change it or keep the default. The initial admin password is Harbor12345 and can be changed as needed. The email settings are used for password resets, so fill them in for your environment; note that 163 or QQ mailboxes require an authorization code instead of the account password when used from third-party clients (search for how to generate a QQ authorization code).

5. Visit Harbor; the post-login page is shown in the referenced post's screenshots.
