Found some noticeably smaller official images on Docker Hub:

link:    https://registry.hub.docker.com/_/python?tab=tags&page=1&ordering=last_updated

  docker pull python:3.7.11-slim            113MB

  docker pull python:3.7.11-alpine3.13      42.5MB
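To double-check the sizes locally after pulling, docker images can list them (the --format template below is standard Docker CLI; the exact output depends on the host):

  docker images python --format "{{.Repository}}:{{.Tag}}\t{{.Size}}"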

Dockerfile:

FROM python:3.7.11-alpine3.13
VOLUME /tmp
#ADD xxxSNAPSHOT.jar   /opt/app.jar/
WORKDIR  /opt
COPY docker-entrypoint.sh .
#add code
COPY project_path/ /opt/app/
#EXPOSE 7097
ENTRYPOINT ["sh", "docker-entrypoint.sh"]
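Since COPY project_path/ /opt/app/ sends the whole project into the image, a .dockerignore next to the Dockerfile helps keep the build context and the image small. The entries below are only an illustrative guess at what a Scrapy project does not need baked in:

# .dockerignore (illustrative)
.git
__pycache__/
*.pyc
*.log
.scrapy/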

Startup script:

docker-entrypoint.sh
#!/bin/sh
# install dependencies
pip3 install --default-timeout=100 --no-cache-dir --upgrade pip setuptools pymysql pymongo redis scrapy-redis ipython Scrapy requests retrying kafka
echo " try to start the service !"
cd /opt/app/project_path/
python3 main.py
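Note that the pip install above runs every time the container starts. A possible variant (not from the original post, just a sketch) is to bake the dependencies into the image at build time, so the container starts faster and needs no network access; on Alpine this may additionally require the apk build packages listed in the second Dockerfile further down (gcc, libffi-dev, libxml2-dev, libxslt-dev, openssl-dev, etc.) so that Scrapy's compiled dependencies can build:

FROM python:3.7.11-alpine3.13
WORKDIR /opt
# install the dependencies at build time instead of in docker-entrypoint.sh
RUN pip3 install --default-timeout=100 --no-cache-dir --upgrade pip setuptools \
    pymysql pymongo redis scrapy-redis ipython Scrapy requests retrying kafka
COPY docker-entrypoint.sh .
COPY project_path/ /opt/app/
ENTRYPOINT ["sh", "docker-entrypoint.sh"]

docker-entrypoint.sh then shrinks to just the cd and python3 main.py lines.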

A conditional-startup example, for reference:

if [[ $FE_ROLE = 'fe-leader' ]]; then
    /home/doris/fe/bin/start_fe.sh
elif [[ $FE_ROLE = 'be' ]]; then
    /home/doris/be/bin/start_be.sh
elif [[ $FE_ROLE = 'fe-follower' ]]; then
    /home/doris/fe/bin/start_fe.sh --helper $FE_LEADER
else
    /home/doris/fs_broker/bin/start_broker.sh
fi
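The same pattern can be adapted to this entrypoint, e.g. picking what to run from an environment variable. A minimal sketch follows (APP_ROLE and the worker.py/monitor.py names are hypothetical, not part of the project); note that /bin/sh on Alpine is BusyBox ash, so POSIX [ ] is safer than bash's [[ ]]:

#!/bin/sh
# choose what to start based on an environment variable (illustrative only)
cd /opt/app/project_path/
if [ "$APP_ROLE" = "worker" ]; then
    python3 worker.py
elif [ "$APP_ROLE" = "monitor" ]; then
    python3 monitor.py
else
    python3 main.py
fi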

 

Build the image:

  docker build -t scrapy_redis_pj:v1 .

 

Run:

 sudo docker run -itd --restart unless-stopped --name scrapy_redis -v /opt/scrapy:/opt/app/ -v /etc/localtime:/etc/localtime:ro scrapy_redis_pj:v1
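To confirm the container came up and the crawler is running, the usual Docker commands apply:

  docker logs -f scrapy_redis      # follow the entrypoint / crawler output
  docker exec -it scrapy_redis sh  # open a shell inside the container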

 

KO!

 >>>>>>>>>>>>>>

Original article: https://www.cnblogs.com/zhujingzhi/p/9766965.html

 1> Write the Dockerfile

# Base image to build from
FROM alpine
  
# Maintainer info
MAINTAINER alpine_python3_scrapy (lshan523@163.com)
  
# Switch the apk repositories to the Aliyun mirror
RUN echo "http://mirrors.aliyun.com/alpine/latest-stable/main/" > /etc/apk/repositories && \
    echo "http://mirrors.aliyun.com/alpine/latest-stable/community/" >> /etc/apk/repositories
    
# Update the package index, install openssh, edit the sshd config, generate host keys, and sync the time zone
RUN apk update && \
    apk add --no-cache openssh-server tzdata && \
    sed -i "s/#PermitRootLogin.*/PermitRootLogin yes/g" /etc/ssh/sshd_config && \
    ssh-keygen -t rsa -P "" -f /etc/ssh/ssh_host_rsa_key && \
    ssh-keygen -t ecdsa -P "" -f /etc/ssh/ssh_host_ecdsa_key && \
    ssh-keygen -t ed25519 -P "" -f /etc/ssh/ssh_host_ed25519_key && \
    echo "root:h056zHJLg85oW5xh7VtSa" | chpasswd
 
# Install the Scrapy dependencies (required packages)
RUN apk add --no-cache python3 python3-dev gcc openssl-dev openssl libressl libc-dev linux-headers libffi-dev libxml2-dev libxml2 libxslt-dev openssh-client openssh-sftp-server
 
# Install the pip packages the environment needs (add or remove packages here as required)
RUN pip3 install --default-timeout=100 --no-cache-dir --upgrade pip setuptools pymysql pymongo redis scrapy-redis ipython Scrapy requests
 
# Script that starts ssh
RUN echo "/usr/sbin/sshd -D" >> /etc/start.sh && \
    chmod +x /etc/start.sh
 
# Expose port 22
EXPOSE 22
  
# Run the ssh startup command
CMD ["/bin/sh","/etc/start.sh"]

This gives a container that can be accessed remotely over SSH, with Scrapy installed on a Python 3 environment; the SSH service is started via the start.sh script.

To run a specific project directly instead, modify the Dockerfile: add the lines below and remove the CMD.

# your own code
COPY scrapy_project/ /usr/local/app/
COPY docker-entrypoint.sh .
ENTRYPOINT ["/bin/sh", "docker-entrypoint.sh"]

Startup script:

docker-entrypoint.sh
#!/bin/sh
cd /usr/local/app/project_path/
echo " try to start the service !"
python3 main.py

 

 

 

Build the image:

  docker build -t scrapy_redis_ssh:v1 .

List the images:

[root@DockerBrian scrapy]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
scrapy_redis_ssh    v1                  b2c95ef95fb9        4 hours ago         282 MB
docker.io/alpine    latest              196d12cf6ab1        4 weeks ago         4.41 MB

Create the container:

docker run -itd --restart=always --name scrapy10086 -p 10086:22 scrapy_redis_ssh:v1
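Since port 22 is published on 10086 and the root password was set with chpasswd in the Dockerfile, the container can then be reached over SSH (replace <docker-host-ip> with the host's address):

  ssh -p 10086 root@<docker-host-ip>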

 

posted on 2021-07-14 10:32 by lshan