hj_Server Operations Log - June
Basic information:
1 Server spec: CentOS Stream 8 64-bit, 2 cores, 8 GB RAM, 1 Mbps bandwidth
2 Open ports: mysql->3306-3309 redis->6379-6380 nacos->8848-8850
jar services->8000-9999 web->80 443 ssh->22 (see the firewall sketch below)
3 Login info: account->root password->xxx
Note: the mount directories, data files and the like used later are all kept under /horus (mkdir /horus)
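- The ports above are usually opened in the cloud console's security group; if firewalld also runs on the host, a rough equivalent looks like this (a sketch, assuming the default zone is in use):
-
firewall-cmd --permanent --add-port=3306-3309/tcp   # mysql
firewall-cmd --permanent --add-port=6379-6380/tcp   # redis
firewall-cmd --permanent --add-port=8848-8850/tcp --add-port=9848/tcp   # nacos (9848 = 8848 + 1000, gRPC)
firewall-cmd --permanent --add-port=8000-9999/tcp   # jar services
firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp --add-port=22/tcp   # web + ssh
firewall-cmd --reload
firewall-cmd --list-ports   # verify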
Install podman
- 1 yum install podman -y
- 2 podman --version   -- outputs: podman version 4.0.2
- Note: podman is close to a drop-in replacement for docker; commands work by swapping docker for podman,
or an alias can be set: alias docker=podman (see the sketch below to make it permanent)
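- To keep the alias across new shells and confirm podman itself works, a quick sketch:
-
echo "alias docker=podman" >> ~/.bashrc   # persist the alias for future logins
source ~/.bashrc
podman info | head -n 5                   # engine info; confirms podman can run
docker --version                          # now resolves to podman through the alias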
Install MySQL 8.0.27
- podman pull mysql:8.0.27   (choose the docker.io image)
- podman run -d -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=root <image ID>
- podman exec -it <container ID> /bin/bash   -- enters the container
- mkdir -p /horus/mysql_8.0.27/conf /horus/mysql_8.0.27/data
- podman cp <container ID>:/etc/mysql/. /horus/mysql_8.0.27/conf
- Edit the config file my.cnf and change the port to 3307
-
# Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# The MySQL Server configuration file.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

[mysqld]
port            = 3307
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
secure-file-priv= NULL
# Custom config should go here
!includedir /etc/mysql/conf.d/
-
Stop and remove the original container (podman stop / podman rm), then start a new one
- Start the container
-
podman run -d --privileged=true --name mysql_8.0.27 -p 3307:3307 -v /horus/mysql_8.0.27/data:/var/lib/mysql -v /horus/mysql_8.0.27/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=hj2022 3218b38490ce
- podman run -d --privileged=true --name mysql_8.0.27 \
- -p 3307:3307 \
- -v /horus/mysql_8.0.27/data:/var/lib/mysql \
- -v /horus/mysql_8.0.27/conf:/etc/mysql \
- -e MYSQL_ROOT_PASSWORD=hj2022 <image ID>
- Navicat connection succeeded  account: root  password: hj2022
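- A quick command-line check, plus creating the database that the Nacos section below needs (a sketch; container name from the run command above, the character set is an assumption):
-
podman exec -it mysql_8.0.27 mysql -h127.0.0.1 -P3307 -uroot -phj2022 -e "SELECT VERSION();"
# create the nacos database used later by the Nacos config/naming tables
podman exec -it mysql_8.0.27 mysql -h127.0.0.1 -P3307 -uroot -phj2022 \
  -e "CREATE DATABASE IF NOT EXISTS nacos DEFAULT CHARACTER SET utf8mb4;"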
Install Redis 6.2.6
- podman pull redis:6.2.6   (choose the docker.io image)
- // Create two directories to hold the Redis data and its config file
- mkdir -p /horus/redis_6.2.6/conf /horus/redis_6.2.6/data
- Create redis.conf under the conf directory: copy the official Redis config file and modify it.
-
################################## NETWORK ##################################### # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES # JUST COMMENT THE FOLLOWING LINE. # bind 127.0.0.1 # By default protected mode is enabled. You should disable it only if # you are sure you want clients from other hosts to connect to Redis # even if no authentication is configured, nor a specific set of interfaces # are explicitly listed using the "bind" directive. # protected-mode no # Accept connections on the specified port, default is 6379 (IANA #815344). # If port 0 is specified Redis will not listen on a TCP socket. # port 6380 # TCP listen() backlog. # # In high requests-per-second environments you need an high backlog in order # to avoid slow clients connections issues. Note that the Linux kernel # will silently truncate it to the value of /proc/sys/net/core/somaxconn so # make sure to raise both the value of somaxconn and tcp_max_syn_backlog # in order to get the desired effect. # tcp-backlog 511 # Close the connection after a client is idle for N seconds (0 to disable) # timeout 0 # A reasonable value for this option is 300 seconds, which is the new # Redis default starting with Redis 3.2.1. # tcp-keepalive 300 ################################# GENERAL ##################################### # By default Redis does not run as a daemon. Use 'yes' if you need it. # Note that Redis will write a pid file in /var/run/redis.pid when daemonized. # daemonize no # If you run Redis from upstart or systemd, Redis can interact with your # supervision tree. Options: # supervised no - no supervision interaction # supervised upstart - signal upstart by putting Redis into SIGSTOP mode # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET # supervised auto - detect upstart or systemd method based on # UPSTART_JOB or NOTIFY_SOCKET environment variables # Note: these supervision methods only signal "process is ready." # They do not enable continuous liveness pings back to your supervisor. # supervised no # If a pid file is specified, Redis writes it where specified at startup # and removes it at exit. # # When the server runs non daemonized, no pid file is created if none is # specified in the configuration. When the server is daemonized, the pid file # is used even if not specified, defaulting to "/var/run/redis.pid". # # Creating a pid file is best effort: if Redis is not able to create it # nothing bad happens, the server will start and run normally. # pidfile /var/run/redis_6380.pid # Specify the server verbosity level. # This can be one of: # debug (a lot of information, useful for development/testing) # verbose (many rarely useful info, but not a mess like the debug level) # notice (moderately verbose, what you want in production probably) # warning (only very important / critical messages are logged) # loglevel notice # Specify the log file name. Also the empty string can be used to force # Redis to log on the standard output. Note that if you use standard # output for logging but daemonize, logs will be sent to /dev/null # logfile "" # Set the number of databases. The default database is DB 0, you can select # a different one on a per-connection basis using SELECT <dbid> where # dbid is a number between 0 and 'databases'-1 # databases 16 # However it is possible to force the pre-4.0 behavior and always show a # ASCII art logo in startup logs by setting the following option to yes. 
# always-show-logo yes ################################ SNAPSHOTTING ################################ # # Save the DB on disk: # # 这里是用来配置触发 Redis的 RDB 持久化条件,也就是什么时候将内存中的数据保存到硬盘. # 比如“save m n”.表示m秒内数据集存在n次修改时,自动触发bgsave # 当然如果你只是用Redis的缓存功能,不需要持久化: # save "" # save 900 1 save 300 10 save 60 10000 # 默认值为yes.当启用了RDB且最后一次后台保存数据失败,Redis是否停止接收数据. # 这会让用户意识到数据没有正确持久化到磁盘上,否则没有人会注意到灾难(disaster)发生了. # 如果Redis重启了,那么又可以重新开始接收数据了 # stop-writes-on-bgsave-error yes # 建议设置为 no 否则程序可能就报这个错.<br># MISCONF Redis is configured to save RDB snapshots,<br># but it is currently not able to persist on disk. <br># Commands that may modify the data set are disabled, <br># because this instance is configured to report errors <br># during writes if RDB snapshotting fails <br># (stop-writes-on-bgsave-error option). <br># Please check the Redis logs for details about the RDB error. # 默认值是yes.对于存储到磁盘中的快照,可以设置是否进行压缩存储. # 如果是的话,redis会采用LZF算法进行压缩. # 如果你不想消耗CPU来进行压缩的话,可以设置为关闭此功能,但是存储在磁盘上的快照会比较大. # rdbcompression yes # 默认值是yes.在存储快照后,我们还可以让redis使用CRC64算法来进行数据校验, # 但是这样做会增加大约10%的性能消耗,如果希望获取到最大的性能提升,可以关闭此功能. # rdbchecksum yes # 设置快照的文件名,默认是 dump.rdb # dbfilename dump.rdb # 设置快照文件的存放路径,这个配置项一定是个目录,而不能是文件名.默认是和当前配置文件保存在同一目录. # dir ./ ################################# REPLICATION ################################# # When a slave loses its connection with the master, or when the replication # is still in progress, the slave can act in two different ways: # # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will # still reply to client requests, possibly with out of date data, or the # data set may just be empty if this is the first synchronization. # # 2) if slave-serve-stale-data is set to 'no' the slave will reply with # an error "SYNC with master in progress" to all the kind of commands # but to INFO and SLAVEOF. # slave-serve-stale-data yes # Note: read only slaves are not designed to be exposed to untrusted clients # on the internet. It's just a protection layer against misuse of the instance. # Still a read only slave exports by default all the administrative commands # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve # security of read only slaves using 'rename-command' to shadow all the # administrative / dangerous commands. # slave-read-only yes # With slow disks and fast (large bandwidth) networks, diskless replication # works better. # repl-diskless-sync no # The delay is specified in seconds, and by default is 5 seconds. To disable # it entirely just set it to 0 seconds and the transfer will start ASAP. # repl-diskless-sync-delay 5 # By default we optimize for low latency, but in very high traffic conditions # or when the master and slaves are many hops away, turning this to "yes" may # be a good idea. # repl-disable-tcp-nodelay no # By default the priority is 100. # slave-priority 100 ################################## SECURITY ################################### # Warning: since Redis is pretty fast an outside user can try up to # 150k passwords per second against a good box. This means that you should # use a very strong password otherwise it will be very easy to break. # requirepass hj2022 ############################# LAZY FREEING #################################### # In all the above cases the default is to delete objects in a blocking way, # like if DEL was called. 
However you can configure each case specifically # in order to instead release memory in a non-blocking way like if UNLINK # was called, using the following configuration directives: # lazyfree-lazy-eviction no lazyfree-lazy-expire no lazyfree-lazy-server-del no slave-lazy-flush no ############################## APPEND ONLY MODE ############################### # 默认值为no,也就是说redis 默认使用的是rdb方式持久化, # 如果想要开启 AOF 持久化方式,需要将 appendonly 修改为 yes. # appendonly no # aof文件名,默认是"appendonly.aof" # appendfilename "appendonly.aof" # aof持久化策略的配置; # # no表示不执行fsync,由操作系统保证数据同步到磁盘,速度最快,但是不太安全; # # always表示每次写入都执行fsync,以保证数据同步到磁盘,效率很低; # # everysec表示每秒执行一次fsync,可能会导致丢失这1s数据.通常选择 everysec ,兼顾安全性和效率 # appendfsync everysec # 在aof重写或者写入rdb文件的时候,会执行大量IO,此时对于everysec和always的aof模式来说, # 执行fsync会造成阻塞过长时间,no-appendfsync-on-rewrite字段设置为默认设置为no. # 如果对延迟要求很高的应用,这个字段可以设置为yes,否则还是设置为no, # 这样对持久化特性来说这是更安全的选择. # 设置为yes表示rewrite期间对新写操作不fsync,暂时存在内存中,等rewrite完成后再写入,默认为no,建议yes. # Linux的默认fsync策略是30秒.可能丢失30秒数据.默认值为no. # no-appendfsync-on-rewrite no # 默认值为100.aof自动重写配置,当目前aof文件大小超过上一次重写的aof文件大小的百分之多少进行重写, # 即当aof文件增长到一定大小的时候,Redis能够调用bgrewriteaof对日志文件进行重写. # 当前AOF文件大小是上次日志重写得到AOF文件大小的二倍(设置为100)时,自动启动新的日志重写过程. # auto-aof-rewrite-percentage 100 # 设置允许重写的最小aof文件大小,避免了达到约定百分比但尺寸仍然很小的情况还要重写. # auto-aof-rewrite-min-size 64mb # aof文件可能在尾部是不完整的,当redis启动的时候,aof文件的数据被载入内存. # 重启可能发生在redis所在的主机操作系统宕机后,尤其在ext4文件系统没有加上data=ordered选项, # 出现这种现象 redis宕机或者异常终止不会造成尾部不完整现象,可以选择让redis退出, # 或者导入尽可能多的数据.如果选择的是yes,当截断的aof文件被导入的时候, # 会自动发布一个log给客户端然后load.如果是no,用户必须手动redis-check-aof修复AOF文件才可以. # 默认值为 yes # aof-load-truncated yes # This is currently turned off by default in order to avoid the surprise # of a format change, but will at some point be used as the default. # aof-use-rdb-preamble no # Set it to 0 or a negative value for unlimited execution without warnings. # lua-time-limit 5000 ################################## SLOW LOG ################################### # The following time is expressed in microseconds, so 1000000 is equivalent # to one second. Note that a negative number disables the slow log, while # a value of zero forces the logging of every command. # slowlog-log-slower-than 10000 # There is no limit to this length. Just be aware that it will consume memory. # You can reclaim memory used by the slow log with SLOWLOG RESET. # slowlog-max-len 128 ################################ LATENCY MONITOR ############################## # By default latency monitoring is disabled since it is mostly not needed # if you don't have latency issues, and collecting data has a performance # impact, that while very small, can be measured under big load. Latency # monitoring can easily be enabled at runtime using the command # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. # latency-monitor-threshold 0 ############################# EVENT NOTIFICATION ############################## # By default all notifications are disabled because most users don't need # this feature and the feature has some overhead. Note that if you don't # specify at least one of K or E, no events will be delivered. # notify-keyspace-events "" ############################### ADVANCED CONFIG ############################### # Hashes are encoded using a memory efficient data structure when they have a # small number of entries, and the biggest entry does not exceed a given # threshold. These thresholds can be configured using the following directives. 
# hash-max-ziplist-entries 512 hash-max-ziplist-value 64 # Positive numbers mean store up to _exactly_ that number of elements # per list node. # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), # but if your use case is unique, adjust the settings as necessary. # list-max-ziplist-size -2 # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail] # etc. # list-compress-depth 0 # Sets have a special encoding in just one case: when a set is composed # of just strings that happen to be integers in radix 10 in the range # of 64 bit signed integers. # The following configuration setting sets the limit in the size of the # set in order to use this special memory saving encoding. # set-max-intset-entries 512 # Similarly to hashes and lists, sorted sets are also specially encoded in # order to save a lot of space. This encoding is only used when the length and # elements of a sorted set are below the following limits: # zset-max-ziplist-entries 128 zset-max-ziplist-value 64 # The suggested value is ~ 3000 in order to have the benefits of # the space efficient encoding without slowing down too much PFADD, # which is O(N) with the sparse encoding. The value can be raised to # ~ 10000 when CPU is not a concern, but space is, and the data set is # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. # hll-sparse-max-bytes 3000 # use "activerehashing yes" if you don't have such hard requirements but # want to free memory asap when possible. # activerehashing yes # Both the hard or the soft limit can be disabled by setting them to zero. # client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 # The range is between 1 and 500, however a value over 100 is usually not # a good idea. Most users should use the default of 10 and raise this up to # 100 only in environments where very low latency is required. # hz 10 # When a child rewrites the AOF file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. # aof-rewrite-incremental-fsync yes
-
podman run -d --privileged=true -p 6380:6380 -v /horus/redis_6.2.6/conf/redis.conf:/etc/redis/redis.conf -v /horus/redis_6.2.6/data:/data --name redis_6.2.6 3c3da61c4be0 /etc/redis/redis.conf --appendonly yes
- podman run -d --privileged=true -p 6380:6380 \
- -v /horus/redis_6.2.6/conf/redis.conf:/etc/redis/redis.conf \   map the config file
- -v /horus/redis_6.2.6/data:/data \   map the data mount directory
- --name redis_6.2.6 <image ID> /etc/redis/redis.conf \   start redis-server with this config file
- --appendonly yes   enable data persistence (AOF)
-
podman exec -it b087c14607bf redis-cli -p 6380 auth hj2022
- podman exec -it <container ID> redis-cli -p 6380   (enter redis-cli inside the container)
- auth xxx   (verify the password)
- IDEA connection check ok
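- A quick smoke test from the host (a sketch; container name as above, password from requirepass in redis.conf):
-
podman exec -it redis_6.2.6 redis-cli -p 6380 -a hj2022 ping        # expect PONG
podman exec -it redis_6.2.6 redis-cli -p 6380 -a hj2022 set k1 v1   # expect OK
podman exec -it redis_6.2.6 redis-cli -p 6380 -a hj2022 get k1      # expect "v1"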
Install Nacos v2.1.0
- podman pull nacos/nacos-server:v2.1.0   (choose the docker.io image)
- podman run -p 8848:8848 --name nacostest -d <image ID>
- Create a directory for the externally mounted files: mkdir -p /horus/nacos_2.1.0/
- podman cp <container ID>:/home/nacos/logs/ /horus/nacos_2.1.0/
- podman cp <container ID>:/home/nacos/conf/ /horus/nacos_2.1.0/
- Remove this test container (podman stop / podman rm), then edit the config file (application.properties), mainly the MySQL connection info
- Run the SQL script nacos-mysql.sql for Nacos; the file is included when the Windows distribution is downloaded and unzipped (an import sketch follows after the SQL below)
- https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql   (location of the MySQL script)
-
/* * Copyright 1999-2018 Alibaba Group Holding Ltd. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info */ /******************************************/ CREATE TABLE `config_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(255) DEFAULT NULL, `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `c_desc` varchar(256) DEFAULT NULL, `c_use` varchar(64) DEFAULT NULL, `effect` varchar(64) DEFAULT NULL, `type` varchar(64) DEFAULT NULL, `c_schema` text, `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_aggr */ /******************************************/ CREATE TABLE `config_info_aggr` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(255) NOT NULL COMMENT 'group_id', `datum_id` varchar(255) NOT NULL COMMENT 'datum_id', `content` longtext NOT NULL COMMENT '内容', `gmt_modified` datetime NOT NULL COMMENT '修改时间', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_beta */ /******************************************/ CREATE TABLE `config_info_beta` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta'; 
/******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_tag */ /******************************************/ CREATE TABLE `config_info_tag` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `tag_id` varchar(128) NOT NULL COMMENT 'tag_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_tags_relation */ /******************************************/ CREATE TABLE `config_tags_relation` ( `id` bigint(20) NOT NULL COMMENT 'id', `tag_name` varchar(128) NOT NULL COMMENT 'tag_name', `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `nid` bigint(20) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`nid`), UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`), KEY `idx_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = group_capacity */ /******************************************/ CREATE TABLE `group_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群', `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值', `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量', `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值', `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值', `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值', `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_group_id` (`group_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = his_config_info */ /******************************************/ CREATE TABLE `his_config_info` ( `id` bigint(64) unsigned NOT NULL, `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `data_id` varchar(255) NOT NULL, `group_id` varchar(128) NOT NULL, `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL, `md5` varchar(32) DEFAULT NULL, `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `src_user` text, `src_ip` varchar(50) DEFAULT NULL, `op_type` char(10) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT 
'租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`nid`), KEY `idx_gmt_create` (`gmt_create`), KEY `idx_gmt_modified` (`gmt_modified`), KEY `idx_did` (`data_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = tenant_capacity */ /******************************************/ CREATE TABLE `tenant_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID', `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值', `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量', `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值', `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数', `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值', `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表'; CREATE TABLE `tenant_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `kp` varchar(128) NOT NULL COMMENT 'kp', `tenant_id` varchar(128) default '' COMMENT 'tenant_id', `tenant_name` varchar(128) default '' COMMENT 'tenant_name', `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc', `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source', `gmt_create` bigint(20) NOT NULL COMMENT '创建时间', `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`), KEY `idx_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info'; CREATE TABLE `users` ( `username` varchar(50) NOT NULL PRIMARY KEY, `password` varchar(500) NOT NULL, `enabled` boolean NOT NULL ); CREATE TABLE `roles` ( `username` varchar(50) NOT NULL, `role` varchar(50) NOT NULL, UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE ); CREATE TABLE `permissions` ( `role` varchar(50) NOT NULL, `resource` varchar(255) NOT NULL, `action` varchar(8) NOT NULL, UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE ); INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE); INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');
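- One way to import the script is through the mysql container (a sketch; the host path of nacos-mysql.sql is an assumption):
-
podman cp /horus/nacos_2.1.0/nacos-mysql.sql mysql_8.0.27:/tmp/nacos-mysql.sql
podman exec -it mysql_8.0.27 bash -c \
  "mysql -h127.0.0.1 -P3307 -uroot -phj2022 nacos < /tmp/nacos-mysql.sql"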
-
# spring
server.servlet.contextPath=${SERVER_SERVLET_CONTEXTPATH:/nacos}
server.contextPath=/nacos
server.port=${NACOS_APPLICATION_PORT:8848}
spring.datasource.platform=${SPRING_DATASOURCE_PLATFORM:mysql}
nacos.cmdb.dumpTaskInterval=3600
nacos.cmdb.eventTaskInterval=10
nacos.cmdb.labelTaskInterval=300
nacos.cmdb.loadDataAtStart=false
db.num=${MYSQL_DATABASE_NUM:1}
db.url.0=jdbc:mysql://120.78.129.105:3307/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=hj2022
### The auth system to use, currently only 'nacos' is supported:
nacos.core.auth.system.type=${NACOS_AUTH_SYSTEM_TYPE:nacos}
### The token expiration in seconds:
nacos.core.auth.default.token.expire.seconds=${NACOS_AUTH_TOKEN_EXPIRE_SECONDS:18000}
### The default token:
nacos.core.auth.default.token.secret.key=${NACOS_AUTH_TOKEN:SecretKey012345678901234567890123456789012345678901234567890123456789}
### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=${NACOS_AUTH_CACHE_ENABLE:false}
nacos.core.auth.enable.userAgentAuthWhite=${NACOS_AUTH_USER_AGENT_AUTH_WHITE_ENABLE:false}
nacos.core.auth.server.identity.key=${NACOS_AUTH_IDENTITY_KEY:serverIdentity}
nacos.core.auth.server.identity.value=${NACOS_AUTH_IDENTITY_VALUE:security}
server.tomcat.accesslog.enabled=${TOMCAT_ACCESSLOG_ENABLED:false}
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D
# default current work dir
server.tomcat.basedir=
## spring security config
### turn off security
nacos.security.ignore.urls=${NACOS_SECURITY_IGNORE_URLS:/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**}
# metrics for elastic search
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
nacos.naming.distro.taskDispatchThreadCount=10
nacos.naming.distro.taskDispatchPeriod=200
nacos.naming.distro.batchSyncKeyCount=1000
nacos.naming.distro.initDataRatio=0.9
nacos.naming.distro.syncRetryDelay=5000
nacos.naming.data.warmup=true
- Run it
-
podman run -d -e MODE=standalone -e TIME_ZONE='Asia/Shanghai' -v /horus/nacos_2.1.0/logs:/home/nacos/logs -v /horus/nacos_2.1.0/conf:/home/nacos/conf -p 8848:8848 -p 9848:9848 --name nacos --restart=always b0a4aba28604
- podman run -d \
- -e MODE=standalone \   // standalone mode
- -e TIME_ZONE='Asia/Shanghai' \
- -v /horus/nacos_2.1.0/logs:/home/nacos/logs \
- -v /horus/nacos_2.1.0/conf:/home/nacos/conf \
- -p 8848:8848 \
- -p 9848:9848 \
- --name nacos \
- --restart=always \
- <image ID>
- Visit http://120.78.129.105:8848/nacos ; the initial account and password are both: nacos
- Access succeeded
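- Besides the console, the standard Nacos HTTP API can be used for a quick check (a sketch; with auth left disabled these calls work without a token):
-
# log in with the default credentials and get an accessToken
curl -X POST 'http://120.78.129.105:8848/nacos/v1/auth/login' -d 'username=nacos&password=nacos'
# list registered services (empty right after installation)
curl 'http://120.78.129.105:8848/nacos/v1/ns/service/list?pageNo=1&pageSize=10'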
Install JDK 8
- Download the package (https://www.oracle.com/java/technologies/downloads/#java8) and upload it to the server
- tar -zxvf jdk-8u333-linux-x64.tar.gz
- mkdir /usr/local/jdk
- cp -r /horus/jdk/jdk1.8.0_333 /usr/local/jdk
- Set the environment variables in /etc/profile by appending at the end of the file:
- # JAVA8_HOME
- export JAVA8_HOME=/usr/local/jdk/jdk1.8.0_333
- export CLASSPATH=$JAVA8_HOME/lib
- export PATH=$JAVA8_HOME/bin:$PATH
-
# /etc/profile # System wide environment and startup programs, for login setup # Functions and aliases go in /etc/bashrc # It's NOT a good idea to change this file unless you know what you # are doing. It's much better to create a custom.sh shell script in # /etc/profile.d/ to make custom changes to your environment, as this # will prevent the need for merging in future updates. pathmunge () { case ":${PATH}:" in *:"$1":*) ;; *) if [ "$2" = "after" ] ; then PATH=$PATH:$1 else PATH=$1:$PATH fi esac } if [ -x /usr/bin/id ]; then if [ -z "$EUID" ]; then # ksh workaround EUID=`/usr/bin/id -u` UID=`/usr/bin/id -ru` fi USER="`/usr/bin/id -un`" LOGNAME=$USER MAIL="/var/spool/mail/$USER" fi # Path manipulation if [ "$EUID" = "0" ]; then pathmunge /usr/sbin pathmunge /usr/local/sbin else pathmunge /usr/local/sbin after pathmunge /usr/sbin after fi HOSTNAME=`/usr/bin/hostname 2>/dev/null` HISTSIZE=1000 if [ "$HISTCONTROL" = "ignorespace" ] ; then export HISTCONTROL=ignoreboth else export HISTCONTROL=ignoredups fi export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL # By default, we want umask to get set. This sets it for login shell # Current threshold for system reserved uid/gids is 200 # You could check uidgid reservation validity in # /usr/share/doc/setup-*/uidgid file if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then umask 002 else umask 022 fi for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do if [ -r "$i" ]; then if [ "${-#*i}" != "$-" ]; then . "$i" else . "$i" >/dev/null fi fi done unset i unset -f pathmunge # JAVA8_HOME export JAVA8_HOME=/usr/local/jdk/jdk1.8.0_333 export CLASSPATH=$JAVA8_HOME/lib export PATH=$JAVA8_HOME/bin:$PATH if [ -n "${BASH_VERSION-}" ] ; then if [ -f /etc/bashrc ] ; then # Bash login shells run only /etc/profile # Bash non-login shells run only /etc/bashrc # Check for double sourcing is done in /etc/bashrc. . /etc/bashrc fi fi
- source /etc/profile
- Java verified ok; the jps command can be used to see running jar services (a verification sketch follows below)
-
Important: edit this file with the vi command rather than opening it directly in FinalShell; otherwise other commands stop working...
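- A quick verification after sourcing the profile (a sketch):
-
source /etc/profile
echo $JAVA8_HOME      # /usr/local/jdk/jdk1.8.0_333
java -version         # expect "1.8.0_333"
jps -l                # lists running Java/jar processes, if any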
Install nginx locally (built from source on the host)
- // dependency packages
- yum install -y make gcc-c++ pcre-devel zlib-devel
- yum install -y gcc
- yum install -y pcre pcre-devel
- yum install -y zlib zlib-devel
- yum -y install gcc openssl openssl-devel pcre-devel zlib zlib-devel   // or install all the dependencies in one step
- Download the tar package, upload it to the server, and extract it there
- tar -zxvf nginx-1.21.6.tar.gz
- Enter the extracted directory and install:
- ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module
- make && make install
- cd /usr/local/nginx/sbin ; ./nginx starts it, ./nginx -s stop stops it
- ./nginx -t checks whether the config file is valid; ./nginx -s reload reloads the config
- Visit in a browser, ok: Welcome to nginx!
- Create an nginx.service file under /lib/systemd/system/ (contents kept in a separate file; a hedged sketch follows after the nginx.conf below)
- systemctl enable nginx
- systemctl start nginx
- systemctl status nginx   (check status)
- The config file and html directories to modify are under /usr/local/nginx/
-
#user nobody;
worker_processes 4;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 443 ssl;
        server_name test.upleaf.cn;

        # certificates
        ssl_certificate /horus/ssl/upleaf/7633311_test.upleaf.cn.pem;
        ssl_certificate_key /horus/ssl/upleaf/7633311_test.upleaf.cn.key;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
        ssl_prefer_server_ciphers on;

        charset utf-8;
        add_header X-Content-Type-Options "nosniff";
        add_header X-XSS-Protection "1";
        error_log off;
        sendfile off;
        client_max_body_size 100m;

        gzip on;
        gzip_min_length 1k;
        gzip_comp_level 9;
        gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
        gzip_vary on;
        gzip_disable "MSIE [1-6]\.";

        set $realip $remote_addr;
        if ($http_x_forwarded_for ~ "^(\d+\.\d+\.\d+\.\d+)") {
            set $realip $1;
        }
        fastcgi_param REMOTE_ADDR $realip;

        location / {
            root /horus/front_end/upleaf/dist;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        location /upleaf/api/ {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'POST,GET,OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization';
            proxy_pass http://120.78.129.105:9083/;
        }

        location /nacos/ {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'POST,GET,OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization';
            proxy_pass http://120.78.129.105:8848/nacos/;
        }
    }

    server {
        listen 443 ssl;
        server_name test.hauscal.com;

        ssl_certificate /horus/ssl/hauscal/7633344_test.hauscal.com.pem;
        ssl_certificate_key /horus/ssl/hauscal/7633344_test.hauscal.com.key;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
        ssl_prefer_server_ciphers on;

        charset utf-8;
        add_header X-Content-Type-Options "nosniff";
        add_header X-XSS-Protection "1";
        error_log off;
        sendfile off;
        client_max_body_size 100m;

        gzip on;
        gzip_min_length 1k;
        gzip_comp_level 9;
        gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
        gzip_vary on;
        gzip_disable "MSIE [1-6]\.";

        set $realip $remote_addr;
        if ($http_x_forwarded_for ~ "^(\d+\.\d+\.\d+\.\d+)") {
            set $realip $1;
        }
        fastcgi_param REMOTE_ADDR $realip;

        location /hauscal/api/ {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'POST,GET,OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization';
            proxy_pass http://120.78.129.105:8000/;
        }

        location /nacos/ {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'POST,GET,OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization';
            proxy_pass http://120.78.129.105:8848/nacos/;
        }
    }

    server {
        listen 80;
        server_name test.upleaf.cn test.hauscal.com;
        rewrite ^(.*)$ https://$host$1 permanent;
    }
}
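- A minimal sketch of the nginx.service mentioned above, assuming the source build under /usr/local/nginx (paths follow the configure prefix used earlier; the actual file may differ):
-
[Unit]
Description=nginx - high performance web server
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target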
Configure the domains and SSL certificates, get the projects running, and mount the html directories
Run jar services in a Docker (podman) container
- Upload the jar package to the target location
- yum -y install lrzsz   (only after installing this can files be uploaded with the rz command)
- Create the Dockerfile: touch Dockerfile, then fill in the configuration (see the file below)
- podman build -f Dockerfile -t hj0606   // build the image
- podman run -d --privileged=true --name hj0606 -p 9083:9083 <image ID>   // run the container
- ok
- The jar can also be run directly; the sh script is already prepared (it appears after the Dockerfile below).
- ./app.sh start && tail -f app-log.out runs it directly
-
# Base image: java 8
FROM docker.io/library/java:8
# Author
MAINTAINER hj
# VOLUME: container mount directory /tmp
VOLUME /tmp
# Add the jar to the container and rename it to hj.jar
ADD hj-1.1.1.jar hj.jar
# Time zone
RUN echo "Asia/Shanghai" > /etc/localtime_bak
# Expose the port
EXPOSE 9083
# The ADD above already copied the jar; this touch just bumps its access/modify time to now - optional
RUN bash -c 'touch /hj.jar'
# Run the jar
ENTRYPOINT ["java","-jar","/hj.jar"]
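- After building and running, a quick check that the container came up (a sketch; names from the commands above):
-
podman ps                          # the hj0606 container should show as Up
podman logs -f hj0606              # follow the application startup log
curl -I http://127.0.0.1:9083/     # the mapped port should answer (exact response depends on the app)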
#!/bin/bash
version="1.0.1";
appName=$2
if [ -z $appName ];then
    appName=`ls -t |grep .jar$ |head -n1`
fi

function start() {
    count=`ps -ef |grep java|grep $appName|wc -l`
    if [ $count != 0 ];then
        echo "Maybe $appName is running, please check it..."
    else
        echo "The $appName is starting..."
        nohup java -Xms512m -Xmx512m -Xmn256m -Xloggc:./gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -jar ./$appName > ./app-log.out 2>&1 &
    fi
}

function stop() {
    appId=`ps -ef |grep java|grep $appName|awk '{print $2}'`
    if [ -z $appId ];then
        echo "Maybe $appName not running, please check it..."
    else
        echo "The $appName is stopping..."
        kill -9 $appId
    fi
}

function restart() {
    # get release version
    releaseApp=`ls -t |grep .jar$ |head -n1`
    # get last version
    lastVersionApp=`ls -t |grep .jar$ |head -n2 |tail -n1`
    appName=$lastVersionApp
    stop
    for i in {5..1}
    do
        echo -n "$i "
        sleep 1
    done
    echo 0
    backup
    appName=$releaseApp
    start
}

function backup() {
    # get backup version
    backupApp=`ls |grep -wv $releaseApp$ |grep .jar$`
    # create backup dir
    if [ ! -d "backup" ];then
        mkdir backup
    fi
    # backup
    for i in ${backupApp[@]}
    do
        echo "backup" $i
        mv $i backup
    done
}

function status() {
    appId=`ps -ef |grep java|grep $appName|awk '{print $2}'`
    if [ -z $appId ]
    then
        echo -e "\033[31m Not running \033[0m"
    else
        echo -e "\033[32m Running [$appId] \033[0m"
    fi
}

function usage() {
    echo "Usage: $0 {start|stop|restart|status|stop -f}"
    echo "Example: $0 start"
    exit 1
}

case $1 in
    start)
        start;;
    stop)
        stop;;
    restart)
        restart;;
    status)
        status;;
    *)
        usage;;
esac
Correction: /etc/localtime_bak --> /etc/timezone; only the latter makes the UTC+8 time zone take effect correctly
- If this time zone is not configured correctly, a date parsed from "9999-12-31 23:59:59" prints fine everywhere, but gains 8 hours when inserted into a database table --> '10000-01-01 07:59:59.0',
- which then throws an error. Oddly, fields assigned with new Date() are saved to the database without gaining or losing 8 hours, and log output also looks normal; to be investigated further (a check sketch follows).
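- To see where the shift comes from, the container's OS time zone and the database session time zone can be compared (a sketch; container names are examples from above):
-
podman exec -it hj0606 date              # container OS time
podman exec -it hj0606 cat /etc/timezone # should read Asia/Shanghai once rebuilt with the corrected Dockerfile
# what MySQL itself thinks the time zone is (a mismatch with the JDBC serverTimezone often explains +8h shifts)
podman exec -it mysql_8.0.27 mysql -h127.0.0.1 -P3307 -uroot -phj2022 \
  -e "SELECT @@global.time_zone, @@session.time_zone, NOW();"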
- To build a jar image that uses an external config file, a configuration similar to the following is needed:
-
# Base image: java 8
FROM docker.io/library/java:8
# Author
MAINTAINER hj
# VOLUME: container mount directory /tmp
VOLUME /tmp
# Add the jar to the container and rename it to gateway.jar
ADD gateway-1.0-SNAPSHOT.jar gateway.jar
# Time zone
RUN echo "Asia/Shanghai" > /etc/localtime_bak
# Expose the port
EXPOSE 8000
# The ADD above already copied the jar; this touch just bumps its access/modify time to now - optional
RUN bash -c 'touch /gateway.jar'
# Run the jar with an external config file
ENTRYPOINT ["java","-jar","/gateway.jar","--spring.config.location=/data/java/config/bootstrap.yml"]
- podman build -f Dockerfile -t hj0609   // build the image
- podman run -d --privileged=true -v /home/hj/gateway/bootstrap.yml:/data/java/config/bootstrap.yml --name gateway0609 -p 8000:8000 a9a582ad441c
- One-click deploy: remove the old container and image, build the image, and run the container   // put the Dockerfile and the jar in the same directory as the .sh script
-
app_name='hjapp'
app_port='8000'
# stop the running container
echo '......stop container hjapp......'
podman stop ${app_name}
# remove the container
echo '......rm container hjapp......'
podman rm ${app_name}
# remove the image named app_name
echo '......rmi none images hjapp......'
podman rmi `podman images | grep ${app_name} | awk '{print $3}'`
# build the image
podman build -f Dockerfile -t ${app_name}
# recreate and run the container
# echo '......start container hjapp......'
# podman run -p ${app_port}:${app_port} -d --name ${app_name} ${app_name}
podman run -d --privileged=true -v /horus/back_end/hauscal/gateway/bootstrap.yml:/data/java/config/bootstrap.yml --name ${app_name} -p ${app_port}:${app_port} ${app_name}
echo '......Success hjapp......'
- Mount the log files outside the container
-
# Base image: java 8
FROM docker.io/library/java:8
# Author
MAINTAINER hj
# VOLUME: container mount directory /tmp
VOLUME /tmp
# Add the jar to the container and rename it to gateway.jar
ADD gateway-1.0-SNAPSHOT.jar gateway.jar
# Time zone (corrected: write to /etc/timezone)
RUN echo "Asia/Shanghai" > /etc/timezone
# Expose the port
EXPOSE 8000
# The ADD above already copied the jar; this touch just bumps its access/modify time to now - optional
RUN bash -c 'touch /gateway.jar'
# Run the jar with an external config file
ENTRYPOINT ["java","-jar","/gateway.jar","--spring.config.location=/data/java/config/bootstrap.yml"]
server:
  port: 8000
spring:
  profiles:
    active: prod
  application:
    name: gateway
  cloud:
    nacos:
      discovery:
        ip: 120.74.183.105
        port: 8848
        server-addr: http://120.74.183.105:8848
      config:
        server-addr: http://120.74.183.105:8848
        file-extension: yml
        shared-configs:   # shared config
          - ${spring.application.name}-${spring.profiles.active}.${spring.cloud.nacos.config.file-extension}
logging:
  level:
    root: INFO
  file:
    name: /data/java/logs/${spring.application.name}.log
app_name='gateway'
app_port='8000'
# stop the running container
echo '......stop container ......'${app_name}
podman stop ${app_name}
# remove the container
echo '......rm container ......'${app_name}
podman rm ${app_name}
# remove the image named app_name
echo '......rmi none images ......'${app_name}
podman rmi `podman images | grep ${app_name} | awk '{print $3}'`
# build the image
podman build -f Dockerfile -t ${app_name}
# recreate and run the container
# echo '......start container gateway......'
podman run -d --privileged=true -v /horus/back_end/hauscal/gateway/bootstrap.yml:/data/java/config/bootstrap.yml -v /horus/back_end/hauscal/logs/:/data/java/logs/ --name ${app_name} -p ${app_port}:${app_port} ${app_name}
echo '......Success ......'${app_name}
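- With the logs directory mounted, the gateway log can be followed from the host (a sketch; the file name comes from logging.file.name in bootstrap.yml):
-
ls -lh /horus/back_end/hauscal/logs/               # gateway.log should appear here after startup
tail -f /horus/back_end/hauscal/logs/gateway.log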
- over