Setting Up the OpenStack Base Services
I. OpenStack Hardware Requirements
The OpenStack architecture needs at least two (host) nodes to run the base services.
Controller
The controller node runs the Identity service, the Image service, the management portions of the Compute service, the management portion of the Networking service, various networking agents, and the Dashboard. It also includes supporting services such as a SQL database, a message queue, and NTP.
Compute
The compute node runs the hypervisor portion of the Compute service, which manages instances. By default, the Compute service uses the KVM hypervisor.
Optionally, the compute node can also run parts of the Block Storage, Object Storage, Orchestration, and Telemetry services.
The compute node requires at least two network interfaces.
Block Storage
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision to instances.
For simplicity, service traffic between the compute nodes and this node uses the management network. A production environment should deploy a separate storage network to improve performance and security.
More than one Block Storage node can be deployed. Each Block Storage node requires at least one network interface.
Object Storage
The optional Object Storage nodes contain the disks that the Object Storage service uses to store accounts, containers, and objects.
For simplicity, service traffic between the compute nodes and these nodes uses the management network. A production environment should deploy a separate storage network to improve performance and security.
This service requires two nodes, and each node requires at least one network interface. More than two Object Storage nodes can be deployed.
Networking
Provider networks
The provider networks option deploys the OpenStack Networking service in the simplest way possible, primarily using layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks onto physical networks and relies on the physical network infrastructure for layer-3 (routing) services.
Self-service networks
The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods.
II. Preparing the Environment for the OpenStack Build
Note: an OpenStack template VM was prepared in advance and exported as openstack.ova.
1. Open openstack.ova in VMware Workstation; once the import succeeds it looks like the figure below.
2. Edit the virtual machine settings
Enable virtualization
Attach the ISO file
3. Clone the virtual machine
Once the settings are done, clone one machine.
Start the virtual machines, then change the cloned machine's IP address, hostname, and hosts entries, roughly as sketched below.
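A minimal sketch of those changes, assuming the interface is eth0, its config file is /etc/sysconfig/network-scripts/ifcfg-eth0, and the clone still carries the template's address 10.0.0.11 (adjust paths and addresses to match your template):

# run on the clone that will become computer1
# point the NIC config at the new management IP
sed -i 's/10.0.0.11/10.0.0.12/' /etc/sysconfig/network-scripts/ifcfg-eth0
# set the new hostname
hostnamectl set-hostname computer1
# add name resolution for both nodes
cat >>/etc/hosts <<EOF
10.0.0.11 controller
10.0.0.12 computer1
EOF
# apply the new address
systemctl restart network

The results on both nodes should then look like the sessions below.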
[root@controller ~]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe73:28a2  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:73:28:a2  txqueuelen 1000  (Ethernet)
        RX packets 173  bytes 16421 (16.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 140  bytes 15393 (15.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 controller
10.0.0.12 computer1
[root@computer1 ~]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.12  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fecb:31af  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:cb:31:af  txqueuelen 1000  (Ethernet)
        RX packets 68  bytes 8151 (7.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 81  bytes 9053 (8.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@computer1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 controller
10.0.0.12 computer1
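Before moving on, it is worth confirming that the two nodes can reach each other by name, for example:

[root@controller ~]# ping -c 2 computer1
[root@computer1 ~]# ping -c 2 controller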
The initial environment preparation is now complete!
III. Building the OpenStack Base Services
1. Configure the yum repositories (on both the controller and compute nodes)
Using the controller node as the example, the steps are as follows.
# Mount the installation CD
[root@controller ~]# mount /dev/cdrom /mnt
mount: /dev/sr0 is write-protected, mounting read-only
[root@controller ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        48G  1.5G   47G   4% /
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           2.0G  8.6M  2.0G   1% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           394M     0  394M   0% /run/user/0
/dev/sr0        4.1G  4.1G     0 100% /mnt
[root@controller ~]# ll /mnt
total 654
-rw-r--r-- 1 root root     14 Dec  5  2016 CentOS_BuildTag
drwxr-xr-x 3 root root   2048 Dec  5  2016 EFI
-rw-r--r-- 1 root root    215 Dec 10  2015 EULA
-rw-r--r-- 1 root root  18009 Dec 10  2015 GPL
drwxr-xr-x 3 root root   2048 Dec  5  2016 images
drwxr-xr-x 2 root root   2048 Dec  5  2016 isolinux
drwxr-xr-x 2 root root   2048 Dec  5  2016 LiveOS
drwxrwxr-x 2 root root 630784 Dec  5  2016 Packages
drwxrwxr-x 2 root root   4096 Dec  5  2016 repodata
-rw-r--r-- 1 root root   1690 Dec 10  2015 RPM-GPG-KEY-CentOS-7
-rw-r--r-- 1 root root   1690 Dec 10  2015 RPM-GPG-KEY-CentOS-Testing-7
-r--r--r-- 1 root root   2883 Dec  5  2016 TRANS.TBL
Upload openstack_rpm.tar.gz to the /opt directory; the session below also copies it to the compute node and unpacks it.
[root@controller ~]# ll /opt
total 241672
-rw-r--r-- 1 root root 247468369 Sep  9 01:08 openstack_rpm.tar.gz
[root@controller ~]# scp /opt/openstack_rpm.tar.gz 10.0.0.12:/opt/
The authenticity of host '10.0.0.12 (10.0.0.12)' can't be established.
ECDSA key fingerprint is SHA256:GYtp4W43k6E/1PUlY9PGAT6HR+oI6j4E4HJF19ZuCHU.
ECDSA key fingerprint is MD5:3f:b3:8b:8e:21:38:6f:51:ba:f4:67:ca:2a:bc:e1:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.12' (ECDSA) to the list of known hosts.
root@10.0.0.12's password:
openstack_rpm.tar.gz                          100%  236MB  17.6MB/s   00:13
[root@controller ~]# cd /opt
[root@controller opt]# tar -zxvf openstack_rpm.tar.gz
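The repo file created in the next step points at file:///opt/repo, so after unpacking it is worth checking that the packages and their metadata actually landed there (the directory name is assumed from the tarball layout; adjust the baseurl below if yours differs):

[root@controller opt]# ls /opt/repo | head
[root@controller opt]# ls /opt/repo/repodata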
Generate the repo configuration file:
[root@controller opt]# echo '[local]
> name=local
> baseurl=file:///mnt
> gpgcheck=0
> [openstack]
> name=openstack
> baseurl=file:///opt/repo
> gpgcheck=0' >/etc/yum.repos.d/local.repo
[root@controller opt]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///mnt
gpgcheck=0
[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0
[root@controller opt]# yum makecache
Loaded plugins: fastestmirror
local                                                    | 3.6 kB  00:00:00
Not using downloaded local/repomd.xml because it is older than what we have:
  Current   : Tue Sep  5 21:43:04 2017
  Downloaded: Mon Dec  5 21:37:18 2016
openstack                                                | 2.9 kB  00:00:00
(1/3): openstack/filelists_db                            | 465 kB  00:00:00
(2/3): openstack/other_db                                | 211 kB  00:00:00
(3/3): openstack/primary_db                              | 398 kB  00:00:00
Determining fastest mirrors
Metadata Cache Created
[root@controller opt]# echo 'mount /dev/cdrom /mnt' >>/etc/rc.local
[root@controller opt]# chmod +x /etc/rc.d/rc.local
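A quick way to confirm that both repositories are usable is to list them; both the local and openstack repos should show up with non-zero package counts:

[root@controller opt]# yum repolist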
2. Time synchronization (controller and compute nodes)
Controller node: the controller acts as the time server, and all other nodes synchronize their clocks against it.
[root@controller opt]# rpm -qa |grep chrony
chrony-3.1-2.el7.centos.x86_64
[root@controller opt]# vim /etc/chrony.conf
[root@controller opt]# grep 'allow' /etc/chrony.conf
allow 10.0.0.0/24
[root@controller opt]# systemctl restart chronyd
[root@controller opt]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-11-12 22:20:11 CST; 11s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 15676 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 15672 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15674 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─15674 /usr/sbin/chronyd

Nov 12 22:20:11 controller systemd[1]: Starting NTP client/server...
Nov 12 22:20:11 controller chronyd[15674]: chronyd version 3.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHA...+DEBUG)
Nov 12 22:20:11 controller chronyd[15674]: Frequency -1.301 +/- 2.107 ppm read from /var/lib/chrony/drift
Nov 12 22:20:11 controller systemd[1]: Started NTP client/server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller opt]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1128/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1128/sshd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           15674/chronyd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           15674/chronyd
udp6       0      0 ::1:323                 :::*                                15674/chronyd
As shown above, the server listens on ports 123 (NTP) and 323 (chrony command socket).
Compute node:
[root@computer1 opt]# rpm -qa |grep chrony
chrony-3.1-2.el7.centos.x86_64
# Edit the /etc/chrony.conf file, comment out all ``server`` entries except one, and point that one at the controller node:
[root@computer1 opt]# vim /etc/chrony.conf
[root@computer1 opt]# grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server 10.0.0.11 iburst
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
[root@computer1 opt]# systemctl restart chronyd
[root@computer1 opt]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-11-12 22:38:21 CST; 4s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 15695 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 15692 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15694 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─15694 /usr/sbin/chronyd

Nov 12 22:38:21 computer1 systemd[1]: Starting NTP client/server...
Nov 12 22:38:21 computer1 chronyd[15694]: chronyd version 3.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHAS...+DEBUG)
Nov 12 22:38:21 computer1 chronyd[15694]: Frequency -6.299 +/- 16.148 ppm read from /var/lib/chrony/drift
Nov 12 22:38:21 computer1 systemd[1]: Started NTP client/server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@computer1 opt]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1144/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1144/sshd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           15694/chronyd
udp6       0      0 ::1:323                 :::*                                15694/chronyd
The client only listens on port 323; it does not serve NTP on port 123.
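To verify that the compute node is actually syncing against the controller rather than just running, query chrony on the compute node; 10.0.0.11 should be listed as its source:

[root@computer1 opt]# chronyc sources -v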
3. OpenStack packages (controller and compute nodes)
Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-packages.html
On CentOS, the ``extras`` repository provides the RPM that enables the OpenStack repository. CentOS enables the ``extras`` repository by default, so you can simply install the package that enables the OpenStack repository:
yum install centos-release-openstack-mitaka
The package can also be downloaded from: https://mirrors.aliyun.com/centos/7.8.2003/extras/x86_64/Packages/
1) Upgrade the packages on the hosts:
[root@controller opt]# yum upgrade
2) Install the OpenStack client:
[root@controller ~]# yum install python-openstackclient -y
3) Install the openstack-selinux package (it automatically manages the security policies of the OpenStack services):
[root@controller ~]# yum install openstack-selinux -y
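If the client installed correctly it should answer on the command line (the exact version string depends on the Mitaka packages shipped in the local repo):

[root@controller ~]# openstack --version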
4. Install the database (install and configure MariaDB)
Most OpenStack services use a SQL database to store information. The database typically runs on the controller node.
1) Install the packages:
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
2) Create and edit /etc/my.cnf.d/openstack.cnf, then complete the following actions.
In the [mysqld] section, set ``bind-address`` to the controller node's management IP address so that the other nodes can reach the database over the management network, and add the following keys to enable some useful options and the UTF-8 character set:
[root@controller ~]# echo '[mysqld]
> bind-address = 10.0.0.11
> default-storage-engine = innodb
> innodb_file_per_table
> max_connections = 4096
> collation-server = utf8_general_ci
> character-set-server = utf8' >/etc/my.cnf.d/openstack.cnf
[root@controller ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
3) Start the database service and configure it to start when the system boots:
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1 database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-11-13 11:55:30 CST; 6s ago
  Process: 66078 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
  Process: 65891 ExecStartPre=/usr/libexec/mysql-prepare-db-dir %n (code=exited, status=0/SUCCESS)
  Process: 65869 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
 Main PID: 66050 (mysqld)
   Status: "Taking your SQL requests now..."
   CGroup: /system.slice/mariadb.service
           └─66050 /usr/libexec/mysqld --basedir=/usr

Nov 13 11:55:26 controller mysql-prepare-db-dir[65891]: 2020-11-13 11:55:26 140308555983040 [Note] /usr/libexec/mysqld (mysqld 1...20 ...
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: To do so, start the server, then issue the following commands:
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: '/usr/bin/mysqladmin' -u root password 'new-password'
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: '/usr/bin/mysqladmin' -u root -h controller password 'new-password'
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: Alternatively you can run:
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: '/usr/bin/mysql_secure_installation'
Nov 13 11:55:29 controller mysql-prepare-db-dir[65891]: which will also give you the option of removing the test
Nov 13 11:55:30 controller mysqld[66050]: 2020-11-13 11:55:29 140150481619136 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaD...050 ...
Nov 13 11:55:30 controller systemd[1]: Started MariaDB 10.1 database server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      31823/sshd
tcp        0      0 10.0.0.11:3306          0.0.0.0:*               LISTEN      66050/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      31823/sshd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           32062/chronyd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           32062/chronyd
udp6       0      0 ::1:323                 :::*                                32062/chronyd
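The captured session does not show the enable step; if the unit is not already enabled (the status output above happens to say it is), run it so the database comes up at boot as this step describes:

[root@controller ~]# systemctl enable mariadb.service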
4) To secure the database service, run the ``mysql_secure_installation`` script. In particular, choose a suitable password for the database root account:
[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
5) Test logging in to the database:
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.05 sec)
5. Message queue (install RabbitMQ)
OpenStack uses a message queue to coordinate operations and status information among the services. The message queue service typically runs on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ. This guide installs the RabbitMQ message queue service because most distributions support it.
1) Install the package:
[root@controller ~]# yum install rabbitmq-server -y
2) Start the message queue service and configure it to start when the system boots:
[root@controller ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl status rabbitmq-server.service
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-11-13 14:33:34 CST; 6s ago
 Main PID: 81485 (beam)
   Status: "Initialized"
   CGroup: /system.slice/rabbitmq-server.service
           ├─81485 /usr/lib64/erlang/erts-7.3.1.2/bin/beam -W w -A 64 -P 1048576 -t 5000000 -stbt db -K true -- -root /usr/lib64/erlan...
           ├─81667 inet_gethost 4
           └─81668 inet_gethost 4

Nov 13 14:33:32 controller systemd[1]: Starting RabbitMQ broker...
Nov 13 14:33:33 controller rabbitmq-server[81485]: RabbitMQ 3.6.5. Copyright (C) 2007-2016 Pivotal Software, Inc.
Nov 13 14:33:33 controller rabbitmq-server[81485]: ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
Nov 13 14:33:33 controller rabbitmq-server[81485]: ##  ##
Nov 13 14:33:33 controller rabbitmq-server[81485]: ##########  Logs: /var/log/rabbitmq/rabbit@controller.log
Nov 13 14:33:33 controller rabbitmq-server[81485]: ######  ##        /var/log/rabbitmq/rabbit@controller-sasl.log
Nov 13 14:33:33 controller rabbitmq-server[81485]: ##########
Nov 13 14:33:33 controller rabbitmq-server[81485]: Starting broker...
Nov 13 14:33:34 controller systemd[1]: Started RabbitMQ broker.
Nov 13 14:33:34 controller rabbitmq-server[81485]: completed with 0 plugins.
[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1128/sshd
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      15660/beam
tcp        0      0 10.0.0.11:3306          0.0.0.0:*               LISTEN      15490/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      1128/sshd
tcp6       0      0 :::5672                 :::*                    LISTEN      15660/beam
udp        0      0 0.0.0.0:123             0.0.0.0:*                           15195/chronyd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           15195/chronyd
udp6       0      0 ::1:323                 :::*                                15195/chronyd
3) Add the openstack user
The user is openstack, and the password is 123456:
[root@controller ~]# rabbitmqctl add_user openstack 123456
Creating user "openstack" ...
4) Permit configuration, write, and read access for the ``openstack`` user:
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
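Both settings can be checked afterwards; the new user should appear in the user list and hold the three ".*" permissions on the default vhost:

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /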
5) Enable the RabbitMQ web interface to monitor RabbitMQ's runtime state:
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@controller... started 6 plugins.
[root@controller ~]# netstat -lntup |grep 15672
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      81485/beam
6) Check the web page
In a browser, open: <controller node IP>:15672
Username: guest, default password: guest
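If the guest account is rejected when you connect from another machine (newer RabbitMQ releases restrict guest logins to loopback), one workaround is to tag the openstack user created above as an administrator and sign in with it instead:

[root@controller ~]# rabbitmqctl set_user_tags openstack administrator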
6. Install the caching service memcached
The Identity service's authentication mechanism uses Memcached to cache tokens. The memcached service runs on the controller node. For production deployments, it is recommended to enable a combination of firewalling, authentication, and encryption to secure it.
1) Install the packages:
[root@controller ~]# yum install memcached python-memcached -y
2) Modify the configuration file so memcached listens on the controller's management IP:
[root@controller ~]# sed -i 's#127.0.0.1#10.0.0.11#g' /etc/sysconfig/memcached
[root@controller ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11,::1"
3) Start the Memcached service and configure it to start when the system boots:
[root@controller ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl status memcached.service
● memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-11-13 14:54:40 CST; 9s ago
 Main PID: 82414 (memcached)
   CGroup: /system.slice/memcached.service
           └─82414 /usr/bin/memcached -p 11211 -u memcached -m 64 -c 1024 -l 10.0.0.11,::1

Nov 13 14:54:40 controller systemd[1]: Started memcached daemon.
[root@controller ~]# netstat -lntup |grep 11211
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      82414/memcached
tcp6       0      0 ::1:11211               :::*                    LISTEN      82414/memcached
udp        0      0 10.0.0.11:11211         0.0.0.0:*                           82414/memcached
udp6       0      0 ::1:11211               :::*                                82414/memcached
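A simple smoke test is to ask the daemon for its stats over the listening address (this assumes the nc utility from nmap-ncat is installed; any memcached client pointed at 10.0.0.11:11211 works just as well):

[root@controller ~]# printf 'stats\nquit\n' | nc 10.0.0.11 11211 | head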