KVM Virtualization
Introduction to Virtualization
Virtualization is the foundation of cloud computing. Put simply, virtualization lets a single physical server run multiple virtual machines. The virtual machines share the physical machine's CPU, memory and I/O hardware, but are logically isolated from one another.
The physical machine is usually called the host, and the virtual machines running on it are called guests.
So how does the host virtualize its hardware resources and hand them to the guests?
This is done by a program called the hypervisor.
Depending on how the hypervisor is implemented and where it sits, hypervisors come in two types:
- Type 1 (bare-metal)
- Type 2 (hosted)
Type 1 (bare-metal):
The hypervisor is installed directly on the physical machine and the virtual machines run on top of it. The hypervisor is usually implemented as a specially tailored Linux system. Xen and VMware ESXi belong to this type.
Type 2 (hosted):
A regular operating system such as RedHat, Ubuntu or Windows is installed on the physical machine first; the hypervisor runs as a module on top of that OS and manages the virtual machines. KVM, VirtualBox and VMware Workstation belong to this type.
In theory:
a Type 1 hypervisor is specially optimized for hardware virtualization and generally performs better;
a Type 2 hypervisor, because it sits on an ordinary operating system, is more flexible — for example it supports nesting, i.e. running KVM inside a KVM guest.
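For instance, whether nested virtualization is currently enabled on an Intel host can be checked through the kvm_intel module parameter (kvm_amd on AMD); a minimal sketch, not part of the deployment below, and the modprobe.d file name is only illustrative:
//Check whether nested virtualization is enabled (Y or 1 means it is on)
cat /sys/module/kvm_intel/parameters/nested
//Enable it persistently, then reload the module
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel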
Introduction to KVM
KVM stands for Kernel-based Virtual Machine — that is, KVM is implemented inside the Linux kernel.
KVM has a kernel module called kvm.ko, which only handles virtualizing the CPU and memory.
I/O virtualization — storage and network devices — is handled by the Linux kernel together with QEMU.
As a hypervisor, KVM itself focuses on just two things: virtual machine scheduling and memory management. I/O devices are left to the Linux kernel and QEMU.
When reading KVM articles online you will constantly run into something called Libvirt.
Libvirt is the management layer for KVM.
In fact, besides KVM, Libvirt can also manage other hypervisors such as Xen and VirtualBox.
Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.
- libvirtd is the service process that receives and handles API requests;
- the API library lets others build higher-level tools on top of Libvirt, such as virt-manager, a graphical KVM management tool;
- virsh is the KVM command-line tool we use all the time.
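A few everyday virsh invocations for reference (the domain name vm1 below is just a placeholder; these commands are not part of the deployment that follows):
//List all defined guests, running or stopped
virsh list --all
//Start, gracefully shut down, or force off a guest (placeholder domain name vm1)
virsh start vm1
virsh shutdown vm1
virsh destroy vm1
//Show basic information about a guest
virsh dominfo vm1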
KVM Deployment
Environment:
OS | IP |
---|---|
RHEL8 | 192.168.186.130 |
KVM Installation
Before deploying, make sure CPU virtualization is enabled on the host. There are two cases:
- if the host is itself a virtual machine, power it off and enable CPU virtualization in the VM's settings;
- if it is a physical machine, enable CPU virtualization (VT-x/AMD-V) in the BIOS.
//Disable the firewall and SELinux
[root@MF ~]# systemctl stop firewalld
[root@MF ~]# setenforce 0
[root@MF ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
//Configure the EPEL repository and install commonly used tools
[root@MF ~]# yum -y install epel-release vim wget net-tools unzip zip gcc gcc-c++
//Verify that the CPU supports KVM: if the output contains vmx (Intel) or svm (AMD), the CPU supports hardware virtualization
[root@MF ~]# egrep -o 'vmx|svm' /proc/cpuinfo
vmx
vmx
vmx
vmx
//Install KVM and related packages
[root@MF ~]# yum -y install qemu-kvm qemu-kvm-common qemu-img virt-manager libvirt python3-libvirt libvirt-client virt-install virt-viewer bridge-utils libguestfs-tools
//In most setups the guests need to be on the same network segment as the other servers in the company,
//so the KVM host's NIC is configured in bridged mode; guests attached to that bridge then sit on the same segment as those other servers.
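(On RHEL 8, where the network is managed by NetworkManager, the same bridge could alternatively be created with nmcli instead of editing ifcfg files; a minimal sketch using this lab's NIC and addressing:)
//Create the bridge and give it the host's static address
nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual \
    ipv4.addresses 192.168.186.130/24 ipv4.gateway 192.168.186.1 ipv4.dns 114.114.114.114
//Enslave the physical NIC to the bridge and activate it
nmcli connection add type bridge-slave ifname ens160 master br0
nmcli connection up br0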
//Here the NIC is ens160, so br0 is used to bridge the ens160 interface
[root@MF ~]# cd /etc/sysconfig/network-scripts/
[root@MF network-scripts]# ls
ifcfg-ens160
[root@MF network-scripts]# cp ifcfg-ens160 ifcfg-br0
[root@MF network-scripts]# cat ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.186.130
NETMASK=255.255.255.0
GATEWAY=192.168.186.1
DNS1=114.114.114.114
[root@MF network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=static
NAME=ens160
DEVICE=ens160
ONBOOT=yes
BRIDGE=br0
[root@MF ~]# reboot
//Restart the network
[root@MF ~]# systemctl restart NetworkManager
[root@MF ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 00:0c:29:d4:5d:1d brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:28:3d:f8:54 brd ff:ff:ff:ff:ff:ff
inet 192.168.30.1/24 brd 192.168.30.255 scope global docker0
valid_lft forever preferred_lft forever
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:d4:5d:1d brd ff:ff:ff:ff:ff:ff
inet 192.168.186.130/24 brd 192.168.186.255 scope global noprefixroute br0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed4:5d1d/64 scope link
valid_lft forever preferred_lft forever
//Start and enable the libvirtd service
[root@MF ~]# systemctl start libvirtd
[root@MF ~]# systemctl enable libvirtd
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /usr/lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /usr/lib/systemd/system/virtlogd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.
//Verify the installation (KVM kernel modules loaded)
[root@MF ~]# lsmod|grep kvm
kvm_intel 245760 0
kvm 745472 1 kvm_intel
irqbypass 16384 1 kvm
//Test and verify the installation
[root@MF ~]# virsh -c qemu:///system list
Id Name State
--------------------
[root@MF ~]# virsh --version
6.0.0
[root@MF ~]# virt-install --version
2.2.1
[root@MF ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@MF ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 May 19 23:21 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm
//Check the bridge information
[root@MF ~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c29d45d1d no ens160
docker0 8000.0242283df854 no
virbr0 8000.525400fb4ccd yes virbr0-nic
//Make sure X11 forwarding is enabled (uncomment these lines if necessary)
[root@MF ~]# vim /etc/ssh/sshd_config
......
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
[root@MF ~]# systemctl restart sshd
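With X11 forwarding enabled, the graphical virt-manager installed above can be launched remotely over SSH, for example (assuming the client machine runs a local X server):
//From a client with a local X server, open virt-manager running on the KVM host
ssh -X root@192.168.186.130 virt-manager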
KVM Web Management Interface Installation
Environment:
OS | IP |
---|---|
RHEL8 | 192.168.186.131 |
The KVM web management interface is provided by the webvirtmgr program.
//Install dependencies
[root@mf1 ~]# yum -y install git python2-pip python3-libvirt python3-libxml2 libxml2 supervisor nginx python2-devel
//Upgrade pip
[root@mf1 ~]# pip2 install --upgrade pip
//Download the webvirtmgr code from GitHub
[root@mf1 ~]# cd /usr/local/src/
[root@mf1 src]# git clone git://github.com/retspen/webvirtmgr.git
Cloning into 'webvirtmgr'...
remote: Enumerating objects: 5614, done.
remote: Total 5614 (delta 0), reused 0 (delta 0), pack-reused 5614
Receiving objects: 100% (5614/5614), 2.97 MiB | 2.16 MiB/s, done.
Resolving deltas: 100% (3606/3606), done.
//Install webvirtmgr's Python requirements (run inside the cloned webvirtmgr directory)
[root@mf1 webvirtmgr]# pip2 install -r requirements.txt
//Check that the sqlite3 module is available (inside the interpreter, run import sqlite3; no error means it is installed)
[root@mf1 ~]# python2
Python 3.6.8 (default, Jan 11 2019, 02:17:16)
[GCC 8.2.1 20180905 (Red Hat 8.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit
Use exit() or Ctrl-D (i.e. EOF) to exit
>>> exit()
//Initialize the database and admin account
[root@mf1 webvirtmgr]# python2 manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes //asked whether to create a superuser account
Username (leave blank to use 'root'): admin //superuser name; leaving it blank defaults to root
Email address: 123@2.com //superuser e-mail address
Password: //superuser password
Password (again): //retype the password
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)
//Copy the web application into the target directory
[root@mf1 webvirtmgr]# mkdir /var/www
mkdir: cannot create directory ‘/var/www’: File exists
[root@mf1 webvirtmgr]# mkdir -p /var/www
[root@mf1 webvirtmgr]# cd ..
[root@mf1 src]# cp -a webvirtmgr /var/www/
[root@mf1 src]# ls /var/www/
cgi-bin html webvirtmgr
[root@mf1 src]# chown -R nginx.nginx /var/www/webvirtmgr/
[root@mf1 src]# ll /var/www/
total 4
drwxr-xr-x 2 root root 6 Nov 4 2020 cgi-bin
drwxr-xr-x 2 root root 6 Nov 4 2020 html
drwxr-xr-x 20 nginx nginx 4096 May 20 01:58 webvirtmgr
//Generate an SSH key pair
[root@mf1 src]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:EJLiHckdrBPNOi6SSBi0y5Yx+LeTNkJCuW2DITMnSSg root@mf1
The key's randomart image is:
+---[RSA 2048]----+
|oo ..Bo. |
|E.+ *.=. |
|XX.o =. |
|=B@ * . |
|+Oo=.o S |
|=oo.oo |
| ...* |
| o o |
| |
+----[SHA256]-----+
//Set up passwordless SSH login to the KVM host
[root@mf1 ~]# ssh-copy-id 192.168.186.130
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.186.130 (192.168.186.130)' can't be established.
ECDSA key fingerprint is SHA256:GkbsEDjf2WVhJrVAqPHtTL2UVCfWCvxdgMFIbTGrwII.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.186.130's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.186.130'"
and check to make sure that only the key(s) you wanted were added.
//Set up SSH port forwarding (8000 for the webvirtmgr web UI, 6080 for the noVNC console)
[root@mf1 ~]# ssh 192.168.186.130 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Last login: Thu May 20 01:25:34 2021 from 192.168.186.1
[root@zabbix ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 100 127.0.0.1:25 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6080 0.0.0.0:*
LISTEN 0 128 127.0.0.1:8000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:10050 0.0.0.0:*
LISTEN 0 128 0.0.0.0:10051 0.0.0.0:*
LISTEN 0 128 0.0.0.0:9000 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 100 [::1]:25 [::]:*
LISTEN 0 128 [::1]:6080 [::]:*
LISTEN 0 128 [::1]:8000 [::]:*
LISTEN 0 80 *:3306 *:*
[root@zabbix ~]# exit
logout
Connection to 192.168.186.130 closed.
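Each -L option has the general form -L <local-addr>:<local-port>:<remote-addr>:<remote-port> and maps a local port to a port on the KVM host; forwarding just the console port, for example, would look like this (illustrative only):
ssh 192.168.186.130 -L localhost:6080:localhost:6080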
//Configure nginx
[root@mf1 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name localhost;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
root html;
index index.html;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
[root@mf1 ~]# cd /etc/nginx/conf.d/
[root@mf1 conf.d]# vim webvirtmgr.conf
server {
listen 80 default_server;
server_name $hostname;
#access_log /var/log/nginx/webvirtmgr_access_log;
location /static/ {
root /var/www/webvirtmgr/webvirtmgr;
expires max;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $remote_addr;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;
client_max_body_size 1024M;
}
}
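Before moving on, the combined nginx configuration can be validated (a quick sanity check that is not in the original transcript):
//Test the nginx configuration for syntax errors
nginx -t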
//Make sure gunicorn binds to port 8000 on this machine
[root@mf1 ~]# cd /var/www/webvirtmgr/
[root@mf1 webvirtmgr]# cd conf/
[root@mf1 conf]# vim gunicorn.conf.py
......
bind = '0.0.0.0:8000' //make sure this binds port 8000 — the upstream port that nginx proxies to, as defined in the nginx config above
backlog = 2048
......
//Restart nginx
[root@mf1 conf]# systemctl restart nginx
[root@mf1 conf]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:9000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
//Configure supervisor
[root@mf1 ~]# vi /etc/supervisord.conf
......
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=nginx
[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx
//Start supervisord and enable it at boot
[root@mf1 ~]# systemctl start supervisord
[root@mf1 ~]# systemctl enable supervisord
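Once supervisord is running, the two webvirtmgr programs and the gunicorn port can be checked, for example (not part of the original output):
//Both programs should show RUNNING
supervisorctl status
//gunicorn should now be listening on port 8000
ss -antl | grep 8000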
//Configure the nginx user: switch to it (its login shell is nologin, hence -s /bin/bash) and generate an SSH key
[root@mf1 ~]# su - nginx -s /bin/bash
[nginx@mf1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa):
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Ie6tcuptxaslcQKH28dInL9sbBOg+S+F1hh2PE26VbU nginx@mf1
The key's randomart image is:
+---[RSA 2048]----+
| .. |
| o . . . . |
| o B..+ . E |
| Xo*=.o |
| +.*BS+ |
| o+O*o |
| .+oX. |
| ..+B.. |
| .o=+o. |
+----[SHA256]-----+
[nginx@mf1 ~]$ touch ~/.ssh/config && echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" >> ~/.ssh/config
[nginx@mf1 ~]$ chmod 0600 ~/.ssh/config
[nginx@mf1 ~]$ ssh-copy-id root@192.168.186.130
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.186.130' (ECDSA) to the list of known hosts.
root@192.168.186.130's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.186.130'"
and check to make sure that only the key(s) you wanted were added.
[nginx@mf1 ~]$ exit
logout
[root@mf1 ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
[root@mf1 ~]# systemctl restart libvirtd
[root@MF ~]# systemctl restart libvirtd
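With the key and the polkit rule in place, the libvirt connection that webvirtmgr will use can be verified from the web host (a hedged check, assuming the libvirt client tools are available there):
//As the nginx user on the web host, confirm passwordless libvirt access to the KVM host
su - nginx -s /bin/bash
virsh -c qemu+ssh://root@192.168.186.130/system list --all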
Accessing the web page reports an error
Solution:
[root@KVM ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 655350; //add this line
[root@KVM ~]# systemctl restart nginx
[root@KVM ~]# systemctl enable --now nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@KVM ~]# vim /etc/security/limits.conf
......
# End of file
* soft nofile 655350
* hard nofile 655350
The page is now accessible
Add a connection
After the steps above, add a disk to the virtual machine to hold guest installations (this must be done while the machine is powered off).
Upload the ISO image to that storage via your remote-connection tool; the target directory is /kvmdata, which was just created and mounted.
[root@KVM kvmdata]# ls
CentOS-7-x86_64-DVD-1708.iso
KVM Network Management
Instance (Virtual Machine) Creation
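The screenshots for this step are not included here; for reference, an equivalent guest could also be created from the command line with virt-install — a minimal sketch assuming the ISO uploaded above and a qcow2 disk under /kvmdata (the name, sizes and other values are only illustrative):
//Create a CentOS 7 guest from the uploaded ISO (illustrative values)
virt-install \
    --name centos7-test \
    --memory 2048 --vcpus 2 \
    --disk path=/kvmdata/centos7-test.qcow2,size=20,format=qcow2 \
    --cdrom /kvmdata/CentOS-7-x86_64-DVD-1708.iso \
    --network bridge=br0 \
    --os-variant centos7.0 \
    --graphics vnc,listen=0.0.0.0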
Insert the installation CD into the virtual machine
Set the password used to access the virtual machine's console from the web UI
Cause: the software version is too old; an older server release is used as the compute node