KVM Virtualization

1. Introduction to Virtualization

Virtualization is the foundation of cloud computing. Simply put, virtualization lets a single physical server run multiple virtual machines. The VMs share the physical machine's CPU, memory, and I/O hardware, but are logically isolated from one another.

The physical machine is usually called the host, and the virtual machines running on it are called guests.

So how does the host virtualize its hardware resources and hand them to the guests?
This is done by a program called a hypervisor.

Depending on how the hypervisor is implemented and where it sits, hypervisors fall into two types:

  • Type 1 (bare-metal)
  • Type 2 (hosted)

Type 1 (bare-metal):
The hypervisor is installed directly on the physical machine, and the virtual machines run on top of it. The hypervisor is typically implemented as a specially tailored Linux system. Xen and VMware ESXi are of this type.

Type 2 (hosted):
A regular operating system such as Red Hat, Ubuntu, or Windows is installed on the physical machine first, and the hypervisor runs as a program on that OS, managing the virtual machines. KVM, VirtualBox, and VMware Workstation are of this type.

In theory:
A bare-metal hypervisor is usually specially optimized for hardware virtualization, so it performs better than a hosted one;
a hosted hypervisor, because it sits on a general-purpose OS, is more flexible. For example, it supports nesting, which means you can run KVM inside a KVM virtual machine.
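Whether nesting is available is controlled by a kernel module parameter; a quick way to check on an Intel host (a sketch; use kvm_amd on AMD):

```shell
# Check whether nested virtualization is enabled for the kvm_intel module.
# The parameter file only exists once the module is loaded.
if [ -f /sys/module/kvm_intel/parameters/nested ]; then
    nested=$(cat /sys/module/kvm_intel/parameters/nested)  # Y or 1 = enabled
else
    nested="kvm_intel not loaded"
fi
echo "$nested"
```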

2. Introduction to KVM

KVM architecture

The KVM kernel module

  • Initializes the CPU hardware and turns on virtualization mode so virtual machines can run.

  • Virtualizes the CPU, memory, interrupt controller, and clock.

QEMU device emulation

  • Emulates NICs, graphics adapters, storage controllers, and disks.

libvirt

  • Provides an API, the libvirtd daemon, and the default command-line tool virsh.

https://www.linux-kvm.org

KVM stands for Kernel-Based Virtual Machine, meaning KVM is implemented on top of the Linux kernel.
KVM has a kernel module, kvm.ko, which only manages virtual CPUs and memory.

I/O virtualization, such as storage and network devices, is implemented by the Linux kernel together with QEMU.

As a hypervisor, KVM itself focuses only on two things: VM scheduling and memory management. I/O devices are delegated to the Linux kernel and QEMU.

Libvirt is the management tool for KVM.

In fact, besides the KVM hypervisor, libvirt can also manage Xen, VirtualBox, and others.

Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.

  • libvirtd is the service process that receives and handles API requests;
  • the API library lets others build higher-level tools on top of libvirt, such as virt-manager, a graphical KVM management tool;
  • virsh is the KVM command-line tool we will use most often
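virsh covers most day-to-day tasks; a guarded sketch of common commands (the domain name vm1 is hypothetical, and the guard keeps the sketch runnable on hosts without libvirt):

```shell
# Everyday virsh usage (sketch; "vm1" is a hypothetical domain name).
if command -v virsh >/dev/null 2>&1; then
    virsh list --all || true    # list all domains, running or stopped
else
    echo "virsh not installed"
fi
# Other common subcommands:
#   virsh start vm1       # boot the domain named vm1
#   virsh shutdown vm1    # graceful ACPI shutdown
#   virsh destroy vm1     # hard power-off
#   virsh dumpxml vm1     # print the domain's XML definition
```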

3. KVM Deployment

Environment:

System      IP
CentOS 7    192.168.32.125

3.1 Installing KVM

Before deploying, make sure CPU virtualization is enabled. There are two cases:

  • If the host is itself a virtual machine, shut it down and enable CPU virtualization in its settings
  • If it is a physical machine, enable CPU virtualization (VT) in the BIOS
#Make sure the firewall and SELinux are disabled
#Configure your yum repos, including the EPEL repo

#Verify the CPU supports KVM: if the output contains vmx (Intel) or svm (AMD), the CPU supports it
[root@localhost ~]# egrep -o 'vmx|svm' /proc/cpuinfo
vmx
vmx
vmx
vmx



#Install the KVM modules, management tools, and libvirt
[root@localhost ~]# yum -y install qemu-kvm qemu-kvm-tools qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils libguestfs-tools
......

//Guest VMs usually need to sit on the same subnet as the company's other \
servers, so configure the KVM server's NIC in bridged mode; the guests can \
then reach that subnet through the bridge
//My NIC is ens33, so br0 is used to bridge ens33

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-ens33  ifdown-ppp       ifup-ib      ifup-Team
ifcfg-lo     ifdown-routes    ifup-ippp    ifup-TeamPort
ifdown       ifdown-sit       ifup-ipv6    ifup-tunnel
......

#Copy the NIC config file, then edit both files
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-br0

#The configs after editing
[root@localhost network-scripts]# cat ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
BRIDGE=br0
NM_CONTROLLED=no
[root@localhost network-scripts]# cat ifcfg-br0 
TYPE="Bridge"
BOOTPROTO="static"
NAME="br0"
DEVICE="br0"
ONBOOT="yes"
IPADDR=192.168.32.125
NETMASK=255.255.255.0
GATEWAY=192.168.32.2
DNS1=114.114.114.114
DNS2=8.8.8.8
NM_CONTROLLED=no

#Restart the network
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
    link/ether 00:0c:29:f6:6c:bc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fef6:6cbc/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d0:ef:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:d0:ef:aa brd ff:ff:ff:ff:ff:ff
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:f6:6c:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.125/24 brd 192.168.32.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:6cbc/64 scope link 
       valid_lft forever preferred_lft forever



#Start the service
[root@localhost ~]# systemctl start libvirtd
[root@localhost ~]# systemctl enable libvirtd


#Confirm the kvm modules are loaded
[root@localhost ~]# lsmod|grep kvm
kvm_intel             188644  0 
kvm                   621480  1 kvm_intel
irqbypass              13503  1 kvm

#Test and verify the installation
[root@localhost ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@localhost ~]# virsh --version
4.5.0
[root@localhost ~]# virt-install --version
1.5.0
[root@localhost ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@localhost ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 Aug  3 23:43 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm


#Check the bridge
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29f66cbc	no		ens33
virbr0		8000.525400d0efaa	yes		virbr0-nic


3.2 Installing the KVM web management UI

The KVM web management UI is provided by the webvirtmgr program.

#Install dependencies
[root@localhost ~]# yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx python-devel

#Upgrade pip
#-i selects the Tsinghua University mirror
[root@localhost ~]# pip install --upgrade pip -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/

#Download the webvirtmgr code from GitHub
[root@localhost ~]# cd /usr/local/src/
[root@localhost src]# git clone git://github.com/retspen/webvirtmgr.git
Cloning into 'webvirtmgr'...
remote: Enumerating objects: 5614, done.
remote: Total 5614 (delta 0), reused 0 (delta 0), pack-reused 5614
Receiving objects: 100% (5614/5614), 2.98 MiB | 432.00 KiB/s, done.
Resolving deltas: 100% (3602/3602), done.


#Install webvirtmgr
[root@localhost src]# ls
webvirtmgr
[root@localhost src]# cd webvirtmgr/
[root@localhost webvirtmgr]# ls
conf                  hostdetail  manage.py         secrets    templates
console               images      MANIFEST.in       serverlog  Vagrantfile
create                instance    networks          servers    vrtManager
deploy                interfaces  README.rst        setup.py   webvirtmgr
dev-requirements.txt  locale      requirements.txt  storages
[root@localhost webvirtmgr]# pip install -r requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/
......

#Check that sqlite3 is available
[root@localhost webvirtmgr]# python
Python 2.7.5 (default, Apr  2 2020, 13:16:51) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> exit()


#Initialize the webvirtmgr account database
[root@localhost webvirtmgr]# python manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes		//whether to create a superuser account
Username (leave blank to use 'root'):     
Email address: 1197691518@qq.com
Password: 
Password (again): 
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)



#Copy the web app to its target directory
[root@localhost webvirtmgr]# mkdir /var/www
[root@localhost webvirtmgr]# cp -r /usr/local/src/webvirtmgr /var/www/
[root@localhost webvirtmgr]# chown -R nginx.nginx /var/www/webvirtmgr/


#Generate an SSH key pair
[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qR1jjvVZc/2IAsNnx+rnvIfQlr/dqCvAlYJvJkBca78 root@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|   . ..          |
|    o  .         |
|   .  o.   .     |
|    ...o..o.   . |
|     . oSoo.=.. .|
|      .B*X.=++ ..|
|      o+E.=o.o. .|
|         ..oo oo.|
|          .+*=o.o|
+----[SHA256]-----+

#webvirtmgr and the KVM service run on the same machine here, so we trust localhost; if KVM is deployed on another machine, use that machine's IP instead
[root@localhost ~]# ssh-copy-id 192.168.32.125
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.32.125 (192.168.32.125)' can't be established.
ECDSA key fingerprint is SHA256:frx90ADy/hsYsjrFg0CGVr1aMVpLECeXnXsTnerpZNg.
ECDSA key fingerprint is MD5:0f:89:af:a1:cb:02:5a:a5:f0:00:50:49:bc:53:97:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.32.125's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.32.125'"
and check to make sure that only the key(s) you wanted were added.


#Set up SSH port forwarding
[root@localhost ~]# ssh 192.168.32.125 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Last login: Mon Aug  3 23:37:13 2020 from 192.168.32.1
[root@localhost ~]# ss -tanl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN      0      100    127.0.0.1:25                      *:*                  
LISTEN      0      128    127.0.0.1:6011                    *:*                  
LISTEN      0      128    127.0.0.1:6080                    *:*                  
LISTEN      0      128    127.0.0.1:8000                    *:*                  
LISTEN      0      128         *:111                     *:*                  
LISTEN      0      5      192.168.122.1:53                      *:*                  
LISTEN      0      128         *:22                      *:*                  
LISTEN      0      100     [::1]:25                   [::]:*                  
LISTEN      0      128     [::1]:6011                 [::]:*                  
LISTEN      0      128     [::1]:6080                 [::]:*                  
LISTEN      0      128     [::1]:8000                 [::]:*                  
LISTEN      0      128      [::]:111                  [::]:*                  
LISTEN      0      128      [::]:22                   [::]:*   



#Configure nginx
[root@localhost ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        server_name  localhost;

        include /etc/nginx/default.d/*.conf;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}



[root@localhost ~]# vim /etc/nginx/conf.d/webvirtmgr.conf
server {
    listen 80 default_server;

    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr;
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M;
    }
}




#Make sure gunicorn binds to port 8000 on this machine
[root@localhost ~]# vim /var/www/webvirtmgr/conf/gunicorn.conf.py
......
bind = '0.0.0.0:8000'     //make sure this is port 8000 on this machine; it is the upstream port the nginx config proxies to
backlog = 2048
......

#Restart nginx
[root@localhost ~]# systemctl restart nginx
[root@localhost ~]# systemctl enable nginx
[root@localhost ~]# ss -tanl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN      0      100    127.0.0.1:25                      *:*                  
LISTEN      0      128    127.0.0.1:6011                    *:*                  
LISTEN      0      128    127.0.0.1:6080                    *:*                  
LISTEN      0      128    127.0.0.1:8000                    *:*                  
LISTEN      0      128         *:111                     *:*                  
LISTEN      0      128         *:80                      *:*                  
LISTEN      0      5      192.168.122.1:53                      *:*                  
LISTEN      0      128         *:22                      *:*                  
LISTEN      0      100     [::1]:25                   [::]:*                  
LISTEN      0      128     [::1]:6011                 [::]:*                  
LISTEN      0      128     [::1]:6080                 [::]:*                  
LISTEN      0      128     [::1]:8000                 [::]:*                  
LISTEN      0      128      [::]:111                  [::]:*                  
LISTEN      0      128      [::]:22                   [::]:* 





#Configure supervisor
[root@localhost ~]# vim /etc/supervisord.conf
#.....content above omitted; append the following at the end of the file
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr.log
redirect_stderr=true
user=nginx

[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx
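If supervisord is already running, the new program sections can also be applied without a full restart; a guarded sketch using supervisorctl:

```shell
# Apply and inspect the new supervisor programs (guarded sketch; runs
# harmlessly on hosts where supervisor is not installed).
if command -v supervisorctl >/dev/null 2>&1; then
    supervisorctl reread || true   # re-parse supervisord.conf after editing
    supervisorctl update || true   # start/stop programs to match the new config
    supervisorctl status || true   # both programs should report RUNNING
else
    echo "supervisorctl not installed"
fi
```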


#Start supervisor and enable it at boot
[root@localhost ~]# systemctl start supervisord
[root@localhost ~]# systemctl enable supervisord
Created symlink from /etc/systemd/system/multi-user.target.wants/supervisord.service to /usr/lib/systemd/system/supervisord.service.
[root@localhost ~]# systemctl status supervisord
● supervisord.service - Process Monitoring and Control Daemon
   Loaded: loaded (/usr/lib/systemd/system/supervisord.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-08-04 00:17:14 EDT; 16s ago
 Main PID: 13109 (supervisord)
   CGroup: /system.slice/supervisord.service
           └─13109 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord....

Aug 04 00:17:14 localhost.localdomain systemd[1]: Starting Process Monitorin...
Aug 04 00:17:14 localhost.localdomain systemd[1]: Started Process Monitoring...
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# ss -tanl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN      0      100    127.0.0.1:25                      *:*                  
LISTEN      0      128    127.0.0.1:6011                    *:*                  
LISTEN      0      128    127.0.0.1:6080                    *:*                  
LISTEN      0      128    127.0.0.1:8000                    *:*                  
LISTEN      0      128         *:111                     *:*                  
LISTEN      0      128         *:80                      *:*                  
LISTEN      0      5      192.168.122.1:53                      *:*                  
LISTEN      0      128         *:22                      *:*                  
LISTEN      0      100     [::1]:25                   [::]:*                  
LISTEN      0      128     [::1]:6011                 [::]:*                  
LISTEN      0      128     [::1]:6080                 [::]:*                  
LISTEN      0      128     [::1]:8000                 [::]:*                  
LISTEN      0      128      [::]:111                  [::]:*                  
LISTEN      0      128      [::]:22                   [::]:*  





#Set up the nginx user
[root@localhost ~]# su - nginx -s /bin/bash
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa): 
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:paBaAvS8BtEV+ZzZfhbtRFNv/KjfFWvCD4I5oJzw1Gw nginx@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
| o. .oo       .. |
|. +. .       o ..|
|.. o  + + . o . +|
| .. ..o* + . o o.|
|  .+o. ES   + ...|
|  .+= + ..oo.o  o|
|  .  =   +o..+ o.|
|          . ..=..|
|              ...|
+----[SHA256]-----+
-bash-4.2$ touch ~/.ssh/config && echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" >> ~/.ssh/config
-bash-4.2$ chmod 0600 ~/.ssh/config

-bash-4.2$ ssh-copy-id root@192.168.32.125
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.32.125' (ECDSA) to the list of known hosts.
root@192.168.32.125's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.32.125'"
and check to make sure that only the key(s) you wanted were added.

-bash-4.2$ exit
logout




[root@localhost ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

[root@localhost ~]# chown -R root.root /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[root@localhost ~]# systemctl restart nginx
[root@localhost ~]# systemctl restart libvirtd
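Before adding the connection in the web UI, it helps to confirm that the nginx user can reach libvirt over SSH the same way webvirtmgr will (a guarded sketch; 192.168.32.125 is this same host):

```shell
# Verify the qemu+ssh connection path that webvirtmgr uses (guarded sketch;
# requires root, the nginx user, and a running libvirtd to succeed).
if command -v virsh >/dev/null 2>&1; then
    su - nginx -s /bin/bash -c \
        'virsh -c qemu+ssh://root@192.168.32.125/system list --all' || true
else
    echo "virsh not installed"
fi
```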


3.3 Management via the web UI

3.3.1 Managing KVM connections

Open the host's IP address in a browser and log in.
The username and password are the ones set when running python manage.py syncdb.

3.3.2 KVM storage management

#Add a disk, format it, and mount it at /storage
[root@localhost ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   60G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0   59G  0 part 
  ├─centos-root 253:0    0   55G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm  [SWAP]
sdb               8:16   0  100G  0 disk 
sr0              11:0    1 10.3G  0 rom  


[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x757b3590.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-209715199, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-209715199, default 209715199): 
Using default value 209715199
Partition 1 of type Linux and of size 100 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# partprobe 
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.

[root@localhost ~]# mkfs.xfs /dev/sdb1 
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=6553536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26214144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost ~]# blkid | grep sdb1
/dev/sdb1: UUID="ba64bed7-99f6-43ad-9d7c-67fdb3b6d1d1" TYPE="xfs" 
[root@localhost ~]# vim /etc/fstab
#append as the last line
UUID="ba64bed7-99f6-43ad-9d7c-67fdb3b6d1d1" /storage xfs defaults 0 0

[root@localhost ~]# mkdir /storage
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   12M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   55G  1.9G   54G   4% /
/dev/sda1                497M  144M  354M  29% /boot
tmpfs                    781M     0  781M   0% /run/user/0
/dev/sdb1                100G   33M  100G   1% /storage


Upload the ISO image file to the storage directory /storage using a remote-connection tool

[root@localhost storage]# ls
rhel-server-7.4-x86_64-dvd.iso

Create the OS installation image
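What the web UI does in this step can also be sketched with virsh: registering /storage as a directory-backed storage pool (the pool name "storage" is my choice, not from the original; guarded sketch):

```shell
# Register /storage as a libvirt directory pool (sketch; the pool name
# "storage" is arbitrary). Guarded so it runs without libvirt installed.
if command -v virsh >/dev/null 2>&1; then
    virsh pool-define-as storage dir --target /storage || true
    virsh pool-build storage     || true   # creates the target dir if missing
    virsh pool-start storage     || true
    virsh pool-autostart storage || true
    virsh pool-list --all        || true   # the new pool should be listed
else
    echo "virsh not installed"
fi
```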


3.3.3 KVM network management

Add a bridged network


Insert the installation CD
Click the console link

Set the password for accessing the VM console from the web UI

Start the virtual machine

3.3.4 Instance management

Create an instance
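For reference, creating an instance through the web UI corresponds roughly to a virt-install invocation like the following sketch (the VM name, memory, disk size, and os-variant are example values to adjust; guarded so the sketch is runnable):

```shell
# Command-line equivalent of creating an instance (sketch; name, sizes,
# ISO path, and os-variant are example values, not from the original).
if command -v virt-install >/dev/null 2>&1; then
    virt-install \
        --name rhel7-test \
        --memory 2048 \
        --vcpus 2 \
        --disk path=/storage/rhel7-test.qcow2,size=20,format=qcow2 \
        --cdrom /storage/rhel-server-7.4-x86_64-dvd.iso \
        --network bridge=br0 \
        --graphics vnc,listen=0.0.0.0 \
        --os-variant rhel7.4 || true
else
    echo "virt-install not installed"
fi
```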

4. Troubleshooting

Case 1

The first time you open the KVM web UI, the page may never load (it keeps spinning) while the terminal keeps reporting errors (too many open files).

In that case, adjust the nginx configuration:

[root@localhost ~]# vim /etc/nginx/nginx.conf
....lines above omitted
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 655350;    //add this line

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
....lines below omitted

Then raise the system's file-descriptor limits:

[root@localhost ~]# vim /etc/security/limits.conf
....lines omitted
# End of file
* soft nofile 655350
* hard nofile 655350
[root@localhost ~]# systemctl restart nginx

That resolves the problem.
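After logging in again (limits.conf applies at session start), the effective limits can be checked; a small sketch:

```shell
# Confirm the new open-file limits actually took effect.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "current shell: soft=$soft hard=$hard"
# Limits of a running nginx worker, if nginx is up (pgrep may find none):
pid=$(pgrep -o nginx || true)
[ -n "$pid" ] && grep 'open files' "/proc/$pid/limits" || true
```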

Case 2

#If the web VNC console cannot connect, the fix is to install noVNC and start a VNC session via novnc_server
[root@localhost ~]# yum -y install novnc
[root@localhost ~]# ll /etc/rc.local
lrwxrwxrwx. 1 root root 13 Aug  6  2018 /etc/rc.local -> rc.d/rc.local
[root@localhost ~]# ll /etc/rc.d/rc.local
-rw-r--r-- 1 root root 513 Mar 11 22:35 /etc/rc.d/rc.local
[root@localhost ~]# chmod +x /etc/rc.d/rc.local
[root@localhost ~]# ll /etc/rc.d/rc.local
-rwxr-xr-x 1 root root 513 Mar 11 22:35 /etc/rc.d/rc.local

[root@localhost ~]# vim /etc/rc.d/rc.local
......
# that this script will be executed during boot.

touch /var/lock/subsys/local
nohup novnc_server 192.168.32.125:5920 &

[root@localhost ~]# . /etc/rc.d/rc.local


Open the page again.

posted @ 2020-08-04 14:05  EverEternity