KVM Virtualization


Introduction to Virtualization

Virtualization is the foundation of cloud computing. Simply put, virtualization lets a single physical server run multiple virtual machines. The virtual machines share the physical host's CPU, memory and I/O hardware, but are logically isolated from one another.

The physical machine is usually called the host, and the virtual machines running on it are called guests.

So how does the host virtualize its own hardware resources and make them available to the guests?
This is done by a program called the hypervisor.

Depending on how the hypervisor is implemented and where it sits, hypervisors come in two types:

  • Type 1 (bare-metal)
  • Type 2 (hosted)

Type 1 (bare-metal):
The hypervisor is installed directly on the physical machine, and the virtual machines run on top of it. The hypervisor is usually implemented as a specially tailored Linux system. Xen and VMware ESXi are of this type.

Type 2 (hosted):
A regular operating system such as Red Hat, Ubuntu or Windows is installed on the physical machine first, and the hypervisor runs as a program on top of that OS to manage the virtual machines. KVM, VirtualBox and VMware Workstation are of this type.

In theory:
a Type 1 hypervisor is specially optimized for hardware virtualization and generally performs better than a Type 2 hypervisor;
a Type 2 hypervisor, because it sits on a regular operating system, is more flexible, for example it supports nested virtualization. Nesting means you can run KVM inside a KVM virtual machine.
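
A quick way to check or enable nesting on a host where the kvm_intel module is already loaded (an illustrative sketch for Intel CPUs; use kvm_amd on AMD):

//check whether nested virtualization is enabled (Y or 1 means enabled)
cat /sys/module/kvm_intel/parameters/nested
//enable it persistently, then reload the module (no guests may be running)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel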


Introduction to KVM

KVM stands for Kernel-Based Virtual Machine, which means KVM is implemented on top of the Linux kernel.
KVM consists of a kernel module, kvm.ko, which only handles virtual CPU and memory management.

I/O virtualization, such as storage and network devices, is handled by the Linux kernel together with QEMU.

As a hypervisor, KVM itself only takes care of virtual machine scheduling and memory management; I/O and peripheral devices are left to the Linux kernel and QEMU.
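
This split is easy to see on a running KVM host (illustrative commands): the kernel side is exposed through /dev/kvm, while every running guest is just an ordinary qemu-kvm process in user space.

//the interface exposed by the kvm kernel module
ls -l /dev/kvm
//each running guest shows up as a qemu-kvm process on the host
ps -ef | grep [q]emu-kvm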

When reading KVM articles online, you will frequently run into something called Libvirt.

Libvirt is the management tool for KVM.

In fact, besides the KVM hypervisor, Libvirt can also manage Xen, VirtualBox and others.

Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.

  • libvirtd is the service daemon that receives and handles API requests;
  • the API library lets others build higher-level tools on top of Libvirt, such as virt-manager, a graphical KVM management tool;
  • virsh is the KVM command-line tool we will use most often, for example:
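
A few common virsh invocations (illustrative only; <name> is a placeholder for a guest defined on the host):

//list all guests, running or shut off
virsh list --all
//show a CPU/memory summary of the host
virsh nodeinfo
//details, start and graceful shutdown of one guest
virsh dominfo <name>
virsh start <name>
virsh shutdown <name>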

KVM Deployment

Environment:

OS        IP
CentOS 7  192.168.100.5

Preparation:

Enable virtualization (make sure CPU virtualization, VT-x/AMD-V, is turned on for the host)


Add a new disk, then partition and format it


[root@kvm ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk 
sr0              11:0    1  4.2G  0 rom  
[root@kvm ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xc5c857bd.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc5c857bd

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

//Re-read the partition table
[root@kvm ~]# partprobe 
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
[root@kvm ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=1310656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

//Check the UUID
[root@kvm ~]# blkid /dev/sdb1
/dev/sdb1: UUID="de254d9d-e18d-4fde-9d92-1ccfab198d4d" TYPE="xfs" 

//Mount automatically at boot
[root@kvm ~]# vim /etc/fstab 
······
#Append the following line at the end
UUID="de254d9d-e18d-4fde-9d92-1ccfab198d4d" /kvmdata xfs defaults 0 0

[root@kvm ~]# mkdir /kvmdata
[root@kvm ~]# mount -a
[root@kvm ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.7G   16G  10% /
devtmpfs                 478M     0  478M   0% /dev
tmpfs                    489M     0  489M   0% /dev/shm
tmpfs                    489M  6.7M  482M   2% /run
tmpfs                    489M     0  489M   0% /sys/fs/cgroup
/dev/sda1               1014M  126M  889M  13% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb1                 20G   33M   20G   1% /kvmdata

KVM Installation

//Disable the firewall and SELinux
[root@kvm ~]# systemctl disable --now firewalld
[root@kvm ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@kvm ~]# setenforce 0

//Configure an online yum repository
[root@kvm ~]# cd /etc/yum.repos.d/
[root@kvm yum.repos.d]# curl -o /etc/yum.repos.d/CentOS7-Base-163.repo 
[root@kvm yum.repos.d]# sed -i 's/\$releasever/7/g' /etc/yum.repos.d/CentOS7-Base-163.repo
[root@kvm yum.repos.d]# sed -i 's/^enabled=.*/enabled=1/g' /etc/yum.repos.d/CentOS7-Base-163.repo
[root@kvm yum.repos.d]# yum -y install epel-release vim wget net-tools unzip zip gcc gcc-c++

//Verify that the CPU supports KVM; if the output contains vmx (Intel) or svm (AMD), the CPU supports it
[root@kvm ~]# egrep -o 'vmx|svm' /proc/cpuinfo
vmx

//Install KVM
[root@kvm ~]# yum -y install qemu-kvm qemu-kvm-tools qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils libguestfs-tools

//Guest networks usually need to be on the same segment as the company's other servers, so the KVM server's NIC is configured in bridged mode; the KVM guests can then sit on the same segment as the other internal servers through the bridge
//My NIC here is ens32, so br0 is used to bridge ens32
[root@kvm ~]# cd /etc/sysconfig/network-scripts/
[root@kvm network-scripts]# ls
ifcfg-ens32  ······
[root@kvm network-scripts]# cp ifcfg-ens32 ifcfg-br0
[root@kvm network-scripts]# vim ifcfg-br0

TYPE=Bridge
DEVICE=br0
BOOTPROTO=static
NAME=br0
ONBOOT=yes
IPADDR=192.168.100.5
PREFIX=24
GATEWAY=192.168.100.254
DNS1=114.114.114.114
[root@kvm network-scripts]# vim ifcfg-ens32

TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NAME=ens32
DEVICE=ens32
BRIDGE=br0

//Restart the network
[root@kvm ~]# systemctl restart network
[root@kvm ~]# reboot
[root@kvm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 00:0c:29:4c:0d:b3 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:4c:0d:b3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.5/24 brd 192.168.100.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::b07a:fcff:fee9:ca39/64 scope link 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:a8:2a:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:a8:2a:31 brd ff:ff:ff:ff:ff:ff

//Start and enable the libvirtd service
[root@kvm ~]# systemctl enable --now libvirtd

//Verify the installation (the kvm modules should be loaded)
[root@kvm ~]# lsmod|grep kvm
kvm_intel             170086  0 
kvm                   566340  1 kvm_intel
irqbypass              13503  1 kvm

//Test and verify the installation
[root@kvm ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
[root@kvm ~]# virsh --version
4.5.0
[root@kvm ~]# virt-install --version
1.5.0
[root@kvm ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@kvm ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 May 22 15:48 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm

//Check the bridge information
[root@kvm ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c294c0db3	no		ens32
virbr0		8000.525400a82a31	yes		virbr0-nic
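
As an optional extra check, libvirt ships a small validator that confirms hardware virtualization, /dev/kvm and cgroup support are all available on the host:

//optional host sanity check
virt-host-validate qemu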

KVM Web Management UI Installation

The KVM web management UI is provided by the webvirtmgr program.

//Install dependencies
[root@kvm ~]# yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx python-devel

//Download the webvirtmgr code from GitHub
[root@kvm ~]# cd /usr/local/src/
[root@kvm src]# git clone git://github.com/retspen/webvirtmgr.git

//Install webvirtmgr
[root@kvm src]# cd webvirtmgr/
[root@kvm webvirtmgr]# pip install -r requirements.txt

//Check that the sqlite3 module is available
[root@kvm webvirtmgr]# python
Python 2.7.5 (default, Nov 16 2020, 22:23:17) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> exit()

//Initialize the account database
[root@kvm webvirtmgr]# python manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes      //create a superuser account?
Username (leave blank to use 'root'): admin          //superuser name; leave blank to default to root
Email address: qinghao_yu@163.com                    //superuser e-mail address
Password:                                            //superuser password
Password (again):                                    //re-enter the password
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)

//Copy the web application to the target directory
[root@kvm webvirtmgr]# mkdir -p /var/www
[root@kvm webvirtmgr]# cp -r /usr/local/src/webvirtmgr /var/www/
[root@kvm webvirtmgr]# chown -R nginx.nginx /var/www/webvirtmgr/

//Generate an SSH key pair
[root@kvm ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:O1fwe3n4DLjpFZJiIfsMHuQZ7f1Ddf5IDiXHTWWfAOM root@kvm
The key's randomart image is:
+---[RSA 2048]----+
|           o..  =|
|         .. ...+o|
|        + +E. +.=|
|       o * = = o.|
|        S + B + .|
|       . B o X =.|
|        + + o @ o|
|         o   = * |
|           .+   o|
+----[SHA256]-----+

//Set up passwordless SSH login
[root@kvm ~]# ssh-copy-id 192.168.100.5
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.100.5 (192.168.100.5)' can't be established.
ECDSA key fingerprint is SHA256:IQzMft7VJBCONnZGbOcS/1mJPTG6It2y+xcUF92wMn4.
ECDSA key fingerprint is MD5:6d:cb:51:01:ee:1b:c6:85:1d:d8:8e:2b:7d:f4:5d:ef.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.100.5's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.100.5'"
and check to make sure that only the key(s) you wanted were added.

//Configure port forwarding
[root@kvm ~]# ssh 192.168.100.5 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Last login: Sat May 22 15:46:41 2021 from 192.168.100.250
[root@kvm ~]# ss -antl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port             
LISTEN      0      128              *:111                          *:*                  
LISTEN      0      5      192.168.122.1:53                         *:*                 
LISTEN      0      128              *:22                           *:*                  
LISTEN      0      100      127.0.0.1:25                           *:*                  
LISTEN      0      128      127.0.0.1:6010                         *:*                  
LISTEN      0      128      127.0.0.1:6080                         *:*                  
LISTEN      0      128      127.0.0.1:8000                         *:*                  
LISTEN      0      128             :::111                         :::*                  
LISTEN      0      128             :::22                          :::*                  
LISTEN      0      100            ::1:25                          :::*                  
LISTEN      0      128            ::1:6010                        :::*                  
LISTEN      0      128            ::1:6080                        :::*                  
LISTEN      0      128            ::1:8000                        :::*         

//Configure nginx
[root@kvm ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 655350;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        server_name  localhost;

        include /etc/nginx/default.d/*.conf;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}

[root@kvm ~]# vim /etc/nginx/conf.d/webvirtmgr.conf
server {
    listen 80 default_server;

    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr;
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $remote_addr;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M;
    }
}

//Make sure bind points at port 8000 on this host
[root@kvm ~]# vim /var/www/webvirtmgr/conf/gunicorn.conf.py
······
bind = '0.0.0.0:8000'     //make sure this binds port 8000 on this host, the upstream port proxied in the nginx config
backlog = 2048
······

//Start and enable nginx
[root@kvm ~]# systemctl enable --now nginx
[root@kvm ~]# ss -antl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port             
LISTEN      0      128              *:111                          *:*                  
LISTEN      0      128              *:80                           *:*                  
LISTEN      0      5      192.168.122.1:53                         *:*                 
LISTEN      0      128              *:22                           *:*                  
LISTEN      0      100      127.0.0.1:25                           *:*                  
LISTEN      0      128      127.0.0.1:6010                         *:*                  
LISTEN      0      128      127.0.0.1:6080                         *:*                  
LISTEN      0      128      127.0.0.1:8000                         *:*                  
LISTEN      0      128             :::111                         :::*                  
LISTEN      0      128             :::22                          :::*                  
LISTEN      0      100            ::1:25                          :::*                  
LISTEN      0      128            ::1:6010                        :::*                  
LISTEN      0      128            ::1:6080                        :::*                  
LISTEN      0      128            ::1:8000                        :::*   

//Configure supervisor
[root@kvm ~]# vim /etc/supervisord.conf
······
#Append the following at the end
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=nginx

[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx

//Start supervisor and enable it at boot
[root@kvm ~]# systemctl enable --now supervisord

//Set up the nginx user
[root@kvm ~]# su - nginx -s /bin/bash
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa): 
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:NNvrln8kfjziWD+akDAPZ7tZvEDcVvEE9e0C0Uw4MOU nginx@kvm
The key's randomart image is:
+---[RSA 2048]----+
|          ooo=+oo|
|           oo.o+o|
|        o   E.. +|
|       . = . o . |
|        S * o . .|
|         O *. .. |
|          Bo++   |
|         .oOo+*  |
|         .=o*=.o |
+----[SHA256]-----+
-bash-4.2$ touch ~/.ssh/config && echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" >> ~/.ssh/config
-bash-4.2$ chmod 0600 ~/.ssh/config

-bash-4.2$ ssh-copy-id root@192.168.100.5
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.100.5' (ECDSA) to the list of known hosts.
root@192.168.100.5's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.100.5'"
and check to make sure that only the key(s) you wanted were added.

-bash-4.2$ exit
logout

[root@kvm ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

[root@kvm ~]# chown -R root.root /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[root@kvm ~]# systemctl restart nginx
[root@kvm ~]# systemctl restart libvirtd

Troubleshooting

The first time you access KVM through the web UI it may never load (the page keeps spinning) while the command line keeps reporting errors (too many open files).

In that case nginx needs to be adjusted:

[root@kvm ~]# vim /etc/nginx/nginx.conf
······
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 655350;    //add this line

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
······

[root@kvm ~]# systemctl restart nginx

Then raise the system open-file limits:

[root@kvm ~]# vim /etc/security/limits.conf
······
# End of file
* soft nofile 655350
* hard nofile 655350
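
The new limits only apply to sessions opened after the change; after logging in again you can verify them (illustrative):

//should now report 655350
ulimit -n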

KVM Web UI Management

Access KVM from a browser via its IP address; in my case that is http://192.168.100.5

The superuser name and password here are the ones set when the account database was initialized.


KVM Connection Management

Create an SSH connection:


KVM Storage Management

Create storage (done through the web UI; a virsh equivalent is sketched below):

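For reference, the same directory-backed storage pool can also be created from the command line with virsh (a sketch; the pool name kvmdata is arbitrary):

//define a dir-type pool on /kvmdata, build and start it, and enable autostart
virsh pool-define-as kvmdata dir --target /kvmdata
virsh pool-build kvmdata
virsh pool-start kvmdata
virsh pool-autostart kvmdata
virsh pool-list --all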

Upload the ISO image to the storage directory /kvmdata with your remote file-transfer tool, for example:
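
With scp from whichever machine holds the ISO (the source path here is hypothetical):

scp /path/to/rhel-8.2-x86_64-dvd.iso root@192.168.100.5:/kvmdata/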

[root@kvm ~]# cd /kvmdata/
[root@kvm kvmdata]# ls
rhel-8.2-x86_64-dvd.iso

Check in the web UI that the ISO image shows up


Create the OS installation image


If it was added successfully, it will appear in the list


KVM Network Management

Add a bridged network (through the web UI; a virsh equivalent is sketched below)

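If you prefer the command line, an equivalent libvirt network that simply maps onto the existing br0 bridge can be defined with virsh (a sketch; the libvirt network is also named br0 here for convenience):

//define a libvirt network backed by the existing bridge br0
cat > /tmp/net-br0.xml <<EOF
<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
EOF
virsh net-define /tmp/net-br0.xml
virsh net-start br0
virsh net-autostart br0
virsh net-list --all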

Instance Management

Instance (virtual machine) creation (a rough virt-install equivalent is sketched below)

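The web form ends up doing much the same work as virt-install; a rough command-line equivalent looks like this (a sketch: the guest name, memory, vCPU count and disk size are made-up values; the ISO is the one uploaded to /kvmdata):

virt-install \
  --name rhel8-test \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/kvmdata/rhel8-test.qcow2,size=10,format=qcow2 \
  --cdrom /kvmdata/rhel-8.2-x86_64-dvd.iso \
  --network bridge=br0 \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole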

Insert the installation CD into the virtual machine


Set the password used to access the virtual machine console from the web UI


Start the virtual machine (this can also be done with virsh, as sketched below)

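Starting and checking the guest can also be done with virsh (illustrative; rhel8-test is the hypothetical guest name from the sketch above):

virsh start rhel8-test
virsh list
//which VNC display the guest console is listening on
virsh vncdisplay rhel8-test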

Virtual machine installation


Installing the virtual machine is just the normal OS installation procedure, so it is not repeated here.

Troubleshooting

After the web UI configuration is complete, an error page may appear when opening the VM console.


The fix is to install noVNC and start a VNC proxy with novnc_server:

[root@kvm ~]# yum -y install novnc
[root@kvm ~]# ll /etc/rc.local
lrwxrwxrwx. 1 root root 13 May 22 15:10 /etc/rc.local -> rc.d/rc.local
[root@kvm ~]# ll /etc/rc.d/rc.local
-rw-r--r--. 1 root root 473 Aug  5  2017 /etc/rc.d/rc.local
[root@kvm ~]# chmod +x /etc/rc.d/rc.local
[root@kvm ~]# ll /etc/rc.d/rc.local
-rwxr-xr-x. 1 root root 473 Aug  5  2017 /etc/rc.d/rc.local

[root@kvm ~]# vim /etc/rc.d/rc.local
······
touch /var/lock/subsys/local
#Append the following at the end
nohup novnc_server 192.168.100.5:5920 &

[root@kvm ~]# . /etc/rc.d/rc.local

After the steps above, the page can be accessed normally again.

