
h1. KVM

This is for a vanilla CentOS 8 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@.

Good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD
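
Alternatively, @lscpu@ reports the virtualization extension in a single line (a quick sanity check, assuming an x86 host):

<pre><code class="bash">
lscpu | grep Virtualization
# expected output, depending on the CPU vendor:
#   Virtualization:      VT-x     (Intel)
#   Virtualization:      AMD-V    (AMD)
</code></pre>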

h2. install KVM on CentOS minimal

<pre><code class="bash">
yum install qemu-kvm libvirt libguestfs-tools virt-install
systemctl enable libvirtd && systemctl start libvirtd
</code></pre>

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel machines:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

on AMD machines:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h2. setup networking

add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

reload the settings and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>
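
As an alternative to the ifcfg files above, the bridge can also be created with NetworkManager's @nmcli@ (see the sources listed at the top). A minimal sketch, assuming the uplink interface is @em1@ and DHCP addressing:

<pre><code class="bash">
# create the bridge and enslave the physical NIC (names are examples)
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname em1 master br0
# optional: disable STP if the bridge only connects local VMs
nmcli connection modify br0 bridge.stp no
# activate the bridge (the old em1 connection may have to be brought down first)
nmcli connection up br0
</code></pre>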

h2. can KVM and VirtualBox coexist

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>
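
The converted image can then be registered as a KVM guest with @virt-install --import@. A sketch, assuming the image has been copied to the default storage pool; name, memory, vCPU count and the bridge are placeholders to adapt:

<pre><code class="bash">
virt-install --name Virt_Image --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/Virt_Image.qcow,format=qcow2 \
  --import --os-variant generic --network bridge=br0 --graphics vnc
</code></pre>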

h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service
<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@
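
A typical configuration could look like this (illustrative values; the file itself documents all available options):

<pre>
# start guests that were running when the host went down
ON_BOOT=start
# cleanly shut guests down at host shutdown instead of suspending them
ON_SHUTDOWN=shutdown
# seconds to wait for a guest to shut down before moving on
SHUTDOWN_TIMEOUT=300
</pre>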

h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

add your user to the @libvirt@ group so VMs can be managed without root
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (right near the top). As an added step you could also rename the disk file to reflect the new name and update it in the <devices> section under <source file='/path/to/name_of_vm.img'>.
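
The relevant parts of the domain XML look roughly like this (illustrative sketch, names and paths are placeholders):

<pre>
<domain type='kvm'>
  <name>new_name_of_vm</name>
  ...
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/new_name_of_vm.img'/>
    </disk>
  </devices>
</domain>
</pre>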

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now just import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start up your VM either in the Virtual Machine Manager or with virsh using:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or UUID of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>
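
The updated network definition (including the new <host> entry) can be verified with the standard virsh command:

<pre><code class="bash">
virsh net-dumpxml default
</code></pre>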

h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules that forward incoming connections on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "GNAME" (as defined in the libvirt domain XML), b) the IP address of the guest "IP", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below; the libvirt wiki also links an "advanced" version that can handle several different machines and port mappings, as well as a python script that does a similar thing and is easy to understand and configure:
<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>

4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, and rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)
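
Once the guest is up, a quick way to confirm that the hook has installed its rules is to list the NAT PREROUTING chain (assuming iptables is the active firewall backend on the host):

<pre><code class="bash">
iptables -t nat -L PREROUTING -n --line-numbers | grep DNAT
</code></pre>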

h2. wrapper script for virsh

<pre>
#! /bin/sh
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d script (vboxcontrol was an init.d script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        sleep 20
        RETVAL=$?
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        sleep 10
        RETVAL=$?
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Forcefully stop the given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" ==  "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
438
  start-vm)
439
    log_action_msg "Starting VM: $2 ..."
440
    $VIRSH start $2
441
    RETVAL=$?
442
  ;;
443
  stop-vm)
444
    log_action_msg "Stopping VM: $2 ..."
445
    $VIRSH shutdown $2 --mode acpi
446
    RETVAL=$?
447
  ;;
448
  poweroff-vm)
449
    log_action_msg "Powering off VM: $2 ..."
450
    $VIRSH destroy $2
451
    RETVAL=$?
452
  ;;
453
  export-vm)
454
    # NOTE: this exports the given VM
455
    log_action_msg "Exporting VM: $2 ..."
456
    rm -f /tmp/kvm_control_VM_isrunning
457
    VM_isrunning=0
458
    JKE_DATE=$(date +%F)
459
    list_running_domains | while read RVM; do
460
      #echo "VM list -$VM- : -$RVM-"
461
      if [[ "$2" ==  "$RVM" ]]; then
462
        #echo "VM found running..."
463
        touch /tmp/kvm_control_VM_isrunning
464
        VM_isrunning=1
465
        #echo "$VM_isrunning"
466
        break
467
      fi
468
      #echo "$VM_isrunning"
469
    done
470
471
    # took me a while to figure out that the above 'while'-loop 
472
    # runs in a separate process ... let's use the 'file' as a 
473
    # kind of interprocess-communication :-) JKE 20161229
474
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
475
      VM_isrunning=1
476
    fi
477
    rm -f /tmp/kvm_control_VM_isrunning
478
479
    #echo "VM status $VM_isrunning"
480
    if [ "$VM_isrunning" -ne 0 ]; then
481
      log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
482
    else
483
      log_action_msg "Exporting VM: $VM ..."
484
      VM_BAK_DIR="$2"_"$JKE_DATE"
485
      mkdir "$VM_BAK_DIR"
486
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
487
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
488
        echo "$2 hdd=$VMHDD"
489
        if [ -f "$VMHDD" ]; then
490
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
491
        else
492
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
493
        fi
494
      done
495
    fi
496
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down, at most $TIMEOUT s"
    echo "             5th step: destroy all remaining VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>
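
afterwards the machine can be started and, if desired, marked for autostart (both are standard virsh commands):

<pre><code class="shell">
virsh start mach-name
virsh autostart mach-name
</code></pre>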