Setup kvm » History » Version 3

Jeremias Keihsler, 29.09.2024 14:52

h1. KVM

This is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@.

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

Good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:

* https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
* https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
* https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
* https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

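
If the command prints nothing, the CPU lacks hardware virtualization support (or it is disabled in the BIOS/UEFI). As a quick sketch, the same check can be reduced to a count of capable logical CPUs:

<pre><code class="bash">
# prints the number of logical CPUs exposing VT-x/AMD-V flags; 0 means no support
grep -cE 'svm|vmx' /proc/cpuinfo
</code></pre>
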
h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

Verify that the following kernel modules are loaded:
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

On Intel hardware:

<pre><code class="bash">
kvm
kvm_intel
</code></pre>

On AMD hardware:

<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h2. setup networking

Add the bridge to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@:

<pre>
...
BRIDGE=br0
</pre>

Add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@:

<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

Enable network forwarding in @/etc/sysctl.conf@:

<pre>
...
net.ipv4.ip_forward = 1
</pre>

Reload the settings and restart NetworkManager:

<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>

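
Note: on CentOS 9 the legacy ifcfg files under @/etc/sysconfig/network-scripts/@ are deprecated in favour of NetworkManager keyfiles. A rough nmcli equivalent of the bridge setup above (a sketch only; the NIC name @em1@ and DHCP addressing are assumptions, adjust to your setup):

<pre><code class="bash">
# create the bridge and use DHCP for IPv4/IPv6
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection modify br0 ipv4.method auto ipv6.method auto
# attach the physical NIC as a bridge port
nmcli connection add type bridge-slave ifname em1 master br0
# bring the bridge up
nmcli connection up br0
</code></pre>
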
h2. can KVM and VirtualBox coexist?

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

Some people also had to remove @/etc/X11/xorg.conf@.

h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

Convert the RAW file to qcow2:

<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

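
The conversion can be sanity-checked with qemu-img (file name as in the example above):

<pre><code class="bash">
# report format, virtual size and allocated size of the converted image
qemu-img info Virt_Image.qcow
</code></pre>
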
h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

All settings are to be done in @/etc/sysconfig/libvirt-guests@.

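
As an illustrative sketch (the values below are assumptions, not recommendations; the comments in the shipped file list all options), a configuration that cleanly shuts guests down instead of suspending them might look like:

<pre>
ON_BOOT=ignore
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
PARALLEL_SHUTDOWN=2
</pre>
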
h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

Allow a user to manage VMs by adding them to the libvirt group:

<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (near the top). Optionally, also rename the disk file to reflect the new name and update its path in the <devices> section under <source file='/path/to/name_of_vm.img'/>.

Save the XML file and undefine the old VM name:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

That should be it! You can now start the VM either in Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set a fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or UUID of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. Applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario is adding new static MAC+IP mappings to the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately causes all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

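
To check that the new entry is actually present in the network definition:

<pre><code class="bash">
# dump the current XML of the 'default' network; the <dhcp> section should list the host entry
virsh net-dumpxml default
</code></pre>
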
h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service running on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules, forwarding connections arriving at a given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GUEST_NAME, GUEST_IP, HOST_IP and the port variables appropriately for your setup):

Use the basic script below, or adapt one of the more advanced shell or python versions linked from the libvirt Networking wiki page (improvements are welcome):

<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>

4) Make the script executable: chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded. This changes the order of the above FORWARD rule relative to a reject rule for the network, rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt 0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d-script (vboxcontrol was an init.d-script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
  echo "$1"
}

log_action_msg() {
  echo "$1"
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
  RUNNING_MACHINES=`list_running_domains | wc -l`
  if [ $RUNNING_MACHINES != 0 ]; then
    log_action_msg "machines running: "$RUNNING_MACHINES
    sleep 2

    wait_for_closing_machines
  fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left-over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" == "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print $2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" == "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print $2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down, up to $TIMEOUT s"
    echo "             5th step: destroy all still-running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0

</pre>

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

Set ownership:

<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

Define the machine:

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>
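
After defining, the machine can be started and, if desired, set to start automatically with the host (machine name as in the example above):

<pre><code class="shell">
virsh start mach-name
virsh autostart mach-name
</code></pre>
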