h1. KVM

this is for a vanilla CentOS 9 minimal installation,
largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD
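
Alternatively, @lscpu@ reports the same capability in a more readable form (a quick check; the output shown in the comment is illustrative):

<pre><code class="bash">
lscpu | grep Virtualization
# Virtualization:      VT-x   (Intel)   or   AMD-V   (AMD)
</code></pre>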

h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>
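
To have the modular libvirt daemons come back after a reboot, the sockets can also be enabled persistently (a sketch following the same pattern as above):

<pre><code class="bash">
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl enable virt${drv}d{,-ro,-admin}.socket; done
</code></pre>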

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

Intel:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

AMD:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>
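
All checks should report PASS (a few warnings, e.g. about IOMMU or secure guest support, can be acceptable depending on the hardware). The output looks roughly like this (illustrative excerpt):

<pre><code class="bash">
  QEMU: Checking for hardware virtualization        : PASS
  QEMU: Checking if device /dev/kvm exists          : PASS
  QEMU: Checking if device /dev/vhost-net exists    : PASS
</code></pre>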

h2. setup networking

add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

apply the sysctl setting and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>
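
On CentOS 9 the ifcfg format is deprecated in favour of NetworkManager keyfiles; the same bridge can also be created directly with @nmcli@ (a sketch, assuming the physical interface is @em1@ and DHCP addressing):

<pre><code class="bash">
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname em1 master br0
nmcli connection modify br0 ipv4.method auto ipv6.method auto
nmcli connection up br0
</code></pre>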

h2. can KVM and Virtualbox coexist

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert Virtualbox to KVM

h3. uninstall Virtualbox-guest-additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert image from Virtualbox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>
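
The result can be checked before the image is used (a quick sanity check; file names follow the example above):

<pre><code class="bash">
qemu-img info Virt_Image.qcow
</code></pre>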

h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@
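
For example, a typical configuration that starts guests at boot and shuts them down cleanly with the host could look like this (values are illustrative, see the comments in the shipped file for all options):

<pre>
ON_BOOT=start
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
PARALLEL_SHUTDOWN=2
</pre>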

h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

allow a regular user to manage VMs by adding it to the @libvirt@ group
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (it should be right near the top). As an added step you could also rename the disk file to reflect the new name and change its path in the <devices> section under <source file='/path/to/name_of_vm.img'>, for example as sketched below.
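
The relevant parts of the XML look roughly like this (an illustrative fragment; names and paths are examples):

<pre>
<name>new_name_of_vm</name>
...
<devices>
  <disk type='file' device='disk'>
    <source file='/var/lib/libvirt/images/new_name_of_vm.img'/>
    ...
  </disk>
</devices>
</pre>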

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now just import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start up your VM either in the Virtual Machine Manager or with virsh using:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set a fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:

<pre><code class="bash">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. Applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>
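
The result can be checked by dumping the network definition (the new <host> line should show up inside the <dhcp> element):

<pre><code class="bash">
virsh net-dumpxml default
</code></pre>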

h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules to forward incoming connections arriving on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest, as in the previous section. See the address section of the libvirt network XML documentation for details and an example.)
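
Such a <host> entry in the network definition looks roughly like this (an illustrative fragment matching the net-update example above):

<pre>
<dhcp>
  <range start='192.168.122.2' end='192.168.122.254'/>
  <host mac='52:54:00:00:00:01' name='bob' ip='192.168.122.45'/>
</dhcp>
</pre>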

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below; the libvirt wiki page linked above also has an "advanced" version that can handle several different machines and port mappings, as well as a python script which does a similar thing and is easy to understand and configure.
<pre>
226
#!/bin/bash
227
# used some from advanced script to have multiple ports: use an equal number of guest and host ports
228
229
# Update the following variables to fit your setup
230
Guest_name=GUEST_NAME
231
Guest_ipaddr=GUEST_IP
232
Host_ipaddr=HOST_IP
233
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
234
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )
235
236
length=$(( ${#Host_port[@]} - 1 ))
237
if [ "${1}" = "${Guest_name}" ]; then
238
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
239
       for i in `seq 0 $length`; do
240
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
241
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
242
       done
243
   fi
244
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
245
       for i in `seq 0 $length`; do
246
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
247
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
248
       done
249
   fi
250
fi
251
</pre>

4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13 - if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#! /bin/sh
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d-script (vboxcontrol was an init.d-script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        sleep 20
        RETVAL=$?
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        sleep 10
        RETVAL=$?
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export  | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" ==  "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" ==  "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down, or $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>
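
Saved e.g. as @/usr/local/bin/kvm_control.sh@ and made executable, the script is then called like this (a usage sketch; path and VM name are examples):

<pre><code class="bash">
chmod +x /usr/local/bin/kvm_control.sh
kvm_control.sh status
kvm_control.sh start-vm my-vm
kvm_control.sh export          # run inside the directory that should receive the backups
</code></pre>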

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>
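
afterwards the machine can be started (and, if desired, marked for autostart with the host):

<pre><code class="shell">
virsh start mach-name
virsh autostart mach-name
</code></pre>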