Setup kvm » History » Version 2

Jeremias Keihsler, 29.09.2024 13:35

h1. KVM

this is for a vanilla CentOS 9 minimal installation,
largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

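the flag check can also be wrapped in a small helper; a sketch (the function name @cpu_virt_type@ is made up here) that classifies the flags line:

<pre><code class="bash">
# classify a cpuinfo flags string: vmx -> Intel, svm -> AMD, neither -> none
cpu_virt_type() {
  case "$1" in
    *vmx*) echo Intel ;;
    *svm*) echo AMD ;;
    *)     echo none ;;
  esac
}

# feed it the first flags line of this machine
cpu_virt_type "$(grep -m1 '^flags' /proc/cpuinfo)"
</code></pre>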
h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
systemctl enable libvirtd && systemctl start libvirtd
</code></pre>

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel machines:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

on AMD machines:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h2. setup networking

add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

read the file and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>

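on CentOS 9 the @ifcfg@ files are deprecated in favour of NetworkManager; the same bridge can be built with @nmcli@ directly (a sketch, assuming the uplink NIC is @em1@ as above, adjust names to your system):

<pre><code class="bash">
# create the bridge and let it get its address via DHCP
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection modify br0 ipv4.method auto ipv6.method auto
# enslave the physical NIC to the bridge
nmcli connection add type ethernet ifname em1 con-name br0-port-em1 master br0
# bring the bridge up
nmcli connection up br0
</code></pre>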
h2. can KVM and Virtualbox coexist

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert Virtualbox to KVM

h3. uninstall Virtualbox-guest-additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert image from Virtualbox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

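the detour via a RAW image can usually be skipped, since @qemu-img@ can read VDI images directly (a sketch; assumes your qemu-img build includes the @vdi@ driver):

<pre><code class="bash">
# one-step conversion from VDI to qcow2
qemu-img convert -f vdi -O qcow2 Virt_Image.vdi Virt_Image.qcow2
</code></pre>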
h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@

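as a sketch, typical settings in @/etc/sysconfig/libvirt-guests@ could look like this (example values, not mandatory defaults; see the comments in the shipped file for all options):

<pre>
# start guests that were running at last shutdown
ON_BOOT=start
# try a clean guest shutdown when the host goes down ...
ON_SHUTDOWN=shutdown
# ... and wait at most this many seconds per guest
SHUTDOWN_TIMEOUT=300
</pre>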
h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

add your user to the libvirt group:
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (should be right near the top). As an added step you could also rename the disk file to reflect the new name and change it in the <devices> section under <source file='/path/to/name_of_vm.img'>.

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now just import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start up your VM either in the Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

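the @<name>@ edit itself can be scripted; a minimal sketch (file paths and VM names are made-up examples) covering only the XML rewrite, the @virsh undefine@/@define@ steps stay as above:

<pre><code class="bash">
# rewrite the <name> element in a dumped domain XML (example names/paths)
old=name_of_vm
new=new_name_of_vm
cat > /tmp/${old}.xml <<EOF
<domain type='kvm'>
  <name>${old}</name>
</domain>
EOF
sed "s|<name>${old}</name>|<name>${new}</name>|" /tmp/${old}.xml > /tmp/${new}.xml
grep '<name>' /tmp/${new}.xml
</code></pre>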
h2. set fixed IP-adr via DHCP (default-network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. Applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately causes all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

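the same mechanism can remove an entry again; a sketch for deleting the mapping added above (same section syntax as for @add@):

<pre><code class="bash">
virsh net-update default delete ip-dhcp-host \
          "<host mac='52:54:00:00:00:01'/>" \
          --live --config
</code></pre>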
h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can setup libvirt's "hook" script for qemu to install the necessary iptables rules, forwarding incoming connections on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the libvirt network XML documentation address section for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GUEST_NAME, GUEST_IP, HOST_IP and the port lists appropriately for your setup):

Use the basic script below, or see the "advanced" version on the libvirt wiki, which can handle several different machines and port mappings (improvements are welcome).

<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>
4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, and rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d-script (vboxcontrol was an init.d-script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left-over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" == "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" == "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine, to give them a chance to shut down on their own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait until all machines are shut down, or $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

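the 'file as interprocess-communication' workaround in the export branches exists because each part of a shell pipeline runs in its own subshell, so variables set inside @... | while read@ are lost afterwards. A minimal, self-contained illustration (nothing virsh-specific):

<pre><code class="bash">
flag=0
echo running | while read vm; do flag=1; done
echo "after pipeline: flag=$flag"    # still 0 in bash: the loop ran in a subshell

flag=0
while read vm; do flag=1; done <<EOF
running
EOF
echo "after here-doc: flag=$flag"    # 1: the loop ran in the current shell
</code></pre>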
h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>