h1. KVM

this is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

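If the command prints nothing, hardware virtualization is either unsupported or disabled in the BIOS/UEFI. A quick count variant (a sketch using plain @grep@; the number equals the logical CPUs advertising the flag):

<pre><code class="bash">
# 0 means no VT-x/AMD-V is available to the OS
grep -c -E 'svm|vmx' /proc/cpuinfo
</code></pre>
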
h2. install KVM on CentOS minimal

<pre><code class="bash">
yum install qemu-kvm libvirt libguestfs-tools virt-install
systemctl enable libvirtd && systemctl start libvirtd
</code></pre>

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>
on AMD:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

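Optionally, the libvirt tools ship a host check that flags missing pieces (a quick sanity check; output will vary with the hardware):

<pre><code class="bash">
virt-host-validate qemu
</code></pre>
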
h2. setup networking

add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

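On CentOS 9 the ifcfg scripts are deprecated in favour of NetworkManager, so the same bridge can also be created with nmcli (a sketch, assuming the physical NIC is @em1@ and DHCP on the bridge):

<pre><code class="bash">
# create the bridge and enslave the physical NIC
nmcli connection add type bridge con-name br0 ifname br0
nmcli connection add type bridge-slave con-name br0-port-em1 ifname em1 master br0
# DHCP on the bridge itself; use ipv4.method manual plus ipv4.addresses/gateway/dns for static
nmcli connection modify br0 ipv4.method auto ipv6.method auto
nmcli connection up br0
</code></pre>
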
enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

reload the settings and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>

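Afterwards the bridge should be up and carry the host's IP address (a quick check; both commands are standard iproute2 tools):

<pre><code class="bash">
ip addr show br0
bridge link
</code></pre>
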
h2. can KVM and VirtualBox coexist?

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert the image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

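The converted disk can then be registered as a new KVM guest, e.g. with virt-install in import mode (a sketch; name, memory, vCPU count and OS variant are placeholders to adjust):

<pre><code class="bash">
virt-install --name virt_image --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/Virt_Image.qcow,format=qcow2,bus=virtio \
  --import --os-variant generic --network bridge=br0 \
  --graphics vnc --noautoconsole
</code></pre>
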
h2. automatic start/shutdown of VMs with the host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are done in @/etc/sysconfig/libvirt-guests@

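A typical configuration could look like this (a sketch; the option names come from the stock @libvirt-guests@ file, the values are assumptions to adjust):

<pre>
ON_BOOT=start
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
PARALLEL_SHUTDOWN=5
</pre>
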
h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

allow a regular user to manage VMs by adding them to the @libvirt@ group
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (right near the top). As an added step you could also rename the disk file to reflect the new name and change it in the <devices> section under <source file='/path/to/name_of_vm.img'>.

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start the VM either in the Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or UUID of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

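To check that the new entry ended up in the network definition (standard virsh command):

<pre><code class="bash">
virsh net-dumpxml default
</code></pre>
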
h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules, forwarding connections arriving on a given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup). Use the basic script below; the libvirt wiki also offers an "advanced" version that can handle several different machines and port mappings, as well as a python script which does a similar thing.

<pre><code class="bash">
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</code></pre>
4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13 - if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt 0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt; however, this hook script should still be considered a hack.)

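After starting the guest you can verify that the DNAT rule added by the hook is in place (a quick check with standard iptables options):

<pre><code class="bash">
iptables -t nat -L PREROUTING -n --line-numbers
</code></pre>
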
h2. wrapper script for virsh

<pre><code class="bash">
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer used as an init.d script (vboxcontrol was an init.d script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export  | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" ==  "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" ==  "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down or $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</code></pre>

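Typical invocations, assuming the script above is saved as @kvm_control.sh@ and made executable:

<pre><code class="bash">
chmod +x kvm_control.sh
./kvm_control.sh status
./kvm_control.sh start-vm mach-name
./kvm_control.sh export
</code></pre>
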
h2. restore 'exported' kvm-machines

<pre><code class="bash">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

copy the image-files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="bash">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by
<pre><code class="bash">
virsh define mach-name.xml
</code></pre>

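The restored guest can then be started and, if desired, marked to start automatically with the host (both are standard virsh commands):

<pre><code class="bash">
virsh start mach-name
virsh autostart mach-name
</code></pre>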