Setup kvm » Historie » Version 1

Jeremias Keihsler, 26.08.2021 10:49

h1. KVM

This guide is for a vanilla CentOS 7 minimal installation and is largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@.

Good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel (VT-x)
svm ... AMD (AMD-V)

If the command prints nothing, the CPU does not support hardware virtualization.
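A tiny helper (my addition, not from the PDF) makes the distinction explicit by printing which flag, if either, the CPU advertises:

<pre><code class="bash">
# report which hardware-virtualization flag the CPU advertises, if any
if grep -qw vmx /proc/cpuinfo; then
  echo "Intel VT-x supported"
elif grep -qw svm /proc/cpuinfo; then
  echo "AMD-V supported"
else
  echo "no hardware virtualization support detected"
fi
</code></pre>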
h2. install KVM on CentOS minimal

<pre><code class="bash">
yum install qemu-kvm libvirt libguestfs-tools virt-install
systemctl enable libvirtd && systemctl start libvirtd
</code></pre>

verify that the following kernel modules are loaded:
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

On Intel hardware the output should include:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

On AMD hardware:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>
h2. setup networking

add the following line to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

reload the settings and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>
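To verify that forwarding is actually active, the kernel's view can be read back directly (a quick check, not in the original PDF):

<pre><code class="bash">
# prints 1 when IPv4 forwarding is enabled, 0 otherwise
cat /proc/sys/net/ipv4/ip_forward
</code></pre>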

h2. can KVM and VirtualBox coexist?

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people also had to remove @/etc/X11/xorg.conf@

h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

h2. automatic start/shutdown of VMs with host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service
<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are made in @/etc/sysconfig/libvirt-guests@
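For example, a configuration that starts guests at host boot and shuts them down cleanly with the host might look like this (the option names are the ones documented in the stock file; the values are only an illustration):

<pre>
# URIs to check for running guests
URIS=default
# action taken on host boot: start guests that were running at shutdown
ON_BOOT=start
# action taken on host shutdown: cleanly shut the guests down
ON_SHUTDOWN=shutdown
# number of seconds to wait for a guest to shut down before giving up
SHUTDOWN_TIMEOUT=300
</pre>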

h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

add your user to the @libvirt@ group so you can manage VMs without root:
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (it should be right near the top). As an added step you could also rename the disk file to reflect the new name and update it in the <devices> section under <source file='/path/to/name_of_vm.img'/>.
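The name change can also be scripted; the following one-liner (my addition, with @new_name_of_vm@ as a placeholder) rewrites the <name> element in place with sed:

<pre><code class="bash">
# replace the VM name in the dumped XML; adjust both names to your setup
sed -i 's|<name>name_of_vm</name>|<name>new_name_of_vm</name>|' name_of_vm.xml
</code></pre>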

Save the XML file and undefine the old VM name:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

That should be it! You can now start the VM either in the Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or UUID of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and restarted, which unfortunately causes all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules, forwarding incoming connections on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To ensure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)
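As a sketch (the MAC and addresses are examples only), such a <dhcp> element inside the network definition could look like:

<pre><code class="xml">
<dhcp>
  <range start='192.168.122.2' end='192.168.122.254'/>
  <host mac='52:54:00:00:00:01' name='bob' ip='192.168.122.45'/>
</dhcp>
</code></pre>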

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below, or see the "advanced" version linked on the libvirt wiki page, which can handle several different machines and port mappings; there is also a python script there which does a similar thing and is easy to understand and configure (improvements are welcome):
<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in $(seq 0 $length); do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in $(seq 0 $length); do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>
4) Make the script executable: @chmod +x /etc/libvirt/hooks/qemu@

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, and rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer an init.d script (vboxcontrol was an init.d script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
  echo "$1"
}

log_action_msg() {
  echo "$1"
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2 }'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
  RUNNING_MACHINES=$(list_running_domains | wc -l)
  if [ "$RUNNING_MACHINES" != 0 ]; then
    log_action_msg "machines running: $RUNNING_MACHINES"
    sleep 2
    wait_for_closing_machines
  fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then
      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this first stops the listed VMs in the given order,
    # then all remaining running VMs.
    # After the defined timeout all remaining VMs are killed.

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up leftover domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to kill the given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" == "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            break
          fi
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print $2 }' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/$(basename $VMHDD)
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      if [[ "$2" == "$RVM" ]]; then
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        break
      fi
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print $2 }' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/$(basename $VMHDD)
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo "  $VM ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait until all machines are shut down, at most $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

h2. restore 'exported' kvm-machines

<pre><code class="bash">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="bash">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="bash">
virsh define mach-name.xml
</code></pre>