Setup kvm » History » Version 9

Jeremias Keihsler, 11.12.2024 12:33

h1. KVM

this is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
dnf update
dnf install wget vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

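If the command prints nothing, the CPU (or the firmware setting) does not support hardware virtualization. A small sketch that just counts the capable CPU threads, assuming a Linux host with @/proc/cpuinfo@:

<pre><code class="bash">
# count the CPU threads advertising Intel VT-x (vmx) or AMD-V (svm);
# 0 means hardware virtualization is unavailable or disabled in firmware
count=$(grep -c -E 'svm|vmx' /proc/cpuinfo || true)
echo "virtualization-capable CPU threads: $count"
</code></pre>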
h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

verify that the following kernel modules are loaded

<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel hardware:

<pre><code class="bash">
kvm
kvm_intel
</code></pre>

on AMD hardware:

<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h3. change from libvirtd to modular libvirt daemons

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization#proc_enabling-modular-libvirt-daemons_assembly_optimizing-libvirt-daemons

stop @libvirtd@ and its sockets

<pre><code class="shell">
systemctl stop libvirtd.service
systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

disable @libvirtd@

<pre><code class="shell">
systemctl disable libvirtd.service
systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

enable the modular @libvirt@ daemons

<pre><code class="shell">
for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket; systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

start the sockets for the modular daemons

<pre><code class="shell">
for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>
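The @{,-ro,-admin}@ in these loops is plain bash brace expansion; each daemon name expands to its main, read-only, and admin socket units. A quick sketch, assuming bash:

<pre><code class="shell">
# brace expansion happens before variable expansion, so one pattern
# yields the three socket units of a modular libvirt daemon
drv=qemu
echo virt${drv}d{,-ro,-admin}.socket
# -> virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
</code></pre>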

check whether the @libvirtd-tls.socket@ service is enabled on your system

<pre><code class="shell">
grep listen_tls /etc/libvirt/libvirtd.conf
</code></pre>

if @listen_tls = 0@ then

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin}.socket
systemctl start virtproxyd{,-ro,-admin}.socket
</code></pre>

else if @listen_tls = 1@ then

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin,-tls}.socket
systemctl start virtproxyd{,-ro,-admin,-tls}.socket
</code></pre>

h3. Verification

<pre><code class="shell">
virsh uri
</code></pre>

should result in @qemu:///system@

verify that your host is using the @virtqemud@ modular daemon

<pre><code class="shell">
systemctl is-active virtqemud.service
</code></pre>

should result in @active@

h2. setup networking

configuration via @/etc/sysconfig/network-scripts/@ is deprecated

use @nmtui@ to set up the bridge

see also https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-a-network-bridge_configuring-and-managing-networking#proc_configuring-a-network-bridge-by-using-nmtui_configuring-a-network-bridge
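For a headless box the same bridge can be created non-interactively with @nmcli@ instead of @nmtui@. A sketch only: the uplink interface name @enp1s0@ and all addresses below are assumptions to be replaced with your own values.

<pre><code class="shell">
# create the bridge and enslave the physical NIC to it
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type ethernet ifname enp1s0 master br0
# give the bridge a static address (example values)
nmcli connection modify br0 ipv4.method manual \
  ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
# activate the bridge profile
nmcli connection up br0
</code></pre>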

h2. can KVM and VirtualBox coexist

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert the image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2:

<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>
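The converted disk still has to be wrapped in a domain definition; one way is @virt-install --import@. A sketch under assumptions: the VM name, memory, and vCPU count are placeholders, and the image is presumed to have been copied to the default storage directory first.

<pre><code class="bash">
# define and boot a domain around the existing qcow2 disk (no installer run)
virt-install --name Virt_Image --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/Virt_Image.qcow,format=qcow2 \
  --import --os-variant generic --network network=default --noautoconsole
</code></pre>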

h2. automatic start/shutdown of VMs with host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@
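A typical setup, e.g. restart guests on boot and give them five minutes to shut down cleanly, could look like the fragment below. The values are examples, not recommendations; the shipped file documents all options in its comments.

<pre><code class="shell">
# /etc/sysconfig/libvirt-guests (example values)
ON_BOOT=start          # start guests that were running at host shutdown
ON_SHUTDOWN=shutdown   # cleanly shut guests down instead of suspending them
SHUTDOWN_TIMEOUT=300   # seconds to wait before a guest is considered hung
PARALLEL_SHUTDOWN=2    # number of guests to shut down concurrently
</code></pre>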

h2. install virt-manager

<pre><code class="bash">
dnf install virt-manager
</code></pre>

allow a regular user to manage VMs by adding them to the @libvirt@ group:

<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (right near the top). As an added step you could also rename the disk file to match the new name, and change its path in the <devices> section under <source file='/path/to/name_of_vm.img'/>.

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start up your VM either in the Virtual Machine Manager or with virsh using:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>
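For orientation, the parts of the dumped XML that the rename touches look roughly like this (names and paths are placeholders, unrelated elements elided):

<pre><code class="xml">
<domain type='kvm'>
  <name>name_of_vm</name>
  ...
  <devices>
    <disk type='file' device='disk'>
      <source file='/path/to/name_of_vm.img'/>
      ...
    </disk>
    ...
  </devices>
</domain>
</code></pre>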

h2. set fixed IP-adr via DHCP (default-network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. Applying modifications to the network

Sometimes, one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>
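Afterwards, @virsh net-dumpxml default@ should show the new mapping inside the network's <dhcp> element, roughly like this (addresses as in the example above, other elements elided):

<pre><code class="xml">
<network>
  <name>default</name>
  ...
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='52:54:00:00:00:01' name='bob' ip='192.168.122.45'/>
    </dhcp>
  </ip>
</network>
</code></pre>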

h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules to forward connections arriving on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To ensure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below; an "advanced" version that can handle several different machines and port mappings, as well as a similar python script, are linked from the libvirt wiki page (improvements are welcome):
<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>
4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, changing the order of the above FORWARD rule relative to a reject rule for the network, and rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d-script (vboxcontrol was an init.d-script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
  echo "$1"
}

log_action_msg() {
  echo "$1"
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
  RUNNING_MACHINES=`list_running_domains | wc -l`
  if [ $RUNNING_MACHINES != 0 ]; then
    log_action_msg "machines running: $RUNNING_MACHINES"
    sleep 2
    wait_for_closing_machines
  fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then
      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout is reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break the while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up leftover domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to kill the given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" == "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            break
          fi
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print $2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" == "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        break
      fi
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print $2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo "  $VM ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait until all machines are shut down, or $TIMEOUT s at most"
    echo "             5th step: destroy all remaining VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
  ;;
esac

exit 0
</pre>
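The flag-file trick in the export branches works around a bash property worth knowing: each stage of a pipeline runs in a subshell, so variables set inside a piped @while read@ loop never reach the parent shell. A minimal demonstration:

<pre><code class="shell">
# the loop body runs in a subshell, so its assignment is lost afterwards
VM_isrunning=0
printf 'vm1\nvm2\n' | while read RVM; do VM_isrunning=1; done
echo "after pipeline: $VM_isrunning"
# -> after pipeline: 0
</code></pre>

This is exactly why the script touches @/tmp/kvm_control_VM_isrunning@ inside the loop and re-reads it outside.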

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

set ownership

<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>