
h1. KVM

This is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

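A quick variant (not in the original notes): count the CPU threads that advertise the flag; a result of @0@ means hardware virtualization is unsupported or disabled in the firmware.

<pre><code class="bash">
grep -Ec 'svm|vmx' /proc/cpuinfo
</code></pre>
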
h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

verify that the following kernel modules are loaded

<pre><code class="bash">
lsmod | grep kvm
</code></pre>

expected on Intel hosts:

<pre><code class="bash">
kvm
kvm_intel
</code></pre>

expected on AMD hosts:

<pre><code class="bash">
kvm
kvm_amd
</code></pre>

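If @lsmod@ shows no @kvm@ modules at all, they can usually be loaded manually (a minimal sketch, assuming an Intel host; use @kvm_amd@ on AMD):

<pre><code class="bash">
modprobe kvm_intel
</code></pre>
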
h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h3. change from libvirtd to modular libvirt daemons

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization#proc_enabling-modular-libvirt-daemons_assembly_optimizing-libvirt-daemons

stop @libvirtd@ and its sockets

<pre><code class="shell">
systemctl stop libvirtd.service
systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

disable @libvirtd@

<pre><code class="shell">
systemctl disable libvirtd.service
systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

enable the modular @libvirt@ daemons

<pre><code class="shell">
for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket; systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

start the sockets for the modular daemons

<pre><code class="shell">
for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

check whether TLS was enabled for @libvirtd@, i.e. whether the @libvirtd-tls.socket@ was in use on your system

<pre><code class="shell">
grep listen_tls /etc/libvirt/libvirtd.conf
</code></pre>

if @listen_tls = 0@:

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin}.socket
systemctl start virtproxyd{,-ro,-admin}.socket
</code></pre>

if @listen_tls = 1@:

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin,-tls}.socket
systemctl start virtproxyd{,-ro,-admin,-tls}.socket
</code></pre>

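As a quick sanity check (not part of the Red Hat procedure), confirm that the modular QEMU driver socket is active and that @virsh@ connects:

<pre><code class="shell">
systemctl is-active virtqemud.socket
virsh uri
</code></pre>
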
h2. setup networking

add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@
<pre>
...
BRIDGE=br0
</pre>

add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

enable network forwarding in @/etc/sysctl.conf@
<pre>
...
net.ipv4.ip_forward = 1
</pre>

reload the sysctl settings and restart NetworkManager
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>

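The br0 sources listed at the top describe the same bridge setup with @nmcli@; a minimal sketch (the interface name @em1@ and DHCP addressing are assumptions):

<pre><code class="bash">
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname em1 master br0
nmcli connection modify br0 ipv4.method auto ipv6.method auto
nmcli connection up br0
</code></pre>
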
h2. can KVM and VirtualBox coexist?

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

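Rather than deleting @/etc/X11/xorg.conf@ outright, it is safer to move it aside (a suggestion, not from the original notes):

<pre><code class="bash">
mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
</code></pre>
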
h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

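To actually run the converted image under KVM, a new guest has to be defined around it; a sketch with @virt-install@ (name, memory, vCPU count, OS variant and storage path are assumptions):

<pre><code class="bash">
virt-install --name Virt_Image --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/Virt_Image.qcow,format=qcow2 \
  --import --os-variant generic --network bridge=br0 --graphics vnc
</code></pre>
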
h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@

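A hedged example of typical settings in @/etc/sysconfig/libvirt-guests@ (the values are assumptions; guests are not auto-started here because the wrapper script below handles starting):

<pre>
ON_BOOT=ignore
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
PARALLEL_SHUTDOWN=2
</pre>
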
h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

add your user to the @libvirt@ group (replace @username@ accordingly)

<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (it should be right near the top). As an added step you could also rename the disk file to reflect the new name and change it in the <devices> section under <source file='/path/to/name_of_vm.img'/>.

Save the XML file and undefine the old VM name:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

That should be it! You can now start the VM either in the Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

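Note (not from the linked article): newer libvirt versions can rename a shut-off guest in one step, which avoids the manual XML editing (@new_name_of_vm@ is a placeholder):

<pre><code class="bash">
virsh domrename name_of_vm new_name_of_vm
</code></pre>
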
h2. set fixed IP-adr via DHCP (default-network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:

<pre><code class="bash">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

Applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately causes all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

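To double-check that the mapping landed in the live network definition (not part of the original wiki text):

<pre><code class="bash">
virsh net-dumpxml default
</code></pre>
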
h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules that forward connections arriving on a given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To ensure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below, or see the "advanced" version on the libvirt wiki, which can handle several different machines and port mappings, or the Python script linked there, which does a similar thing and is easy to understand and configure (improvements are welcome):

<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>

4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

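Steps 4) to 6) as commands (a sketch; GNAME is the guest-name placeholder from above, and with the modular daemons set up earlier the QEMU hook is handled by @virtqemud@ rather than @libvirtd@):

<pre><code class="bash">
chmod +x /etc/libvirt/hooks/qemu
systemctl restart virtqemud.service
virsh start GNAME
</code></pre>
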
(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, and hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer used as an init.d script (vboxcontrol was an init.d script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this first stops the listed VMs in the given order
    # and then all remaining running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout is reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" ==  "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" ==  "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait until all machines are shut down or $TIMEOUT s have passed"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@ (see the example below)

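for example (directory and image file names are assumptions based on the export layout above):

<pre><code class="shell">
cp mach-name_202x-01-01/*.qcow2 /var/lib/libvirt/images/
</code></pre>
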
set ownership
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>
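
finally verify that the machine is defined and start it (a suggested check, not in the original notes):

<pre><code class="shell">
virsh list --all
virsh start mach-name
</code></pre>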