h1. KVM

this is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/

br0 sources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD
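
@lscpu@ shows the same capability in a more readable form (a quick alternative check; it reports @VT-x@ on Intel and @AMD-V@ on AMD):

<pre><code class="bash">
lscpu | grep Virtualization
</code></pre>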

h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel hardware:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

on AMD hardware:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h3. change from libvirtd to modular libvirt daemons

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization#proc_enabling-modular-libvirt-daemons_assembly_optimizing-libvirt-daemons

stop @libvirtd@ and its sockets

<pre><code class="shell">
systemctl stop libvirtd.service
systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

disable @libvirtd@

<pre><code class="shell">
systemctl disable libvirtd.service
systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

enable modular @libvirt@ daemons

<pre><code class="shell">
for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket; systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

start sockets for modular daemons

<pre><code class="shell">
for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

check whether TLS (the @libvirtd-tls.socket@ service) is enabled on your system

<pre><code class="shell">
grep listen_tls /etc/libvirt/libvirtd.conf
</code></pre>

if @listen_tls = 0@ then

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin}.socket
systemctl start virtproxyd{,-ro,-admin}.socket
</code></pre>

if @listen_tls = 1@ then

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin,-tls}.socket
systemctl start virtproxyd{,-ro,-admin,-tls}.socket
</code></pre>

Verification

<pre><code class="shell">
virsh uri
</code></pre>

should result in @qemu:///system@

verify that your host is using the @virtqemud@ modular daemon

<pre><code class="shell">
systemctl is-active virtqemud.service
</code></pre>

should result in @active@

h2. setup networking

the configuration via @/etc/sysconfig/network-scripts/@ is deprecated

use @nmtui@ to set up the bridge

see also https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-a-network-bridge_configuring-and-managing-networking#proc_configuring-a-network-bridge-by-using-nmtui_configuring-a-network-bridge
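
alternatively the bridge can be created with @nmcli@; a minimal sketch, assuming the uplink NIC is @enp1s0@ and a static address (adjust interface name and addressing to your setup):

<pre><code class="shell">
# bridge device plus its configuration profile
nmcli connection add type bridge con-name br0 ifname br0
# enslave the physical NIC to the bridge
nmcli connection add type ethernet slave-type bridge con-name br0-port1 ifname enp1s0 master br0
# example static addressing (assumption; use your own network)
nmcli connection modify br0 ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.method manual
nmcli connection up br0
</code></pre>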

h2. can KVM and VirtualBox coexist

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

some people had to remove @/etc/X11/xorg.conf@

h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

convert the RAW file to qcow2
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>
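
the converted image can then be defined as a new KVM guest; a minimal sketch using @virt-install --import@ (name, memory, vcpus, os-variant and network are assumptions, adjust to the actual guest):

<pre><code class="bash">
# move the image into the default storage pool
mv Virt_Image.qcow /var/lib/libvirt/images/
# define and boot the guest from the existing disk (no installation step)
virt-install --name virt_image --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/Virt_Image.qcow,format=qcow2 \
  --import --os-variant generic --network bridge=br0 --noautoconsole
</code></pre>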

h2. automatic start/shutdown of VMs with Host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

all settings are to be done in @/etc/sysconfig/libvirt-guests@
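
for example, to have the guests that were running at host shutdown started again at boot, and to attempt a clean shutdown with a timeout instead of suspending them (the values are assumptions; the file itself documents all options):

<pre><code class="bash">
# /etc/sysconfig/libvirt-guests
ON_BOOT=start          # start guests that were running at shutdown
ON_SHUTDOWN=shutdown   # shut guests down instead of suspending them
SHUTDOWN_TIMEOUT=300   # seconds to wait before giving up
PARALLEL_SHUTDOWN=2    # number of guests shut down concurrently
</code></pre>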

h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

add your user to the @libvirt@ group

<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>
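
the new group membership takes effect at the next login and can be checked with:

<pre><code class="bash">
id username
</code></pre>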

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (should be right near the top). As an added step you could also rename the disk file to reflect the change of the name and change the name of it in the <devices> section under <source file='/path/to/name_of_vm.img'>.

Save the XML file and undefine the old VM name with:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now just import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

And that should be it! You can now start up your VM either in the Virtual Machine Manager or with virsh using:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

h2. set fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. Applying modifications to the network

Sometimes, one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

h3. virsh net-update

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules to forward incoming connections to the host on any given port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static ip information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the libvirt network XML documentation address section for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below; the libvirt wiki also links an "advanced" version that can handle several different machines and port mappings, as well as a python script that does a similar thing and is easy to understand and configure (improvements are welcome):

<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in `seq 0 $length`; do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>

4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13 - if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt; however, this hook script should still be considered a hack.)
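
to check that the hook actually installed the forwarding rules once the guest is up, the current rules can be listed (a quick sanity check; rule order matters, see the note above):

<pre><code class="bash">
# DNAT rule added by the hook in the nat table
iptables -t nat -L PREROUTING -n -v | grep DNAT
# matching ACCEPT rule in the FORWARD chain
iptables -L FORWARD -n -v | grep ACCEPT
</code></pre>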

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d-script (vboxcontrol was an init.d-script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
echo $1
}

log_action_msg() {
echo $1
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2}'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
RUNNING_MACHINES=`list_running_domains | wc -l`
if [ $RUNNING_MACHINES != 0 ]; then
  log_action_msg "machines running: "$RUNNING_MACHINES
  sleep 2

  wait_for_closing_machines
fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then

      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this stops first the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shutdown each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shutdown each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown $VM --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up left over domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Try to shutdown given domain.
      $VIRSH destroy $DOMAIN
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export  | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          #echo "VM list -$VM- : -$RVM-"
          if [[ "$VM" ==  "$RVM" ]]; then
            #echo "VM found running..."
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            #echo "$VM_isrunning"
            break
          fi
          #echo "$VM_isrunning"
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        #echo "VM status $VM_isrunning"
        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start $2
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown $2 --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy $2
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      #echo "VM list -$2- : -$RVM-"
      if [[ "$2" ==  "$RVM" ]]; then
        #echo "VM found running..."
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        #echo "$VM_isrunning"
        break
      fi
      #echo "$VM_isrunning"
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    #echo "VM status $VM_isrunning"
    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD`
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down, or $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image-files to @/var/lib/libvirt/images/@

set ownership
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>
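
on SELinux-enabled hosts it may also be necessary to restore the file contexts:

<pre><code class="shell">
restorecon -R /var/lib/libvirt/images/
</code></pre>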

define the machine by

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>
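
and start it with:

<pre><code class="shell">
virsh start mach-name
</code></pre>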