Setup kvm » Historie » Version 7

Jeremias Keihsler, 29.09.2024 15:10

h1. KVM

This guide is for a vanilla CentOS 9 minimal installation, largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@.

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/

Good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/
@br0@ sources:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/
https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html

h2. basic updates/installs

<pre><code class="bash">
yum update
yum install wget
yum install vim
reboot
</code></pre>

h2. check machine capability

<pre><code class="bash">
grep -E 'svm|vmx' /proc/cpuinfo
</code></pre>

vmx ... Intel
svm ... AMD

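The flag-to-vendor mapping can be expressed as a tiny helper for scripting; @cpu_vendor@ below is a hypothetical illustration, not part of the original setup:

```shell
#!/usr/bin/env bash
# cpu_vendor: classify a CPU flags string (hypothetical helper for illustration).
# vmx -> Intel VT-x, svm -> AMD-V.
cpu_vendor() {
  case " $1 " in
    *" vmx "*) echo "Intel VT-x" ;;
    *" svm "*) echo "AMD-V" ;;
    *)         echo "no hardware virtualization" ;;
  esac
}

# Typical use on a live system:
#   cpu_vendor "$(grep -m1 '^flags' /proc/cpuinfo)"
cpu_vendor "fpu vme de pse msr pae vmx"   # Intel VT-x
cpu_vendor "fpu vme de pse msr pae svm"   # AMD-V
```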
h2. install KVM on CentOS minimal

<pre><code class="bash">
dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

Verify that the following kernel modules are loaded:
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

On Intel hosts:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

On AMD hosts:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

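The socket-start loop leans on bash brace expansion: @virt${drv}d{,-ro,-admin}.socket@ expands to three socket names per driver. A harmless way to see what the loop will actually run is to prefix the command with @echo@:

```shell
#!/usr/bin/env bash
# Print, rather than execute, the systemctl invocations of the socket-start loop.
for drv in qemu network; do
  echo systemctl start virt${drv}d{,-ro,-admin}.socket
done
# -> systemctl start virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
# -> systemctl start virtnetworkd.socket virtnetworkd-ro.socket virtnetworkd-admin.socket
```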
h3. Verification

<pre><code class="bash">
virt-host-validate
</code></pre>

h3. change from libvirtd to modular libvirt daemons

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization#proc_enabling-modular-libvirt-daemons_assembly_optimizing-libvirt-daemons

Stop @libvirtd@ and its sockets:

<pre><code class="shell">
systemctl stop libvirtd.service
systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

Disable @libvirtd@:

<pre><code class="shell">
systemctl disable libvirtd.service
systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket
</code></pre>

Enable the modular @libvirt@ daemons:

<pre><code class="shell">
for drv in qemu interface network nodedev nwfilter secret storage; do
  systemctl unmask virt${drv}d.service
  systemctl unmask virt${drv}d{,-ro,-admin}.socket
  systemctl enable virt${drv}d.service
  systemctl enable virt${drv}d{,-ro,-admin}.socket
done
</code></pre>

Start the sockets for the modular daemons:

<pre><code class="shell">
for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
</code></pre>

Check whether the @libvirtd-tls.socket@ service was enabled on your system:

<pre><code class="shell">
grep listen_tls /etc/libvirt/libvirtd.conf
</code></pre>

If @listen_tls = 0@:

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin}.socket
systemctl start virtproxyd{,-ro,-admin}.socket
</code></pre>

If @listen_tls = 1@:

<pre><code class="shell">
systemctl unmask virtproxyd.service
systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket
systemctl enable virtproxyd.service
systemctl enable virtproxyd{,-ro,-admin,-tls}.socket
systemctl start virtproxyd{,-ro,-admin,-tls}.socket
</code></pre>

h3. Verification

<pre><code class="shell">
virsh uri
</code></pre>

should result in @qemu:///system@

Verify that your host is using the @virtqemud@ modular daemon:

<pre><code class="shell">
systemctl is-active virtqemud.service
</code></pre>

should result in @active@

h2. setup networking

Add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@:
<pre>
...
BRIDGE=br0
</pre>

Add the following new file @/etc/sysconfig/network-scripts/ifcfg-br0@:
<pre>
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer "static", you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>

Enable network forwarding in @/etc/sysctl.conf@:
<pre>
...
net.ipv4.ip_forward = 1
</pre>

Reload the settings and restart NetworkManager:
<pre><code class="bash">
sysctl -p /etc/sysctl.conf
systemctl restart NetworkManager
</code></pre>

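If you prefer a static configuration instead of DHCP, @ifcfg-br0@ might look like the following (all addresses are placeholder values for illustration):

<pre>
DEVICE="br0"
BOOTPROTO="static"
IPADDR="192.168.1.10"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
DNS1="192.168.1.1"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
</pre>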
h2. can KVM and VirtualBox coexist?

http://www.dedoimedo.com/computers/kvm-virtualbox.html

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox guest additions

<pre><code class="bash">
/opt/[VboxAddonsFolder]/uninstall.sh
</code></pre>

Some people had to remove @/etc/X11/xorg.conf@.

h3. convert image from VirtualBox to KVM

<pre><code class="bash">
VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img
</code></pre>

Convert the RAW file to qcow2:
<pre><code class="bash">
qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow
</code></pre>

h2. automatic start/shutdown of VMs with host

taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html

h3. enable libvirt-guests service

<pre><code class="bash">
systemctl enable libvirt-guests
systemctl start libvirt-guests
</code></pre>

All settings are to be done in @/etc/sysconfig/libvirt-guests@.

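The options in @/etc/sysconfig/libvirt-guests@ control what happens to guests at host boot and shutdown. An example configuration (values are illustrative, not from the original setup; the shipped file documents the full list of options):

<pre>
# /etc/sysconfig/libvirt-guests
ON_BOOT=start          # start guests that were running when the host went down
ON_SHUTDOWN=shutdown   # shut guests down cleanly instead of suspending them
SHUTDOWN_TIMEOUT=300   # seconds to wait for a guest to shut down before it is killed
</pre>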
h2. install virt-manager

<pre><code class="bash">
yum install virt-manager
</code></pre>

Add your user to the @libvirt@ group:
<pre><code class="bash">
usermod -a -G libvirt username
</code></pre>

h2. rename KVM-guest

taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html

Power off the virtual machine and export the machine's XML configuration file:

<pre><code class="bash">
virsh dumpxml name_of_vm > name_of_vm.xml
</code></pre>

Next, edit the XML file and change the name between the <name></name> tags (it should be right near the top). As an added step you could also rename the disk file to reflect the name change, and update it in the <devices> section under <source file='/path/to/name_of_vm.img'>.

Save the XML file and undefine the old VM name:

<pre><code class="bash">
virsh undefine name_of_vm
</code></pre>

Now import the edited XML file to define the VM:

<pre><code class="bash">
virsh define name_of_vm.xml
</code></pre>

You can now start the VM either in the Virtual Machine Manager or with virsh:

<pre><code class="bash">
virsh start name_of_vm
</code></pre>

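The @<name>@ edit can also be scripted. A minimal sketch with GNU @sed@, assuming the tag sits on its own line as @virsh dumpxml@ emits it (the file content below is a stand-in for a real dump):

```shell
#!/usr/bin/env bash
# Rename the domain inside a dumped XML file (replaces only the first <name> tag).
new_name=renamed_vm
cat > name_of_vm.xml <<'EOF'
<domain type='kvm'>
  <name>name_of_vm</name>
  <devices>
    <source file='/path/to/name_of_vm.img'/>
  </devices>
</domain>
EOF
sed -i "0,/<name>.*<\/name>/s//<name>${new_name}<\/name>/" name_of_vm.xml
grep '<name>' name_of_vm.xml   # shows the line with the new name
```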
h2. set fixed IP address via DHCP (default network)

taken from https://wiki.libvirt.org/page/Networking

<pre><code class="bash">
virsh edit <guest>
</code></pre>

where <guest> is the name or UUID of the guest. Add the following snippet of XML to the config file:

<pre><code class="xml">
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
</code></pre>

h3. applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with "virsh net-edit", any changes you make won't take effect until the network is destroyed and re-started, which unfortunately causes all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with "virsh net-update", which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

<pre><code class="bash">
virsh net-update default add ip-dhcp-host \
          "<host mac='52:54:00:00:00:01' \
           name='bob' ip='192.168.122.45' />" \
           --live --config
</code></pre>

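For scripted use, the XML fragment can be assembled from variables; @dhcp_host_xml@ below is a hypothetical helper, and the final @virsh net-update@ call is shown as a comment because it needs a live libvirt network:

```shell
#!/usr/bin/env bash
# dhcp_host_xml: build the <host> fragment for 'virsh net-update ... ip-dhcp-host'
# (hypothetical helper for illustration).
dhcp_host_xml() {
  printf "<host mac='%s' name='%s' ip='%s' />" "$1" "$2" "$3"
}

xml=$(dhcp_host_xml 52:54:00:00:00:01 bob 192.168.122.45)
echo "$xml"
# apply on a real host with:
#   virsh net-update default add ip-dhcp-host "$xml" --live --config
```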
h2. forwarding incoming connections

taken from https://wiki.libvirt.org/page/Networking

By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules to forward incoming connections on any given host port HP to port GP on the guest GNAME:

1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP".

(To ensure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

2) Stop the guest if it's running.

3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup):

Use the basic script below, or see an "advanced" version which can handle several different machines and port mappings, or a python script which does a similar thing and is easy to understand and configure (improvements are welcome):

<pre>
#!/bin/bash
# used some from advanced script to have multiple ports: use an equal number of guest and host ports

# Update the following variables to fit your setup
Guest_name=GUEST_NAME
Guest_ipaddr=GUEST_IP
Host_ipaddr=HOST_IP
Host_port=(  'HOST_PORT1' 'HOST_PORT2' )
Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' )

length=$(( ${#Host_port[@]} - 1 ))
if [ "${1}" = "${Guest_name}" ]; then
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
       for i in $(seq 0 $length); do
               iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
       for i in $(seq 0 $length); do
               iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]}
               iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT
       done
   fi
fi
</pre>

4) chmod +x /etc/libvirt/hooks/qemu

5) Restart the libvirtd service.

6) Start the guest.

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt. However, this hook script should still be considered a hack.)

h2. wrapper script for virsh

<pre>
#!/bin/bash
# kvm_control   Startup script for KVM Virtual Machines
#
# description: Manages KVM VMs
# processname: kvm_control.sh
#
# pidfile: /var/run/kvm_control/kvm_control.pid
#
### BEGIN INIT INFO
#
### END INIT INFO
#
# Version 20171103 by Jeremias Keihsler added ionice prio 'idle'
# Version 20161228 by Jeremias Keihsler based on:
# virsh-specific parts are taken from:
#  https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh
# Version 20110509 by Jeremias Keihsler (vboxcontrol) based on:
# Version 20090301 by Kevin Swanson <kswan.info> based on:
# Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com>
# http://farfewertoes.com
#
# Released in the public domain
#
# This file came with a README file containing the instructions on how
# to use this script.
#
# this is no longer to be used as an init.d script (vboxcontrol was an init.d script)
#

################################################################################
# INITIAL CONFIGURATION

export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin"

VIRSH=/usr/bin/virsh
TIMEOUT=300

declare -i VM_isrunning

################################################################################
# FUNCTIONS

log_failure_msg() {
  echo "$1"
}

log_action_msg() {
  echo "$1"
}

# list running domains
list_running_domains() {
  $VIRSH list | grep running | awk '{ print $2 }'
}

# Check for running machines every few seconds; return when all machines are
# down
wait_for_closing_machines() {
  RUNNING_MACHINES=$(list_running_domains | wc -l)
  if [ "$RUNNING_MACHINES" != 0 ]; then
    log_action_msg "machines running: $RUNNING_MACHINES"
    sleep 2
    wait_for_closing_machines
  fi
}

################################################################################
# RUN
case "$1" in
  start)
    if [ -f /etc/kvm_box/machines_enabled_start ]; then
      cat /etc/kvm_box/machines_enabled_start | while read VM; do
        log_action_msg "Starting VM: $VM ..."
        $VIRSH start "$VM"
        RETVAL=$?
        sleep 20
      done
      touch /tmp/kvm_control
    fi
  ;;
  stop)
    # NOTE: this first stops the listed VMs in the given order
    # and later all running VMs.
    # After the defined timeout all remaining VMs are killed.

    # Create some sort of semaphore.
    touch /tmp/shutdown-kvm-guests

    echo "Try to cleanly shut down all listed KVM domains..."
    # Try to shut down each listed domain, one by one.
    if [ -f /etc/kvm_box/machines_enabled_stop ]; then
      cat /etc/kvm_box/machines_enabled_stop | while read VM; do
        log_action_msg "Shutting down VM: $VM ..."
        $VIRSH shutdown "$VM" --mode acpi
        RETVAL=$?
        sleep 10
      done
    fi
    sleep 10

    echo "give still running machines some more time..."
    # wait 20s per still running machine
    list_running_domains | while read VM; do
      log_action_msg "waiting 20s ... for: $VM ..."
      sleep 20
    done

    echo "Try to cleanly shut down all running KVM domains..."
    # Try to shut down each remaining domain, one by one.
    list_running_domains | while read VM; do
      log_action_msg "Shutting down VM: $VM ..."
      $VIRSH shutdown "$VM" --mode acpi
      sleep 10
    done

    # Wait until all domains are shut down or the timeout has been reached.
    END_TIME=$(date -d "$TIMEOUT seconds" +%s)

    while [ $(date +%s) -lt $END_TIME ]; do
      # Break out of the while loop when no domains are left.
      test -z "$(list_running_domains)" && break
      # Wait a little, we don't want to DoS libvirt.
      sleep 2
    done

    # Clean up leftover domains, one by one.
    list_running_domains | while read DOMAIN; do
      # Force-stop the given domain.
      $VIRSH destroy "$DOMAIN"
      # Give libvirt some time for killing off the domain.
      sleep 10
    done

    wait_for_closing_machines
    rm -f /tmp/shutdown-kvm-guests
    rm -f /tmp/kvm_control
  ;;
  export)
    JKE_DATE=$(date +%F)
    if [ -f /etc/kvm_box/machines_enabled_export ]; then
      cat /etc/kvm_box/machines_enabled_export | while read VM; do
        rm -f /tmp/kvm_control_VM_isrunning
        VM_isrunning=0
        list_running_domains | while read RVM; do
          if [[ "$VM" == "$RVM" ]]; then
            touch /tmp/kvm_control_VM_isrunning
            VM_isrunning=1
            break
          fi
        done

        # took me a while to figure out that the above 'while'-loop
        # runs in a separate process ... let's use the 'file' as a
        # kind of interprocess-communication :-) JKE 20161229
        if [ -f /tmp/kvm_control_VM_isrunning ]; then
          VM_isrunning=1
        fi
        rm -f /tmp/kvm_control_VM_isrunning

        if [ "$VM_isrunning" -ne 0 ]; then
          log_failure_msg "Exporting VM: $VM is not possible, it's running ..."
        else
          log_action_msg "Exporting VM: $VM ..."
          VM_BAK_DIR="$VM"_"$JKE_DATE"
          mkdir "$VM_BAK_DIR"
          $VIRSH dumpxml "$VM" > ./$VM_BAK_DIR/$VM.xml
          $VIRSH -q domblklist "$VM" | awk '{ print $2 }' | while read VMHDD; do
            echo "$VM hdd=$VMHDD"
            if [ -f "$VMHDD" ]; then
              ionice -c 3 rsync --progress "$VMHDD" ./$VM_BAK_DIR/$(basename "$VMHDD")
            else
              log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..."
            fi
          done
        fi
      done
    else
      log_action_msg "export-list not found"
    fi
  ;;
  start-vm)
    log_action_msg "Starting VM: $2 ..."
    $VIRSH start "$2"
    RETVAL=$?
  ;;
  stop-vm)
    log_action_msg "Stopping VM: $2 ..."
    $VIRSH shutdown "$2" --mode acpi
    RETVAL=$?
  ;;
  poweroff-vm)
    log_action_msg "Powering off VM: $2 ..."
    $VIRSH destroy "$2"
    RETVAL=$?
  ;;
  export-vm)
    # NOTE: this exports the given VM
    log_action_msg "Exporting VM: $2 ..."
    rm -f /tmp/kvm_control_VM_isrunning
    VM_isrunning=0
    JKE_DATE=$(date +%F)
    list_running_domains | while read RVM; do
      if [[ "$2" == "$RVM" ]]; then
        touch /tmp/kvm_control_VM_isrunning
        VM_isrunning=1
        break
      fi
    done

    # took me a while to figure out that the above 'while'-loop
    # runs in a separate process ... let's use the 'file' as a
    # kind of interprocess-communication :-) JKE 20161229
    if [ -f /tmp/kvm_control_VM_isrunning ]; then
      VM_isrunning=1
    fi
    rm -f /tmp/kvm_control_VM_isrunning

    if [ "$VM_isrunning" -ne 0 ]; then
      log_failure_msg "Exporting VM: $2 is not possible, it's running ..."
    else
      log_action_msg "Exporting VM: $2 ..."
      VM_BAK_DIR="$2"_"$JKE_DATE"
      mkdir "$VM_BAK_DIR"
      $VIRSH dumpxml "$2" > ./$VM_BAK_DIR/$2.xml
      $VIRSH -q domblklist "$2" | awk '{ print $2 }' | while read VMHDD; do
        echo "$2 hdd=$VMHDD"
        if [ -f "$VMHDD" ]; then
          ionice -c 3 rsync --progress "$VMHDD" ./$VM_BAK_DIR/$(basename "$VMHDD")
        else
          log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..."
        fi
      done
    fi
  ;;
  status)
    echo "The following virtual machines are currently running:"
    list_running_domains | while read VM; do
      echo -n "  $VM"
      echo " ... is running"
    done
  ;;

  *)
    echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>|export-vm <VM name>}"
    echo "  start      start all VMs listed in '/etc/kvm_box/machines_enabled_start'"
    echo "  stop       1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'"
    echo "             2nd step: wait 20s for each still running machine to give it a chance to shut down on its own"
    echo "             3rd step: acpi-shutdown all running VMs"
    echo "             4th step: wait for all machines to shut down, or $TIMEOUT s"
    echo "             5th step: destroy all still running VMs"
    echo "  status     list all running VMs"
    echo "  export     export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory"
    echo "  start-vm <VM name>     start the given VM"
    echo "  stop-vm <VM name>      acpi-shutdown the given VM"
    echo "  poweroff-vm <VM name>  poweroff the given VM"
    echo "  export-vm <VM name>    export the given VM to the current directory"
    exit 3
esac

exit 0
</pre>

h2. restore 'exported' kvm-machines

<pre><code class="shell">
tar xvf mach-name_202x-01-01.tar.gz
</code></pre>

* copy the image files to @/var/lib/libvirt/images/@

Set ownership:
<pre><code class="shell">
chown qemu:qemu /var/lib/libvirt/images/*
</code></pre>

Define the machine:

<pre><code class="shell">
virsh define mach-name.xml
</code></pre>