
h1. KVM 

 this is for a vanilla CentOS 9 minimal installation, 
 largely based on @kvm_virtualization_in_rhel_7_made_easy.pdf@ 

 https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9 

 https://www.linuxtechi.com/install-kvm-on-rocky-linux-almalinux/ 

 good information is also found at http://virtuallyhyper.com/2013/06/migrate-from-libvirt-kvm-to-virtualbox/ 

br0 sources:
 https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli 
 https://www.tecmint.com/create-network-bridge-in-rhel-centos-8/ 
 https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/ 
 https://extravm.com/billing/knowledgebase/114/CentOS-8-ifup-unknown-connection---Add-Second-IP.html 


 h2. basic updates/installs 

 <pre><code class="bash"> 
 yum update 
 yum install wget 
 yum install vim 
 reboot 
 </code></pre> 

 h2. check machine capability 

 <pre><code class="bash"> 
 grep -E 'svm|vmx' /proc/cpuinfo 
 </code></pre> 

@vmx@ ... Intel
@svm@ ... AMD
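
if the command prints nothing, hardware virtualization is either unsupported or disabled in the BIOS/UEFI. To simply count the CPU threads exposing the flag:

<pre><code class="bash">
grep -cE 'svm|vmx' /proc/cpuinfo
</code></pre>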

 h2. install KVM on CentOS minimal 

 <pre><code class="bash"> 
 dnf install qemu-kvm libvirt libguestfs-tools virt-install virt-viewer 
 for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done 
 </code></pre> 

verify that the following kernel modules are loaded
<pre><code class="bash">
lsmod | grep kvm
</code></pre>

on Intel hardware:
<pre><code class="bash">
kvm
kvm_intel
</code></pre>

on AMD hardware:
<pre><code class="bash">
kvm
kvm_amd
</code></pre>

 h3. Verification 

 <pre><code class="bash"> 
 virt-host-validate 
 </code></pre> 
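
on a correctly set-up host most checks should report @PASS@; the output looks roughly like this (exact checks vary by version):

<pre><code class="bash">
  QEMU: Checking for hardware virtualization       : PASS
  QEMU: Checking if device /dev/kvm exists         : PASS
  QEMU: Checking if device /dev/kvm is accessible  : PASS
  QEMU: Checking if device /dev/vhost-net exists   : PASS
</code></pre>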

 h3. change from libvirtd to modular libvirt daemons 

 https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization#proc_enabling-modular-libvirt-daemons_assembly_optimizing-libvirt-daemons 

 stop @libvirtd@ and its sockets 

 <pre><code class="shell"> 
 systemctl stop libvirtd.service 
 systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket 
 </code></pre> 

 disable @libvirtd@ 

 <pre><code class="shell"> 
 systemctl disable libvirtd.service 
 systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket 
 </code></pre> 

 enable modular @libvirt@ daemons 

 <pre><code class="shell"> 
 for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket; systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket; done 
 </code></pre> 

 start sockets for modular daemons 

 <pre><code class="shell"> 
 for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done 
 </code></pre> 

check whether TLS is enabled for @libvirtd@ on your system, i.e. whether @listen_tls@ is set in @/etc/libvirt/libvirtd.conf@

 <pre><code class="shell"> 
 grep listen_tls /etc/libvirt/libvirtd.conf 
 </code></pre> 

 if @listen_tls = 0@ then 

 <pre><code class="shell"> 
 systemctl unmask virtproxyd.service 
 systemctl unmask virtproxyd{,-ro,-admin}.socket 
 systemctl enable virtproxyd.service 
 systemctl enable virtproxyd{,-ro,-admin}.socket 
 systemctl start virtproxyd{,-ro,-admin}.socket 
 </code></pre> 

else if @listen_tls = 1@ then

 <pre><code class="shell"> 
 systemctl unmask virtproxyd.service 
 systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket 
 systemctl enable virtproxyd.service 
 systemctl enable virtproxyd{,-ro,-admin,-tls}.socket 
 systemctl start virtproxyd{,-ro,-admin,-tls}.socket 
 </code></pre> 

 Verification 

 <pre><code class="shell"> 
 virsh uri 
 </code></pre> 

 should result in @qemu:///system@ 

 Verify that your host is using the @virtqemud@ modular daemon.  

 <pre><code class="shell"> 
 systemctl is-active virtqemud.service 
 </code></pre> 

 should result in @active@ 
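
to check the remaining modular daemons in one go, the same loop as above can be reused (a sketch):

<pre><code class="shell">
for drv in qemu interface network nodedev nwfilter secret storage; do systemctl is-active virt${drv}d.socket; done
</code></pre>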

 


 h2. setup networking 

 add to the network controller configuration file @/etc/sysconfig/network-scripts/ifcfg-em1@ 
 <pre> 
 ... 
 BRIDGE=br0 
 </pre> 

 add following new file @/etc/sysconfig/network-scripts/ifcfg-br0@ 
 <pre> 
 DEVICE="br0" 
 # BOOTPROTO is up to you. If you prefer “static”, you will need to 
 # specify the IP address, netmask, gateway and DNS information. 
 BOOTPROTO="dhcp" 
 IPV6INIT="yes" 
 IPV6_AUTOCONF="yes" 
 ONBOOT="yes" 
 TYPE="Bridge" 
 DELAY="0" 
 </pre> 
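
note that the @ifcfg@ scripts are deprecated on CentOS 9; the same bridge can also be built with @nmcli@ (a sketch following the br0 sources above, assuming the uplink interface is @em1@ and DHCP addressing):

<pre><code class="bash">
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type ethernet ifname em1 master br0
nmcli con modify br0 ipv4.method auto ipv6.method auto
nmcli con up br0
</code></pre>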

 enable network forwarding @/etc/sysctl.conf@ 
 <pre> 
 ... 
 net.ipv4.ip_forward = 1 
 </pre> 

 read the file and restart NetworkManager 
 <pre><code class="bash"> 
 sysctl -p /etc/sysctl.conf 
 systemctl restart NetworkManager 
 </code></pre> 

h2. can KVM and VirtualBox coexist

 http://www.dedoimedo.com/computers/kvm-virtualbox.html 

h2. convert VirtualBox to KVM

h3. uninstall VirtualBox Guest Additions

 <pre><code class="bash"> 
/opt/[VboxAddonsFolder]/uninstall.sh
 </code></pre> 

 some people had to remove @/etc/X11/xorg.conf@ 

h3. convert image from VirtualBox to KVM

 <pre><code class="bash"> 
 VBoxManage clonehd --format RAW Virt_Image.vdi Virt_Image.img 
 </code></pre> 

convert the RAW file to qcow2
 <pre><code class="bash"> 
 qemu-img convert -f raw Virt_Image.img -O qcow2 Virt_Image.qcow 
 </code></pre> 
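
verify the result with:

<pre><code class="bash">
qemu-img info Virt_Image.qcow
</code></pre>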

 h2. automatic start/shutdown of VMs with Host 

 taken from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings.html 

 h3. enable libvirt-guests service 

 <pre><code class="bash"> 
 systemctl enable libvirt-guests 
 systemctl start libvirt-guests 
 </code></pre> 

 all settings are to be done in @/etc/sysconfig/libvirt-guests@ 
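
a minimal example (option names as documented in that file; the values are just an assumption to start from):

<pre>
ON_BOOT=start
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
PARALLEL_SHUTDOWN=2
</pre>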

h2. install virt-manager


 <pre><code class="bash"> 
 yum install virt-manager 
 </code></pre> 

add your user to the @libvirt@ group

<pre><code class="bash">
 usermod -a -G libvirt username 
 </code></pre> 
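
log out and back in for the group change to take effect, then verify:

<pre><code class="bash">
id username | grep libvirt
</code></pre>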

 h2. rename KVM-guest 

 taken from http://www.taitclarridge.com/techlog/2011/01/rename-kvm-virtual-machine-with-virsh.html 

 Power off the virtual machine and export the machine's XML configuration file: 

 <pre><code class="bash"> 
 virsh dumpxml name_of_vm > name_of_vm.xml 
 </code></pre> 

 Next, edit the XML file and change the name between the <name></name> tags (should be right near the top). As an added step you could also rename the disk file to reflect the change of the name and change the name of it in the <devices> section under <source file='/path/to/name_of_vm.img'>. 

 Save the XML file and undefine the old VM name with: 

 <pre><code class="bash"> 
 virsh undefine name_of_vm 
 </code></pre> 

 Now just import the edited XML file to define the VM: 

 <pre><code class="bash"> 
 virsh define name_of_vm.xml 
 </code></pre> 

 And that should be it! You can now start up your vm either in the Virtual Machine Manager or with virsh using: 

 <pre><code class="bash"> 
 virsh start name_of_vm 
 </code></pre> 
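
on newer libvirt versions the whole rename can be done in one step while the VM is shut off (the disk file is not renamed by this, though):

<pre><code class="bash">
virsh domrename name_of_vm new_name_of_vm
</code></pre>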

 h2. set fixed IP-adr via DHCP (default-network) 

 taken from https://wiki.libvirt.org/page/Networking 

 <pre><code class="bash"> 
 virsh edit <guest> 
 </code></pre> 

 where <guest> is the name or uuid of the guest. Add the following snippet of XML to the config file:  

<pre><code class="xml">
 <interface type='network'> 
   <source network='default'/> 
   <mac address='00:16:3e:1a:b3:4a'/> 
 </interface> 
 </code></pre> 

h3. applying modifications to the network

Sometimes one needs to edit the network definition and apply the changes on the fly. The most common scenario for this is adding new static MAC+IP mappings for the network's DHCP server. If you edit the network with @virsh net-edit@, any changes you make won't take effect until the network is destroyed and re-started, which unfortunately will cause all guests to lose network connectivity with the host until their network interfaces are explicitly re-attached.

Fortunately, many changes to the network configuration (including the aforementioned addition of a static MAC+IP mapping for DHCP) can be done with @virsh net-update@, which can be told to enact the changes immediately. For example, to add a DHCP static host entry to the network named "default" mapping MAC address 52:54:00:00:00:01 to IP address 192.168.122.45 and hostname "bob", you could use this command:

 <pre><code class="bash"> 
 virsh net-update default add ip-dhcp-host \ 
           "<host mac='52:54:00:00:00:01' \ 
            name='bob' ip='192.168.122.45' />" \ 
            --live --config 
 </code></pre> 
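
to confirm that the mapping was applied, dump the network definition:

<pre><code class="bash">
virsh net-dumpxml default
</code></pre>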

 h2. forwarding incoming connections 

 taken from https://wiki.libvirt.org/page/Networking 

 By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules. 

If you would like to make a service that is on a guest behind a NATed virtual network publicly available, you can set up libvirt's "hook" script for qemu to install the necessary iptables rules, forwarding incoming connections on any given host port HP to port GP on the guest GNAME:

 1) Determine a) the name of the guest "G" (as defined in the libvirt domain XML), b) the IP address of the guest "I", c) the port on the guest that will receive the connections "GP", and d) the port on the host that will be forwarded to the guest "HP". 

(To assure that the guest's IP address remains unchanged, you can either configure the guest OS with static IP information, or add a <host> element inside the <dhcp> element of the network that is used by your guest. See the address section of the libvirt network XML documentation for details and an example.)

 2) Stop the guest if it's running. 

 3) Create the file /etc/libvirt/hooks/qemu (or add the following to an already existing hook script), with contents similar to the following (replace GNAME, IP, GP, and HP appropriately for your setup): 

Use the basic script below. The libvirt wiki also offers an "advanced" version that can handle several different machines and port mappings, as well as a python script that does a similar thing and is easy to understand and configure (improvements are welcome):

 <pre> 
 #!/bin/bash 
 # used some from advanced script to have multiple ports: use an equal number of guest and host ports 

 # Update the following variables to fit your setup 
 Guest_name=GUEST_NAME 
 Guest_ipaddr=GUEST_IP 
 Host_ipaddr=HOST_IP 
 Host_port=(    'HOST_PORT1' 'HOST_PORT2' ) 
 Guest_port=( 'GUEST_PORT1' 'GUEST_PORT2' ) 

 length=$(( ${#Host_port[@]} - 1 )) 
 if [ "${1}" = "${Guest_name}" ]; then 
    if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then 
        for i in `seq 0 $length`; do 
                iptables -t nat -D PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]} 
                iptables -D FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT 
        done 
    fi 
    if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then 
        for i in `seq 0 $length`; do 
                iptables -t nat -A PREROUTING -d ${Host_ipaddr} -p tcp --dport ${Host_port[$i]} -j DNAT --to ${Guest_ipaddr}:${Guest_port[$i]} 
                iptables -I FORWARD -d ${Guest_ipaddr}/32 -p tcp -m state --state NEW -m tcp --dport ${Guest_port[$i]} -j ACCEPT 
        done 
    fi 
 fi 
 </pre> 
4) Make the hook script executable: @chmod +x /etc/libvirt/hooks/qemu@

 5) Restart the libvirtd service. 

 6) Start the guest. 
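
once the guest is up, the rules inserted by the hook can be checked (a sketch):

<pre><code class="bash">
iptables -t nat -L PREROUTING -n | grep DNAT
iptables -L FORWARD -n | head
</code></pre>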

(NB: This method is a hack, and has one annoying flaw in versions of libvirt prior to 0.9.13: if libvirtd is restarted while the guest is running, all of the standard iptables rules to support virtual networks that were added by libvirtd will be reloaded, thus changing the order of the above FORWARD rule relative to a reject rule for the network, hence rendering this setup non-working until the guest is stopped and restarted. Thanks to the new "reconnect" hook in libvirt-0.9.13 and newer (which is used by the above script if available), this flaw is not present in newer versions of libvirt; however, this hook script should still be considered a hack.)

 h2. wrapper script for virsh 

 <pre> 
 #! /bin/sh 
 # kvm_control     Startup script for KVM Virtual Machines 
 # 
 # description: Manages KVM VMs 
 # processname: kvm_control.sh 
 # 
 # pidfile: /var/run/kvm_control/kvm_control.pid 
 # 
 ### BEGIN INIT INFO 
 # 
 ### END INIT INFO 
 # 
 # Version 20171103 by Jeremias Keihsler added ionice prio 'idle' 
 # Version 20161228 by Jeremias Keihsler based on: 
 # virsh-specific parts are taken from: 
 #    https://github.com/kumina/shutdown-kvm-guests/blob/master/shutdown-kvm-guests.sh 
 # Version 20110509 by Jeremias Keihsler (vboxcontrol) based on: 
 # Version 20090301 by Kevin Swanson <kswan.info> based on: 
 # Version 2008051100 by Jochem Kossen <jochem.kossen@gmail.com> 
 # http://farfewertoes.com 
 # 
 # Released in the public domain 
 # 
 # This file came with a README file containing the instructions on how 
 # to use this script. 
 #  
 # this is no more to be used as an init.d-script (vboxcontrol was an init.d-script) 
 # 

 ################################################################################ 
 # INITIAL CONFIGURATION 

 export PATH="${PATH:+$PATH:}/bin:/usr/bin:/usr/sbin:/sbin" 

 VIRSH=/usr/bin/virsh 
 TIMEOUT=300 

 declare -i VM_isrunning 

 ################################################################################ 
 # FUNCTIONS 

 log_failure_msg() { 
 echo $1 
 } 

 log_action_msg() { 
 echo $1 
 } 

 # list running domains 
 list_running_domains() { 
   $VIRSH list | grep running | awk '{ print $2}' 
 } 

 # Check for running machines every few seconds; return when all machines are 
 # down 
 wait_for_closing_machines() { 
 RUNNING_MACHINES=`list_running_domains | wc -l` 
 if [ $RUNNING_MACHINES != 0 ]; then 
   log_action_msg "machines running: "$RUNNING_MACHINES 
   sleep 2 

   wait_for_closing_machines 
 fi 
 } 

 ################################################################################ 
 # RUN 
 case "$1" in 
   start) 
     if [ -f /etc/kvm_box/machines_enabled_start ]; then 

       cat /etc/kvm_box/machines_enabled_start | while read VM; do 
         log_action_msg "Starting VM: $VM ..." 
        $VIRSH start $VM
        RETVAL=$?
        sleep 20
       done 
       touch /tmp/kvm_control 
     fi 
   ;; 
   stop) 
      # NOTE: this first stops the listed VMs in the given order
      # and then all remaining running VMs.
     # After the defined timeout all remaining VMs are killed 

     # Create some sort of semaphore. 
     touch /tmp/shutdown-kvm-guests 

     echo "Try to cleanly shut down all listed KVM domains..." 
     # Try to shutdown each listed domain, one by one. 
     if [ -f /etc/kvm_box/machines_enabled_stop ]; then 
       cat /etc/kvm_box/machines_enabled_stop | while read VM; do 
         log_action_msg "Shutting down VM: $VM ..." 
        $VIRSH shutdown $VM --mode acpi
        RETVAL=$?
        sleep 10
       done 
     fi 
     sleep 10 

     echo "give still running machines some more time..." 
     # wait 20s per still running machine 
     list_running_domains | while read VM; do 
       log_action_msg "waiting 20s ... for: $VM ..." 
       sleep 20 
     done 

     echo "Try to cleanly shut down all running KVM domains..." 
     # Try to shutdown each remaining domain, one by one. 
     list_running_domains | while read VM; do 
       log_action_msg "Shutting down VM: $VM ..." 
       $VIRSH shutdown $VM --mode acpi 
       sleep 10 
     done 

     # Wait until all domains are shut down or timeout has reached. 
     END_TIME=$(date -d "$TIMEOUT seconds" +%s) 

     while [ $(date +%s) -lt $END_TIME ]; do 
       # Break while loop when no domains are left. 
       test -z "$(list_running_domains)" && break 
      # Wait a little, we don't want to DoS libvirt.
       sleep 2 
     done 

     # Clean up left over domains, one by one. 
     list_running_domains | while read DOMAIN; do 
       # Try to shutdown given domain. 
       $VIRSH destroy $DOMAIN 
       # Give libvirt some time for killing off the domain. 
       sleep 10 
     done 

     wait_for_closing_machines 
     rm -f /tmp/shutdown-kvm-guests 
     rm -f /tmp/kvm_control 
   ;; 
   export) 
     JKE_DATE=$(date +%F) 
     if [ -f /etc/kvm_box/machines_enabled_export ]; then 
       cat /etc/kvm_box/machines_enabled_export    | while read VM; do 
         rm -f /tmp/kvm_control_VM_isrunning 
         VM_isrunning=0 
         list_running_domains | while read RVM; do 
           #echo "VM list -$VM- : -$RVM-" 
           if [[ "$VM" ==    "$RVM" ]]; then 
             #echo "VM found running..." 
             touch /tmp/kvm_control_VM_isrunning 
             VM_isrunning=1 
             #echo "$VM_isrunning" 
             break 
           fi 
           #echo "$VM_isrunning" 
         done 

         # took me a while to figure out that the above 'while'-loop  
         # runs in a separate process ... let's use the 'file' as a  
         # kind of interprocess-communication :-) JKE 20161229 
         if [ -f /tmp/kvm_control_VM_isrunning ]; then 
           VM_isrunning=1 
         fi 
         rm -f /tmp/kvm_control_VM_isrunning 

         #echo "VM status $VM_isrunning" 
         if [ "$VM_isrunning" -ne 0 ]; then 
           log_failure_msg "Exporting VM: $VM is not possible, it's running ..." 
         else 
           log_action_msg "Exporting VM: $VM ..." 
           VM_BAK_DIR="$VM"_"$JKE_DATE" 
           mkdir "$VM_BAK_DIR" 
           $VIRSH dumpxml $VM > ./$VM_BAK_DIR/$VM.xml 
           $VIRSH -q domblklist $VM | awk '{ print$2}' | while read VMHDD; do 
             echo "$VM hdd=$VMHDD" 
             if [ -f "$VMHDD" ]; then 
               ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD` 
             else 
               log_failure_msg "Exporting VM: $VM image-file $VMHDD not found ..." 
             fi 
           done 
         fi 
       done 
     else 
       log_action_msg "export-list not found" 
     fi 
   ;; 
   start-vm) 
     log_action_msg "Starting VM: $2 ..." 
     $VIRSH start $2 
     RETVAL=$? 
   ;; 
   stop-vm) 
     log_action_msg "Stopping VM: $2 ..." 
     $VIRSH shutdown $2 --mode acpi 
     RETVAL=$? 
   ;; 
   poweroff-vm) 
     log_action_msg "Powering off VM: $2 ..." 
     $VIRSH destroy $2 
     RETVAL=$? 
   ;; 
   export-vm) 
     # NOTE: this exports the given VM 
     log_action_msg "Exporting VM: $2 ..." 
     rm -f /tmp/kvm_control_VM_isrunning 
     VM_isrunning=0 
     JKE_DATE=$(date +%F) 
     list_running_domains | while read RVM; do 
       #echo "VM list -$VM- : -$RVM-" 
       if [[ "$2" ==    "$RVM" ]]; then 
         #echo "VM found running..." 
         touch /tmp/kvm_control_VM_isrunning 
         VM_isrunning=1 
         #echo "$VM_isrunning" 
         break 
       fi 
       #echo "$VM_isrunning" 
     done 

     # took me a while to figure out that the above 'while'-loop  
     # runs in a separate process ... let's use the 'file' as a  
     # kind of interprocess-communication :-) JKE 20161229 
     if [ -f /tmp/kvm_control_VM_isrunning ]; then 
       VM_isrunning=1 
     fi 
     rm -f /tmp/kvm_control_VM_isrunning 

     #echo "VM status $VM_isrunning" 
     if [ "$VM_isrunning" -ne 0 ]; then 
       log_failure_msg "Exporting VM: $VM is not possible, it's running ..." 
     else 
       log_action_msg "Exporting VM: $VM ..." 
       VM_BAK_DIR="$2"_"$JKE_DATE" 
       mkdir "$VM_BAK_DIR" 
       $VIRSH dumpxml $2 > ./$VM_BAK_DIR/$2.xml 
       $VIRSH -q domblklist $2 | awk '{ print$2}' | while read VMHDD; do 
         echo "$2 hdd=$VMHDD" 
         if [ -f "$VMHDD" ]; then 
           ionice -c 3 rsync --progress $VMHDD ./$VM_BAK_DIR/`basename $VMHDD` 
         else 
           log_failure_msg "Exporting VM: $2 image-file $VMHDD not found ..." 
         fi 
       done 
     fi 
   ;; 
   status) 
     echo "The following virtual machines are currently running:" 
     list_running_domains | while read VM; do 
       echo -n "    $VM" 
       echo " ... is running" 
     done 
   ;; 

   *) 
     echo "Usage: $0 {start|stop|status|export|start-vm <VM name>|stop-vm <VM name>|poweroff-vm <VM name>}|export-vm <VMname>" 
     echo "    start        start all VMs listed in '/etc/kvm_box/machines_enabled_start'" 
     echo "    stop         1st step: acpi-shutdown all VMs listed in '/etc/kvm_box/machines_enabled_stop'" 
     echo "               2nd step: wait 20s for each still running machine to give a chance to shut-down on their own" 
     echo "               3rd step: acpi-shutdown all running VMs" 
     echo "               4th step: wait for all machines shutdown or $TIMEOUT s" 
     echo "               5th step: destroy all sitting VMs" 
     echo "    status       list all running VMs" 
     echo "    export       export all VMs listed in '/etc/kvm_box/machines_enabled_export' to the current directory" 
     echo "    start-vm <VM name>       start the given VM" 
     echo "    stop-vm <VM name>        acpi-shutdown the given VM" 
     echo "    poweroff-vm <VM name>    poweroff the given VM" 
     echo "    export-vm <VM name>      export the given VM to the current directory" 
     exit 3 
 esac 

 exit 0 

 </pre> 

 h2. restore 'exported' kvm-machines 

 <pre><code class="shell"> 
 tar xvf mach-name_202x-01-01.tar.gz  
 </code></pre> 

 * copy the image-files to @/var/lib/libvirt/images/@ 

 set ownership 
 <pre><code class="shell"> 
 chown qemu:qemu /var/lib/libvirt/images/* 
 </code></pre> 
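
on SELinux-enabled hosts it may additionally be necessary to restore the file contexts (an assumption, depending on your setup):

<pre><code class="shell">
restorecon -R /var/lib/libvirt/images/
</code></pre>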


 define the machine by 

 <pre><code class="shell"> 
 virsh define mach-name.xml 
 </code></pre>
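
optionally start the machine and mark it for autostart with the host:

<pre><code class="shell">
virsh start mach-name
virsh autostart mach-name
</code></pre>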