User:Merovingians

From Gentoo Wiki

Test Page for Formatting (Prior to Wiki Updates)

Note
Legend:

*D*: Draft

*C*: Complete; integrated into Main Wiki

Hardware

Mainboard: Supermicro X13SWA-TF

https://www.supermicro.com/en/products/motherboard/x13swa-tf

KERNEL X13SWA-TF
TEST TEST TEST
[*] General setup  --->
    Processor type and features  --->
    Bus options (PCI etc.)  --->
[*] Networking support  --->    
    Device Drivers  --->
    [*] PCI support  --->
        Bus devices
        [*] Block devices  --->
            NVME Support  --->
            Misc devices  --->
        [*] Networking device support  --->
            [*] Ethernet driver support  --->

                Marvell AQC113C 10Gbe
                [*]   aQuantia devices
                <M>     aQuantia AQtion(tm) Support

                Intel Ethernet Controller i210AT
                [*]   Intel devices
                <M>     Intel(R) 82575/82576 PCI-Express Gigabit Ethernet support
        Input device support  --->
        I2C support  --->
    <M> I3C support  --->
    <M> Sound card support  --->
    [*] IOMMU Hardware Support  --->
    [*] Trusted Execution Environment support  --->
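The menu selections above correspond to kernel CONFIG symbols, which can be checked against /usr/src/linux/.config (or /proc/config.gz). A minimal sketch; the symbol names are the standard ones for the drivers in question (aQuantia AQtion, Intel igb, NVMe block), and the sample file stands in for the real kernel config:

```shell
# Write a stand-in .config fragment (illustrative; check the real
# /usr/src/linux/.config on the build host instead)
cat > /tmp/sample.config <<'EOF'
CONFIG_AQTION=m
CONFIG_IGB=m
CONFIG_BLK_DEV_NVME=y
EOF
# Confirm the expected driver symbols are set
grep -E 'CONFIG_(AQTION|IGB|BLK_DEV_NVME)=' /tmp/sample.config
```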

CPU: Intel(R) Xeon(R) w5-3435X

https://www.intel.com/content/www/us/en/products/sku/233421/intel-xeon-w53435x-processor-45m-cache-3-10-ghz/specifications.html

Memory: Supermicro (Micron) 16GB DDR5 4800 (PC5-38400) Server Memory

https://store.supermicro.com/16gb-ddr5-4800-mem-dr516l-cl01-er48.html

Disk: SAMSUNG 980 PRO SSD

GPU: NVIDIA RTX A2000

https://resources.nvidia.com/en-us-briefcase-for-datasheets/proviz-print-nvidia-2?ncid=no-ncid

Intel Ethernet Server Adapter I350-T4

https://www.intel.com/content/www/us/en/products/sku/84805/intel-ethernet-server-adapter-i350t4v2/specifications.html

Intel Ethernet Server Adapter I350-T2

https://www.intel.com/content/www/us/en/products/sku/84804/intel-ethernet-server-adapter-i350t2v2/specifications.html

Hauppauge WinTV-HVR-2250 Media Center

Gentoo Hardened SELinux

KERNEL
TEST TEST TEST KernelBox TEST TEST
[*] Networking support  --->
        Networking options  --->
            <*>   Open vSwitch

            In case you ever want to use tagged VLANs
            <*>   802.1Q VLAN Support
            [*]     GVRP (GARP VLAN Registration Protocol) support

            In case you ever want to setup QoS rules
            [*]   QoS and/or fair queueing  --->
                      <M> ...
Note
TEST TEST TEST Note block TEST TEST
Warning
TEST TEST Warning block TEST TEST

TEST TEST TEST C Block TEST TEST TEST TEST TEST code block TESTS TESTS TEST TEST package block TEST TEST TEST TEST TEST package block TEST TEST


SELinux Multi-Category Security (MCS) & Multi-Level Security (MLS)

Note
SELinux was previously installed by following the SELinux Installation Guide and is running in permissive mode with the strict policy.
Warning
Do not set SELINUX to enforcing as the baseline policy still needs modifications beyond defaults.

Configuring the SELinux policy

Update the main configuration file at /etc/selinux/config by changing SELINUXTYPE to either mcs or mls.

FILE /etc/selinux/config
# This file controls the state of SELinux on the system on boot.

# SELINUX can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=permissive

# SELINUXTYPE can take one of these four values:
#       targeted - Only targeted network daemons are protected.
#       strict   - Full SELinux protection.
#       mls      - Full SELinux protection with Multi-Level Security
#       mcs      - Full SELinux protection with Multi-Category Security 
#                  (mls, but only one sensitivity level)
SELINUXTYPE=mcs

Update the policy store in /etc/portage/make.conf to include both mcs and mls.

FILE /etc/portage/make.conf
# SELinux
POLICY_TYPES="strict targeted mcs mls"
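The SELINUXTYPE change above can also be scripted. A minimal sketch that edits a scratch copy of the config first (the /tmp path and the two sample lines are illustrative, not from the real file):

```shell
# Work on a copy; apply to the real /etc/selinux/config only when satisfied
cfg=/tmp/selinux-config
printf 'SELINUX=permissive\nSELINUXTYPE=strict\n' > "$cfg"

# Rewrite whatever SELINUXTYPE currently is to mcs
sed -i 's/^SELINUXTYPE=.*/SELINUXTYPE=mcs/' "$cfg"
grep '^SELINUXTYPE=' "$cfg"   # -> SELINUXTYPE=mcs
```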

Rebuilding policies and utilities

Rebuild the sec-policy/selinux-base package, then re-install the core SELinux policies through the sec-policy/selinux-base-policy package.

root #FEATURES="-selinux" emerge -1av selinux-base
root #FEATURES="-selinux -sesandbox" emerge -1av selinux-base
root #FEATURES="-selinux -sesandbox" emerge -1av selinux-base-policy

Rebuild sec-policy/selinux-policykit and sec-policy/selinux-dbus; otherwise /etc/selinux/mcs/contexts/files/file_contexts and /etc/selinux/mls/contexts/files/file_contexts will not be present on the system and relabeling will be impossible (see bug #891963).

root #FEATURES="-selinux -sesandbox" emerge -1av selinux-policykit selinux-dbus

Reload modules

Rebuild & Reload SELinux Module

root #semodule -BR

Redefine the administrator accounts

Note
At some point during the process the administrator accounts were removed and therefore had to be re-added.
root #semanage login -a -s staff_u <username>
root #restorecon -R -F /home/<username>
root #sestatus -vv
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

Process contexts:
Current context:                staff_u:sysadm_r:sysadm_t:s0
Init context:                   system_u:system_r:init_t:s0
/sbin/agetty                    system_u:system_r:getty_t:s0

File contexts:
Controlling terminal:           staff_u:object_r:user_devpts_t:s0
/sbin/init                      system_u:object_r:init_exec_t:s0
/sbin/agetty                    system_u:object_r:getty_exec_t:s0
/bin/login                      system_u:object_r:login_exec_t:s0
/sbin/openrc                    system_u:object_r:rc_exec_t:s0
/usr/sbin/sshd                  system_u:object_r:sshd_exec_t:s0
/sbin/unix_chkpwd               system_u:object_r:chkpwd_exec_t:s0
/usr/sbin/unix_chkpwd           system_u:object_r:chkpwd_exec_t:s0
/etc/passwd                     system_u:object_r:etc_t:s0
/etc/shadow                     system_u:object_r:shadow_t:s0
/bin/sh                         system_u:object_r:bin_t:s0 -> system_u:object_r:shell_exec_t:s0
/bin/bash                       system_u:object_r:shell_exec_t:s0
/usr/bin/newrole                system_u:object_r:newrole_exec_t:s0
/lib/libc.so.6                  system_u:object_r:lib_t:s0
/lib/ld-linux.so.2              system_u:object_r:ld_so_t:s0

Rebuild all selinux packages

root #emerge --ask --verbose --update --deep --newuse @world

Relabel the filesystem.

root #rlpkg -a

QEMU/KVM

Determine QEMU Machine Type

Choose an appropriate machine type for emulation.

root #qemu-system-x86_64 -machine help
Supported machines are:
microvm              microvm (i386)
pc                   Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-9.1)
pc-i440fx-9.1        Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-9.0        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-8.2        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-8.1        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-8.0        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-7.2        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-7.1        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-7.0        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-6.2        Standard PC (i440FX + PIIX, 1996)
q35                  Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-9.1)
pc-q35-9.1           Standard PC (Q35 + ICH9, 2009)
pc-q35-9.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-8.2           Standard PC (Q35 + ICH9, 2009)
pc-q35-8.1           Standard PC (Q35 + ICH9, 2009)
pc-q35-8.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-7.2           Standard PC (Q35 + ICH9, 2009)
pc-q35-7.1           Standard PC (Q35 + ICH9, 2009)
pc-q35-7.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-6.2           Standard PC (Q35 + ICH9, 2009)
isapc                ISA-only PC
none                 empty machine
x-remote             Experimental remote machine
Note
Use pc for basic PC emulation (PCI) or q35 for modern chipsets (PCIe).

QEMU passthrough (Network Card)

Intel i350 4-port NIC: WAN on enp142s0f0, LAN on enp142s0f[1-3]

Open vSwitch Bridge (LAN)

Create an Open vSwitch bridge for the LAN along with a 3-port bond/trunk.

root #ovs-vsctl add-br vbrlan0
root #ovs-vsctl add-bond vbrlan0 bond0 enp142s0f1 enp142s0f2 enp142s0f3
root #ovs-vsctl set port bond0 lacp=active
root #ovs-vsctl show
    Bridge vbrlan0
        Port vbrlan0
            Interface vbrlan0
                type: internal
        Port bond0
            Interface enp142s0f1
            Interface enp142s0f3
            Interface enp142s0f2
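The three ovs-vsctl calls above can be collected into a re-runnable script. A sketch using --may-exist so repeated runs are harmless; the bond_mode=balance-tcp setting is a suggested addition that pairs with LACP and is not from the original commands (the output below shows the default active-backup):

```shell
# Write an idempotent setup script for the LAN bridge and bond
cat > /tmp/ovs-lan0-setup.sh <<'EOF'
#!/bin/bash
set -e
ovs-vsctl --may-exist add-br vbrlan0
ovs-vsctl --may-exist add-bond vbrlan0 bond0 enp142s0f1 enp142s0f2 enp142s0f3
ovs-vsctl set port bond0 lacp=active bond_mode=balance-tcp
EOF
# Syntax-check only; actually running it requires a live ovs-vswitchd
bash -n /tmp/ovs-lan0-setup.sh && echo "syntax OK"
```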

Verify the bond and LACP status.

root #ovs-appctl bond/show
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
lb_output action: disabled, bond-id: -1
updelay: 0 ms
downdelay: 0 ms
lacp_status: negotiated
lacp_fallback_ab: false
active-backup primary: <none>
active member mac: XX:XX:XX:XX:XX:XX(enp142s0f2)

member enp142s0f1: enabled
  may_enable: true

member enp142s0f2: enabled
  active member
  may_enable: true

member enp142s0f3: enabled
  may_enable: true

More detailed LACP status.

root #ovs-appctl lacp/show
---- bond0 ----
  status: active negotiated
  sys_id: XX:XX:XX:XX:XX:XX
  sys_priority: 65534
  aggregation key: 1
  lacp_time: slow

member: enp142s0f1: current attached
  port_id: 3
  port_priority: 65535
  may_enable: true

  actor sys_id: XX:XX:XX:XX:XX:XX
  actor sys_priority: 65534
  actor port_id: 3
  actor port_priority: 65535
  actor key: 1
  actor state: activity aggregation synchronized collecting distributing

  partner sys_id: XX:XX:XX:XX:XX:XX
  partner sys_priority: 32768
  partner port_id: 1
  partner port_priority: 128
  partner key: 1000
  partner state: activity aggregation synchronized collecting distributing

member: enp142s0f2: current attached
  port_id: 2
  port_priority: 65535
  may_enable: true

  actor sys_id: XX:XX:XX:XX:XX:XX
  actor sys_priority: 65534
  actor port_id: 2
  actor port_priority: 65535
  actor key: 1
  actor state: activity aggregation synchronized collecting distributing

  partner sys_id: XX:XX:XX:XX:XX:XX
  partner sys_priority: 32768
  partner port_id: 2
  partner port_priority: 128
  partner key: 1000
  partner state: activity aggregation synchronized collecting distributing

member: enp142s0f3: current attached
  port_id: 1
  port_priority: 65535
  may_enable: true

  actor sys_id: XX:XX:XX:XX:XX:XX
  actor sys_priority: 65534
  actor port_id: 1
  actor port_priority: 65535
  actor key: 1
  actor state: activity aggregation synchronized collecting distributing

  partner sys_id: XX:XX:XX:XX:XX:XX
  partner sys_priority: 32768
  partner port_id: 3
  partner port_priority: 128
  partner key: 1000
  partner state: activity aggregation synchronized collecting distributing
Hardware Passthrough (WAN)
FILE /etc/conf.d/net
config_enp142s0f0="null"
config_enp142s0f1="null"
config_enp142s0f2="null"
config_enp142s0f3="null"
root #lspci|grep -i 350
8e:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
8e:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
8e:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
8e:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
FILE /srv/vm/opnsense/bind_vfio_NicIntel350.sh
#!/bin/bash
#

# Isolate each ethernet port
port0="0000:8e:00.0"

# Obtain vendor and device ID
port0_vd="$(cat /sys/bus/pci/devices/$port0/vendor) $(cat /sys/bus/pci/devices/$port0/device)"

# Bind to VFIO: release the port from its host driver, then register
# its vendor/device ID with vfio-pci so it claims the device
function bind_vfio {
  echo "$port0" > "/sys/bus/pci/devices/$port0/driver/unbind"
  echo "$port0_vd" > /sys/bus/pci/drivers/vfio-pci/new_id
}

# Unbind from VFIO: drop the ID, release the port, and reprobe so the
# host driver (igb) can reclaim it
function unbind_vfio {
  echo "$port0_vd" > /sys/bus/pci/drivers/vfio-pci/remove_id
  echo "$port0" > /sys/bus/pci/drivers/vfio-pci/unbind
  echo "$port0" > /sys/bus/pci/drivers_probe
}


QEMU passthrough (Graphics Card)

QEMU passthrough (USB)

QEMU/KVM guest (OPNSense)

This VM uses hardware passthrough for the WAN. For the LAN, it creates a tap device that is added to the OVS network under vbrlan0.

root #qemu-img create -f qcow2 OPNsense-VM.img 64G
FILE startOPNsense-VM.sh
#!/bin/bash
screen -S OPNsenseVM bash -c 'sudo qemu-system-x86_64 -enable-kvm \
	-name "OPNsense" \
	-cpu host \
	-smp 2 \
	-m 6G \
	-device vfio-pci,host=8e:00.0,id=net0 \
	-device virtio-net,netdev=net1 -netdev tap,id=net1,script=../ovs-ifup,downscript=../ovs-ifdown \
	-hda OPNsense-VM.img \
	-drive if=pflash,format=raw,unit=0,readonly=on,file=/usr/share/edk2-ovmf/OVMF_CODE.fd \
	-boot c \
"$@"'
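The ovs-ifup/ovs-ifdown helpers referenced by the tap netdev are not shown on this page. A minimal hypothetical sketch, with the bridge name taken from the LAN section and the scripts written to /tmp here purely for illustration:

```shell
# Hypothetical ovs-ifup: bring the tap up and add it to the bridge
cat > /tmp/ovs-ifup <<'EOF'
#!/bin/sh
ip link set "$1" up
ovs-vsctl --may-exist add-port vbrlan0 "$1"
EOF

# Hypothetical ovs-ifdown: remove the tap from the bridge, bring it down
cat > /tmp/ovs-ifdown <<'EOF'
#!/bin/sh
ovs-vsctl --if-exists del-port vbrlan0 "$1"
ip link set "$1" down
EOF

chmod +x /tmp/ovs-ifup /tmp/ovs-ifdown
bash -n /tmp/ovs-ifup && bash -n /tmp/ovs-ifdown && echo "syntax OK"
```

QEMU invokes each script with the tap interface name as its first argument.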

QEMU/KVM guest (Nextcloud)

root #qemu-img create -f qcow2 Nextcloud-VM.img 64G
FILE startNextcloud-VM.sh
#!/bin/bash
screen -S NextcloudVM bash -c 'qemu-system-x86_64 -enable-kvm \
	-name "Gentoo Nextcloud" \
	-cpu host \
	-smp 2 \
	-m 6G \
	-netdev user,id=vmnic,hostname=nextcloud \
	-device virtio-net,netdev=vmnic \
	-device virtio-rng-pci \
	-hda Nextcloud-VM.img \
	-drive if=pflash,format=raw,unit=0,readonly=on,file=/usr/share/edk2-ovmf/OVMF_CODE.fd \
"$@"'
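The user-mode NIC above only gives the guest outbound NAT. Reaching the guest's SSH from the host needs a hostfwd rule, a standard sub-option of -netdev user; the port numbers here are illustrative:

```shell
# Build the -netdev option with a host-port-2222 -> guest-port-22 forward
netopt="user,id=vmnic,hostname=nextcloud,hostfwd=tcp::2222-:22"
echo "-netdev $netopt"
```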

Libvirt/QEMU/KVM

/etc/libvirt/
/etc/libvirt/virtchd.conf
/etc/libvirt/virtstoraged.conf
/etc/libvirt/virtqemud.conf
/etc/libvirt/virtnetworkd.conf
/etc/libvirt/qemu.conf
/etc/libvirt/virtnwfilterd.conf
/etc/libvirt/qemu
/etc/libvirt/qemu/networks
/etc/libvirt/qemu/networks/ovs.xml
/etc/libvirt/qemu/networks/default.xml
/etc/libvirt/qemu/networks/autostart
/etc/libvirt/virtinterfaced.conf
/etc/libvirt/virtsecretd.conf
/etc/libvirt/lxc
/etc/libvirt/libvirt-admin.conf
/etc/libvirt/virtlogd.conf
/etc/libvirt/libvirtd.conf
/etc/libvirt/virtproxyd.conf
/etc/libvirt/qemu-lockd.conf
/etc/libvirt/libvirt.conf
/etc/libvirt/storage
/etc/libvirt/virtlockd.conf
/etc/libvirt/nwfilter
/etc/libvirt/nwfilter/allow-dhcp.xml
/etc/libvirt/nwfilter/no-arp-mac-spoofing.xml
/etc/libvirt/nwfilter/no-mac-spoofing.xml
/etc/libvirt/nwfilter/allow-ipv6.xml
/etc/libvirt/nwfilter/clean-traffic.xml
/etc/libvirt/nwfilter/allow-incoming-ipv4.xml
/etc/libvirt/nwfilter/no-arp-ip-spoofing.xml
/etc/libvirt/nwfilter/allow-dhcpv6-server.xml
/etc/libvirt/nwfilter/allow-ipv4.xml
/etc/libvirt/nwfilter/allow-dhcp-server.xml
/etc/libvirt/nwfilter/qemu-announce-self-rarp.xml
/etc/libvirt/nwfilter/allow-incoming-ipv6.xml
/etc/libvirt/nwfilter/no-ipv6-multicast.xml
/etc/libvirt/nwfilter/no-mac-broadcast.xml
/etc/libvirt/nwfilter/no-ipv6-spoofing.xml
/etc/libvirt/nwfilter/clean-traffic-gateway.xml
/etc/libvirt/nwfilter/allow-dhcpv6.xml
/etc/libvirt/nwfilter/no-ip-multicast.xml
/etc/libvirt/nwfilter/no-arp-spoofing.xml
/etc/libvirt/nwfilter/no-ip-spoofing.xml
/etc/libvirt/nwfilter/allow-arp.xml
/etc/libvirt/nwfilter/no-other-l2-traffic.xml
/etc/libvirt/nwfilter/no-other-rarp-traffic.xml
/etc/libvirt/nwfilter/qemu-announce-self.xml
/etc/libvirt/virt-login-shell.conf
/etc/libvirt/virtnodedevd.conf
/etc/libvirt/secrets

/var/run/libvirt/
/var/run/libvirt/lxc
/var/run/libvirt/virtlogd-sock
/var/run/libvirt/virtlogd-admin-sock
/var/run/libvirt/common
/var/run/libvirt/common/system.token
/var/run/libvirt/network
/var/run/libvirt/network/autostarted
/var/run/libvirt/network/default.pid
/var/run/libvirt/network/driver.pid
/var/run/libvirt/interface
/var/run/libvirt/interface/driver.pid
/var/run/libvirt/secrets
/var/run/libvirt/secrets/driver.pid
/var/run/libvirt/storage
/var/run/libvirt/storage/autostarted
/var/run/libvirt/storage/driver.pid
/var/run/libvirt/nodedev
/var/run/libvirt/nodedev/driver.pid
/var/run/libvirt/nwfilter
/var/run/libvirt/nwfilter/driver.pid
/var/run/libvirt/nwfilter-binding
/var/run/libvirt/qemu
/var/run/libvirt/qemu/channel
/var/run/libvirt/qemu/slirp
/var/run/libvirt/qemu/passt
/var/run/libvirt/qemu/dbus
/var/run/libvirt/qemu/autostarted
/var/run/libvirt/qemu/driver.pid
/var/run/libvirt/hostdevmgr
/var/run/libvirt/libvirt-sock
/var/run/libvirt/libvirt-sock-ro
/var/run/libvirt/libvirt-admin-sock

/var/lib/libvirt/
/var/lib/libvirt/dnsmasq
/var/lib/libvirt/dnsmasq/default.addnhosts
/var/lib/libvirt/dnsmasq/default.hostsfile
/var/lib/libvirt/dnsmasq/default.conf
/var/lib/libvirt/dnsmasq/virbr0.status
/var/lib/libvirt/qemu
/var/lib/libvirt/qemu/checkpoint
/var/lib/libvirt/qemu/nvram
/var/lib/libvirt/qemu/snapshot
/var/lib/libvirt/qemu/save
/var/lib/libvirt/qemu/dump
/var/lib/libvirt/qemu/ram

/var/cache/libvirt/
/var/cache/libvirt/qemu
/var/cache/libvirt/qemu/capabilities


Libvirt/QEMU networking (OPNsense)

Open vSwitch Bridge (LAN)

Assuming an OVS network named vbrlan0 has already been set up.

root #ovs-vsctl show
    Bridge vbrlan0
        Port vbrlan0
            Interface vbrlan0
                type: internal
        Port bond0
            Interface enp142s0f1
            Interface enp142s0f3
            Interface enp142s0f2

Create a network configuration.

FILE /etc/libvirt/qemu/networks/ovs-network.xml
<network>
	<name>ovs</name>
	<uuid></uuid>
	<forward mode='bridge'/>
	<bridge name='vbrlan0'/>
	<virtualport type='openvswitch'/>
</network>

Define/activate the network configuration.

root #virsh net-define ovs-network.xml
Network ovs defined from ovs-network.xml

Confirm ovs-network was created.

root #virsh net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------
 default   active     yes         yes
 ovs       inactive   no          yes

Enable the ovs network so that it starts at boot:

root #virsh net-autostart ovs
Network ovs marked as autostarted

Start the ovs network:

root #virsh net-start ovs
Network ovs started

Disable/stop the default network:

root #virsh net-destroy default
Network default destroyed

Disable default network autostart:

root #virsh net-autostart --disable default
Network default unmarked as autostarted
Hardware Passthrough (WAN)

WAN is set up using port 0 of the 4-port Intel i350. The PCI device address is already known, so the command output below is abbreviated.

Identify the device.

root #virsh nodedev-list --tree | grep pci
  +- pci_0000_8d_01_0
  |   +- pci_0000_8e_00_0
  |   +- pci_0000_8e_00_1
  |   +- pci_0000_8e_00_2
  |   +- pci_0000_8e_00_3

Gather required information such as the domain, bus, and function.

root #virsh nodedev-dumpxml pci_0000_8e_00_0
<device>
  <name>pci_0000_8e_00_0</name>
  <path>/sys/devices/pci0000:8d/0000:8d:01.0/0000:8e:00.0</path>
  <parent>pci_0000_8d_01_0</parent>
  <driver>
    <name>igb</name>
  </driver>
  <capability type='pci'>
    <class>0x020000</class>
    <domain>0</domain>
    <bus>142</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x1521'>I350 Gigabit Network Connection</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <capability type='virt_functions' maxCount='7'/>
    <iommuGroup number='12'>
      <address domain='0x0000' bus='0x8e' slot='0x00' function='0x0'/>
    </iommuGroup>
    <numa node='0'/>
    <pci-express>
      <link validity='cap' port='4' speed='5' width='4'/>
      <link validity='sta' speed='5' width='4'/>
    </pci-express>
  </capability>
</device>
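Note that nodedev-dumpxml reports bus, slot, and function in decimal (142/0/0 above), while the `<address>` element uses hexadecimal; printf converts between the two:

```shell
# Convert the decimal bus/slot/function from dumpxml into the hex form
# used by <address .../>
printf "bus='0x%02x' slot='0x%02x' function='0x%x'\n" 142 0 0
# -> bus='0x8e' slot='0x00' function='0x0'
```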

Detach the device from the system.

root #virsh nodedev-detach pci_0000_8e_00_0
Device pci_0000_8e_00_0 detached

Add the device to the VM XML.

FILE /etc/libvirt/qemu/opnsense.qemu.kvm-x86_64.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <!-- address taken from the nodedev-dumpxml output above -->
  <source>
    <address domain='0x0000' bus='0x8e' slot='0x00' function='0x0'/>
  </source>
</hostdev>
Note
If using SELinux, allow management of the pci devices from the guest.
root #setsebool -P virt_use_sysfs 1
Boolean virt_use_sysfs is not defined

Now the device is ready for use.

Hardware Passthrough (GPU)

The NVIDIA RTX A2000 has two devices, one for video (ac:00.0) and one for audio (ac:00.1). The pci device # is already known so the command output below is abbreviated.

Identify the device.

root #virsh nodedev-list --tree | grep pci
  +- pci_0000_ab_01_0
  |   |
  |   +- pci_0000_ac_00_0
  |   |   |
  |   |   +- drm_card1
  |   |   +- drm_renderD128
  |   |     
  |   +- pci_0000_ac_00_1
  |     

Gather required information such as the domain, bus, and function.

root #virsh nodedev-dumpxml pci_0000_ac_00_0
<device>
  <name>pci_0000_ac_00_0</name>
  <path>/sys/devices/pci0000:ab/0000:ab:01.0/0000:ac:00.0</path>
  <parent>pci_0000_ab_01_0</parent>
  <driver>
    <name>nouveau</name>
  </driver>
  <capability type='pci'>
    <class>0x030000</class>
    <domain>0</domain>
    <bus>172</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x2531'>GA106 [RTX A2000]</product>
    <vendor id='0x10de'>NVIDIA Corporation</vendor>
    <iommuGroup number='9'>
      <address domain='0x0000' bus='0xac' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0xac' slot='0x00' function='0x1'/>
    </iommuGroup>
    <numa node='0'/>
    <pci-express>
      <link validity='cap' port='0' speed='16' width='16'/>
      <link validity='sta' speed='16' width='16'/>
    </pci-express>
  </capability>
</device>

Detach the device from the system.

root #virsh nodedev-detach pci_0000_ac_00_0
Device pci_0000_ac_00_0 detached

Add the device to the VM XML.

Warning
View the FileBox below as wiki code; the rendered output is incorrect because the XML <source> element conflicts with the wiki markup.
FILE /etc/libvirt/qemu/opnsense.qemu.kvm-x86_64.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <!-- address taken from the nodedev-dumpxml output above -->
  <source>
    <address domain='0x0000' bus='0xac' slot='0x00' function='0x0'/>
  </source>
</hostdev>

Now the device is ready for use.

Libvirt/QEMU guest (OPNsense)

Note
This guide relies upon the LAN/WAN configuration above.

Libvirt/QEMU guest (Ubuntu)

Basic VM for Ubuntu Linux, using a virtio network device via the bridge configuration. Note that comments are not shown in the rendered display below; view the code to see them.

Warning
View the FileBox below as wiki code; the rendered output is incorrect because the XML <source> element conflicts with the wiki markup.
FILE /etc/libvirt/qemu/ubuntu.qemu.kvm-x86_64.xml Ubuntu VM Example
<domain type='kvm'>
        <seclabel type='dynamic' model='selinux'>
                <baselabel>system_u:system_r:svirt_t:s0</baselabel>
        </seclabel>
        <name>ubuntu</name>
        <uuid>XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX</uuid>
        <title>Ubuntu VM</title>
        <description>Ubuntu Server</description>
        <features>
                <acpi/>
                <smm state='on'/>
        </features>
        <os>
                <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
                <loader readonly='yes' secure='yes' type='pflash'>/usr/share/edk2-ovmf/OVMF_CODE.fd</loader>
                <nvram template='/usr/share/edk2-ovmf/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/media_VARS.fd</nvram>
                <boot dev='cdrom'/>
                <boot dev='hd'/>
        </os>
        <vcpu>2</vcpu>
        <memory unit='GiB'>6</memory>
        <clock offset='localtime'/>
        <devices>
                <emulator>/usr/bin/qemu-system-x86_64</emulator>
                <disk type='file' device='disk'>
                        <driver name='qemu' type='qcow2'/>
                        <!-- image path elided in the original -->
                        <source file='...'/>
                        <target dev='vda' bus='virtio'/>
                </disk>
                <interface type='bridge'>
                        <mac address='XX:XX:XX:XX:XX:XX'/>
                        <source bridge='vbrlan0'/>
                        <virtualport type='openvswitch'/>
                        <model type='virtio'/>
                </interface>
                <serial type='pty'>
                        <target port='0'/>
                </serial>
                <console type='pty'>
                        <target port='0'/>
                </console>
                <graphics type='vnc' port='-1' autoport='yes' passwd='vncpasswordhere' keymap='en-us'/>
        </devices>
</domain>
Note
Insert the following disk definition to use an ISO as the installer CD-ROM.
Warning
View the FileBox below as wiki code; the rendered output is incorrect because the XML <source> element conflicts with the wiki markup.
FILE /etc/libvirt/qemu/ubuntu.qemu.kvm-x86_64.xml
<disk type='file' device='cdrom'>
        <driver name='qemu' type='raw' cache='none'/>
        <!-- ISO path elided in the original -->
        <source file='...'/>
        <target dev='sda' bus='sata' tray='closed'/>
        <readonly/>
        <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
Note
Create a QEMU network device and connect it to the bridge configured earlier.
FILE /etc/libvirt/qemu/ubuntu.qemu.kvm-x86_64.xml
<interface type='ethernet'>
        <script path='/etc/libvirt/qemu/ovs-ifup'/>
        <downscript path='/etc/libvirt/qemu/ovs-ifdown'/>
</interface>
root #virsh define ubuntu.qemu.kvm-x86_64.xml
Domain 'ubuntu' defined from ubuntu.qemu.kvm-x86_64.xml
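Since virsh define rejects malformed XML, it can help to check well-formedness first. A sketch using Python's standard library on an inline sample (for schema validation against libvirt's RNG files, libvirt also ships virt-xml-validate):

```shell
# Check XML well-formedness only, not the libvirt schema
python3 - <<'EOF'
import xml.etree.ElementTree as ET
ET.fromstring("<domain type='kvm'><name>ubuntu</name></domain>")
print("well-formed")
EOF
```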

Libvirt/QEMU guest (Windows)

<hostdev mode='subsystem' type='pci' managed='yes'>
      <address domain='0x0000' bus='0xab' slot='0x01' function='0x0'/>
</hostdev>