You may need to shut down the VM, check the device config to ensure it’s set to e1000, then boot it back up. The PCI ID in your original post belongs to the virtio-net device.
Instead of trying to backport the virtio device drivers to that version, I’d recommend editing the VM to use the emulated e1000 NIC.
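If the VM is managed by libvirt, something like this should show which NIC model the guest actually has (just a sketch - “win-vm” is a placeholder for your domain name):

    virsh dumpxml win-vm | grep -A4 '<interface'
    # a virtio NIC will show:  <model type='virtio'/>
    # an e1000 NIC will show:  <model type='e1000'/>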
You must have had a real sweetheart deal on VMware then. Proxmox is cheaper than VMware even under the old pricing. You also don’t have to buy the “Standard” subscription. There are cheaper ones.
When you see the Windows and Apple icons on a game, that indicates native Windows and macOS support. The Steam logo indicates native SteamOS/Linux support. You’ll also see a “SteamOS/Linux” section in the system requirements.
I’m not aware of any that would run all of it at the same time. Most of this equipment is built for use with a server CPU and motherboard, which have far more PCI-E lanes. The Zen 5 consumer CPUs only have 28 PCI-E lanes, so unless you buy a motherboard that breaks out more through the use of a PCI-E switch, that’s all you’ll get.
WAN would be the Internet uplink port. A 2.5G WAN port is a 2.5 gigabit Ethernet port. 2.5 gigabit and, to a lesser extent, 5 gigabit Ethernet are standards that are rapidly becoming available on a lot of hardware. OP’s point is that for a new router shipping near the end of 2024, coming with only 1 GbE instead of 2.5 GbE is a problem.
Not really. WAN has always been WAN. Wireless has always been WLAN.
That’s right. So on the top backplane, you’ll connect the Oculink ports to the Oculink-outfitted HBA. One port per drive.
For the bottom 8 drives, it looks like you’ll have one miniSAS HD connector per four drives, plus another for the rear bays. I initially thought they were plain SATA and would go to the motherboard, but it looks like you’ll need a third connector - so you’ll want a 16-port HBA (Supermicro AOC-S3216L-L16iT).
Reading through all the documentation I can find, it looks like you’ll have the option to run all the bays as either NVMe or SAS disks. The controllers and layout I’ve listed are for running four bays as NVMe and the other 10 as SAS.
If I understand the ports you have correctly: a Supermicro AOC-SLG3-4E4T for the U.2 bays and a Supermicro AOC-S3008L-L8e for the SAS bays. You could replace the SAS card with a Dell HBA 330 as well. The Dell PERC cards that support NVMe storage don’t appear to have the Oculink ports your backplane has.
Centralized logging with something like Graylog or Grafana Loki can help with a lot of this.
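As a rough starting point (the image tags, ports, and config path here are illustrative, not a recommendation), Loki and Promtail can be stood up with Docker and pointed at the logs you care about:

    # run the Loki server
    docker run -d --name loki -p 3100:3100 grafana/loki:latest
    # run Promtail, shipping /var/log to Loki via a config file you supply
    docker run -d --name promtail \
      -v /var/log:/var/log:ro \
      -v $(pwd)/promtail-config.yaml:/etc/promtail/config.yaml:ro \
      grafana/promtail:latest \
      -config.file=/etc/promtail/config.yaml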
The 7000 series runs AVX-512 as two 256-bit data paths, while the 9000 series has a native 512-bit data path for AVX-512.
It looks like you’ve created a partition for Linux - you need to delete that partition and leave the space as “unallocated” or “free space”. The Fedora installer will then see it as space it can install to, and it will handle creating the actual partitions.
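If you’re doing this from a live environment, parted can delete the partition and leave the space unallocated - a sketch, where the disk and partition number are examples you’d confirm with “print” first:

    sudo parted /dev/sda print    # identify the partition you created for Linux
    sudo parted /dev/sda rm 3     # delete it, leaving the space unallocated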
I don’t recall how virt-manager handles it off the top of my head. But if you make changes to a domain’s XML, you do have to shut down and restart the domain before they take effect. Just to be safe, I’d shut down the domain, check the XML, then start it up again.
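For reference, a rough sketch of that cycle using virsh from the command line (“win-vm” is again just a placeholder domain name):

    virsh shutdown win-vm     # graceful shutdown; "virsh destroy win-vm" if it hangs
    virsh edit win-vm         # change <model type='virtio'/> to <model type='e1000'/>
    virsh start win-vm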
You do say you’re just using qemu, so if that’s the case and you aren’t running libvirt in front of it, shut down the VM, make sure your qemu command specifies an e1000 network device, and run it again.
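Something along these lines, as a minimal sketch - the memory size, disk image name, and user-mode networking are placeholders for whatever your actual command uses:

    qemu-system-x86_64 \
      -m 2048 \
      -drive file=guest.qcow2,format=qcow2 \
      -netdev user,id=net0 \
      -device e1000,netdev=net0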
I can check virt-manager when I get some free time this evening, if that’s what you want/need.