COVID Budget Anniversary 2.0 Build

I want to first say thank you for all the great information provided on the website that guided my component selection for my budget build.

Unfortunately, due to the current COVID pandemic, some components have become few and far between, especially newer items (the case and PSU specifically). The purpose of this build was to create a virtualization server to host either Hyper-V Server or Proxmox, as cheaply as possible, while still leaving room for future upgrades.

| Type | Part | Cost |
| --- | --- | --- |
| CPU | 2 x E5-2630 (SR0KV) | $16.22 |
| CPU cooler | 2 x Supermicro 2U Passive | $20.00 |
| CPU Fans | 2 x Nidec UltraFlow 60mm | $13.82 |
| Motherboard | Supermicro X9DRD-LF | $54.06 |
| Case | DeepCool MATREXX 55 | $49.18 |
| RAM | 2 x Samsung PC3L-10600R 16GB | $44.13 |
| SSD | ADATA 240GB SATA 3 2.5" | $35.16 |
| Power Supply | Thermaltake 500W | $60.53 |
| Fans | Arctic F12 PWM PST 5 Pack | $36.12 |
| Cable | EPS 8-pin splitter | $6.69 |
| **Total** | | **$335.91** |
  • Prices listed account for tax and shipping charges, if any

Also installed but not included in the cost, as I already had these items:

  • PCIe M.2 Adapter with 500GB Mushkin Helix-L NVMe
  • WD 500GB Blue
  • Arctic Thermal Paste



Some things I still need to take care of include buying an I/O shield and providing active cooling for the CPUs. I had thought the motherboard listing included the shield, but it did not; easy fix. As for the CPU cooling, the case I selected limits the intake for the three front-mounted case fans to just the sides of the front panel. Under a CPU stress test, CPU2 overheated after a couple of minutes, but if the front panel is removed the CPU temps stay manageable (70 C). I plan on installing 60mm PWM fans in front of the CPU heatsinks as a cheaper alternative.

Update: 06/05/20
I have purchased and installed 2x 60mm fans onto the 2U heatsinks (the prices listed above have been updated). After running a burn test for two hours, the CPU temps stayed steady at 58 C (ambient 24 C) with the CPU fans spinning at 3700 rpm and the case fans at 880 rpm.
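For anyone wanting to run a similar burn test on Linux, a minimal sketch (assuming the stress-ng and lm-sensors packages are installed; package names vary by distro):

# load every logical core for two hours (the duration used above)
stress-ng --cpu "$(nproc)" --timeout 2h &

# watch the CPU temperatures refresh every 5 seconds while it runs
watch -n 5 sensors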


Looks like you already have a plan for my (minor) criticisms, so good on you for that!
I’m looking forward to seeing updates for your build in the future.

Oh, and two other things:

  1. I’d personally flip your top fan and use it as exhaust; you already have 3 intake fans, so 2 exhaust would be ideal.
  2. You could try making a cardboard air duct that reduces the available area and funnels the air over the CPUs and motherboard; you should be able to reduce CPU temperatures significantly without active cooling.

My initial configuration had the top fan set up as exhaust, but since CPU2 (the top one) is the one experiencing the overheating issue, I figured intake would be better for now.
And good suggestion on the cardboard ducting.

This is freakin’ AWESOME!! The prices you got on these parts are really good, especially the RAM. If you stick with 16GB sticks and fill this thing out, 128GB is great. The only issue I have with this board is the single PCIe slot. I do think it might support bifurcation, so maybe you could get a riser/extension card and add another NIC if you need it at some point.

For a hypervisor, I personally like Proxmox: free as in free, ZFS support, a “fancy” Debian base, and easy to use (imo). To be fair though, I haven’t used Hyper-V, and I haven’t used Xen/ESXi/etc. in years.

I am sure the prices could go lower, but I was limited in my selection of some components (the case and PSU specifically) due to the low stock caused by the pandemic.

The single PCIe slot was also one of my initial concerns, but since my intention for this build was a low-level VE test bench of sorts, the two onboard SATA3 connections should suffice for now. For future expansion, my intention was to utilize the single slot for an 8x HBA to run some low-cost SATA SSDs in RAID 5 or 6.
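If I end up on Proxmox, its ZFS support would cover the RAID 6 side as a raidz2 pool; a minimal sketch, with the pool name and device names (sdb through sdg) purely as placeholders:

# six SATA SSDs behind the HBA in double-parity raidz2, 4K-aligned (ashift=12)
zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool status tank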

I briefly checked for a proper bifurcation card, but the ones I found were expensive enough that it would be cheaper to upgrade the motherboard than to install a PCIe expander/riser card.

As for the hypervisor, the only reason I started off with Hyper-V is that the office is a Windows shop and Hyper-V Server, which is also GUI-less for the most part, is provided for free by Microsoft. Microsoft also offers Windows Admin Center for web-based management, or you can manage the deployment from another Windows server through the Hyper-V console.

I think overall the prices you got for the main components (CPU/RAM/mobo) are pretty great. I haven’t found that CPU for that price, the motherboard I found was $60+, and the RAM was $40 a stick instead of a pair. So price-wise, you hit the gold mine.

As a test machine, I would agree. IMO, the only reason to swap out the NVMe adapter would be for more networking ports or, like you mention, an HBA.

For bifurcation I meant more so the PCIe slot on the motherboard. Some SM boards have a riser card that splits the x16 slot into two x8 slots. I haven’t looked fully into it, but I feel like there are extensions or something that would be cheaper and able to split it. Something like this. With the space at the bottom of the case, you should be able to slide the adapter in and then plug in two PCIe devices. I looked at the motherboard manual and it mentions bifurcation, but it’s an X9DRD-iF/LF manual according to their site, so I’m not sure if it’s specific to your board, both, or just the -iF. In the BIOS it’s under the North Bridge settings.

Ah, I misread your OP and thought it was more of a question; my bad. That makes sense. Sounds like a good setup. I’ve never messed with Hyper-V, so I might have to try it out.

The RAM sticks I purchased were used; since they are ECC, I didn’t think purchasing new at twice the price was warranted.

As for the bifurcation, checking the BIOS shows the board will support x4x4x4x4, but finding a card that breaks a single x16 into four x4 slots at a discount may be challenging. The one I found was a $100 pre-order.

My other thought was, since the X9DRD-LF shares its mainboard and BIOS firmware with the -iF, to see whether I could solder on the PCIe slot and associated capacitors to populate the board. I am still researching to see if anyone has done this before.

I also updated the original post with information about the additional CPU fans added to the build.


That is a really, really cool breakout board…

Hi @silverkorn

I have the -iF variant of this board, and I’m having trouble getting the NVMe card to work right. Specifically, it works, but only in PCIe Gen 1 mode (4 lanes). I tried what seems like every permutation of BIOS settings with no luck. The card looks similar to yours: 4 lanes with one M.2 port.

Did you get it to work correctly at full speed? If so, did you change anything in the bios? And what card are you using?

Any help would be appreciated.

Thanks

I no longer have that card installed in the machine, but from what I remember I did not have any issue getting the card to provide access to the drive. I did start with default BIOS settings.

I have since changed out that card for an Asus Hyper M.2 x16 Card V2, which allows me to install 4 M.2 drives into the single PCIe slot and access each drive separately via bifurcation. However, each drive only gets 4 of the 16 lanes. Here are my current BIOS settings:

I just ran a CrystalDiskMark test on a Windows VM located on one of the drives and got a read of 8,000 MB/s and a write of 7,400 MB/s, which seemed correct for the Gen 3 slot.


That seems extraordinarily high; I bet the drive maxes out at around 3,000 MB/s. Windows has an issue reporting drive speeds in VMs; the numbers are usually wildly incorrect.
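A read test from the host side is usually more trustworthy; a rough fio sketch against one of the NVMe namespaces (read-only, assuming fio is installed and the drive shows up as /dev/nvme0n1):

# 30-second sequential read straight off the block device, bypassing the page cache
fio --name=seqread --readonly --filename=/dev/nvme0n1 --rw=read --bs=1M --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based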

Thanks @silverkorn.

I still can’t get either of my 2 cheapo PCIe adapters to work in anything other than Gen 1 mode, with speeds to match (~900 MB/s). The same adapter works in Gen 2, twice as fast, on another motherboard (Asus I think, belongs to a friend). So it can’t be the adapter, can it? Those Asus Hyper cards are expensive and apparently in short supply, so I would really like to confirm that my motherboard is capable of this before I buy one.
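For reference, those numbers line up with the rough per-lane math:

  • PCIe Gen 1: 2.5 GT/s with 8b/10b encoding ≈ 250 MB/s per lane, so an x4 link tops out around 1,000 MB/s
  • PCIe Gen 2: 5.0 GT/s with 8b/10b encoding ≈ 500 MB/s per lane, so x4 ≈ 2,000 MB/s
  • PCIe Gen 3: 8.0 GT/s with 128b/130b encoding ≈ 985 MB/s per lane, so x4 ≈ 3,940 MB/s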

In any case, the single M.2 slot adapter looks very simple to me; there are no microchips on it, so why would it cause this degraded state anyway?

Also: do you mind checking what your BIOS version is?
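If the box is running Linux/Proxmox, the version should be readable without a reboot, something like:

# report the BIOS version and release date from the SMBIOS/DMI tables (needs root)
dmidecode -s bios-version
dmidecode -s bios-release-date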

To me, it sounds more like a problem with the motherboard/BIOS config than the adapter. Which slot are you using? Pics?

I tried all ports – most recently cpu1/slot3:

BIOS settings:




dmesg (linux) has this message:

[    1.051726] pci 0000:04:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x4 link at 0000:00:02.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)

lspci shows

LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkSta:	Speed 2.5GT/s (downgraded), Width x4 (ok)

Full output for the drive:

04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 27
	NUMA node: 0
	Region 0: Memory at dfa00000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 25.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, NROPrPrP-, LTR+
			 10BitTagComp-, 10BitTagReq-, OBFF Not Supported, ExtFmt-, EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-, TPHComp-, ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
			 AtomicOpsCtl: ReqEn-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable+ Count=33 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
	Capabilities: [158 v1] Power Budgeting <?>
	Capabilities: [168 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn-, PerformEqu-
		LaneErrStat: 0
	Capabilities: [188 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [190 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Kernel driver in use: nvme
	Kernel modules: nvme

Downgraded the BIOS from v3.3 to v3.2: the link is working at full speed now, but the bifurcation options are missing in that version of the BIOS. Also, I think in 3.3 bifurcation doesn’t actually do anything (besides forcing everything to PCIe Gen 1), because I don’t see any difference in the bus topology visible to the OS (lspci etc.). I only have one NVMe drive, so I can’t (?) definitively test bifurcation.
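For what it’s worth, this is roughly what I’m checking on the OS side (using the same 04:00.0 address as the output above):

# tree view of the PCI topology; with working x4x4x4x4 I'd expect the NVMe
# devices to sit behind separate root port functions
lspci -tv

# negotiated vs. capable link speed for the drive
lspci -vv -s 04:00.0 | grep -E 'LnkCap:|LnkSta:'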

Anyway, I can live without bifurcation, but I’m still curious how @silverkorn got the LF board to work.

@silverkorn: are you sure your drives work at full speed? Would you mind looking up the BIOS version you have there?

Thanks

I finally got around to taking the machine offline to check the BIOS settings:





These are my settings, untouched from the working machine. I did notice we’re using different processors; mine are v1 while yours are v2. The other difference could be the PCIe adapter boards, as I have been using the Asus Hyper M.2 V2 with only 2 of the 4 slots populated with drives. Another odd thing: after snapping these screenshots, upon rebooting back to Proxmox the card was not detected at all, and it took a couple of reboots and power cycles to get the motherboard to finally recognize it again.
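A quick way to confirm both drives came back after a reboot (assuming the nvme-cli package is installed):

# list every NVMe namespace the kernel currently sees
nvme list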

After finally booting back to Proxmox I ran some hdparm tests on the drives:

root@pxmx:~# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
 Timing buffered disk reads: 3534 MB in  3.00 seconds = 1177.62 MB/sec
root@pxmx:~# hdparm -t /dev/nvme1n1
/dev/nvme1n1:
 Timing buffered disk reads: 3730 MB in  3.00 seconds = 1241.53 MB/sec
root@pxmx:~# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 1186 MB in  3.00 seconds = 394.87 MB/sec

And here are the detailed specs on the NVMe drives:

89:00.0 Non-Volatile memory controller: Toshiba Corporation XG4 NVMe SSD Controller (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Toshiba Corporation XG4 NVMe SSD Controller
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 39
        NUMA node: 1
        Region 0: Memory at fbe00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 25.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
                         EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00003000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [168 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [178 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [198 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [1a0 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
                          PortCommonModeRestoreTime=255us PortTPowerOnTime=1200us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Kernel driver in use: nvme
        Kernel modules: nvme
8a:00.0 Non-Volatile memory controller: Silicon Motion, Inc. SM2263EN/SM2263XT SSD Controller (rev 03) (prog-if 02 [NVM Express])
        Subsystem: Silicon Motion, Inc. SM2263EN/SM2263XT SSD Controller
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 39
        NUMA node: 1
        Region 0: Memory at fbd00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 25.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS+
                         Compliance De-emphasis: Unknown
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
                         EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [b0] MSI-X: Enable+ Count=16 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00002100
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [158 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [178 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [180 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                          PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Kernel driver in use: nvme
        Kernel modules: nvme