
Unsolved


1 Rookie • 2 Posts


August 29th, 2025 17:11

Intel Ethernet E810-2CQDA2 NIC can't reach full throughput

We are currently evaluating our Dell PowerEdge R650 servers equipped with the Intel Ethernet Network Adapter E810-2CQDA2. According to Intel's product brief, this NIC supports 200 Gbps full-duplex performance (2 × 100 Gbps).

Could you please confirm whether there are any platform-specific limitations when using this adapter in Dell servers that might prevent achieving the full bidirectional 100 Gbps throughput on both ports simultaneously? In our testing, the non-drop-rate point is only about 45 Gbps bidirectional.

Moderator • 5.2K Posts

September 1st, 2025 03:22

Hello, thanks for choosing Dell. May I ask whether you are using a Dell-certified NIC or a third-party card? I don't see the E810-2CQDA2 listed for the R650. Let us know what you have; you can take a picture of the label.

Respectfully,

1 Rookie • 2 Posts

September 1st, 2025 11:32

@DELL-Young E I don't have a picture available right now, but I can share the "Product Name" and "Part Number" shown in iDRAC:

- Product Name: Intel(R) Ethernet 100G 2P E810-C Adapter - 40:A6:B7:B2:8F:10

- Part Number: 085F8F

- Serial Number: MYFLMIT2AM003I


Moderator • 3.5K Posts

September 1st, 2025 12:35

Hi,
here are some potential factors and recommendations:

1. Platform-Specific Limitations
- PCIe Bandwidth: The E810-2CQDA2 is a PCIe 4.0 x16 card. Ensure that the PowerEdge R650 slot it is installed in provides the full PCIe 4.0 x16 link (16 lanes). If the slot is bifurcated or limited to fewer lanes (e.g., x8), this alone can bottleneck throughput (a quick check follows below).
- CPU and Chipset: The R650 supports Intel Xeon Scalable (Ice Lake) processors. Verify that the CPU and chipset can handle the full PCIe 4.0 bandwidth; some configurations share PCIe lanes with other devices, reducing the bandwidth available to the NIC.
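
On Linux, one quick way to confirm the negotiated link is lspci; the PCIe address 17:00.0 below is only a placeholder, so look up the adapter's actual address first:

lspci | grep -i ethernet
# Note the E810 bus address, then compare the slot's capability with the negotiated state
sudo lspci -vvv -s 17:00.0 | grep -E "LnkCap:|LnkSta:"
# A full PCIe 4.0 x16 link should report "Speed 16GT/s, Width x16" under LnkSta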
2. NIC Configuration
- Driver and Firmware: Ensure you are running the latest Intel drivers and firmware for the E810-2CQDA2; outdated drivers can significantly impact performance.
- Interrupt Moderation and Offloads: Check whether interrupt moderation, Large Send Offload (LSO), and other offload features are enabled. These can improve throughput but may need tuning for your specific workload.
- Queue Pairs and RSS: Configure Receive Side Scaling (RSS) to distribute traffic across multiple CPU cores; too few queue pairs or CPU cores can limit performance (example commands follow below).
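
If the server runs Linux with the ice driver, the following ethtool checks are a rough sketch of the above; the interface name ens1f0 is an assumption and should be replaced with yours:

ethtool -i ens1f0        # driver and firmware (NVM) versions
ethtool -k ens1f0        # offload features (TSO/LSO, GRO, scatter-gather, etc.)
ethtool -c ens1f0        # interrupt coalescing (moderation) settings
ethtool -l ens1f0        # current vs. maximum queue (channel) counts
ethtool -L ens1f0 combined 16   # example only: 16 combined queues so RSS can spread load across cores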
3. Network and Traffic Considerations
- Traffic Pattern: The 45 Gbps bidirectional throughput you observe may be due to the traffic pattern (e.g., small packet sizes or single-flow traffic). For full throughput, use multiple parallel flows, for example iperf3 with the -P flag for parallel streams (a sample run follows below).
- Switch Configuration: Ensure the connected switch supports 100 Gbps full duplex and is not limiting throughput through QoS policies, ACLs, or other settings.
- Cabling and Optics: Verify that the cables and optics (e.g., QSFP28) are rated for 100 Gbps and are functioning correctly.
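
As a sketch of such a test (the address below is a placeholder, and --bidir requires iperf3 3.7 or newer):

# on the receiving server
iperf3 -s
# on the sending server: 8 parallel streams, bidirectional traffic, 60-second run
iperf3 -c 192.0.2.10 -P 8 --bidir -t 60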
4. Specific Considerations
- BIOS Settings: Check the Dell BIOS for settings that might limit PCIe performance, such as power-saving modes or PCIe link speed/width settings.
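
One way to spot a slot- or BIOS-level PCIe downgrade after boot, assuming a reasonably recent Linux kernel, is the kernel's own probe-time warning:

# The kernel logs a line such as
# "63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link" when the slot limits the card
dmesg | grep -i "available PCIe bandwidth"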
5. Testing Methodology
- Tool Selection: Use performance testing tools like iperf3 or ntttcp with appropriate settings (e.g., large packet sizes, multiple threads) to measure bidirectional throughput.
- Baseline Testing: Test each port independently to isolate whether the issue is port-specific or systemic (see the per-port example below).
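
A per-port baseline could look like the following sketch, assuming each port has its own IP (all addresses are placeholders):

# port 1 only: bind the client to port 1's local address
iperf3 -c 192.0.2.10 -B 192.0.2.1 -P 8 -t 60
# port 2 only: repeat against the peer reachable over port 2
iperf3 -c 198.51.100.10 -B 198.51.100.1 -P 8 -t 60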
Next Steps
- Verify PCIe Slot Configuration: Confirm the NIC is installed in a full x16 PCIe 4.0 slot.
- Update Drivers/Firmware: Download the latest from Intel's support site.
- Optimize Traffic: Use parallel streams and large packets for testing.
- Check Switch/Cabling: Ensure no bottlenecks exist in the network path.

