This is just too fast! 100 GbE // 100 Gigabit Ethernet!

This is crazy! Testing 100 GigE (100 Gigabit Ethernet) switches with an AMD Ryzen 9 5950X CPU and a Radeon RX600XT GPU! Who will win: the 100 GbE network or gaming PCs with 100 Gig network cards?

Menu:
100GbE network! 0:00
How long will it take to copy 40Gig of data: 0:22
Robocopy file copy: 1:08
Speed results! 1:28
Windows File copy speeds: 1:59
iPerf speed testing: 2:30
iPerf settings: 3:20
iPerf results: 3:42
100G Mellanox network cards: 5:14
Jumbo Packets: 6:04
Aruba switch: 6:26
Switch configuration: 7:07
Back to back DAC 100GbE connection: 8:52
iPerf testing using DAC cable: 10:05
Windows File copy speeds: 11:00
Robocopy test: 11:30

=========================
Free Aruba courses on Udemy:
=========================

==================================
Free Aruba courses on davidbombal.com
==================================

======================
Aruba discounted courses:
======================
To register with the 50% off discount, enter “DaBomb50” in the discount field at checkout.

The following terms & conditions apply:
50% off promo ends 10/31/21
Enter discount code at checkout, credit card payments only (PayPal)
Cannot be combined with any other discount.
Discount is for training with Aruba Education Services only and is not applicable with training partners.

================
Connect with me:
================

aruba
aruba 8360
aruba networks
aruba networking
abc networking
qsfp
iperf
robocopy
aruba 6300m
100gbe switch
25gbe switch
dac cable
aruba instant one
hpe
hp
hpe networking
aruba mobility
aruba security training
free aruba training
clearpass
clearpass training
hpe training
free aruba clearpass training
python
wireshark
mellanox
mellanox connectx
mellanox connectx-4

Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

#100gbe #100gigethernet #arubanetworks
Comments

I think your bottleneck is your PCI Express bandwidth. I'm assuming you have X570 chipset motherboards in the PCs you are using. You have 24 PCIe 4.0 lanes in total with the 5950X and X570 chipset, generally broken down into 16 lanes for the top PCIe slot, 4 lanes for an NVMe drive, and 4 lanes to the X570 chipset. However, this all depends on your exact motherboard, so I'm just assuming for now. Your GPUs are in the top x16 slot, so your 100G NICs are in a secondary bottom x16 physical slot. That slot fits an x16 card, but it is most likely electrically only x4, running through the x4 link to the chipset. Looking at Mellanox's documentation, the NICs will auto-negotiate the link speed all the way down to x1 if needed, but at greatly reduced performance.

This PCIe 4.0 x4 link is capable of 7.877 GB/s at most, or 63.016 Gb/s. Since other I/O shares the chipset bandwidth, you will never see the maximum anyway. To hit over 100 Gb/s you would need to be connected to at least a PCIe 4.0 x8 link or a PCIe 3.0 x16 link. There are other factors that determine your actual PCIe bandwidth, such as whether your motherboard has a PCIe switch on some of its slots. You would want to check with the vendor or other users whether a block diagram exists; a block diagram will break down how everything is interconnected on the motherboard.

You could try moving the GPUs to the bottom x16 slot and the NICs to the top slot. You could also confirm in the BIOS of each PC that the PCIe slots are set to auto-negotiate, or set the link manually if required, assuming that's an option in your BIOS.

These NICs are designed more for servers than for a consumer CPU/chipset. The HEDT (High-End Desktop) and server CPUs from Intel and AMD have much more PCIe bandwidth, allowing multiple bandwidth-heavy expansion cards to be installed and fully utilized.

Since iPerf by default does memory-to-memory transfers, you should be able to see close to the maximum 100 Gb/s if you put the NICs in the top slot on both PCs. As for disk-to-disk transfers reaching that, you would need a more robust storage solution than would be practical, or even possible, in an X570 consumer system.

NeoQuixotic
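
A quick sanity check of the figures in the comment above, as a minimal Python sketch. It uses the published per-lane PCIe transfer rates and 128b/130b encoding and ignores protocol overhead, so treat the numbers as upper bounds; the names and the slot combinations chosen are illustrative.

```python
# Rough PCIe link bandwidth estimate.
# PCIe 3.0: 8 GT/s per lane, PCIe 4.0: 16 GT/s per lane, both with 128b/130b encoding.
GT_PER_LANE = {3: 8.0, 4: 16.0}   # gigatransfers per second per lane
ENCODING = 128 / 130              # 128b/130b line coding efficiency

def link_gbps(gen: int, lanes: int) -> float:
    """Raw usable link bandwidth in Gb/s (protocol overhead not included)."""
    return GT_PER_LANE[gen] * lanes * ENCODING

for gen, lanes in [(4, 4), (4, 8), (4, 16), (3, 8), (3, 16)]:
    gbps = link_gbps(gen, lanes)
    verdict = "can" if gbps >= 100 else "cannot"
    print(f"PCIe {gen}.0 x{lanes}: ~{gbps:.1f} Gb/s ({gbps/8:.2f} GB/s) -> {verdict} feed a 100 GbE NIC")
```

Running this reproduces the numbers quoted above: a PCIe 4.0 x4 chipset link tops out around 63 Gb/s (about 7.88 GB/s), while PCIe 4.0 x8 or PCIe 3.0 x16 clears 100 Gb/s.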

Absolutely the best way to predict the future is to create it.

simbahunter

100 Gbps is crazy when I'm still amazed by my 1 Gbps. Love the content, appreciate everything you do :)

Paavy

I would imagine the limitation also involves the HDD/SSD.

Please make sure to update us when you solve this as all of us would be interested in upgrading our home networks!

Thanks for the video, and all your encouraging messages.

James-vdxj

The DMI is the bottleneck. Move your NIC to pcie_0 (the slot linked directly to your CPU), and if you can, turn on RDMA.

daslolo

Very well explained and interesting video. Technology advances are coming thick and fast. Speeds we didn't even dream of getting 10 years ago are a reality. Keep up the good work David

michaelgkellygreen

Would love a video on how the network was set up physically, what cards, transceivers and cables you used.

QuantumBraced

You need PCIe Gen4 to keep up with 100 Gbps. The ConnectX-4 NIC is Gen3. The AMD X570 desktop chipset doesn't have enough PCIe lanes to run both your GPU and NIC at the full 16 lanes, so the NIC is likely running x8. You need a more modern NIC and a Threadripper or Epyc board that does PCIe Gen4. Maybe AMD can loan you a prerelease of their next-gen Threadrippers for testing? The current gen will work but...

Technojunkie

How ironically fitting is it that the most you can get at that 6:30 mark is 56 Gbits per second...

How far we have come from the simple, modest, and humble 56k modems... lol

sagegeas

In the file copy test the limitation comes from the storage speed; you can check that with local file copies.

MihataTV

This was the craziest setup for my Arch. Thank you, David

purp

This video is enriched with some solid information, thank you David 🙏
You are the only one I can comment to without any hesitation, because you always... I'm at a loss for words

MangolikRoy

The copying window @ 11:20 shows 2.23 GB/s. Doesn't that 2.23 GB/s represent 2.23 gigabytes/second, not 2.23 gigabits/second (uppercase B = bytes, lowercase b = bits)? That would mean your throughput for these files is actually 2.23 GB/s * 8 = 17.84 gigabits/second, or 17.84 Gb/s. Sadly, still nowhere near that 55 Gb/s from iperf though.

gregm.
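
The byte-to-bit conversion above as a tiny Python sketch; the 2.23 GB/s input is simply the value reported in the copy dialog at 11:20.

```python
# Windows Explorer reports copy speed in gigabytes per second (GB/s);
# network links are rated in gigabits per second (Gb/s). 1 byte = 8 bits.
copy_rate_gbytes = 2.23                  # GB/s shown in the copy window at 11:20
copy_rate_gbits = copy_rate_gbytes * 8   # equivalent rate on the wire in Gb/s
print(f"{copy_rate_gbytes} GB/s is about {copy_rate_gbits:.2f} Gb/s")  # 17.84 Gb/s
```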

Another interesting video. Thank you, David

georgisharkov

What drives are you using? Even PCIe 4.0 drives top out at about 7.5 GB/s of R/W speed. I doubt you'll be able to get faster by copying, but you should be able to stream (no clue how you could test at that speed) faster than you are copying.

russlandry

I haven't tried anything as crazy as this, so I might be suggesting something you've already tried, but have you tried RAID 0 across multiple PCIe 4.0 SSDs directly through the CPU lanes? Also try a Ryzen 5800X rather than a dual-CCD CPU. Maybe the way the I/O die works is limiting your case: it may be dedicating a single CCD to the task and leaving the other to everything else, while the I/O die distributes its resources evenly per die. You might want to try a RAID 0 config directly on the CPU lanes.

fy

Do these cards support Direct Memory Access (DMA)?
I've heard that DMA and RDMA are here to solve these sorts of problems. As I've never had such a sweet issue, I don't know exactly whether you should enable it or it's enabled by default, or how exactly you can take advantage of it on Windows machines...

bahmanhatami

Awesome to see you talk about high-performance computing networking topics! (We met at Cisco Live in San Diego a few years ago and talked about the Summit supercomputer and Cumulus Linux.) The first bottleneck was your disks.
The second bottleneck is probably your PCIe slot. Is your motherboard PCIe Gen 2, 3, or 4? How many lanes do you have on the slot and on the card?
What is your single-TCP-stream performance? If you remove the -P option, do a single stream, then 2, 4, 8, and so on.

danielpelfrey
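
A sketch of the stream sweep suggested above, in Python. It assumes iperf3 is installed on both machines and that the far end is already running iperf3 -s; the 10.0.0.2 address is a placeholder, not the setup used in the video.

```python
# Sweep 1, 2, 4, 8 and 16 parallel TCP streams against an iperf3 server
# to compare single-stream performance with multi-stream totals.
import subprocess

SERVER = "10.0.0.2"  # placeholder: address of the machine running "iperf3 -s"

for streams in (1, 2, 4, 8, 16):
    print(f"--- {streams} parallel TCP stream(s) ---")
    subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10"],
        check=False,  # keep sweeping even if one run fails
    )
```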

Maybe do a file transfer between two servers? I'm also interested to see how Intel desktops handle 100Gb network speeds. Cool vid!

MiekSr