MikroTik CCR2004 PCIe NIC in Proxmox

A smart PCIe network interface card that adds full-fledged router capabilities to your servers.
Proxmox.
Druvis.
Everything you need for unlimited knowledge in another episode of #MikroTips!
Comments

I think we need more videos about this card. I'd like to better understand use cases and how it works.

AI-xijk

Very, very good troubleshooting-style video! I'm not familiar with Proxmox, but it was still interesting for me. Thanks.


Another good thing to try if you want to maximize throughput to a single VM is to assign either individual interfaces or the whole PCIe card directly to that VM. This lets you skip the Linux kernel bridge as a possible bottleneck.

brwainer
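
As a rough sketch of the passthrough approach described in the comment above, a whole PCIe device can be handed to a guest on a Proxmox host with the qm CLI. The VM ID 100 and the PCI address 0000:01:00.0 below are placeholder values that have to match your own host, and pcie=1 assumes the guest uses the q35 machine type.

    # find the card's PCI address on the host
    lspci | grep -i ethernet

    # pass the whole device at 0000:01:00.0 to VM 100 (example values)
    qm set 100 -hostpci0 0000:01:00.0,pcie=1

If the card exposes its ports as separate PCI functions, the same command can be pointed at a single function's address instead of the whole device.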

Ideally you'd have SR-IOV support and pass that into the VM directly rather than using a virtual Ethernet card in KVM. I think otherwise you won't get the full capability of PCIe, because traffic to the KVM guest has to jump through the host kernel in both directions.

One thing I have been curious about with these cards: is it possible to make the card work basically separately, and then communicate back to the host system via one of the PCIe interfaces? Think of a security appliance where normal packets just come in one interface and go out the other without touching the host system's CPU, but packets you want to inspect make a trip through the host system.

ZiggyTheHamster
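
If the card and its driver did expose SR-IOV (the video does not confirm that they do), the generic Linux flow for creating virtual functions is roughly the following; enp1s0f0 is a placeholder interface name.

    # create 4 virtual functions on the physical function (requires an SR-IOV capable driver and firmware)
    echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

    # the resulting VFs show up as their own PCI devices and can be passed to guests
    lspci | grep -i 'virtual function'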

MT should fulfill its promises and support BSD for the CCR2004-PCIe. It would go great with a pfSense or OPNsense firewall!

woobm

Another great CLI monitoring app is glances; it also gives you a good overview.

Cossack
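
For anyone who wants to try the tool mentioned above, glances is packaged in Debian, which Proxmox is based on, so a quick test on the host looks like this:

    apt install glances   # install from the Debian repositories
    glances               # interactive CLI overview of CPU, RAM, network and disks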

An update video would be greatly appreciated; this is a good card for MikroTik as well as for other open-source router OSes.
For example, would something like this work with my Lenovo Tiny P330? Could I use this and a switch to have the ultimate router + Proxmox + whatever else?

examen

I would love to try this in one of my Lenovo / IBM servers.

MySmartHomeDomain

Through many different evolutions of traffic generators, we finally found that TRex was the most cost-effective way to test devices at our ISP. A Dell R610 can generate about 10 Gbps in ASTF mode using Intel optical cards. TRex has been tested up into multiple tens of gigabits, and there are even anecdotes of it being used at 100 Gbps, but I cannot verify this.

mkpickle
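
For context on the comment above, a TRex advanced-stateful (ASTF) run is normally started along these lines. The install path, the bundled http_simple.py profile, and the rate multiplier are generic example values from the TRex distribution, not figures from the commenter's tests.

    cd /opt/trex/latest   # wherever the TRex tarball was unpacked (placeholder path)
    sudo ./t-rex-64 --astf -f astf/http_simple.py -m 100 -d 60   # stateful HTTP profile, 60-second run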

Can the switching acceleration hardware on the board be used to make a high-performance firewall? Can multiple boards be used in a single system, with the acceleration hardware on the boards forming a larger fabric across them?

pleappleappleap

Thanks for the video. Since I saw this NIC announced, I thought the idea was to run CHR directly on the NIC, and not so much to use it as a passthrough to other VMs. Is that possible?

AaronPace

So I was just wondering if you have tried tweaking the MTU size to suit the 25 Gb speed?
I know that for 10 Gb the MTU can be raised to 9000, but in my experience leaving it at the default in a production environment makes troubleshooting easier.

SimonLally
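
On a Proxmox host, the jumbo-frame tweak asked about above usually means raising the MTU on both the physical port and the bridge in /etc/network/interfaces. The interface names, the address, and the 9000-byte value below are examples, and every device in the path has to agree on the MTU.

    auto enp1s0f0
    iface enp1s0f0 inet manual
        mtu 9000

    auto vmbr1
    iface vmbr1 inet static
        address 10.0.0.2/24
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
        mtu 9000

The guest NICs need the same MTU set inside the VMs for jumbo frames to actually be used end to end.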

I ordered 3 of these 7 months ago to do exactly that. Haven't received a single card 😢

MdMke

I just need to be able to get hold of the damn thing... been on pre-order for nearly a year :(

BusbyBiscuits

It would be great to have a similar video with VMware ESXi and Hyper-V :)

galvesribeiro

Ordered it 06/2022 and still waiting. Not available, like many other products. I am certified for your stuff and need these for customer projects, but cannot buy them anywhere. I am really pissed.

richik

Give us EVPN in these cards and you'll see stock go out the next day. What an easy enabler of a full L3 underlay, especially considering the price.

dimplick

Isn't virtio limited to 10 Gbps in the driver? The only solution is to pass through the hardware using the IOMMU and make a dedicated VM drive the NIC.

idw_audio_it
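
For anyone who wants to try the IOMMU route mentioned above, the usual Proxmox prerequisites on a GRUB-booted Intel host look roughly like this (AMD hosts enable the AMD IOMMU by default); this is a sketch of the generic steps, not advice specific to this card.

    # /etc/default/grub - turn the IOMMU on
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules - load the vfio modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # apply and reboot, then pass the device through with qm set ... -hostpci0
    update-grub
    reboot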

2:00 Arriving a little bit late to this video, but you can fix the NIC naming by forcing it with udev rules, which can pin the device names using properties like the PCI address or the MAC address of the NIC.

the.elven.archer
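
A minimal example of the udev approach described above: a rule that pins a fixed name to one port by its MAC address. The MAC and the chosen name are placeholders; a file like this in /etc/udev/rules.d/ is applied at the next boot (or after reloading the udev rules).

    # /etc/udev/rules.d/70-persistent-net.rules (example values)
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="ccr0"

Matching on KERNELS=="0000:01:00.0" instead of the MAC ties the name to the PCI slot, which is the other property mentioned in the comment.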

The product definitely looked interesting. However, the fact that it simply stops working whenever it's rebooted kind of kills all the use cases. Also, I experienced some kernel panics while running it. I suppose if they can fix the PCIe initialisation issues (e.g. allow it to re-initialise after the host system has booted), it becomes a much more interesting product. I currently have two of these cards but am not deploying them, as it simply wasn't stable.

JorritPouw