Span vs TAP Testing

The goals of this lab were to investigate and document added latency or packet loss when using a span port compared to a TAP.

The methodology for the various tests was fairly straightforward:
100,000 frames were generated by one Optiview, sent to a second Optiview, and captured by a third Optiview.
We chose to generate only a 9% load, to best resemble an averagely loaded gigabit port; ports under greater load would show correspondingly more latency.
The Optiview was chosen because it can capture with 10 nanosecond resolution, and packet slicing was used to reduce the trace file size.
The trace file was then filtered on the IP identification field, since the Optiview keeps this value constant for all generated packets.
The filtered trace was converted to a CSV file using Wireshark, and the inter-packet delta times were graphed in Excel.
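The CSV-to-graph step above can be sketched in a few lines of Python. This is a minimal illustration, not the lab's actual workflow: the column name "Time" is an assumption based on Wireshark's default CSV export, and the sample data is invented for demonstration.

```python
# Sketch: compute inter-packet delta times (microseconds) from a CSV
# exported by Wireshark. The "Time" column name is an assumption based
# on Wireshark's default export; the sample rows below are illustrative.
import csv
import io

# Example CSV in the shape Wireshark typically exports (time in seconds).
sample = """No.,Time
1,0.000000
2,0.000068
3,0.000137
"""

def delta_times_us(csv_text):
    """Return the delta time between consecutive packets, in microseconds."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    times = [float(r["Time"]) for r in rows]
    return [(b - a) * 1e6 for a, b in zip(times, times[1:])]

print(delta_times_us(sample))
```

These per-packet deltas are what was graphed in Excel; plotting them makes the jitter introduced by each device immediately visible.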
The order of the tests is important: the first test was a baseline of two Optiviews back to back, the second introduced a switch, the third used a TAP, and the last used a span port.
Here is a summary of the packet latency results:
Back to Back = 68 – 69 microseconds
Switch = 56 – 80 microseconds
TAP = 55 – 80 microseconds
Span Port = 50 – 88 microseconds
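The min-max figures above can be reduced to a single jitter window per scenario, which makes the comparison explicit. This short sketch just does that arithmetic on the results listed; the dictionary layout is our own framing, not part of the lab setup.

```python
# Compute the latency spread (max - min, in microseconds) for each test
# scenario, using the figures reported above.
results_us = {
    "Back to Back": (68, 69),
    "Switch": (56, 80),
    "TAP": (55, 80),
    "Span Port": (50, 88),
}

spreads = {name: hi - lo for name, (lo, hi) in results_us.items()}
for name, spread in spreads.items():
    print(f"{name}: spread = {spread} microseconds")
```

The span port shows the widest window (38 microseconds versus 25 for the TAP), which is the variation the conclusion below refers to.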

Our tests show that the span port introduced more latency variation between packets, as well as higher per-packet latency, whereas the TAP added very little latency.

The slides below document the products used and any settings or configuration notes.
