Using a CacheFlow Enterprise Cache from 2001

Reviving a CacheFlow SA-725 from 2001. We'll diagnose some hardware failures, learn how to use it, and perform a load test. A Compaq DL380 running Red Hat Linux 6.2 and Apache 1.3 serves as our web server, and we'll try running the jolt2 denial-of-service attack against a few machines.

Music by Karl Casey @ White Bat Audio

#retro #retrotech #retrocomputer #computers #networking

Rack stuff

Video gear

Note: The above are Amazon affiliate links. It doesn't cost you extra, but I'll receive a commission which will help keep the content coming. I only link to things I've personally ordered.

00:38 Intro
00:43 CacheFlow SA-725 Overview
07:36 Starting up the CacheFlow
08:10 Diagnosing Hardware Issues
14:45 Load Test Plans
17:09 Setting up jolt2 on a Dell Laptop
21:19 Installing Red Hat Linux 6.2 on the Compaq
31:22 Figuring out how to use the CacheFlow
34:59 Load Testing the CacheFlow
40:45 Next Steps
Comments

The two hardest things in programming are, in fact, naming things, cache invalidation, and off-by-one errors.

ZdenalAsdf

Even if the cache is no faster than the server itself, it can still help if you're serving dynamic content with something like ASP or PHP: have the cache handle all the static content, and save your server's processing power for the content that has to be generated every time.
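That static/dynamic split can be signaled to a cache with headers. A hypothetical httpd.conf fragment for an Apache 1.3-era setup (the extensions and lifetime are assumptions; mod_expires and mod_headers must be compiled in):

```apacheconf
# Mark static assets cacheable so an upstream cache (like the CacheFlow)
# can serve them, while dynamic pages stay uncached.
ExpiresActive On

# Static content: let caches hold images and stylesheets for a day
<FilesMatch "\.(gif|jpg|png|css)$">
    ExpiresDefault "access plus 1 day"
</FilesMatch>

# Dynamic content: tell caches not to store script output
<FilesMatch "\.php$">
    Header set Cache-Control "no-cache"
</FilesMatch>
```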

KJBZC

"Should we attempt to compile and run the jolt2 DoS attack from the Hannah Montana Linux machine? Yes... the answer is obviously, yes."
Amazing. Love it.

votecarp
Автор

Really enjoy your content. You don't just sit and talk, talk, talk; you get into the nuts and bolts of retro enterprise computing. Great video. Well done!

JMassengill

Love it when you run period software on the servers. That ancient Apache is just *chef's kiss*

Aruneh

Retro networking and server hardware is so much more interesting than the retro gaming most others do. Loving this content.

uiopuiop

We had a thousand of these fuckers running as a cache for a "large Australian ISP". Every state had hundreds, serving up "them Internets" to consumers on dial-up and low-speed DSL... The Internet was a vastly different place back then. Our largest links to the POPs were 50-100Mb/sec, for the whole state.

SSL killed this stuff off. There was no way to cache site assets on disk for replay once SSL became the default. We scrapped all of it in 2004, just before Asia Online went broke, as we moved away from dial-up to DSL/Broadband.



Hateful things... :P

tcpnetworks

To paraphrase an old saying: "Some people, when faced with a networking problem, think 'I know, I'll use Policy-Based Routing!' They now have two problems."

anwrangr

I worked for CF back in 2001, and we used Foundry ServerIron switches and commodity desktop machines as the servers to run this exact test in the lab.

We had some customized SSL handshake cards in the chassis as well, because that was CPU intensive back then. So you could take that load off your web server and put it on the CF box.

Amazing to see you recreate a part of computing history.

JoshuaDellinger-jonx

I've managed Blue Coat (Symantec/Broadcom) web proxies for the past 15 years, and this is the first time I've seen its predecessor. The management CLI is still exactly the same ("enter" 3 times, install multiple systems to boot into to make upgrades and backouts easy, syntax similar to Cisco routers/firewalls). Even the web interface has a similar basic structure, and it took them way too long to move off of Java.

As others have mentioned, configuring WCCP on your Cisco router/firewall was a popular way to handle routing at the time. PBR can work, but it gets messy. An explicit proxy config or a PAC file would be the option for having the client make the routing decisions.
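A PAC file is just a JavaScript function the browser calls per request. A minimal sketch of the client-side approach described above (the proxy address is a made-up example):

```javascript
// Minimal proxy auto-config sketch: route web traffic through a
// hypothetical cache at 192.168.1.50:8080, everything else direct.
function FindProxyForURL(url, host) {
    // Bare hostnames (no dots) are local/intranet; skip the proxy
    if (host.indexOf(".") === -1) {
        return "DIRECT";
    }
    // Use the cache, falling back to a direct connection if it's down
    return "PROXY 192.168.1.50:8080; DIRECT";
}
```

The browser evaluates this for every URL, so the routing decision moves from the network (WCCP/PBR) to the client.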

Great video!

isharted

When I worked in IT at a big-box repair center, we built a Linux cache/download server. God, that changed our lives... back then, internet was maybe 20 meg at best, so being able to pull at 100 meg over the network was a game changer.

beardedgaming

Awesome cache work and great to see it working! Looking forward to seeing if the CacheFlow or another cache can be utilized in your network for client-side caching... since fractional T1 is kinda slow, ya know

theserialport

I worked for a web hosting company from 1999 to 2015 and saw a lot of tech come through the farms. It was a great time to be involved in the early internet and watch it evolve.

muchosa

The fact you shared my glee with the Compaq mouse is one of the reasons I watch.

_vilepenguin

That 5-dollar subscription to your Patreon is so worth it; content like this is so hard to find.

TrolleyMC

I wish to gently correct you. The quote is: "There are only 2 hard things in computer science: (0) Cache Invalidation. (1) Naming Things. (7) Asynchronous Callbacks (2) Off-by-one Errors."

VKFVAX

My fascination with obscure hardware like this is immeasurable. Thank you for all the effort you put into these! Also, a fellow Andrew Camarata and Diesel Creek fan?? Right on!

TylerStartz

3:46 ...is this bait for engagement? Well, you got me, because the two hardest things in software engineering are naming things, cache invalidation, and off-by-one errors.

questionablecommands

I'm an ancient Compaq/HP admin and remember these servers and even older ones: ProLiant 800, 1600, 1850R, just to name a few. The early Compaq machines used something called the EISA configuration utilities instead of what the clone systems called a BIOS setup, which is why you needed the SmartStart utilities. I believe EISA was used by IBM as well. EISA was shockingly similar to how UEFI works on modern systems, just without the encryption/security bits. It was more flexible than BIOS at the time, but less intuitive. Funny how things go in cycles, eh?

AlexKiddFun

Hannah Montana Linux, I was NOT prepared 😂😂😂
