The Update That CRASHED Society - CrowdStrike

CrowdStrike (one of the world's largest cybersecurity firms)... released an unvetted update that shut down large parts of the financial, healthcare, logistics, and tech world, in what is (by far) the largest IT blunder in human history.

It is a chilling reminder of just how fragile the systems we rely on actually are, and how close we are at any given moment to a few lines of code destabilizing large parts of the modern world. CrowdStrike exists as a company to protect operating systems from threats, but this time it was the source of the problem.

#Crowdstrike #tech #technology
Comments

Ways to support the channel/special deals.

UpperEchelon

These companies wonder why we don't want them to have kernel level access to our stuff.

John-dumq

The first problem: Forced Updates.
If there were a way to delay an update even by a few hours, this wouldn't have been as widespread.

Mac_Omegaly

The CEO was the CTO of McAfee in 2010, when they pushed an update that flagged svchost.exe as malicious. It bricked tens of thousands of computers.

JamesJansson

I work in I.T. I just got off work after going through an office of 300+ computers to manually remove the one file that crippled our entire enterprise. What a joke.

Davidg
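
For context, the widely published workaround for this incident was to boot each affected machine into Safe Mode or the Windows Recovery Environment and delete the faulty channel files matching C-00000291*.sys from the CrowdStrike drivers folder. Below is a minimal sketch of that cleanup step in Python, assuming the default install path and an environment where the files are not locked; it is an illustration, not CrowdStrike's official tooling.

```python
# Sketch of the circulated manual workaround: remove the faulty channel
# files (C-00000291*.sys) from the CrowdStrike sensor's drivers folder.
# Assumes the default install path and that Windows is booted into Safe
# Mode / WinRE so the running sensor does not hold the files open.
from pathlib import Path

CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")


def remove_faulty_channel_files(directory: Path = CROWDSTRIKE_DIR) -> int:
    """Delete channel files matching the faulty C-00000291*.sys pattern."""
    removed = 0
    for channel_file in directory.glob("C-00000291*.sys"):
        print(f"Removing {channel_file}")
        channel_file.unlink()
        removed += 1
    return removed


if __name__ == "__main__":
    count = remove_faulty_channel_files()
    print(f"Removed {count} file(s); reboot normally afterwards.")
```

Because most affected machines were stuck in a boot loop, a cleanup like this usually could not be pushed remotely and had to be done by hand or from recovery media, which is why IT staff describe walking from machine to machine.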

It's stupid that we put so much responsibility on a few key mega corporations to run our modern systems.

Crambull

Now the younger generations understand the 'blue screen of death'
Know our pain.

themalcontent

Surgical rooms got shut down, which caused my aunt's life-saving surgery to be canceled. This has definitely caused deaths in the medical field. She's fighting to survive now.

fadedmass

Ahh good, I just spent a 12-hour shift fixing this at my company and now I get to watch a video on it :)

lawrencekuhlman

My brain wants to rename the company Cloudstrike because their software gave the internet a falcon punch.

vgernyc

I'm an IT director for a network of clinics and we were down for about 3 hours. We had a dozen stragglers towards the end of the day, but overall my team did an awesome job mitigating the issue. My CrowdStrike rep called around noon offering help, and I called him "the bravest" man I know and promptly cut the call short. The letter offered by the CS CEO was of little comfort.

DarkClosetOfTheMind

The company laid off a large portion of their QA team recently. Just FYI

lgtokyo

As someone who works in software QA: this only proves how important that profession is. Devs can say "oh, it worked on my machine" all they want, but every machine is different no matter what you test for. This level of incompetence seems malicious to me. No QA employee on the planet would let that slide. From my brief look this morning, it does not look like they have a QA team at all, or if they do, an incompetent one. If your software can impact the entire world this way, you NEED a robust QA department.

One should also NEVER have auto-updates on, because of things like this. You never know.

SorcererDragon

In the company where my brother works, the boss is paranoid and keeps a couple dozen laptops with all internet functionality disabled. Once a month they unplug their server from the internet, connect the laptops to it by cable, copy all the data over, and then plug the internet cable back into the server. They thought he was crazy, but when this fiasco happened, they could pull out the laptops and, with minimal adjustments, continue their work. Not at full capacity, of course, but at least they didn't crash completely.

jpteknoman

The update broke Windows systems because the faulty component was a kernel driver, and when kernel drivers crash, they usually take the whole OS down with them.

The Internet itself was largely unaffected, because thankfully most of the Internet doesn't run on Windows.

the_beefy
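
The containment difference the comment describes can be illustrated from the user-mode side: an invalid memory access in an ordinary process only kills that process, because the OS isolates it, while the same class of fault inside a kernel-mode driver has nothing above it to catch it, so Windows halts the whole machine with a bugcheck (the blue screen). A small, illustrative Python sketch of the user-mode case, where a child process writes through a null pointer and the parent carries on:

```python
# Illustrates user-mode fault isolation: a child process performs an invalid
# memory write and only that process dies; the parent keeps running.
# A comparable fault inside a kernel-mode driver cannot be contained this
# way, which is why Windows responds with a system-wide bugcheck instead.
import subprocess
import sys

# Child program: write through a null pointer. On POSIX this is a segfault
# (SIGSEGV); on Windows, ctypes traps the access violation and the child
# exits with an error. Either way, the damage stays inside the child.
CHILD_CODE = "import ctypes; ctypes.memset(0, 0, 1)"


def main() -> None:
    result = subprocess.run([sys.executable, "-c", CHILD_CODE])
    print(f"Child exited with code {result.returncode}; parent is still running.")


if __name__ == "__main__":
    main()
```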

As someone who works in tech, it's crazy to me that this Falcon patch was pushed out to everything and not staged from non-critical (kiosks, etc.) to critical (911).

phxsisko
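
What the commenter is describing is a staged (ring-based) rollout: push a change to a small, low-risk slice of machines first, let it bake while crash and boot telemetry comes in, and only widen the blast radius once each ring looks healthy. A rough sketch of the idea is below; the ring names, bake times, and health check are illustrative placeholders, not CrowdStrike's actual pipeline.

```python
# Sketch of a ring-based (staged) rollout: deploy to low-risk hosts first,
# pause to observe health signals, and halt before reaching critical
# systems if anything regresses. All names and thresholds are illustrative.
import time

ROLLOUT_RINGS = [
    {"name": "internal test fleet", "bake_minutes": 60},
    {"name": "non-critical endpoints (kiosks, signage)", "bake_minutes": 240},
    {"name": "general business endpoints", "bake_minutes": 720},
    {"name": "critical infrastructure (911, hospitals, airlines)", "bake_minutes": 1440},
]


def deploy_to(ring_name: str, update_id: str) -> None:
    """Placeholder for pushing the update to every host in a ring."""
    print(f"Deploying {update_id} to {ring_name}")


def ring_is_healthy(ring_name: str) -> bool:
    """Placeholder for real telemetry: crash rates, boot failures, check-ins."""
    return True  # a real gate would compare post-deploy metrics to a baseline


def staged_rollout(update_id: str, dry_run: bool = True) -> None:
    for ring in ROLLOUT_RINGS:
        deploy_to(ring["name"], update_id)
        if not dry_run:
            time.sleep(ring["bake_minutes"] * 60)  # let the update bake
        if not ring_is_healthy(ring["name"]):
            print(f"Health regression in {ring['name']}; halting rollout.")
            return
    print("Rollout completed across all rings.")


if __name__ == "__main__":
    staged_rollout("content-update-291", dry_run=True)
```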

This is why market competition is important.

chikarati

I've been in IT for 30 years. This was not an accident. Industry standard is test, QA, and then production. Zero chance this patch made it past test, much less QA. Also, it is an unspoken rule that patching never takes place on a Friday. There is something else going on. Watch and see.

SyntaxNexus

The old adage still stands: "Don't put all your eggs in one basket."

markjames

This took down the entire Windows side of the data center where I work. Not one server was up. When our Windows guys logged into the virtual host to investigate, they said they had never seen anything like it. Every single Windows server was stuck on a blue screen. Here's the kicker: our laptops and desktops were boot looping. Crash and loop.

From around 11:30 PM Central time till 10 AM Friday morning I was in a meeting with the entire leadership and upper management, who weren't in a good mood, having been woken up.

This effectively halted our entire company. All I know is that by the end they were grilling CrowdStrike hard and asking for refunds and compensation for the damages.

Because this happened at night, it gave us enough time to bring up, at minimum, the laptops, desktops, and critical servers, so that when people started signing in the impact was reduced.

Leadership was estimating damages in the range of many millions of dollars due to this outage.

I'm sure whoever at CrowdStrike made the executive decision to roll out this update no longer has a job, as that one action crippled companies. It didn't take them out, but it made it hell for IT departments all over the world.

leosthrivwithautism