Critical CrowdStrike Calamity Causing Chaos & Crashes



The Verge

Connecting With Us
---------------------------------------------------

Lawrence Systems Shirts and Swag
---------------------------------------------------

AFFILIATES & REFERRAL LINKS
---------------------------------------------------
Amazon Affiliate Store

UniFi Affiliate Link

All Of Our Affiliates that help us out and can get you discounts!

Gear we use on Kit

Use OfferCode LTSERVICES to get 10% off your order at

Digital Ocean Offer Code

HostiFi UniFi Cloud Hosting Service

Protect your privacy with a VPN from Private Internet Access

Patreon

Chapters
00:00 CrowdStrike Outage
00:50 How to Fix CrowdStrike
01:40 McAfee Incident
02:13 Will The Problem Fix Itself?
Comments

George Kurtz was the CTO of McAfee during their outage and is now the CEO of CrowdStrike. I am seeing a pattern...

taxplum

There's such delicious irony in spending thousands on a cybersecurity company to protect your systems from going down, when it's the reason your systems went down.

TomNook.

"First day on the job at CrowdStrike!! Pushed out a little update! Going to grab lunch and head home for the rest of the day." Had me fucking dying on Twitter lmfao

vinnys

As a SysAdmin / Network Gingerbeer, working for a company that exclusively uses Linux servers and 99% Linux desktops makes me so happy on a day like today.

joshharding

My sister is a court administrator... the entire court system is down today.
IT is working overtime.

Abaddon

Fun times at the datacenters today. As someone in IT, I FEEL your pain.

DoughBoy

Unreachable systems are the most secure.

DMStern

Funny, someone asked me about the "windows update" that went wrong this morning. After I told them it was CrowdStrike, I also mentioned it's not the first time and that McAfee is famous for their A/V update that caused outages for so many. Great minds think alike!

tornadotj

The outage also reveals which companies use CrowdStrike and which don't, which just helps future malicious actors confirm the breadth of a potential attack vector. CrowdStrike just painted a target on their back.

TechySpeaking

No mention of those of us working in hospital emergency rooms around the country and not being able to access systems that provide electronic medical record documentation, transmission of radiology images for interpretation, transmission of lab results to the electronic medical record... just those little items.

texasermd

The network stack race condition fix is legit, but it's a dice roll.

KaldekBoch
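For context on the fix that comment refers to: the race-condition route was rebooting repeatedly and hoping the host pulled the reverted channel file over the network before the faulty sensor crashed it again, which is why it was a dice roll. The deterministic workaround was booting into Safe Mode and deleting the corrupted channel file. Below is a minimal Python sketch of that cleanup step, assuming the standard install path and the published C-00000291*.sys filename pattern; the helper name is made up for illustration.

from pathlib import Path

# Illustrative cleanup based on the widely published workaround: boot into
# Safe Mode, then delete channel files matching C-00000291*.sys from the
# CrowdStrike driver directory. Path and pattern assume a standard install.
CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_bad_channel_files(directory: Path = CROWDSTRIKE_DIR) -> list[Path]:
    """Delete channel files matching C-00000291*.sys and return what was removed."""
    removed: list[Path] = []
    for channel_file in directory.glob("C-00000291*.sys"):
        channel_file.unlink()          # remove the corrupted channel file
        removed.append(channel_file)
    return removed

if __name__ == "__main__":
    deleted = remove_bad_channel_files()
    print(f"Removed {len(deleted)} channel file(s): {[p.name for p in deleted]}")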

Fortunately we run our ESRI Enterprise solutions on-prem. Unfortunately no one can log into their computers to access it. LOL

nathanddrews

It was around 4pm Friday arvo here in 'Straya. Great timing. The biggest issue, in my view, is that to recover you really need to be physically at the machine unless you have integrated lights-out, which hampers recovery somewhat.
I know some POS machines that got caught up in it, and I also know they get reimaged every 3 months or so but are otherwise static because stability _really_ matters. So that means CS auto-updated on what is otherwise a 'static' image. Bad news.
Hopefully auto-update will be easy to turn off in future, and _ideally_ a customer could subscribe to different channels to separate servers into different pools and get updates at different times. You know, like how OS updates are rolled out to Dev, then UAT, then Prod over several days or a couple of weeks to make sure this kind of crap doesn't happen. I think we're all gonna be looking closely at all our software agents once the dust settles.

davelloyd-
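The staged rollout the comment above describes (Dev, then UAT, then Prod, with a soak period before each promotion) is easy to sketch. Here is a minimal, hypothetical illustration in Python; the ring names, soak times, and the current_ring helper are invented for the example and are not CrowdStrike's actual update mechanism.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ring:
    name: str
    soak: timedelta  # how long an update bakes in this ring before promotion

# Hypothetical rings and soak periods, mirroring the Dev -> UAT -> Prod idea.
RINGS = [
    Ring("dev", timedelta(hours=24)),
    Ring("uat", timedelta(hours=48)),
    Ring("prod", timedelta(0)),      # final ring, nothing to promote to
]

def current_ring(released_at: datetime, now: datetime) -> Ring:
    """Return the ring an update released at `released_at` has reached by `now`."""
    elapsed = now - released_at
    for ring in RINGS:
        if elapsed < ring.soak:
            return ring              # still soaking here; not yet promoted
        elapsed -= ring.soak
    return RINGS[-1]

if __name__ == "__main__":
    released = datetime(2024, 7, 19, 4, 0)
    for hours in (1, 30, 80):
        now = released + timedelta(hours=hours)
        print(f"{hours:>3}h after release -> ring: {current_ring(released, now).name}")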

What a disaster that was (we first heard about it because our dispatch center was completely down). I got the call at ~11PM our time (Arizona), got essential services working by 2:30AM, got some sleep, and woke up at 7 to hit it again. We initially expected the worst, so when it was just a BSOD, we were slightly relieved. This was truly a dry run of our disaster response and recovery abilities. BitLocker was for sure the biggest PITA, and with the recent Windows RE issue, we had some devices that had no recovery partition lol.

gorfmaster

I heard a previous place I worked outsourced their IT department to India. They didn't have anyone onsite, so if a computer was unbootable, it had to be shipped off to India. It was a huge place with over 100,000 employees and lots of cost-plus government contracts.

ChaJ

This happened just as we were finishing our working day (Australia). We were exhausted!

VW_Fan

Even if CrowdStrike had tested that update code, wouldn't it be common sense to push the update region by region, and not globally all at the same time?

makedredd

Me to a friend: “Why don’t they use phased updates and all the other QC?”
… “while we were sleeping” … okay, now I see: APAC and EMEA are the phased update.

LiveWireBT

I work in a hospital in the Netherlands. This outage had a big impact on us: we had to close the emergency department and were not able to do planned surgeries.
Crazy to see how dependent we are on technology.

LukeBares

This is the problem with big-bang rollouts that hit tens of thousands of installations at the same time.
Personally, I always wait a few days before implementing any system updates and check for feedback from users who have already installed them first.

just-a-yt-guy