PirateSoftware Breaks Down CrowdStrike Computer Issue

Comments

5:42 - Yes! It took me a while in the IT space to find the confidence to look my boss straight in the face and say, "If you see me working like crazy, or in a panic... something is very wrong. It's being handled; don't bog me down with meetings and superfluous communication. If you want to help, I'll show you what you need to do. Otherwise, leave me alone and let me work."
Now, as a lead, I am the wall. If you see my guys working hard or in a panic, you don't bother them. You talk to me.

matthewbohne

"This is the worst outage we've ever seen in our lifetimes".

This is the worst outage we've seen in our lifetimes, *so farrrr* .

randommixedletters

The thought that a fossilized judge who can't use Excel will be presiding over a massive case like this, with the lawyers needing to bring out the crayons to explain even the basics of network design, makes me feel like it will be a complete coin flip on how it goes.

chrishendry

Thor hit the nail on the head with the IT industry. If everything works, some exec is saying "What are we paying those guys for?" and if anything goes wrong, there's more than one exec saying "What are we paying those guys for?!?!"

kleedrac

This is why we don't allow day-zero updates from external sources. Also, medical devices are isolated and do not get updates; we can't risk an update breaking a critical medical system.

ehntals

For the BitLocker issue: some people figured out how to manipulate the BCD (the Windows bootloader's configuration data) to put the system into something approximating Safe Mode - safe enough that CrowdStrike doesn't load, but not so safe that Windows doesn't ask the TPM for the key. Probably 95% of the BitLockered machines can be recovered this way (my estimate).

jceggbert
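
A minimal sketch of the safeboot toggle described above, assuming an elevated prompt on the affected Windows machine and assuming its BitLocker policy doesn't treat the BCD edit itself as a tamper event (on some configurations it will, and you'll be asked for the recovery key anyway):

```python
# Sketch only: toggle minimal Safe Mode via bcdedit. Requires an
# elevated (admin) prompt on Windows; bcdedit fails without it.
import subprocess

def set_safeboot(enable: bool) -> None:
    """Turn minimal Safe Mode on or off for the default boot entry."""
    if enable:
        # Minimal Safe Mode: most third-party drivers don't load.
        cmd = ["bcdedit", "/set", "{default}", "safeboot", "minimal"]
    else:
        # Remove the flag so the next boot is a normal one.
        cmd = ["bcdedit", "/deletevalue", "{default}", "safeboot"]
    subprocess.run(cmd, check=True)

set_safeboot(True)    # reboot, remove the bad content file, then...
# set_safeboot(False) # ...clear the flag and reboot normally.
```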

What this issue showed us at my emergency service center is that we didn't have robust enough plans for operating without computers.

It's helped us improve our systems, and we're now at a point where we can operate entirely without any computer or internet systems. We're more prepared than ever.

KilothATEOTT

Having worked in IT, I always had the MBAs find out how much their departments cost to run per minute, and then account for how long a 3rd-party IT support company would take to respond. Now we can just point to this...

AngryMan
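
To put rough numbers on that exercise, here's a back-of-the-envelope sketch; every figure in it is made up for illustration, but it shows why response time is the variable that matters:

```python
# Back-of-the-envelope downtime cost, with made-up numbers.
def outage_cost(cost_per_minute: float, response_minutes: float,
                fix_minutes: float) -> float:
    """Total cost of an outage: the clock runs while you wait AND while you fix."""
    return cost_per_minute * (response_minutes + fix_minutes)

# Hypothetical department: $30k/hour of lost productivity and revenue.
per_minute = 30_000 / 60
in_house = outage_cost(per_minute, response_minutes=5, fix_minutes=60)
outsourced = outage_cost(per_minute, response_minutes=240, fix_minutes=60)
print(f"in-house:   ${in_house:,.0f}")    # in-house:   $32,500
print(f"outsourced: ${outsourced:,.0f}")  # outsourced: $150,000
```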

As someone who has QE'd... there is also "we told you, but the business never thought this edge case was important".

This appears to be the everyday, common "that isn't our failure" type of design failure that no one solves, because there's no ROI in pre-solving these cases.

dowroa

The channel Dave's Garage (Dave is a retired Microsoft developer from the MS-DOS and Windows 95 days) did a good breakdown of one of the last questions Thor asked: why there was no check before the systems blue screened on a driver issue.
CrowdStrike apparently had elevated permission to run at the kernel level, and if there is an unhandled problem in the kernel, Windows must blue screen to protect itself and its files from corruption.

Wiki
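
A loose user-mode analogy for that last point (this is not kernel code, just an illustration of containment): a host process can catch a crashing plugin, but nothing sits above the kernel to catch a driver's fault, so Windows bug-checks instead.

```python
# User-mode analogy: the host contains the plugin's crash, the way
# the OS contains a crashing user process.
def buggy_plugin(config: bytes) -> int:
    return config[0]  # blows up on empty input, like a missing check

for cfg in (b"\x2a", b""):
    try:
        print("plugin returned", buggy_plugin(cfg))
    except IndexError as e:
        # A kernel driver has no supervisor to catch this; Windows
        # must blue screen to avoid corrupting the system.
        print("contained plugin crash:", e)
```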

10:00 - "It was a blind spot" makes sense from a QA perspective, but... clearly we, as a society, can't be having software systems with billion-dollar costs attached to them.

michaels.

This is much like what the NTSB goes through when an airline or train disaster happens. You won't know a fault or failure point - whether it's human, digital, or mechanical - until an event happens. That's why it takes them months to years to solve: there are so many factors to look at and potentially blame.

rangerreview

As a CrowdStrike employee, I think this was a very reasonable breakdown by PirateSoftware.

dip

For those who want to blame Windows: the same thing happened with CrowdStrike on Linux a few weeks earlier; there just weren't as many devices damaged, so no one cared.

jankrnac

This is why monopolies are bad. There's a VERY good reason the old adage of "Don't Put All Your Eggs In One Basket" has been around for as long as we've had chickens and baskets... it only takes ONE PROBLEM OR ACCIDENT to ruin everything.

listofromantics

Oh wait, what? This is the first time I've not seen a comment on a video like this, lol. This really puts into perspective how bad this was for many people. I, fortunately, was not affected as much, but many people were. I'll be praying that the IT people can get a break / are appreciated more as a result of this.

jawredstoneguy

Why isn't there any blame on the companies themselves? I work in IT, and my previous company used to put all Windows updates and software updates through a 48-hour test before allowing them to push out to the rest of the systems. My current company does not do this and was hit with the CrowdStrike issue. My PC was not affected because I disable update pushes on my system and do them manually. I was advocating for smoke tests before allowing update pushes well before this happened... NOW, after half the systems went down, they've decided to add it to the process.

vipast
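
That 48-hour soak window is straightforward to encode as a promotion gate. A minimal sketch; the class, field names, and thresholds below are all hypothetical:

```python
# Minimal sketch of a staged-rollout gate; names/thresholds are made up.
import time
from dataclasses import dataclass, field

SOAK_SECONDS = 48 * 3600  # the 48-hour window described above

@dataclass
class Update:
    name: str
    received_at: float = field(default_factory=time.time)
    canary_failures: int = 0  # crashes reported by the test ring

def ready_for_fleet(update: Update, now: float | None = None) -> bool:
    """Promote only after a clean 48h soak on the canary/test ring."""
    now = time.time() if now is None else now
    soaked = (now - update.received_at) >= SOAK_SECONDS
    return soaked and update.canary_failures == 0

update = Update("channel-file-291")
print(ready_for_fleet(update))  # False: the soak clock just started
```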

9:15 - Safety regulations are written in blood. Oftentimes you don't know what you need to prepare for until after it happens.

seasnaill

According to Dave's Garage, the issue was 100% CrowdStrike: they sent an empty package and the driver couldn't handle the problem.

kingofl
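
Whatever the real driver does internally, the failure mode described - content arriving empty and being trusted anyway - comes down to missing input validation. A sketch with an entirely made-up file format:

```python
# Hypothetical parser for an update/content file. The format is made up;
# the point is validating input before trusting it, which (per the
# analysis above) is the class of check the driver was missing.
import struct

MAGIC = b"CHNL"

def load_channel_file(data: bytes) -> int:
    # Defensive version: reject the file instead of crashing on it.
    if len(data) < 8 or data[:4] != MAGIC:
        raise ValueError("malformed or empty channel file; refusing to load")
    (entry_count,) = struct.unpack_from("<I", data, 4)
    return entry_count

for blob in (b"", MAGIC + struct.pack("<I", 3)):
    try:
        print("entries:", load_channel_file(blob))
    except ValueError as e:
        print("rejected:", e)
```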

CrowdStrike is a sample of what we thought Y2K was going to be.

InvictusRed