My Key Takeaways From the CrowdStrike IT Meltdown

In this video, I share my thoughts on the recent CrowdStrike IT meltdown. I discuss the impact it had and what we can learn from it. #CrowdStrike #ITMeltdown #thoughts

🆓 FREE Facebook Group
From security to productivity apps to getting the best value from your Microsoft 365 investment, join our Microsoft 365 Mastery Group

🆓 FREE Microsoft 365 Guide
Our FREE Guide - Discover 5 things in Microsoft 365 that will save your business time and money… and one feature that increases your cyber security by 99.9%

💻 Want to Work Together?

😁 Follow on Socials
TikTok @bearded365guy
Instagram @bearded365guy

📽️ Video Chapters
00:00 Introduction
00:25 News Outlets
01:29 Misinformation
02:15 Lazy Marketing
02:43 My Advice
Comments

The biggest issue was lack of testing. Mistakes always happen, but software companies have to either test their software or give the customer control of the update. You can't brick the world. J.

jasonc

Nice hearing someone I can relate to. I love your channel.

freditoever

In Mexico we say "make firewood from a fallen tree" when you take advantage of someone else's misfortune.

garcialex

Amen. The only thing end-user orgs could have done better was to test the update first in a non-production environment.

PazGorbiz

John, Microsoft's reputation was saved by the Crowdstrike issue. You see, all of Microsoft's Central US region went down hours before the CS update was pushed. The MS outage was completely separate from the CS issue. MS directed all the traffic from its Central US datacenters to the US West and East Coast regions for a good 8-12 hours. This might be why people are starting to ditch M365 products. MS got things back to normal within 8-10 hours, but the Crowdstrike issue completely overshadowed the MS outage. MS dodged a PR nightmare, in my opinion.

_Ryan_

Unfortunately, right or wrong, this could have happened to any of the top security software vendors: Huntress, Threatlocker, Bitdefender, and the list goes on. It was something that could have been prevented with testing. My point is that this could have happened to any software vendor, and it just goes to show that no matter the vendor, you have to have a plan in place to respond when it does. This will certainly happen again, so it's best to be ready.

techgroupservices

I have always been a very keen believer in the N-1 updating process. Symantec/Norton, Trend, and Sophos got me there many years ago and I have never changed. I manage the release of all updates. If the product doesn't support it, then it is not in an SOE.

ianmcpherson
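
For anyone unfamiliar with the N-1 approach described in the comment above, the idea is simple: production machines stay one release behind, and only a small test ring runs the newest build. Below is a minimal sketch of that selection rule in C; the version numbers and function names are made up for illustration and are not any vendor's actual API.

/*
 * Minimal sketch of an N-1 update policy, assuming a hypothetical agent
 * whose published versions are known to the admin. Production stays one
 * release behind; only a small test ring gets the newest build.
 */
#include <stdio.h>

/* Hypothetical list of published agent versions, oldest to newest. */
static const char *published[] = { "7.13", "7.14", "7.15", "7.16" };
static const int published_count = sizeof published / sizeof published[0];

/* Return the newest version for the test ring, or NULL if none exist. */
static const char *test_ring_version(void)
{
    return published_count > 0 ? published[published_count - 1] : NULL;
}

/* Return the N-1 version for production, falling back to the only
 * release available when just one version has ever been published. */
static const char *production_ring_version(void)
{
    if (published_count == 0)
        return NULL;
    if (published_count == 1)
        return published[0];
    return published[published_count - 2];
}

int main(void)
{
    printf("test ring gets:       %s\n", test_ring_version());
    printf("production ring gets: %s\n", production_ring_version());
    return 0;
}

In practice this is usually a console or policy setting rather than code you write yourself; the sketch only shows the selection rule the commenter is relying on.
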

My son worked for days to get his company's computers up and running, and there were plenty of problems. Who in their right mind gave someone from India the ability to access the key from Microsoft to push a so-called update? Now scammers know how to infect these machines. Test, test, and test before you execute an update worldwide.

patburton

We were well prepared and back up within 12 hours with hundreds of machines affected, but what a mess this was. Honestly, I don't know if Crowdstrike will survive this.

PerroneFord

There needs to be an ISO standard for quality assurance, testing, and patch management.

DoughBoy

CS will likely survive. They have an incredible product; this was just a stupid mistake that turned out to be catastrophic. Luckily, we were able to come back from it relatively quickly.

My biggest surprise was Microsoft placing partial blame on the EU. I'm interested to see where that goes. Probably nowhere.

gtoramirez

Love your content, but you're dead wrong about Crowdstrike. A boot-start driver with kernel access that blindly accepted a zero-filled file (with no validation or error checking on the driver input) is the craziest thing I've ever seen. YOLO cowboys.

KeiferStreet
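
The failure mode described in the last comment (a privileged parser trusting whatever content file it is handed) is easy to illustrate. The sketch below is conceptual C, not CrowdStrike's actual driver code or channel-file format; the header layout, magic value, and limits are invented for the example. The point is only that a few cheap checks let a loader fail safely instead of parsing garbage.

/*
 * Conceptual sketch of defensive checks a driver-level parser can run
 * before trusting a content file: verify size, a magic header, and a
 * sane record count, and reject an all-zero buffer outright.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CONTENT_MAGIC   0x54435355u   /* arbitrary placeholder signature */
#define MAX_RECORDS     1024u         /* arbitrary upper bound           */

struct content_header {
    uint32_t magic;         /* fixed signature expected at offset 0     */
    uint32_t record_count;  /* number of records that follow the header */
};

/* Return 1 if the buffer passes basic sanity checks, 0 otherwise. */
static int content_is_valid(const uint8_t *buf, size_t len)
{
    struct content_header hdr;
    size_t i;

    if (buf == NULL || len < sizeof hdr)
        return 0;                      /* too small to hold a header */

    /* Reject an all-zero file (the failure mode described above). */
    for (i = 0; i < len; i++)
        if (buf[i] != 0)
            break;
    if (i == len)
        return 0;

    memcpy(&hdr, buf, sizeof hdr);     /* copy out to avoid unaligned access */

    if (hdr.magic != CONTENT_MAGIC)
        return 0;                      /* wrong or corrupted signature */
    if (hdr.record_count == 0 || hdr.record_count > MAX_RECORDS)
        return 0;                      /* implausible record count */

    return 1;
}

int main(void)
{
    uint8_t zeroed[4096] = { 0 };      /* simulate a zero-filled update file */

    printf("zero-filled file accepted? %s\n",
           content_is_valid(zeroed, sizeof zeroed) ? "yes" : "no");
    return 0;
}
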