- cross-posted to:
- linux@lemmy.ml
- de_edv@feddit.org
- dach@feddit.org
cross-posted from: https://sh.itjust.works/post/22460079
Today I’m grateful I’m using Linux - Global IT issues caused by CrowdStrike update cause BSODs on Windows
This isn’t a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn’t log in.
Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled as some airlines’ systems have also gone down. Gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services are affected. (At least according to one of my local mainstream media outlets.)
Seems insane to me that one company’s botched update could cause so much global disruption and take so many systems down :/ This is exactly why centralisation of services, and large corporations gobbling up smaller companies to become behemoth service providers, is so dangerous.
Tech Alert | Windows crashes related to Falcon Sensor | 2024-07-19
Published Date: Jul 19, 2024
Summary
- CrowdStrike is aware of reports of crashes on Windows hosts related to the Falcon Sensor.
Details
- Symptoms include hosts experiencing a bugcheck / blue screen error related to the Falcon Sensor.
- Windows hosts which have not been impacted do not require any action, as the problematic channel file has been reverted.
- Windows hosts which are brought online after 0527 UTC will also not be impacted.
- Hosts running Windows 7 / Windows Server 2008 R2 are not impacted.
- This issue is not impacting Mac- or Linux-based hosts.
- Channel file “C-00000291*.sys” with a timestamp of 0527 UTC or later is the reverted (good) version.
- Channel file “C-00000291*.sys” with a timestamp of 0409 UTC is the problematic version (see the sketch below for one way to check which version a host has).
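Not part of the alert itself, but since the good and bad channel files are distinguished only by timestamp, a quick check can be scripted. The following is a minimal Python sketch that assumes the file’s modification time is the timestamp the alert refers to; the directory, filename pattern, and 0527 UTC cutoff come from the alert, everything else is illustrative.

```python
# Hedged sketch: report whether the C-00000291*.sys channel file on this host
# looks like the reverted (>= 2024-07-19 05:27 UTC) or problematic (04:09 UTC) version.
# Assumes the file's modification time reflects the timestamp the alert refers to.
import glob
import os
from datetime import datetime, timezone

CUTOFF = datetime(2024, 7, 19, 5, 27, tzinfo=timezone.utc)  # reverted-file timestamp per the alert
cs_dir = os.path.expandvars(r"%WINDIR%\System32\drivers\CrowdStrike")

for path in glob.glob(os.path.join(cs_dir, "C-00000291*.sys")):
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    verdict = "reverted (good)" if mtime >= CUTOFF else "problematic"
    print(f"{path}: {mtime:%Y-%m-%d %H:%M} UTC -> {verdict}")
```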
Current Action
- CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes.
- If hosts are still crashing and unable to stay online to receive the channel file changes, the following steps can be used to work around this issue:
Workaround Steps for individual hosts:
- Reboot the host to give it an opportunity to download the reverted channel file. If the host crashes again, then:
- Boot Windows into Safe Mode or the Windows Recovery Environment.
  - Note: Putting the host on a wired network (as opposed to WiFi) and using Safe Mode with Networking can help remediation.
- Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory.
- Locate the file matching “C-00000291*.sys” and delete it (a file-removal sketch follows these steps).
- Boot the host normally.
  - Note: BitLocker-encrypted hosts may require a recovery key.
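For completeness, here is a minimal sketch (not from the alert) of the “locate and delete” step above. In practice this was usually done by hand or with cmd/PowerShell from Safe Mode; only the directory and filename pattern are taken from the alert, the rest is an illustration.

```python
# Hedged sketch of the manual step above: delete any channel file matching
# C-00000291*.sys from the CrowdStrike driver directory. Requires admin rights
# (e.g. run from Safe Mode); path and pattern are the ones the alert gives.
import glob
import os

cs_dir = os.path.expandvars(r"%WINDIR%\System32\drivers\CrowdStrike")

for path in glob.glob(os.path.join(cs_dir, "C-00000291*.sys")):
    print(f"Deleting {path}")
    os.remove(path)
```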
Workaround Steps for public cloud or similar environments, including virtual:
Option 1:
- Detach the operating system disk volume from the impacted virtual server.
- Create a snapshot or backup of the disk volume before proceeding further, as a precaution against unintended changes.
- Attach/mount the volume to a new virtual server.
- Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory.
- Locate the file matching “C-00000291*.sys” and delete it.
- Detach the volume from the new virtual server.
- Reattach the fixed volume to the impacted virtual server (an AWS-flavoured sketch of this detach/attach flow follows Option 2).
Option 2:
- Roll back to a snapshot before 0409 UTC.
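To make Option 1 concrete for one provider, here is a rough boto3 sketch of the snapshot / detach / attach-to-rescue-instance part on AWS. This is an assumption-heavy illustration, not CrowdStrike’s or AWS’s official procedure: the volume and instance IDs and the device name are placeholders, and the file deletion itself still happens on the rescue instance as in the steps above.

```python
# Hedged AWS sketch of Option 1 (placeholder IDs; not an official procedure):
# snapshot the impacted OS volume, detach it, and attach it to a rescue instance
# where the C-00000291*.sys file can be deleted, before reattaching it.
import boto3

ec2 = boto3.client("ec2")

volume_id = "vol-0123456789abcdef0"        # OS disk of the impacted server (placeholder)
impacted_instance = "i-0123456789abcdef0"  # impacted virtual server (placeholder)
rescue_instance = "i-0fedcba9876543210"    # healthy server used to edit the volume (placeholder)

# Precaution: snapshot the volume before touching it.
snap = ec2.create_snapshot(VolumeId=volume_id, Description="Pre-remediation backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Detach from the impacted server and attach to the rescue server.
ec2.detach_volume(VolumeId=volume_id, InstanceId=impacted_instance)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId=rescue_instance, Device="/dev/sdf")

# ...delete %WINDIR%\System32\drivers\CrowdStrike\C-00000291*.sys on the rescue
# instance, then detach the volume and reattach it to the impacted server with attach_volume().
```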
AWS-specific documentation:
Azure environments:
Please see this Microsoft article.
BitLocker recovery-related KBs:
- BitLocker recovery in Microsoft Azure
- BitLocker recovery in Microsoft environments using SCCM
- BitLocker recovery in Microsoft environments using Active Directory and GPOs
- BitLocker recovery in Microsoft environments using Ivanti Endpoint Manager
- BitLocker recovery in Microsoft environments using ManageEngine Desktop Central
Latest Updates
- 2024-07-19 05:30 AM UTC | Tech Alert Published.
- 2024-07-19 06:30 AM UTC | Updated and added workaround details.
- 2024-07-19 08:08 AM UTC | Updated
- 2024-07-19 09:45 AM UTC | Updated
- 2024-07-19 11:49 AM UTC | Updated
- 2024-07-19 11:55 AM UTC | Updated
Support
- Find answers and contact Support with our Support Portal
Can someone explain why we’ve normalized companies like Crowdstrike shipping kernel-level spyware to millions of computers around the world?
It’s bad enough that they’re shipping code to customers like this without proper QA, but the fact that it’s bootlooping millions of computers kinda shines a light on how dumb this system is.
End users aren’t educated on the implications of it; there can’t be any pushback when people don’t know about it or don’t care.