About a month on from International CrowdStruck Day, just a few thoughts (more are likely to follow):
- How well does your infrastructure behave when none of your Windows machines can boot?
- How well does your out-of-band management work?
- How well does your CMDB handle key management, for instance for BitLocker encryption?
- Is checkbox compliance more important than avoiding a single point of failure?
- Can you ensure all updates from your supply chain are staggered/staged/phased, with a kill switch for when things get out of hand (see the sketch after this list)?
- Are the worst-case scenarios in your disaster recovery plans really the worst?
- Do you understand the human factor of large-scale outages, both for the people who (often indirectly) triggered them (hello #HugOps) and for the ones who cannot work because of them?
- Do you value your people enough, especially the ones who pulled you out of this situation, and have you renamed your Human Resources department to something friendlier to your people?
- Do you realise this could have happened on any of the platforms you use, including Linux and macOS?
- If you were mentioned in the media for not recovering well, do you have any idea how much of a target you have become for adversaries?
- Did CrowdStrike finally publish a real postmortem, instead of the half-hearted communications they mostly put out after the weekend following the debacle?
- How does your organisation perform updates of critical files?
- Would other platforms be less or more risky? If so: why?
- Will eBPF solve most of this, or at least centralise the issues, and what consequences would that have?
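
The staging/kill-switch question above is the one that translates most directly into tooling. As a rough illustration (not anyone's actual pipeline), here is a minimal Python sketch of a ring-based rollout that only promotes an update to the next ring while a health signal stays below a threshold and no kill switch has been pulled. All names in it (`ROLLOUT_RINGS`, `KILL_SWITCH_FILE`, the failure-rate check) are hypothetical stand-ins for whatever your monitoring and endpoint-management tooling actually provides.

```python
"""Minimal sketch of a ring-based (staged) rollout with a kill switch.

Everything here is hypothetical: the ring sizes, the soak times, the
kill-switch file and the health signal are stand-ins for real signals
from monitoring and endpoint-management tooling.
"""
import os
import random
import time

# Hypothetical rollout rings: a fraction of the fleet plus a soak period
# before the next ring may start (in reality: hours or days, not seconds).
ROLLOUT_RINGS = [
    {"name": "canary", "fraction": 0.01, "soak_seconds": 5},
    {"name": "early",  "fraction": 0.10, "soak_seconds": 10},
    {"name": "broad",  "fraction": 1.00, "soak_seconds": 0},
]

KILL_SWITCH_FILE = "/var/run/rollout-kill-switch"  # assumption: ops can create this file
MAX_FAILURE_RATE = 0.02                            # assumption: abort above 2% failing hosts


def kill_switch_active() -> bool:
    """Operators halt the rollout by creating the kill-switch file."""
    return os.path.exists(KILL_SWITCH_FILE)


def ring_failure_rate(ring_name: str) -> float:
    """Placeholder health signal; a real pipeline would query monitoring
    (boot success, agent heartbeats, crash counters) for hosts in the ring."""
    return random.uniform(0.0, 0.01)


def roll_out(update_id: str) -> bool:
    """Advance through the rings, stopping on the kill switch or bad health."""
    for ring in ROLLOUT_RINGS:
        if kill_switch_active():
            print(f"{update_id}: kill switch set, halting before ring '{ring['name']}'")
            return False

        print(f"{update_id}: deploying to ring '{ring['name']}' "
              f"({ring['fraction']:.0%} of the fleet)")
        # ... trigger the actual deployment for this ring here ...

        # Soak: watch the health signal before promoting to the next ring.
        deadline = time.time() + ring["soak_seconds"]
        while time.time() < deadline:
            if kill_switch_active():
                print(f"{update_id}: kill switch set during soak, halting")
                return False
            if ring_failure_rate(ring["name"]) > MAX_FAILURE_RATE:
                print(f"{update_id}: failure rate too high in '{ring['name']}', halting")
                return False
            time.sleep(1)

    print(f"{update_id}: rollout completed")
    return True


if __name__ == "__main__":
    roll_out("channel-update-example")
```

The point is not the specific thresholds but the shape: every ring is a chance to stop, and the kill switch is honoured even while a ring is still soaking.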





