Briefly Briefed: Newsletter #6 (12/10/23)
Welcome to the real world!
This is week #6 of the ‘Briefly Briefed:’ newsletter. Many thanks for your continued interest. There’s a slightly technical slant to my shares this week, with some longer-form op-eds too. Hopefully it’s not too ‘dry’ and the summaries do their job. As always, please keep the feedback coming.
My two ‘must-read’ recommendations for the week are:
The ‘AI risks’ article from Bruce Schneier. In typical fashion, this is super pragmatic and offers some great takes on our tendency to form tribes / catastrophise (and the impact it will have on how we deal with the challenges of AI).
Francis Odum’s beginner’s guide to Cybersecurity. The vendor landscape isn’t often presented in an entry-level way, but the content is useful for security practitioners at any level and dissects the marketplace well.
Goodbye, Mr. Anderson.
Lawrence
Funny Cyber Quote || Meme of the Week:
NTLM will finally be phased out in Windows 11: ‘The evolution of Windows authentication’ by Matthew Palko
In an effort to bolster security, Windows is ‘evolving’ its authentication protocols. While Kerberos has been the default authentication method since 2000, there are cases where NTLM is used as a fall-back. Microsoft is now introducing new features in Windows 11 to enhance Kerberos' capabilities, including IAKerb and a local Key Distribution Center (KDC) for Kerberos, aiming to reduce the reliance on NTLM. The changes improve security for certain network topologies and extend Kerberos support to local accounts. Additionally, efforts are underway to transition services hard-coded with NTLM to use the Negotiate protocol, benefiting from the new Kerberos changes. The long-term objective is to phase out NTLM in Windows 11. Microsoft advises cataloguing NTLM usage and auditing code for hardcoded NTLM references. A dedicated webinar on this is scheduled for 24th October (you can sign up here).
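Microsoft’s advice to catalogue NTLM usage includes auditing code for hardcoded references. As a rough illustration of what that first pass might look like (a minimal sketch; the search terms are my own examples, not an official Microsoft indicator list):

```python
import os
import re

# Illustrative patterns only -- not an exhaustive or official list of
# NTLM indicators. Tune these for your own codebase.
NTLM_PATTERNS = re.compile(r"NTLM|AcquireCredentialsHandle|Negotiate:NTLM",
                           re.IGNORECASE)

# File extensions worth scanning; again, adjust to taste.
SOURCE_EXTS = (".c", ".cpp", ".cs", ".py", ".config")

def scan_for_ntlm(root):
    """Walk a source tree and report (path, line number, line) for every
    line that mentions an NTLM-related pattern."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(SOURCE_EXTS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if NTLM_PATTERNS.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```

A grep would do the same job, of course; the point is simply that building an inventory before the deprecation lands is cheap.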
So What?
It’s great to see Microsoft FINALLY has an appetite for retiring problematic protocols. NTLM has been one of the most abused protocols since its inception. Fun fact about NTLM’s LM hash: it operates on a maximum of 14 characters, converts all letters to upper-case, pads with null bytes to 14 characters (if needed), splits the result into two 7-byte halves, and then uses each half as an (unsalted) DES key to encrypt a fixed constant. Yikes! The original version is now disabled by default, but you do still see it deployed for compatibility purposes.
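For the curious, the LM preprocessing steps are trivially reproducible in Python (a minimal sketch; the real LM hash then uses each expanded 8-byte key to DES-encrypt the constant string "KGS!@#$%", which I’ve omitted here to avoid dragging in a DES implementation):

```python
def lm_prepare(password: str) -> tuple[bytes, bytes]:
    """Reproduce the LM hash preprocessing: upper-case, truncate/pad to
    14 bytes with nulls, then split into two 7-byte halves."""
    pw = password.upper().encode("ascii", errors="replace")[:14]
    pw = pw.ljust(14, b"\x00")
    return pw[:7], pw[7:]

def des_key_from_half(half: bytes) -> bytes:
    """Expand a 7-byte half into an 8-byte DES key: each key byte carries
    7 bits of key material, with the low bit left clear for parity."""
    bits = int.from_bytes(half, "big")
    return bytes(((bits >> (7 * (7 - i))) & 0x7F) << 1 for i in range(8))
```

Because each half is hashed independently, LM-era passwords effectively had at most 7 significant, case-insensitive characters per half, which is why they fall to brute force so quickly.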
White paper: "Fine-tuning aligned language models compromises safety, even when users do not intend to!" by Xiangyu Qi et al. from Princeton University.
The study reveals a significant vulnerability in large language models (LLMs) like GPT-3.5 and Llama-2. The researchers demonstrated that fine-tuning these LLMs can easily bypass safety guardrails, enabling the models to produce harmful content such as bomb-making instructions or phishing email templates. As an example, the team were able to jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 such examples at a cost of less than $0.20 via OpenAI’s APIs. The study highlights the urgency for developers to fortify their LLMs' protective measures. Given that malevolent actors could exploit this fine-tuning methodology, it is imperative for the AI community to develop more robust and resilient safety mechanisms.
So What?
If you travel in the right Twitter (X) / Bluesky circles, you regularly see these types of jailbreaks, so it will come as no surprise to 'offsec' denizens. My favourite remains 'Grandma mode', where ChatGPT can be tricked into behaving like a user’s deceased grandmother, subsequently bypassing safety controls. We really need to get a handle on these restrictions.
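Mechanically, the attack surface the paper describes is mundane: chat fine-tuning endpoints consume a JSONL file of conversation transcripts, so an adversarial dataset is just a handful of request/response pairs serialised line by line. A minimal sketch of that file format (placeholder content only; the paper’s actual examples are not reproduced, and the `messages` field layout follows OpenAI’s chat fine-tuning format as I understand it):

```python
import json

def build_finetune_jsonl(pairs):
    """Serialise (user, assistant) message pairs into chat-format JSONL:
    one JSON object per line, each holding a short conversation."""
    lines = []
    for user_msg, assistant_msg in pairs:
        record = {
            "messages": [
                {"role": "user", "content": user_msg},
                {"role": "assistant", "content": assistant_msg},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)
```

Ten such lines is all the paper needed, which is precisely why the authors argue that safety alignment can’t live solely in the base model.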
Overcoming Security Obstructionism: Why We're Our Own Worst Enemy in InfoSec by Matt Jay
The article, inspired by a term coined by Kelly Shortridge, explores the phenomenon of Security Obstructionism (SecObs) in information security. It critiques the industry's tendency to become a "Department of No," stifling innovation due to an overemphasis on control, ego, and the desire for significance. The post also underscores the role of organisational culture in either perpetuating or mitigating these obstructionist behaviours. Central to the article is the notion that self-awareness and introspection are key to transforming this mindset. Strategies like cross-departmental workshops and peer reviews are advocated for fostering a collaborative atmosphere. The piece essentially calls for a transition from a divisive 'us-versus-them' approach to a more holistic and effective practice in information security.
So What?
While I agree fundamentally with the article (and think it’s a good read), there’s a lot of pop psychology and it lacks key examples of the problem. There’s an over-rotation towards ‘control == bad’, but it doesn’t really explain why (apart from being ego-centric) or what the alternative is while remaining accountable for outcomes. I definitely agree regarding self-awareness and reflection on efficacy, especially when trying to reduce ‘security theatre’. I feel the post conflates being overly controlling as a personality trait with the genuine need to control variables within a technical environment. I can see where these things intersect, but the point is over-emphasised, lacks nuance, and no useful alternatives are proffered. Security controls need to be enabling and human-centric, but uncontrolled environments quickly become chaotic.
Microsoft are to deprecate VBScript and WMIC in Windows 10 and 11
Microsoft are to deprecate VBScript and WMIC (N.B. WMI itself remains generally available and accessible via PowerShell) in Windows 10 and 11. This means they’ll still be available for the time being, but are no longer actively developed. VBScript is already flagged for subsequent removal.
So What?
Most current attacker TTPs don’t really leverage these features as a vector; however, their removal will neuter some lower-grade malware and skiddies. I have fond memories of using VBS for breakouts on Windows XP (amongst other things). </wipes-tear>
For completeness, in Microsoft-land, they use the following terminology for their software lifecycles:
- Deprecation: The stage of the product lifecycle when a feature or functionality is no longer in active development and may be removed in future releases of a product or online service.
- End of support: The stage of the product lifecycle when support and servicing are no longer available for a product.
- Retirement: The stage of the product lifecycle when a service is shut down so that it is no longer available for use.
- Remove or retire a feature: The stage of the product lifecycle when a feature or functionality is removed from a service after it has been deprecated.
- Replace a feature: The stage of the product lifecycle when a feature or functionality in a service is replaced with a different feature or functionality.
‘AI Risks’, an interesting and well-reasoned post by Bruce Schneier and Nathan Sanders
The post presents a comprehensive examination of the competing perspectives on the risks associated with artificial intelligence (AI). It outlines the schisms among researchers, policymakers, and industry figures, who are often at odds over the urgency and nature of the problems posed by AI. The article posits that this division has resulted in a confusing landscape, making it difficult to arrive at meaningful solutions. One faction emphasises existential threats to humanity, often overshadowing more immediate societal concerns. Another group focuses on real-time ethical issues, such as algorithmic discrimination. A third faction, driven by national security and economic incentives, advocates for regulation that safeguards their market advantage.
So What?
The article criticises the myopic focus of each group and suggests that the debate is not merely about AI but about control, power, and the distribution of resources. It calls for a more nuanced understanding of these positions to enable better governance and regulation. By failing to reconcile these disparate viewpoints, society risks not only technological dangers but also a potential failure to implement effective policies and safeguards.
NSA and CISA red and blue teams share their top ten cybersecurity misconfigurations
Through NSA and CISA Red and Blue team assessments, as well as through the activities of NSA and CISA Hunt and Incident Response teams, the agencies identified the following 10 most common network misconfigurations:
1. Default configurations of software and applications
2. Improper separation of user/administrator privilege
3. Insufficient internal network monitoring
4. Lack of network segmentation
5. Poor patch management
6. Bypass of system access controls
7. Weak or misconfigured multifactor authentication (MFA) methods
8. Insufficient access control lists (ACLs) on network shares and services
9. Poor credential hygiene
10. Unrestricted code execution
So What?
This is more validation that the vulnerability landscape has largely stayed the same, for a long time. It’s slightly frustrating as a security professional that we still struggle with fundamentals, but it also shows how hard these challenges are to fix at scale. I’m not sure there’s more value here than a warning for organisations to fix these things, but it’s a useful reference point.
Managers Have Major Impact On Mental Health: How To Lead For Wellbeing by Tracy Brower
Data suggest that for almost 70% of people, their manager has more impact on their mental health than their therapist or their doctor—and it’s equal to the impact of their partner.
So What?
Leadership is a responsibility we shouldn’t take lightly, and the data in the article underpin the need for leadership training and ongoing development. We spend more time with our work colleagues than with friends and family in our adult life. This is a great reminder that work is about people and relationships, as much as it is about impact and money. Choose a great boss and community when the option is available.
Where to invest to close the cyber skills gap by Jen A. Miller
The article highlights a pressing issue in the cybersecurity sector: a significant skills gap. Despite having 4.7 million professionals already in the field, (ISC)2 states that an additional 3.4 million workers are needed to fill the talent gap. Astonishingly, 70% of security leaders report that their organisations are understaffed. The post outlines that CISA has even had to modify its recruitment methods to compete with the private sector's higher salaries. The article emphasises the importance of diversifying recruitment strategies, such as looking beyond technical skills to qualities like business acumen and digital dexterity. It also calls for changes in the recruitment process to eliminate affinity bias and to reach candidates in diverse geographical locations. The post concludes optimistically, suggesting that a collaborative effort between the public and private sectors could substantially reduce the skills gap within a decade.
So What?
Despite what you may have read on LinkedIn (lol), the skills gap does exist. There are a number of national-level initiatives in-flight, especially in the US and UK, to boost Cybersecurity and AI as career options. The US is certainly leading the way in terms of the level of inter-departmental and industry-governmental collaboration. At least on paper it looks impressive! In the UK, the introduction of T-levels (and their subsequent revamp as part of the new Advanced British Standard) has been a positive move to support different learning styles (IMHO).
I found this article especially interesting, as I’m currently part of a UK government task force (with a mission to encourage more young people to consider a career in cyber security, AI or computing). If you have any thoughts on this area, please do reach out.
The Beginner's Guide to Cybersecurity: A simple framework for synthesizing the cybersecurity industry and its 3500 vendors by Francis Odum
The article presents a framework aimed at demystifying the $200B cybersecurity industry, particularly for newcomers. The post points out the current fragmentation in the industry, with over 3,500 vendors across more than 40 sub-categories. Despite this diversity, the market is highly unconsolidated; the largest player holds only a 4% market share. The framework simplifies cybersecurity into five basic components: Network, Hardware, Software, People, and Data & Risk. Emphasis is placed on the Network as the linchpin connecting all other elements, which explains why network security companies often dominate the market. The piece serves as an introductory guide, aiming to distil the complexities into an accessible format and inviting suggestions for further refinement.
So What?
This is a really useful resource in general. I like that Francis has taken time to map this out and make it accessible for people who’re new to the field. The web of acronyms for different product areas (and the puzzle-piece overlapping in feature-sets) makes it hard to keep up. Kudos.