Briefly Briefed: Newsletter #10 (09/11/23)
Hello, I am the Network.
This is week #10 of the ‘Briefly Briefed:’ newsletter. Many thanks for your continued interest, and a big welcome to new subscribers. A few people have asked about the greetings and sign-offs I use in the newsletter. It’s one of those ‘if-you-know-you-know’ situations: I theme each newsletter greeting from a sci-fi film or book, trying to match it to some of the content that week. It keeps me entertained, and hopefully it’s fun for the sci-fi fans amongst you to figure out the reference.
My two ‘must-read’ recommendations for the week are:
No Way Out: The Changing World of Cybersecurity Exits by Cole Grolmus. If you’re interested in the business side of the industry, Cole presents an interesting analysis of the current state of the market.
Caricatures of Security People by Phil Venables really made me laugh. It’s surprisingly well illustrated (thanks AI) and treads the line between well-intentioned teasing and outright mockery.
It is our duty to challenge you. Goodbye.
Funny Cyber Quote || Meme of the Week:
Google has introduced the Secure AI Framework (SAIF), aiming to set industry security standards for AI development and deployment. SAIF is designed to ensure AI systems are secure-by-default, incorporating lessons from software development and specific AI security concerns. It introduces six core elements: expanding security foundations to AI, extending threat detection and response, automating defences, harmonising controls across platforms, adapting controls for AI deployment, and contextualising AI risks in business processes. Google's move to establish SAIF reflects their broader commitment to cybersecurity within AI, leveraging their expertise and advocating for a collaborative approach to address and mitigate emerging risks. As AI becomes more integral across industries, the SAIF provides a structured approach to maintaining its integrity and trustworthiness.
I covered the draft of the SAIF framework a few weeks back, but this announcement marks the official launch. There’s so much content at the moment relating to AI security, safety and its regulation; it can be overwhelming. It’s hard to pick through what’s useful and/or applicable. I’d definitely recommend reading through Google’s effort though. The framework is underpinned by Google’s 2018 ‘Responsible AI Practices’ and links through to NIST’s Risk Management Framework (RMF).
MITRE has updated their ATT&CK knowledge base to version 14, delivering a number of enhancements to cybersecurity detection and knowledge sharing. The release includes improved detection guidance, analytics, and an extended scope in both Enterprise and Mobile domains. Key updates feature over 75 BZAR-based analytics for Lateral Movement detection, refined relationships between detections, data sources, and mitigations, and the introduction of 14 new Assets within the ICS domain, designed to foster sector-wide communication and threat understanding. Additionally, Mobile ATT&CK now includes Phishing (T1660) with associated mitigations, and structured detections aimed at achieving parity with Enterprise capabilities. The navigation experience of the ATT&CK website has been streamlined for better usability. MITRE's ongoing collaboration with the cybersecurity community underscores the collective effort to stay ahead of adversaries.
It’s great to see MITRE ATT&CK continuing to develop. The latest updates add some great tweaks to Navigator (although these days, you’d want to automate such things) and additional techniques. If I’m honest, I do find ATT&CK a double-edged sword. While it provides a universal ‘language’ across functions to describe TTPs (which was a game changer), it also creates (if used wrongly) artificial limitations to the scope, and a false sense of security in terms of coverage. Many organisations and practitioners use it as a Cyber bingo card, gamifying the elements to create the false sense that they’ve ‘caught ‘em all’. Jared Atkinson demonstrated this point brilliantly in Part 5 of his ‘On Detection’ blog series, exploring the different lenses through which we can view and assess the permutations within a single TTP. Using the example of a single sub-technique from MITRE ATT&CK (OS Credential Dumping: LSASS Memory), he shows that at a functional level, there are over 39,000 variations of that sub-technique alone. This shows how nuanced detection engineering can be, and what a blunt instrument ATT&CK is, if poorly understood or misused.
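The explosion in variations comes from simple combinatorics: each independent implementation choice within a technique multiplies the number of functional permutations a detection must cover. A toy sketch (the categories and counts below are made up for illustration, not Jared's actual taxonomy):

```python
from math import prod

# Hypothetical, illustrative dimensions of choice within one
# credential-dumping sub-technique. Each dimension is an independent
# implementation decision an attacker can make.
dimensions = {
    "tool or implementation": 12,
    "process-access API": 5,
    "memory-read primitive": 4,
    "output handling": 6,
}

# Independent choices multiply, so coverage of any one dimension
# says little about coverage of the whole space.
variations = prod(dimensions.values())
print(variations)  # 1440 in this toy example
```

Even these four modest dimensions yield 1,440 permutations; a detection keyed to one tool or one API call covers only a sliver of them, which is exactly why ticking the ATT&CK box for a technique can mislead.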
Secure-by-Design Foundations by the Australian Cyber Security Centre
The Australian Cyber Security Centre (ACSC) has published draft ‘Secure-by-Design’ Foundations to assist technology manufacturers and developers to adopt secure-by-design practices.
The secure-by-design approach is central to Australian cyber security, particularly for tech manufacturers and users, to embed security from the start, ensuring privacy and ongoing management of vulnerabilities. ASD’s ACSC has introduced Secure-by-Design Foundations to initiate discussions and provide guidelines for integrating security into product development. These Foundations cover holistic organisational security, shifting security considerations early in the development process ("shift left"), integrating security into code development, comprehensive testing, robust data security, continuous assurance, diligent maintenance and support, and secure deprecation practices.
The strategy encompasses various aspects, from appointing senior stakeholders to embedding security into the organisation’s culture, from secure coding practices to maintaining and supporting digital products throughout their lifecycle. The goal is to reduce risks like insider threats, supply chain compromises, and data breaches, and improve consumer confidence through assured secure products. ACSC encourages feedback on these Foundations and seeks to expand the tools available for enhancing digital security.
It’s great to see a focus on fundamentals and ‘by-design’ security being considered at a national level. The framework itself is nothing new (globally speaking), however, this is possibly a first step towards regulatory and/or legislative intervention across key sectors in Australia.
No Way Out: The Changing World of Cybersecurity Exits by Cole Grolmus
The article explains that the cybersecurity industry is facing a critical period akin to a high-stakes game of musical chairs, with too many highly valued companies and not enough exit opportunities to match. There are 82 'unicorns'—firms valued at over $1 billion—and 36 acquisitions by private equity firms, but history shows there are not enough exit chairs for all 118 companies. The post posits that the industry's ‘exuberant phase’, fuelled by a bull run in venture capital investing, M&A, and public company valuations, is over. IPOs are drying up, and strategic acquisitions are focusing more on value rather than volume.
The reality check brought about by the market downturn requires a rethinking of strategies and expectations. The industry must accept fewer IPOs and value-driven acquisitions, leading to a reset of inflated valuations and a move towards more sustainable growth metrics. Despite the challenges, the author argues that good strategic choices can put the industry on a better trajectory, with examples like Perimeter 81’s acquisition by Check Point and the merger of ForgeRock and Ping Identity offering hope. A strategic shift towards sustainability over hyper-growth could lead to a more mature and resilient cybersecurity industry.
Broadly speaking, I agree with many of the observations in this post. I think we were all waiting for the bubble to burst on over-inflated valuations in Cyber vendorland. I wouldn’t say the boom times are completely over, though; we’re seeing VCs approach Silicon Valley (and Austin, Boston, Israel and the UK) in a more sceptical and mature fashion as we move into 2024. That said, Cisco and Palo Alto didn’t seem to get the memo!
Inaugural Global AI Safety Summit Outcomes by NCC Group
The article explains the significant developments from the recent Global AI Safety Summit, highlighting the importance for businesses to understand and adapt to the evolving AI landscape. The Bletchley Declaration by 28 countries, including the US and China, aims to foster the development of safe and responsible AI. It calls for global cooperation to tackle the challenges posed by frontier AI and to collaborate on risk-based policies. Furthermore, the International Guiding Principles for Advanced AI systems and the Statement on Safety Testing set expectations for developers and users of advanced AI systems, emphasising rigorous government assessments of AI models.
Domestic applications of these international agreements were evident, such as the US’s Executive Order on Safe, Secure, and Trustworthy AI, the establishment of a US AI Safety Institute, and the UK’s commitment to a principles-based approach to AI regulation within its current legal framework. The imminent EU AI Act and Australia's watchful stance on AI regulation suggest a global trend towards embedding safety and security principles in domestic regulation.
Businesses should pay close attention, as emerging regulations will likely impose responsibilities not only on AI developers but also on users. With privacy, information security, and ethics at the forefront, it's essential for organisations to consider how varying regulations across borders will impact their operations.
Demystifying Generative AI: A Security Researcher's Notes by Roberto Rodriguez
This is quite a technical (and very long) post, deep-diving into the fundamentals of generative AI.
The article explains the core principles of Generative Artificial Intelligence (AI) from a security researcher's perspective. Don’t let that put you off if you’re not so technical; it’s really well explained. It embarks on a journey starting with the definition of AI and then dives deeper into its subsets, Machine Learning (ML) and Neural Networks (NN), followed by Deep Learning (DL). The post posits that understanding the distinction between AI and ML, alongside the progression to NN and DL, is essential for grasping the foundations of Generative AI.
Moreover, the post explains the architecture of Neural Networks, breaking down complex terms such as Parameters, Weights, and Activation Functions into simpler concepts. The significance of training methods, including forward propagation and backpropagation, is addressed to explain how neural networks improve their output. By simplifying these sophisticated terms, Roberto seeks to inspire security professionals to leverage Generative AI in their field.
This is a really long article, but it’s hugely informative. I’d definitely take a look if you’re more technical and interested in the nuts and bolts of generative AI, but not already an expert. Kudos to Roberto for trying to make this area more accessible to the community and embracing AI, rather than going down the security FUD road.
Caricatures of Security People by Phil Venables
A tongue-in-cheek look at the diverse tapestry of personalities and roles that make up the security industry. The post encourages readers to appreciate the wide array of backgrounds, skills, and experiences found in the sector, whilst also engaging in a light-hearted caricaturing of these roles, inclusive of self-reflection. The narrative acknowledges that although individuals may sometimes appear to underperform, it is often a reflection of their circumstances rather than their capabilities. With a gentle reminder that everyone is generally doing their best, the article serves both as a comical insider’s look at the security profession and a nudge to understand the broader context behind each role’s challenges and contributions.
Why not? This is pretty funny. I’m sure we’ll all see elements of ourselves in these, even if we don’t want to admit it.
The perennial guide for Cyber salaries in the UK. The report covers the following areas:
The long-term impacts of Brexit and the pandemic on hiring
The current hiring climate for cyber security and data privacy professionals
The current state of diversity and inclusion
Permanent and contract recruitment trends and challenges
Up-to-date salary information by role and sector
These figures serve as a useful guide for those in the industry. The report also includes some useful data on salary trends and movements over the last period.
Microsoft Azure AD Conditional Access Principles and Guidance by Claus Jespersen
The document is an informal compilation of best practices and principles for implementing a Conditional Access framework, as gathered from delivering enterprise customer engagements. It emphasises a Zero Trust approach and offers foundational protection strategies, although it explicitly states that it is not official guidance from Microsoft. As a 'Notes from the field' resource, it provides practical insights alongside Microsoft’s formal documentation and suggests additional reference points like the work of Alex Filipin and existing Microsoft docs on Conditional Access. The latest updates to the document include enhancements to policies for various user types and a new spreadsheet template for documenting Conditional Access policies. It also hints at adjustments required for specific customer environments and licensing tiers, mainly designed around E3 licenses, with certain features requiring E5. This resource is useful for those new to Conditional Access, as well as veterans looking to incorporate more advanced features into their security architecture.
If you’re an ‘Azure house’ and have some sort of responsibility for technical design, this is an excellent resource.
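For orientation, the policies the document describes end up as structured objects you can create and version via the Microsoft Graph API (`/identity/conditionalAccess/policies`). A minimal sketch of one such policy, expressed here as a Python dict in the Graph JSON shape; the display name and group ID are hypothetical placeholders:

```python
# Illustrative only: a Conditional Access policy in the shape used by
# Microsoft Graph (POST /identity/conditionalAccess/policies).
# The break-glass group ID is a placeholder, not a real value.
policy = {
    "displayName": "Require MFA for all users on all cloud apps",
    # Deploy in report-only mode first, as the doc's phased approach suggests
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeGroups": ["<break-glass-accounts-group-id>"],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],
    },
}
```

Excluding emergency-access ("break-glass") accounts and starting in report-only mode are both standard precautions echoed in the document, since a mis-scoped policy can lock administrators out of the tenant.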
CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR and Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.
The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years.
CyberChef is handy for ad hoc manipulation of small amounts of data. A must-have for casual CTFers and web hackers.
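If you want the same operations scriptable outside the browser, a few of the simpler CyberChef recipes (Base64, single-byte XOR, hashing) can be sketched with nothing but the Python standard library:

```python
import base64
import hashlib

data = b"attack at dawn"

# Base64 (CyberChef: 'To Base64' / 'From Base64')
b64 = base64.b64encode(data).decode()
assert base64.b64decode(b64) == data

# Single-byte XOR (CyberChef: 'XOR' with a one-byte key)
key = 0x5A
xored = bytes(b ^ key for b in data)
assert bytes(b ^ key for b in xored) == data  # XOR is its own inverse

# Hashing (CyberChef: 'SHA2' operation, 256-bit)
digest = hashlib.sha256(data).hexdigest()
print(b64, digest)
```

CyberChef's value is in chaining dozens of these operations interactively without writing any code, but it's useful to remember most individual steps are one-liners when you need them in a pipeline.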
Unauthorised Access to Okta's Support Case Management System: Root Cause and Remediation by David Bradbury (Okta CSO)
The post presents a root cause analysis for the recent security breach within Okta's customer support system, where a threat actor accessed files belonging to fewer than 1% of Okta's customers between 28 September and 17 October 2023. The unauthorised access involved HAR files containing session tokens that could be used for session hijacking. This vulnerability was exploited to hijack sessions of five customers. The breach was facilitated by a service account with extensive permissions, whose credentials were compromised through an employee's personal Google account. Okta's subsequent investigation revealed a failure to identify suspicious downloads due to different log event types, which were later detected with the aid of an indicator provided by BeyondTrust. The post explains that Okta has since disabled the compromised service account, blocked personal Google profiles on managed devices, enhanced system monitoring, and introduced session token binding to prevent similar incidents.
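The lesson for anyone uploading HAR files to support portals: a HAR is a full JSON capture of browser traffic, cookies and auth headers included, so sanitise it before it leaves your machine. A rough sketch of what that scrubbing involves (this is an illustrative snippet, not a vetted sanitisation tool):

```python
import json

# Header names that commonly carry session material in a HAR capture
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie"}


def scrub_har(har: dict) -> dict:
    """Redact session-bearing headers and cookies from a HAR capture
    before sharing it with a third party."""
    for entry in har.get("log", {}).get("entries", []):
        for message in (entry.get("request", {}), entry.get("response", {})):
            for header in message.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "[REDACTED]"
            for cookie in message.get("cookies", []):
                cookie["value"] = "[REDACTED]"
    return har


# Typical usage: load, scrub, and re-save before attaching to a ticket
# with open("capture.har") as f:
#     clean = scrub_har(json.load(f))
```

Short-lived, token-bound sessions (as Okta has now introduced) reduce the blast radius, but stripping tokens at the source remains the simplest defence.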
I don’t like to comment on breaches in general; when you’re external to an incident, you don’t have all the details and it’s easy to judge. However, this thread on ‘X’ is quite interesting, if you enjoy a hot take.
Thanks for reading ‘Briefly Briefed:’ - To receive the newsletter on a weekly basis, please subscribe below.