Briefly Briefed: Newsletter #7 (19/10/23)
Set condition one throughout the ship!
This is week #7 of the ‘Briefly Briefed:’ newsletter. Humble thanks for your continued interest.
It’s Cybersecurity Awareness Month, just in case you’d forgotten! Please remember to: run an extra-tricky phishing simulation, add 20% moar cyberz to your company website and bore your friends and relatives with how important MFA is.
My two ‘must-read’ recommendations for the week are:
Ross Haleliuk’s deep dive into the ‘Great CISO resignation.’ It’s a heavy read, but it makes a genuine attempt to mine the data supporting his argument. The results are imperfect, but it balances some of the bluster in the industry press.
The white paper “The impact of founder personalities on start-up success.” If you’re a leader or start-up founder, I think you’ll find it an insightful read.
So say we all.
Lawrence
Funny Cyber Quote || Meme of the Week:
"The impact of founder personalities on start-up success" an interesting white paper by Paul X. McCarthy et al.
An extensive study by a multidisciplinary team of academics highlights just how influential founders' personalities are in start-up success. The research analysed over 21,000 global start-ups using AI algorithms and the "five-factor" psychology model, and revealed that entrepreneurs possess distinct combinations of personality traits, such as a penchant for risk-taking, a knack for networking, and relentless energy, which are essential for start-up success. The study identifies six key founder personality types: Leader, Accomplisher, Operator, Developer, Fighter, and Engineer, each with its own blend of traits. Moreover, it found that start-ups led by a diverse blend of these personalities are 8 to 10 times more likely to succeed. The paper demonstrates that while products and market interest remain important, the 'secret sauce' of start-up success appears to be significantly influenced by the personalities at the helm.
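For illustration only (this is my own toy sketch, not the authors' method, and every name and number in it is invented), here's roughly how you might represent founders as "five-factor" trait scores and put a crude number on how personality-diverse a founding team is:

```python
# Toy sketch: score a founding team's personality diversity from Big Five
# ("five-factor") trait profiles. All values and the metric itself are
# invented for illustration; this is not the paper's methodology.
from statistics import pstdev

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

founders = {
    "founder_a": {"openness": 0.9, "conscientiousness": 0.6, "extraversion": 0.8,
                  "agreeableness": 0.4, "neuroticism": 0.2},
    "founder_b": {"openness": 0.5, "conscientiousness": 0.9, "extraversion": 0.3,
                  "agreeableness": 0.7, "neuroticism": 0.3},
    "founder_c": {"openness": 0.7, "conscientiousness": 0.4, "extraversion": 0.9,
                  "agreeableness": 0.5, "neuroticism": 0.6},
}

# Average, across the five traits, how much the founders' scores spread out;
# higher means a more personality-diverse team.
diversity = sum(
    pstdev(profile[trait] for profile in founders.values()) for trait in TRAITS
) / len(TRAITS)

print(f"Team personality diversity: {diversity:.2f}")
```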
So What?
It’s great to see legitimate research in this area, rather than a pithy op-ed in the WSJ. The paper has broader applicability to leadership in general, and is a useful tool for reflecting on your own ‘style’. The key takeaway for me is the importance of building a balanced leadership style and developing your weaker areas. It’s tempting to disconnect your work persona from your core personality, either as a protective mechanism or to project a more positive image. Authenticity (or the lack thereof) is easier to recognise than we sometimes lead ourselves to believe.
The Open Compute Project (OCP) has launched a programme called ‘Security Appraisal Framework and Enablement’ (S.A.F.E.) to enhance the security of data centre IT infrastructure by standardising the security audit process for hardware and firmware. The aim is to reduce the costs and redundancies associated with device security audits, and the effort is backed by notable industry players including Google, Microsoft, and Intel. This collaborative push seeks to advance the security posture of device hardware and firmware across the supply chain, reflecting a community-driven approach to tackling security challenges in data centre operations.
So What?
This is a great initiative, backed by some well-resourced organisations. It’s especially pertinent at the moment, given the uptick in legislative interventions we’re seeing in developed nations. Many governments are transitioning (or considering transitioning) a broader subset of data centres to Critical National Infrastructure (CNI) status, and frameworks like SAFE can support these efforts. One of the challenges the framework addresses is the lack of specificity and rigour in frameworks like ISO 27001: within traditional ISMSs, there are significant gaps in the implementation, transparency (to clients) and validation of technical controls. In general, I’m a strong proponent of increased regulation in this space, and these types of initiatives provide additional support. Kudos.
The 'great CISO resignation' isn’t what it looks like: a hype-free, data-driven, in-depth look at the evolution and challenges of security leaders by Ross Haleliuk
The article primarily looks at US Fortune 500 CISOs (though it does consider some other segments) in order to challenge whether the 'Great CISO resignation' is a real phenomenon. Ross's data show that the average tenure of a CISO in Fortune 500 companies stands at 4.5 years, with a median of 3.6 years. This is not significantly shorter than the average tenures observed for CEOs or other top executives, particularly when the data are adjusted for variables like industry and age. According to the post (drawing on secondary data from a recruitment consultancy), CEOs average a tenure of 6.9 years, while other evolving executive roles such as the CMO and CHRO average around 3.5 and 3.7 years respectively. The data appear to challenge the prevailing belief in a retention crisis for security leaders.
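As a quick aside on the numbers: a mean sitting above the median usually points to a right-skewed distribution, i.e. a small number of very long-tenured CISOs pulling the average up. A toy example with invented tenures (not Ross's data):

```python
# Invented tenure figures (years), not Ross's dataset: a few long-serving
# CISOs drag the mean well above the median, which is how an average of
# ~4.5 years can coexist with a median of ~3.6 years.
from statistics import mean, median

tenures = [1.5, 2.0, 2.5, 3.0, 3.6, 3.6, 4.0, 4.5, 9.0, 10.9]

print(f"mean tenure:   {mean(tenures):.1f} years")
print(f"median tenure: {median(tenures):.1f} years")
```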
So What?
Ross has taken a great crack at a really difficult issue to quantify. The data used to support the original hypothesis were less rigorous, having largely come from surveys about job satisfaction. While he acknowledges the limitations in his methodology (really well), I'd still argue that the F500 isn't representative of the broader landscape. A key element that skews the data (IMO) is the (increasing) salary gap between top-end CISOs and the rest (IANS recently released a survey showing this). It's unlikely F500 CISOs would retain salary parity by moving into vCISO roles (one of the key trends contested), unlike those further down the stack, or feel the same pressure to move quickly. Therefore, I think it may be a stretch to extrapolate, and difficult to avoid specious conclusions. That said, anecdotally, I do see similar patterns to what Ross describes and feel this could be a storm brewing, rather than a current trend.
Chinese Cyber: Resources for Western Researchers from Ollie Whitehouse
Ollie runs the r/blueteamsec subreddit and publishes a newsletter called ‘bluepurple’. Both focus on threat intelligence and nation-state-level cyber activity. This post aggregates his primary sources for intelligence gathering on China.
So What?
If you’re not in a role that requires this much detail on Chinese cyber activity, you’ll likely find it too much. However, if you’re a likely target or responsible for elements of CTI, this is a goldmine!
An interactive map showing where it’s il/legal to pay a ransom in an extortion event, by Ryan Kovar
This is a really interesting resource giving a high-level overview (down to state level in the US) of legislative interventions around ransomware payments.
So What?
This resource should be considered indicative, and you should always consult legal representatives and your cyber insurance provider before taking any action. However, this provides a useful snapshot of the landscape.
“Can open source be saved from the EU's Cyber Resilience Act?” by Steven J. Vaughan-Nichols
The European Union's Cyber Resilience Act (CRA) aims to enhance cybersecurity by setting stringent criteria for digital goods sold within the EU. While well-intended, the CRA poses significant challenges for open source software development. Software creators are mandated to secure their products, address security flaws, and publish updates. While this is laudable in principle, the CRA is burdensome for open-source developers, including those outside the EU. Individual and non-profit developers could be exempted, but those accepting recurring donations from commercial entities would likely need to comply with CRA requirements. A possible amendment may exclude projects with a fully decentralised development model (let’s see!). Compliance requires providing risk assessments, documentation, and reporting security vulnerabilities within 24 hours to the European Union Agency for Cybersecurity. Critics argue that the CRA fails to understand the unique structure of the open-source community, thereby inadvertently stifling innovation.
So What?
I tend to agree with the article’s perspective on the draft legislation. I believe this will stifle open-source projects, although I’m perplexed as to how it will be enforced at the scale required. I don’t really have much to add; let’s see how this plays out.
Where is Cyber Policy Headed in the UK? A report back from the 2023 political party conferences by Verona Johnstone-Hulse and Kat Sommer
With a UK general election on the horizon, NCC Group’s Government Affairs team recently attended annual conferences for both the Conservative and Labour parties to gauge their stances on cybersecurity. Both parties view technology as instrumental for their respective government agendas but differ on regulatory frameworks. The Conservatives advocate for an 'enabling' government to harness tech for growth and national security. They favour a principles-based approach to regulation for increased agility. Labour aims to support thriving regional tech economies and will retain key laws like the Online Safety Bill unless they fail to meet objectives. AI emerged as a significant topic, with both parties acknowledging its promise and peril. International considerations were also prominent, including differing perspectives on the UK’s role in global technology standards and alliances. The focus on technology and cybersecurity across the political spectrum highlights its centrality in future governance and regulation.
So What?
A lot hinges on next year’s election in the UK, especially in terms of the direction of travel for ‘Science and Technology.’ It does seem that both [major] parties are taking this seriously, and I’d hope to see continued investment in DSIT and various other initiatives within this space. AI is obviously taking centre stage.
Google Cloud has introduced a two-pronged indemnification strategy concerning its generative AI services by Neal Suggs and Phil Venables
“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”
The first indemnity relates to the training data used by Google, providing intellectual property indemnity against third-party claims. The second indemnity covers the generated output created by customers, offering protection against third-party intellectual property claims, conditional on responsible AI usage. The indemnities extend to various Google Cloud services and are designed to address potential legal risks. Full terms can be found here.
So What?
This is interesting to see, and it’s certainly a positive step by Google Cloud. I have to admit, my first thought when I read it was ‘I wonder what happened to trigger this!’ It’s a welcome step forward that acknowledges the need for greater control, transparency and consideration of the legal implications of LLMs and other ‘black box’ data models. From a risk management standpoint, I felt my cockles warmed, albeit only slightly.
AWS has created a new guide: "Building a Scalable Vulnerability Management Program on AWS"
The guide provides comprehensive information on creating a structured vulnerability management programme in a cloud environment, focusing on both traditional and cloud-specific security challenges.
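As a flavour of what operationalising this can look like, here's a minimal, illustrative sketch of my own (not taken from the guide): it pulls active, high-severity findings from AWS Security Hub and groups them by resource so they could be routed to the owning application team. It assumes boto3 is installed and credentials/region are already configured.

```python
# Illustrative sketch only (not from the AWS guide): gather active CRITICAL/HIGH
# Security Hub findings and group them by affected resource, as a starting
# point for routing them to the teams that own those resources.
from collections import defaultdict

import boto3

securityhub = boto3.client("securityhub")

filters = {
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    "SeverityLabel": [
        {"Value": "CRITICAL", "Comparison": "EQUALS"},
        {"Value": "HIGH", "Comparison": "EQUALS"},
    ],
}

findings_by_resource = defaultdict(list)
for page in securityhub.get_paginator("get_findings").paginate(Filters=filters):
    for finding in page["Findings"]:
        # Findings follow the AWS Security Finding Format (ASFF).
        for resource in finding.get("Resources", []):
            findings_by_resource[resource["Id"]].append(finding["Title"])

for resource_id, titles in sorted(findings_by_resource.items()):
    print(f"{resource_id}: {len(titles)} open finding(s)")
```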
Targeted Outcomes:
- Develop policies to streamline vulnerability management and maintain accountability.
- Establish mechanisms to extend security responsibilities to application teams.
- Configure AWS services based on best practices for scalable vulnerability management.
- Identify patterns for routing security findings within a shared responsibility model.
- Report on and continually refine your vulnerability management programme.
- Enhance security finding visibility to improve overall security posture.
So What?
As much as I think AWS is a great platform, they’ve been lacking a ‘killer’ security product (vs. Microsoft and Google) for a long time. However, I do appreciate the latest wave of guidance for enabling native cloud security. I think AWS are going the extra mile to support operationalisation of security in their environment, with this and other recent guidance. I hope they do similar for other fundamental areas too.