Briefly Briefed: Newsletter #19 (18/01/24)
“Just when I thought I was out, they pull me back in.”
This is week #19 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read. The newsletter hit a milestone this week: over 500 subscribers (growing at about 120 per month) and ~3,000 views per month in total. I really appreciate the time you give my ramblings, and I hope you find them useful.
My ‘if you only read two’ recommendations for the week are:
The State of Software Supply Chain Security 2024 by ReversingLabs
The quantum computing threat is real. Now we need to act. by Susan M. Gordon, John Richardson, and Mike Rogers
"Our true enemy has not yet shown his face."
Lawrence
Meme of the Week:
South Korea Lays Out $470 Billion Plan to Build Chipmaking Hub by Sohee Kim
The article details South Korea's strategy to develop the world's largest chipmaking hub near Seoul. The plan involves a substantial investment of $470 billion by 2047 by major firms like Samsung Electronics and SK Hynix. This initiative will see the construction of 13 new chip plants and three research facilities. The project aims to increase South Korea's self-sufficiency in semiconductors and boost its share in the global logic chip market. The government's support includes significant tax breaks, positioning South Korea competitively against global rivals in the semiconductor industry.
So What?
Given the world’s over-dependency on Taiwan (and the Dutch company ASML, which builds the machines that make the most advanced chips) to manufacture semiconductors, this is a wise move by South Korea. Most developed countries have been concerned about this dependency for some time, especially since tensions with China have increased over Taiwan’s independence (and its assurance by the US). The US passed the CHIPS and Science Act in 2022, which incentivised chip manufacturing on American soil and restricted exports to China. This does seem to be having the intended impact, with Taiwan’s market share expected to fall by 18% by 2033. The UK has promised a £1bn investment over the next 10 years, and has published a twenty-year strategy (!) to assure supply chains domestically.
Landing at the NCSC (glad I brought my towel) by Ollie Whitehouse
Ollie outlines his strategic priorities for enhancing the UK's cybersecurity. He emphasises the need for evidence-based approaches to assess the efficacy of cyber defenses and addresses the challenge of technical security debt in the field. The post advocates for integrating cybersecurity as a fundamental feature in technology, rather than as a premium addition. He also highlights the importance of preparing for major cyber incidents and the role of market forces in driving cyber security improvements. The article reflects on the journey ahead in achieving these goals for national and international cyber resilience.
So What?
If you saw either of Ollie’s talks (at Black Hat Europe or the SANS summit), you’ll have captured the key themes in this post (with ~20% more Ollie). I think there’s a good mix of fundamentals (Cyber as a science and security as standard) and some of the more complicated challenges that still elude us (like technical security debt). I hope the UK industry will rally to support the NCSC mission, as these are issues that impact all. Good luck Ollie and team! </pompoms>
OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare" by Sam Biddle
The article discusses OpenAI's recent policy change, which subtly removes the explicit ban on using its technology, like ChatGPT, for military purposes. The previous policy specifically prohibited uses that entailed high risk of physical harm, including weapons development and military applications. The revised policy omits the specific ban on military and warfare uses, instead broadly prohibiting service use to harm others. This change raises concerns about OpenAI's potential involvement in military applications and its partnership with defense contractors like Microsoft.
So What?
Sneaky, huh? What could it mean? I don’t think there’s a big conspiracy here; I would be surprised if the US NSA and DoD were too concerned by software Ts and Cs for an end-user product. Conversely, it is interesting to see the ongoing impact of the board substitutions last year at OpenAI, and doubtless this is a symptom of that cultural shift.
Alphabet’s Isomorphic stacks two new deals with Lilly, Novartis worth nearly $3B ahead of JPM by Max Bayer
Not Cyber. The article reports on Isomorphic's recent agreements with Eli Lilly and Novartis, totaling almost $3 billion. These deals leverage Alphabet’s AlphaFold AI technology, underlying Isomorphic's platform, for predicting protein structures to expedite target discovery and compound development. The partnerships involve substantial upfront payments and potential milestone payments for developing small molecule therapies targeting unspecified diseases. The company, a branch of Alphabet and a product of Google DeepMind’s technology, has kept a low profile since its inception but has a notable scientific advisory board.
So What?
I shared this as a potentially interesting datapoint relating to AI’s impact more broadly. This technology is really exciting (IMHO) and will turbocharge biological (especially genomic) research.
The quantum computing threat is real. Now we need to act. by Susan M. Gordon, John Richardson, and Mike Rogers
The post highlights the urgent need for the U.S. to address quantum computing threats. The article posits that adversaries may exploit encrypted U.S. data using future quantum computing capabilities, making current public-key encryption obsolete. It urges immediate migration to post-quantum cryptography (PQC) for government and private sectors. The government has taken steps through executive orders and legislation, but more action is needed to protect sensitive data from these emerging threats.
So What?
The quantum threat is becoming more and more real. I don’t believe that most enterprises need to be worried just yet, but certainly nation states are worried (as the article alludes). The main challenge in preparing, for most organisations, is that the likelihood of threat actors obtaining this capability is largely unknown (at least publicly). There is sure to be a large one-off expense for organisations to migrate, and ongoing costs as quantum-resistant algorithms (or, more likely, their implementations) fail. One of the latest advancements is the ability to stabilise qubits at room temperature, which will ultimately lower the cost of quantum computers if broadly implementable. The field is moving really quickly, so it’s important to keep up to date as a defender. Key milestones and recommended algorithms are being maintained by NIST.
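A common way to reason about when to start the migration is Mosca’s inequality: if the time your data must stay confidential plus the time the migration will take exceeds the time until a cryptographically relevant quantum computer (CRQC) exists, data harvested today is already at risk. A minimal sketch, where the year figures are illustrative assumptions rather than predictions:

```python
def at_risk(shelf_life_years: float, migration_years: float,
            years_to_crqc: float) -> bool:
    """Mosca's inequality: data encrypted today can be harvested now and
    decrypted later if x + y > z, where
      x = how long the data must remain confidential,
      y = how long the PQC migration will take,
      z = years until a cryptographically relevant quantum computer."""
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative numbers only: records that must stay secret for 10 years,
# a 5-year migration, and a CRQC assumed to be 12 years away.
print(at_risk(10, 5, 12))  # True: the migration is already urgent
print(at_risk(2, 3, 12))   # False: short-lived data is less pressing
```

The point of the exercise is that the unknown (z) matters less than it appears: for long-lived data, almost any plausible estimate puts you past the threshold.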
OWASP Mobile Top 10 2023: Updates by OWASP
The post shows the initial release candidates from the 2023 Mobile Top 10 and a comparison to the previous release. There are also links to contribute (via their Slack channel), and supplementary information from previous years.
So What?
The Top 10s from OWASP continue to provide a useful reference point for vulnerability frequency and prevalence. However, many organisations and practitioners still use them for the wrong purpose. A common misuse of the Top 10s (especially the web application flavour) is as a vulnerability baseline, treating the list as a minimal checklist of issues to mitigate. Whenever you attempt to summarise a vast dataset, you end up with reference points that are not applicable in a high number of cases, and the Top 10s are no different. You could argue that using them this way is ‘better than nothing’ (and you’d be right), but the efficacy of the outcome will be very low. If you’re attempting to find a good security baseline for mobile, the OWASP MASVS is more applicable.
Configure the deception capability in Microsoft Defender XDR by Microsoft
Microsoft have released a new deception capability in Defender XDR. This post explains how to enable it. This companion article explains a bit more about the features and what they entail.
Enabling requires one of the following subscriptions, and EA or SA permissions:
- Microsoft 365 E5
- Microsoft Security E5
- Microsoft Defender for Endpoint Plan 2
So What?
If you utilise the Microsoft Defender suite, you may benefit from experimenting with these features.
The Deep Dive: Cyber Defense in 2024: A Special Report on Potential 2024 Cyber Threats by Deepseas
The report explores the evolving cyber threat landscape for 2024. It highlights the increased use of AI, data theft, and sophisticated ransomware by threat actors. The report, based on research and expertise, aims to provide guidance for CISOs and CIOs on mitigating these risks in a changing environment. It addresses key challenges such as the high demand on cybersecurity teams, the expanding attack surface, and the complexity of operationalising threat intelligence. The focus is on practical and strategic responses to these emerging threats. The key trends the report identifies are as follows:
Trend 1: AI Evolves from Tool to Weapon
Trend 2: Operational Technology (OT) Attacks Cross a New Line
Trend 3: The Ransomware Madness Continues
Trend 4: No Surprises Here – Humans Are Still Vulnerable
Trend 5: Identity Re-emerges as a Highly Targeted Attack Surface
So What?
No real surprises in this report, but useful datapoints for your presentations and papers!
The Global Risks Report 2024 by World Economic Forum
The report provides an analysis of global risks. The analysis, based on insights from nearly 1,500 experts, examines risks over different time frames to assist decision-makers. It highlights the growing global challenges, including environmental risks, societal polarisation, misinformation, economic difficulties, and technological threats. The report emphasises the need for immediate action to address these risks in a rapidly changing, fragmented world.
So What?
This report goes much broader than Cybersecurity. I found it interesting to provide economic context to the challenges we face as an industry. It’s not a light read though!
Generative AI first call for evidence: The lawful basis for web scraping to train generative AI models by the UK ICO
The article discusses the legal considerations for using web-scraped data to train generative AI models. The focus is on ensuring compliance with data protection laws, particularly the lawful basis under UK GDPR. The ICO examines whether the 'legitimate interests' basis can apply, requiring developers to pass a three-part test that includes assessing the purpose of processing, its necessity, and balancing individual rights against the interests pursued. The report emphasises the need for developers to carefully consider and document their compliance with these legal requirements.
So What?
Ironically, this week I received my first warning from ChatGPT about directly quoting material I had provided to it. Essentially, I asked it to extract the section headings from this article, to check that the summary I created captured the key points. However, the prompt doth protest too much. The error I received was: “I'm sorry, I can't provide the exact headings from the report as it would involve directly repeating content from the article.” I thought this was pretty interesting, given it’s not so fussy within the training datasets! Additionally, I’ve noticed an increase in the number of sites using exception-based rules in their robots.txt to block AI bots by user-agent string. It’ll be interesting to see how governments intend to legislate, and more importantly, police this problem.
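The pattern these sites use is allow-by-default, with explicit disallow rules for known AI crawler user-agents (GPTBot and CCBot are real crawler names; the URL is a placeholder). A minimal sketch using Python’s standard library to show how such rules evaluate:

```python
from urllib.robotparser import RobotFileParser

# An exception-based robots.txt: everything is allowed except named AI bots.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# AI crawlers match their specific group and are blocked site-wide;
# everything else falls through to the wildcard group and is allowed.
print(rp.can_fetch("GPTBot", "https://example.com/article"))      # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article")) # True
```

Of course, robots.txt is purely advisory, which is exactly why the policing question at the end matters.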
US Department of Defense Instruction 8585.01 for Cyber Red Teams by US DoD
The instruction outlines the policies and responsibilities for the Department of Defense Cyber Assessment Program. It establishes the governance and functioning of the DoD Cyber Red Team (DCRT) community, including mission prioritisation, deconfliction, and reporting of findings. The instruction also defines the scope, authorities of DCRTs, and the processes for validating their skills and qualifications. It details the responsibilities of various DoD officials and departments in relation to the program, emphasising the need for coordination and compliance across different sectors of the DoD.
So What?
If you lead a red team or run regular exercises with third parties, there’s some interesting ideas in this instruction you may find useful.
AI here and there: software created in Russia to determine the owners of Telegram channels by Ivan Chernousov
The article explains that a new neural network, "Comrade Major," has been developed in Russia. This AI is capable of identifying the owners of anonymous Telegram channels by analysing various data sources like message descriptions, chat information, and digital footprints. The technology, designed to function like an analyst but with greater speed and efficiency, is undergoing internal testing. It's intended for use by organisations investigating cybercrimes related to anonymous Telegram communities. The full version is expected to be released between 2024 and 2025. The post also discusses the potential legal implications and the necessity of careful use to avoid violating privacy rights.
So What?
I challenge you to read ‘Comrade Major’ without a thick Russian accent applied to your inner monologue. The tool sounds pretty interesting, and provides an insight into what nation states may be developing (with AI) at a national level to address intelligence challenges at scale.
The State of Software Supply Chain Security 2024 by ReversingLabs
The report discusses the increasing ease and prevalence of software supply chain attacks. The analysis highlighted a visibility gap in software supply chains, making it difficult to detect and defend against such attacks. The report found a significant rise in malicious code on open-source platforms and noted that typical supply chain risks, like typosquatting and leaking sensitive data, persisted in 2023. The article also observed a growing trend of less sophisticated cyber actors exploiting these vulnerabilities for data theft, ransomware deployment, and other malicious activities. The report anticipates a continued rise in such threats in 2024.
Four key takeaways:
Software supply chain attacks rose 1300% in the past three years.
The Software Supply Chain Is a Blind Spot: Attacks such as the compromise of VoIP vendor 3CX laid bare a yawning visibility gap that hampers the ability of both software makers and their customers to detect software supply chain compromises and defend their organizations from malicious actors.
Software Supply Chain Attacks Are Getting Easier: For example, Operation Brainleeches, identified by ReversingLabs in July, showed elements of software supply chain attacks supporting commodity phishing attacks that use malicious email attachments to harvest Microsoft.com logins.
Change Is Coming... and More of the Same: ReversingLabs observed substantial changes in both the quantity and kinds of malicious code turning up on open-source platforms such as npm, PyPI, and NuGet.
So What?
More report fodder! It’s great to see one of these aggregated Cyber reports focus on a particular area. A lot of the time, the annualised tomes produced by large security providers are a mile wide and an inch deep, with skewed data slanted towards their customer base (for obvious reasons). The headline is undoubtedly the staggering growth of this attack class. Is it really a 2024 and not a 2023 report though? (Me? Pedantic?)
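Typosquatting, one of the persistent risks the report calls out, is amenable to simple name-similarity screening at publish or install time. A minimal sketch using Python’s standard library; the allow-list of popular packages is an illustrative assumption (a real check would use a registry’s actual top-N list):

```python
import difflib

# Illustrative allow-list of well-known package names.
POPULAR = ["requests", "numpy", "pandas", "cryptography", "urllib3"]

def possible_typosquats(name: str, threshold: float = 0.85) -> list[str]:
    """Return popular packages whose names are suspiciously close to `name`
    (similar but not identical), using a simple character-similarity ratio."""
    return [p for p in POPULAR
            if p != name
            and difflib.SequenceMatcher(None, name, p).ratio() >= threshold]

print(possible_typosquats("requets"))  # ['requests'] — likely typosquat
print(possible_typosquats("numpy"))    # [] — an exact match is the real thing
```

This only catches the crudest lookalikes; production detection (as the report implies) also needs behavioural analysis of the package contents, since many malicious packages use entirely novel names.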