Briefly Briefed: Newsletter #26 (07/03/24)
Greetings all,
This is week #26 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read. I’ve made some tweaks to the format of the newsletter this week, following some feedback. I hope it makes it easier to find the information you want.
My ‘if you only read two’ recommendations for the week are:
Court orders maker of Pegasus spyware to hand over code to WhatsApp by Stephanie Kirchgaessner
Planes, Ferries and Automobiles - How I Hacked Free Travel Across Iceland by Stefán Orri Stefánsson
Laters,
Lawrence
Meme of the Week:
News 🎭
Here Come the AI Worms by Matt Burgess
Security researchers demonstrated a new cyber threat: an AI worm capable of spreading between generative AI systems, potentially enabling data theft and malware deployment. This worm, named Morris II, exploits vulnerabilities in AI email assistants, showing significant security risks within interconnected, autonomous AI ecosystems. The findings, which emphasise the worm's ability to bypass some security measures in ChatGPT and Gemini, call for heightened awareness and improved security designs among developers and tech companies to mitigate future risks of AI-driven cyberattacks.
So What?
I doubt that anyone who works in cybersecurity is surprised that we’re starting to see abuse of ‘AI’ in more complex ways. On the flip side, we’re seeing a number of excellent efforts to improve baseline security in this area, such as the OWASP Top 10 for LLMs and the formation of bodies like the UK’s AI Safety Institute and the U.S. AISI and AISIC. However, things are moving so fast that it’s likely to remain the Wild West for some time to come.
Zscaler CEO: Palo Alto Playing Defense as Firewall Sales Ebb by Michael Novinson
The article outlines Zscaler CEO Jay Chaudhry's view on the cybersecurity landscape, specifically focusing on the decline in firewall sales and the shift towards zero trust security. Chaudhry criticises Palo Alto Networks' strategy of offering free products to new platform customers, predicting this approach will not be sustainable. He emphasises the critical nature of cybersecurity, suggesting that customers will invest in leading solutions over cheaper, less effective products. Zscaler's financial success, with a significant increase in revenue and decrease in net loss, underscores Chaudhry's confidence.
So What?
The announcement by Palo Alto Networks of their strategy change has drawn a lot of mainstream and industry attention to the ‘platformisation’ debate. I’m totally shocked that the CEO of a direct competitor thinks their approach is wrong and his is better (although his main argument seems to be that people don’t want firewalls anymore). I really enjoyed this article by Francis Odum on the topic. He dives into an analysis of the share price dip at Palo (buy the dip?), and unpicks their strategy in detail. I’ll definitely be following how this unfolds over the coming months.
The Invisible $1.52 Trillion Problem: Clunky Old Software by Christopher Mims (WSJ)
The article sheds light on the overlooked issue of 'technical debt', where companies' reliance on outdated software systems leads to security risks and hinders innovation. Technical debt, costing the U.S. $2.41 trillion annually, results from quick fixes and obsolete systems that weren't designed for current uses. Highlighted examples include system failures and security breaches affecting major corporations. The article stresses the importance of management empowering IT departments to prioritise updating existing systems over new developments, to mitigate risks and future-proof organisations.
So What?
This is a long-known issue across IT, and a significant threat to cyber resilience. It’s interesting that a monetary figure has been estimated, and I have to say, it ‘feels’ in the right order of magnitude. It’ll be interesting to see if this raises the profile of the issue, and whether strategies for paying down technical debt will be more openly discussed.
Cybersecurity Disclosure Report by Neil McCarthy, James Palmiter, and G. Michael Weiksner
The article examines early 10-K filings made under new S-K Item 106 to identify trends in mandated cybersecurity disclosures. It reveals varied approaches to risk management, governance, and incident management across companies. Many align with frameworks like the NIST CSF, indicating a trend towards industry-standard practices. The analysis suggests that while there's common ground in adopting standardised frameworks, companies also exhibit unique strategies tailored to their operational needs and business objectives.
So What?
This is a nice analysis, despite quite a small dataset. I’m not sure the outcomes are that surprising, in that PLCs are aligning to frameworks and strategies are quite disparate. Longer term, I hope that they consider overlaying a ‘success’ meta-analysis on these data, looking at which companies fare better.
Cyber Threat Intelligence 👹
Millions of Undetectable Malicious URLs Generated Via the Abuse of Public Cloud and Web 3.0 Services by Resecurity
The article discusses the emergence of Fully Undetectable (FUD) Links generated by phishing-as-a-service tools exploiting public cloud services and Web 3.0 platforms like GitHub and IPFS. Resecurity highlights the massive scale of these operations, generating thousands of malicious URLs monthly. These URLs evade detection by leveraging the legitimate infrastructure of widely used cloud platforms, making them particularly challenging for anti-spam and anti-phishing solutions to identify due to their inherent low-risk scores by email security filters.
So What?
A pretty interesting vector, and one for CTI / SOC teams to keep an eye on.
The German Marshall Fund are hosting a discussion featuring Ambassador Nathaniel C. Fick and CISA Director Jen Easterly on Ukraine's cyber defense against Russian cyberattacks. The conversation will be focused on international efforts to bolster Ukraine's capacity to detect and defend against these threats, the importance of cybersecurity for the nation's stability, and the broader implications for global cyber resilience. The link takes you to a registration page for the upcoming webinar on 15th February.
So What?
This webinar looks like it’s going to be fascinating! If you’re interested in cyber-war strategies, it’s worth signing up.
Technical 🔣
Don't Gamble with Risk (DGWR) by Daniel Bloom
This GitHub repository introduces "Don't Gamble with Risk (DGWR)," a Monte Carlo simulation system for quantitative risk modeling. Inspired by the FAIR model, it aids in quantifying risk probabilities and impacts, offering a robust framework for risk analysis. DGWR is beneficial for organisations seeking data-driven decision-making, prioritisation of risks, and efficient resource allocation. It uses the PERT distribution for modeling and is designed for flexibility, supporting future integration of other distributions for broader applicability.
So What?
This one is definitely for the cyber risk quantification geeks. I had a quick play with the tool and although it’s a bit unintuitive in places, it does the job.
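For anyone curious about the mechanics, the core of a PERT-based Monte Carlo loss simulation (the general approach DGWR takes) can be sketched in a few lines of Python. The parameters and function names below are made up for illustration and are not taken from the tool itself:

```python
import random
import statistics

def pert_sample(low, mode, high, lam=4.0):
    """Draw one sample from a (Beta-)PERT distribution defined by min/mode/max."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + (high - low) * random.betavariate(alpha, beta)

def simulate_annual_loss(n_trials=100_000):
    """Annual loss = event frequency x loss magnitude, both PERT-distributed.
    The parameters here are invented for illustration."""
    losses = []
    for _ in range(n_trials):
        freq = pert_sample(0, 2, 10)                      # events per year
        magnitude = pert_sample(10_000, 50_000, 500_000)  # loss per event, USD
        losses.append(freq * magnitude)
    return losses

losses = simulate_annual_loss()
print(f"Mean annual loss estimate: ${statistics.mean(losses):,.0f}")
```

Each trial draws an annual event frequency and a per-event loss magnitude, multiplies them, and the resulting distribution of simulated annual losses can then be summarised (mean, percentiles) for risk reporting.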
Planes, Ferries and Automobiles - How I Hacked Free Travel Across Iceland by Stefán Orri Stefánsson
This article recounts Stefánsson's adventures in exploiting vulnerabilities in Icelandic travel companies' IT systems to obtain free travel. It started with a bug in an airline's booking system and expanded to hacking bus and ferry systems, demonstrating significant flaws in their security protocols. Stefánsson emphasises the ethical aspect by ensuring all exploited tickets were cancelled and not used, highlighting the importance of robust IT security in the travel industry.
So What?
This is just a great story and a fun read. *Disclaimer* Stay in school kids, and on the right side of the law.
Hacking Terraform State for Privilege Escalation by Daniel Grzelak
This article explores how attackers can exploit Terraform state files to escalate privileges within cloud environments. It details methods for manipulating the Terraform state to initiate unauthorised actions, such as deleting resources or executing arbitrary code. Grzelak underscores the importance of securing Terraform state files and offers mitigation strategies, including provider pinning, state file security, and enabling state locking, to prevent such attacks.
So What?
This is a really well-documented and well-reasoned post. If you’re not Terraform literate, though, you won’t get much value from it; it’s worth passing on to your cloud infrastructure colleagues.
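To illustrate one reason state files deserve this level of care: Terraform state is a JSON document with a top-level `resources` list, and it stores resource attributes in plaintext. A minimal sketch, assuming the standard state (v4) layout — the helper name and the suspect-key list are my own, not from the article:

```python
import json

# Flag attributes in a Terraform state file whose names suggest secrets.
SUSPECT_KEYS = ("password", "secret", "token", "private_key", "access_key")

def find_sensitive_attrs(state_json: str):
    """Return (resource_type, resource_name, attribute) for suspect attributes."""
    state = json.loads(state_json)
    hits = []
    for res in state.get("resources", []):
        for inst in res.get("instances", []):
            for key in inst.get("attributes", {}):
                if any(s in key.lower() for s in SUSPECT_KEYS):
                    hits.append((res.get("type"), res.get("name"), key))
    return hits

# A tiny fabricated state file for demonstration.
demo_state = json.dumps({
    "version": 4,
    "resources": [{
        "type": "aws_db_instance",
        "name": "main",
        "instances": [{"attributes": {"id": "db-1", "password": "hunter2"}}],
    }],
})
print(find_sensitive_attrs(demo_state))  # -> [('aws_db_instance', 'main', 'password')]
```

Anyone who can read the state backend can read those values, which is why the post's advice on state file security and locking matters.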
The Offensive ML Playbook by threlfall
This resource serves as a comprehensive guide on offensive machine learning (ML) tactics, techniques, and procedures (TTPs), focusing on practical attacks against ML systems. It categorises attacks into offensive ML, adversarial ML, and supply chain attacks, offering a variety of strategies for red teamers. The playbook emphasises tools and code for immediate use rather than theoretical research, aiming to facilitate quick learning and application of adversarial ML techniques without deep technical expertise in data science or ML.
So What?
This is a nice aggregation of resources relating to Offensive ML. There are still only a handful of people interested in this area, but the community is quite active. The guide will help you get started if you’re keen to learn more, and have the right technical skills.
Jailbreaking Proprietary Large Language Models using Word Substitution Cipher by Divij Handa, Advait Chirmule, Bimal Gajera, Chitta Baral
This paper introduces techniques for bypassing the ethical constraints of large language models (LLMs) using encrypted prompts. It focuses on the effectiveness of word substitution ciphers to create "jailbreak" prompts, enabling the circumvention of model guidelines. The study demonstrates a significant success rate in tricking models, including GPT-4, ChatGPT, and Gemini-Pro, into responding to otherwise restricted queries. The findings highlight the need for improving the robustness of LLMs against such adversarial tactics.
So What?
It’s fascinating to see this vector (jailbreaking LLMs) develop, and the creativity of new techniques to bypass safety. I think it’s going to take quite some time for vendors to ‘catch ‘em all’ and the problem may be the ‘XSS of LLMs’ with micro-variations for a long, long time.
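For a flavour of the mechanics (on a harmless example only — the paper's actual mappings and prompts are not reproduced here), a word-substitution cipher is just a dictionary lookup applied word by word:

```python
# Illustrative mapping; the words are arbitrary and chosen by me, not the paper.
mapping = {"weather": "pineapple", "tomorrow": "lantern"}

def encode(text: str, mapping: dict) -> str:
    """Replace each mapped word; leave everything else untouched."""
    return " ".join(mapping.get(w, w) for w in text.split())

def decode(text: str, mapping: dict) -> str:
    """Invert the mapping to recover the original text."""
    inverse = {v: k for k, v in mapping.items()}
    return " ".join(inverse.get(w, w) for w in text.split())

msg = "what is the weather tomorrow"
enc = encode(msg, mapping)
print(enc)                           # what is the pineapple lantern
print(decode(enc, mapping) == msg)   # True
```

The attack's trick is that the mapping is supplied to the model in-context, so the model can decode the ciphered request even though the surface text looks innocuous to a safety filter.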
How to prevent lateral movement techniques on Google Cloud by Christopher Perry and Wendy Walasek
This article discusses methods to secure Google Cloud against lateral movement techniques exploited by cybercriminals. It highlights Palo Alto Networks' research on exploiting cloud misconfigurations for unauthorised access across cloud environments. The authors explain specific attack vectors such as abusing snapshot creation permissions and adding SSH keys via metadata, offering detailed mitigation strategies to safeguard against these vulnerabilities. The piece underscores the importance of applying the principle of least privilege and using Google Cloud's security features for robust defense.
So What?
An important read if you want to ‘defend in depth’ in GCP. It’s great to see Google Cloud being responsive to recent third-party research. Let’s hope this influences their security by design, and that newer iterations of their IAM make it easier to avoid permissions errors.
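As a practical follow-up, the two vectors the article names correspond to concrete IAM permissions (`compute.disks.createSnapshot` and `compute.instances.setMetadata`). A hedged sketch of auditing a custom-role definition for them — the role name and helper function are hypothetical, though the input shape matches the `includedPermissions` list that `gcloud iam roles describe --format=json` produces:

```python
# Permissions the article associates with lateral-movement vectors.
RISKY = {
    "compute.disks.createSnapshot": "can snapshot disks and exfiltrate their data",
    "compute.instances.setMetadata": "can inject SSH keys via instance metadata",
}

def audit_role(role: dict):
    """Return (permission, reason) pairs for risky permissions in a role definition."""
    return [(p, RISKY[p]) for p in role.get("includedPermissions", []) if p in RISKY]

# A fabricated custom-role definition for demonstration.
demo_role = {
    "name": "projects/demo/roles/opsHelper",
    "includedPermissions": [
        "compute.instances.list",
        "compute.instances.setMetadata",
    ],
}
for perm, why in audit_role(demo_role):
    print(f"{perm}: {why}")
```

Running checks like this across custom roles is one lightweight way to apply the least-privilege advice the authors give.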
Geopolitics 💥
Court orders maker of Pegasus spyware to hand over code to WhatsApp by Stephanie Kirchgaessner
A US court ruled NSO Group must give WhatsApp its spyware code, including Pegasus, amidst allegations of spying on 1,400 users. This decision represents a significant win for WhatsApp in its lawsuit against NSO since 2019. While NSO is required to disclose spyware functionality, it isn't forced to reveal client names or server details. The case underscores ongoing concerns about spyware's impact on privacy, security, and national interests, highlighting governmental measures against misuse.
So What?
FTC Order Will Ban Avast from Selling Browsing Data for Advertising Purposes, Require It to Pay $16.5 Million by US Federal Trade Commission
The Federal Trade Commission (FTC) has mandated Avast to pay $16.5 million and banned it from selling users' web browsing data for advertising. This settlement addresses the contradiction between Avast's promises of online tracking protection and its actions of selling browsing data to third parties. The FTC highlighted Avast's deceptive practices, including inadequate consumer notice and consent, and imposed additional requirements on Avast to prevent future misconduct.
So What?
Ouch! This underlines the mess that is PII handling on the Internet. I doubt this will be the last instance, and the resulting case law may well trigger additional suits.
Jensen Huang says kids shouldn't learn to code — they should leave it up to AI by Mark Tyson
At the World Government Summit, Nvidia CEO Jensen Huang suggested that programming should be left to AI, freeing humans to master other domains like biology or farming. He believes AI's ability to understand human language makes everyone a programmer, shifting the educational focus towards more "useful" fields. Despite Huang's vision, there's skepticism: demand for programmers remains high, suggesting that AI might expand access to coding rather than replace it.
So What?
Jensen clearly hasn’t tried to write code using ChatGPT! Joking aside, I do agree, to an extent, that by the time today’s children reach the workforce, AI will be doing a better job of writing code than clunky old humans. However, I’d argue that coding provides a unique understanding of computation, and logical-reasoning skills you won’t find elsewhere in educational curricula.
Jack Teixeira, a 22-year-old U.S. Air National Guard member, has agreed to plead guilty to unlawfully retaining and transmitting classified National Defense Information via a social media platform. This action violates his top-secret security clearance, undermining U.S. national security and risking the safety of Americans and allies abroad. The plea highlights the severe consequences of mishandling classified information, underscoring the Justice Department's commitment to protecting national security.
So What?
This is quite worrying, and hopefully not a growing trend.