Briefly Briefed: Newsletter #11 (16/11/23)
First, there was darkness. Then came the strangers.
This is week #11 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read. This week’s edition is quite ‘news heavy’, as there have been a lot of interesting happenings in the Cyberverse!
My two ‘if you only read two’ recommendations for the week are:
The Mirai Confessions by Andy Greenberg.
Cover Your Tracks, A Project of the Electronic Frontier Foundation.
Until our paths cross again in this ever-changing city of life.
Meme of the Week:
The Mirai Confessions by Andy Greenberg

The article narrates the tale of Josiah White, Dalton Norman, and Paras Jha, three young hackers responsible for creating the Mirai botnet. The botnet, born from their ‘passion for computers and hacking’, led to a major internet outage in 2016, impacting major websites like The New York Times and Twitter. The post traces their journey from enthusiastic early explorations in cybercrime to the development of Mirai, highlighting the far-reaching implications of their actions. It culminates in their losing control of Mirai and their eventual cooperation with the FBI. The article underscores the thin line between ‘youthful curiosity’ and serious legal consequences.
I found this article really interesting. Normally, I don’t find hackers’ backstories that compelling; I’m more intrigued by the logistics of the hack itself. This one, however, is fascinating, and gives some great insights into what happened at the time. It’s quite a long read, but possibly worth 15 minutes of your time to understand the Mirai folklore.
Inside Wall Street's scramble after ICBC hack by Paritosh Bansal
The cyber attack on the Industrial and Commercial Bank of China's (ICBC) U.S. broker-dealer arm was a significant event, highlighting vulnerabilities in the financial sector. The attack was so severe that it caused a complete blackout of the corporate email system, forcing employees to switch to Google mail. The incident put the brokerage's resources under strain, as it temporarily owed BNY Mellon a staggering $9 billion, far exceeding its net capital. ICBC Financial Services, the New York-based unit of ICBC, received a cash injection from its parent company to cover the shortfall and manually processed trades with the assistance of BNY Mellon. ICBC collaborated with cybersecurity firm MoxFive to establish secure systems to resume normal business operations. However, the recovery process was expected to take several days. During this period, ICBC asked its clients to temporarily suspend business and clear trades elsewhere, causing other market participants to reassess their exposure and reroute trades.
Yikes! I don’t normally dabble in threat intelligence (TI) in the newsletter too much, as it’s better covered by other newsletters (thanks Ollie) or commercial sources. However, this is quite an interesting story that I was keen to highlight, especially if you work in financial services (FS).
Introducing GPTs by OpenAI
The article introduces GPTs from OpenAI: custom versions of ChatGPT that anyone can now create, combining instructions, extra knowledge, and any combination of skills. The post explains how GPTs build on the underlying models’ advances in natural language processing and understanding, generating coherent and contextually relevant text from given prompts, “marking a breakthrough in AI-driven language generation.” It details the various applications of GPTs, ranging from composing emails to creating content and even coding.
The introduction of GPTs is as terrifying as it is exciting. It gives individuals the ability to quickly create their own sub-versions of ChatGPT, leveraging the prompt handling and general capabilities of the platform. The scary part is how people are using them, and the data they’re sharing (or encouraging others to share). Within the security sphere, I’ve seen (and experimented with) a few that were released this week. One of the most concerning was Cyber Guardian (clicking this link will re-direct you to ChatGPT and add Cyber Guardian to your ‘My GPTs’ automagically; it can be removed and doesn’t appear to do anything malicious). Cyber Guardian is an incident response assistant for SOC analysts. While I think it’s a great training aid, the way it functions encourages analysts to put live incident data into the prompts. Clearly, this isn’t a good idea, as these data are likely to contain sensitive information. Other security-related GPTs I’ve seen work in similar ways.

My advice to those responsible for cybersecurity is to create a robust policy regarding AI and ML, and to consider blocking public services whilst making private services available (post security review and with SDL guidelines). This ‘Generative AI’ policy from Contrast Security is a great starting point.
Cover Your Tracks, A Project of the Electronic Frontier Foundation
"Cover Your Tracks" is a tool developed by the Electronic Frontier Foundation that allows users to test how well their browsers protect them from tracking and fingerprinting. It gives users an insight into how online trackers view their browser, showing the most unique and identifying characteristics of their browsing tool. This service aims to educate users on the methods and technologies used for online tracking, highlighting the importance of digital privacy and the means to safeguard it.
This tool is really useful for seeing how easily you can be tracked across the Internet using just your browser. If you’re not already using a privacy-centric browser, like Brave (Chromium-based) or LibreWolf (Firefox-based), it’s worth considering a move.
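To make the tool’s output a little more concrete: EFF reports uniqueness as ‘bits of identifying information’, where an attribute shared by one in N browsers conveys log2(N) bits. A minimal Python sketch of that arithmetic, using made-up sample counts purely for illustration:

```python
import math

def identifying_bits(browsers_sharing_value: int, total_browsers: int) -> float:
    """Shannon surprisal: bits of identifying information an attribute
    reveals, given how rare its value is across the observed population."""
    p = browsers_sharing_value / total_browsers
    return -math.log2(p)

# Hypothetical figures: a common user-agent string shared by 1 in 10
# browsers vs. a rare canvas-fingerprint hash seen once in 250,000.
common_ua_bits = identifying_bits(25_000, 250_000)  # roughly 3.3 bits
rare_canvas_bits = identifying_bits(1, 250_000)     # roughly 17.9 bits
```

The point the tool makes is that these bits add up: a handful of individually innocuous attributes can be enough to single out one browser among millions.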
The article reports on an incident where the BBC staged a fake break-in at a regional headquarters as a security test, leaving two female reporters terrified. The event occurred late at night after the reporters had finished work. An actor, hired to simulate an intruder, was discovered lurking in the underground staff car park, causing significant distress to the two women. BBC Director-General Tim Davie has promised to investigate the incident, which occurred in Nottingham. The BBC's East Midlands editor, Emma Agnew, informed the staff that no managers in England were aware of the test, and the BBC has refrained from commenting on security matters. This incident has raised concerns among staff, particularly about the allocation of resources towards such security tests rather than improving actual security measures.
Firstly, I must say that I am deeply sorry for including a link to The Sun. I couldn’t find another reference for this story.
This incident highlights what can go wrong during physical-entry attack simulations and social engineering exercises. It’s a good reminder of the potential risks, and harm to people, that can unintentionally result. There are few cited examples of these exercises going wrong in the media, but having been a social engineer myself, and in the industry a long time, I can confidently say that there are a lot of near misses (putting it mildly). The most famous case is undoubtedly that of the ‘Coalfire’ employees who found themselves in jail following an exercise at a courthouse in the US.
How to Create a Web3 Security Incident Response Plan by Rob Behnke
The article guides Web3 developers and auditors in creating a comprehensive security incident response plan for decentralised protocols. Despite robust security measures, the risk of hacks in Web3 applications, particularly in smart contracts, remains a concern. The article outlines various scenarios that qualify as security incidents, emphasising the importance of a swift response to mitigate damage. Key steps in crafting an effective response plan include identifying critical roles, setting up a 'war room' for collaboration, evaluating security threats, executing defensive security measures, and maintaining transparent communication with users. Additionally, the plan should detail the development and deployment of bug fixes and conducting a post-mortem analysis to improve future responses. The article highlights the significance of regular drills to familiarise the team with the incident response process, enhancing the overall security preparedness of Web3 projects.
A slightly esoteric post, but there are some useful reflections for anyone involved in creating IRPs for organisations that dabble in Web3 tech.
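One way to make the article’s steps actionable is to capture the plan as data, so it can be reviewed and exercised in drills. A rough sketch; the fields mirror the steps summarised above, but every name and value here is hypothetical, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class IncidentResponsePlan:
    """Skeleton of a Web3 incident response plan: roles, war room,
    triage criteria, defensive measures, comms, and post-mortem."""
    roles: dict              # responsibility -> person or on-call handle
    war_room: str            # pre-agreed collaboration channel
    severity_triggers: list  # what qualifies as a security incident
    defensive_actions: list  # e.g. pause contracts, rotate keys
    comms_channels: list     # where users get transparent updates
    post_mortem_required: bool = True

# Hypothetical instance, as might be used in a tabletop drill:
plan = IncidentResponsePlan(
    roles={"incident_lead": "alice", "contract_engineer": "bob"},
    war_room="private #war-room channel",
    severity_triggers=["exploit transaction observed", "oracle manipulation"],
    defensive_actions=["pause() core contracts", "rotate multisig signers"],
    comms_channels=["status page", "project social accounts"],
)
```

Keeping the plan in a reviewable structure like this also makes the article’s recommendation of regular drills easier to honour: each field is something you can rehearse.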
SentinelOne announced the launch of PinnacleOne, a strategic risk analysis and advisory group. Led by industry experts Chris Krebs and Alex Stamos, PinnacleOne aims to provide insights and strategies to help customers navigate the complex landscape of cyber risks and evolving technology. The group will operate as both a strategic advisory body and a think tank, offering services to understand digital threats, evaluate security postures, and develop robust security strategies. Krebs, joining as Chief Intelligence and Public Policy Officer, brings experience from his role as the inaugural director of the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA). Stamos, appointed as Chief Trust Officer, has a background as the Chief Security Officer of Facebook and Chief Information Security Officer at Yahoo. PinnacleOne represents SentinelOne’s commitment to addressing the holistic challenges of cybersecurity in a rapidly changing digital environment.
I wouldn’t normally highlight such a small acquisition, but I found this one interesting because of the parties involved. The move broadens SentinelOne’s portfolio, mirroring other PLCs in the space (such as CrowdStrike and Palo Alto Networks) becoming more holistic cybersecurity providers. The nuance in this acquisition is that Krebs Stamos is so small, with only eight employees and under three years in existence. I’d assume this is more about Krebs and Stamos as individuals building a practice during their earn-out than about acquiring a footprint in the consulting space.
The post explains a significant security vulnerability impacting updated iPhones. Security researcher Jeroen van der Ham discovered this issue when his iPhone continuously crashed due to a series of disruptive pop-ups while travelling by train. The source of the problem was a passenger using a Flipper Zero device – a portable tool capable of various wireless communications, including RFID, NFC, Bluetooth, Wi-Fi, and standard radio signals.
Flipper Zero, launched in 2020, has become notorious for its ability to trigger a Denial of Service (DoS) loop on iPhones by abusing how iOS handles Bluetooth Low Energy (BLE) advertisements. Using custom firmware, the device generates a relentless stream of BLE pairing pop-ups, overwhelming the iPhone until it locks up or reboots. This predominantly affects iPhones running iOS 17.0 and newer.
This is quite an interesting vulnerability, and the fact that it can’t really be prevented (at the time of writing) is unfortunate. Currently, the only way to shield iOS devices from such attacks is to disable Bluetooth in Settings.
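Since the attack is ultimately just an abnormally high rate of BLE advertisements, one defensive angle is detection rather than prevention. A toy sliding-window sketch, operating on hypothetical (timestamp, sender) pairs rather than a real BLE capture:

```python
from collections import deque

def is_ble_flood(events, window_s: float = 1.0, threshold: int = 50) -> bool:
    """Return True if any sliding window of `window_s` seconds contains
    more than `threshold` advertisement events. `events` is an iterable
    of (timestamp_seconds, sender_address) tuples, assumed time-ordered."""
    window = deque()
    for ts, _addr in events:
        window.append(ts)
        # Drop events that have fallen out of the time window.
        while window and ts - window[0] > window_s:
            window.popleft()
        if len(window) > threshold:
            return True
    return False

# Hypothetical traffic: 200 advertisements in half a second (a flood)
# vs. one advertisement every half second (normal beaconing).
flood = [(i * 0.0025, "aa:bb:cc:dd:ee:ff") for i in range(200)]
normal = [(i * 0.5, "11:22:33:44:55:66") for i in range(20)]
```

This obviously doesn’t stop the pop-ups on the phone itself; it’s the kind of heuristic a monitoring device nearby could use to spot that BLE spam is happening at all.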
A Tale of 2 Vulnerability Disclosures by Eddie Zhang
Eddie Zhang's blog post, "A Tale of 2 Vulnerability Disclosures," recounts two very different experiences with reporting security issues. While assessing a client's online presence, Zhang found two exposed data storage areas (buckets) that belonged to other companies. In the first case, he reported a security issue to Monash University. They responded quickly and positively, thanking him and acknowledging his help. This experience shows how well things can go when companies are prepared for such reports.
The second experience was not as positive. The company involved did not have a clear way to report security problems. Zhang tried contacting their top executives via LinkedIn, but the response he got was dismissive. He even got blocked by one of them. Attempts to involve the media and privacy organisations didn't help.
These two stories highlight the different ways companies can handle security reports. While some respond well and work with the person who found the problem, others might ignore or dismiss such reports. This difference can affect how quickly and effectively security issues are resolved.
Anyone who’s tried to disclose vulnerabilities to vendors knows how frustrating this can be. The process of full disclosure does need legislative intervention before it’s going to change (IMHO). There are already efforts towards this in most developed countries. Vendors need to create pathways for notification and embed security in their development processes. This is not exactly news to anyone though.
North Korea experiments with AI in cyber warfare: US official by Bryson Masse
The article reports a significant revelation by Anne Neuberger, Deputy National Security Advisor of the United States, about North Korea's escalating cyber capabilities through artificial intelligence (AI). This is a first-time public acknowledgment by a U.S. official regarding the use of AI in cyber warfare. North Korea, alongside other nation-states and criminal entities, is reportedly utilising AI to expedite the creation of malicious software and identify vulnerable systems. This advancement poses a heightened threat to global enterprises, given North Korea's history of impactful cyberattacks, like the Sony Pictures breach and the WannaCry ransomware incident. These AI-enhanced cyber operations not only increase the efficacy of attacks but also contribute significantly to North Korea's revenue, suspected of funding its missile program. The article underscores the urgency for businesses to bolster their cybersecurity strategies in response to these evolving threats.
More AI threats! This is not at all surprising, but it’s interesting to see this documented. We’re undoubtedly seeing the start of the arms race in offensive and defensive applications of AI.
50 Shades of Vulnerabilities: Uncovering Flaws in Open-Source Vulnerability Disclosures by Aqua Nautilus researchers
Aqua Nautilus researchers conducted a comprehensive analysis of open-source projects, revealing significant flaws in the vulnerability disclosure process. This research, which draws on GitHub activity and the National Vulnerability Database (NVD), highlights the early exposure of vulnerabilities, posing serious security threats. The team introduces the concept of 'Half-Day' and '0.75-Day' vulnerabilities, which lie between the traditional '0-day' and '1-day' categories, underscoring the complexity of vulnerability disclosure. These new categories emphasise the risk of attackers exploiting vulnerabilities during this interim phase. The researchers suggest mitigation strategies such as responsible disclosure, proactive scanning, and runtime protection to enhance open-source security.
More full disclosure! The study underscores the need for standardised disclosure processes and raises awareness about the gravity of early vulnerability exposure. I’m not sure I’m onboard with the fractional x-Day terms though!
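One way to picture the interim categories is as windows on a disclosure timeline. The cut-offs below are my own simplified reading of the idea, not the researchers’ exact definitions:

```python
def disclosure_phase(now, publicly_known_at, patch_released_at,
                     cve_published_at):
    """Classify a vulnerability's disclosure phase at time `now`.
    Times are in arbitrary units; None means 'has not happened yet'.
    Simplified model:
      0-day    : not publicly known
      half-day : publicly known (e.g. fix commit visible), no patch release
      0.75-day : patch released, but no CVE record published yet
      1-day    : CVE published
    """
    def happened(t):
        return t is not None and t <= now

    if happened(cve_published_at):
        return "1-day"
    if happened(patch_released_at):
        return "0.75-day"
    if happened(publicly_known_at):
        return "half-day"
    return "0-day"
```

The interim windows are exactly where the paper argues attackers get a head start: the issue is visible in public commits or releases before defenders see a CVE in their scanners.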
Insights on outsourcing and other lessons from a data breach – the UK FCA perspective by Herbert Smith Freehills (A UK law firm)
On 13 October 2023, the UK Financial Conduct Authority (FCA) published a Final Notice to Equifax Limited, fining the firm over £11 million for the 2017 data breach affecting over 13.7 million UK consumers. The FCA found Equifax in breach of several Principles for Businesses, emphasising the need for rigorous oversight in intra-group outsourcing and effective risk management.
The case highlights the necessity of proper software maintenance, prompt incident notification, and accurate customer communication. Additionally, the regulatory landscape has evolved since the breach, with a focus on operational resilience, individual accountability, and customer protection. This incident underscores the importance of firms maintaining data security and complying with evolving regulatory expectations.
LOLSecIssues by Florian Roth
Cybersecurity's lighter side: a collection of the most amusing misunderstandings and missteps from newcomers to offensive security tools. A repository where naiveté in infosec is met with humour.
I’m not really a fan of ‘punching down’ or laughing at inexperienced people, but I feel like this is done in a light-hearted way. We all make mistakes, I’m just hoping I deleted all of mine!
Localtoast, a scanning tool by Google
Localtoast is a scanner for running security-related configuration checks such as CIS benchmarks in an easily configurable manner.
The scanner can either be used as a standalone binary to scan the local machine or as a library with a custom wrapper to perform scans on e.g. container images or remote hosts.
A handy tool if you perform these types of checks for a living!
Thanks for reading ‘Briefly Briefed:’ - To receive the newsletter on a weekly basis, please subscribe below.