Briefly Briefed: Newsletter #16 (20/12/23)
Welcome to the party, pal!
This is week #16 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read.
The new poll feature doesn’t seem to work very well via email; apologies if you saw a blank section or an error. You can interact via the web version if you feel the need. Thanks, though, to those who have already provided feedback!
I’m going to take a slight pause on the newsletter over the Holidays; normal service will resume on the 10th of January.
My ‘if you only read two’ recommendations for the week are:
New Microsoft Incident Response team guide shares best practices for security teams and leaders by Microsoft Incident Response.
Meme of the Week:
2023 CBEST thematic report by Bank of England (FCA and PRA)
For those not in the UK (or who’re not familiar with CBEST), CBEST is an attack simulation framework (centred on a substantial red teaming engagement) focused on the UK financial services industry. The annual CBEST thematic is intended to inform the sector on the findings and lessons learned from the CBEST programme. The key themes for 2023 were identified as:
Identity and access management
Staff awareness and training
Incident response and security monitoring
I was lucky enough to be involved in the genesis of CBEST (being on the exco of CREST and working with banks who were clients), and I was also on the other side of the fence whilst working for a retail bank. Overall, I think CBEST has been successful in its mission to highlight weaknesses in (and improve) cybersecurity within the UK financial sector. It has spawned a range of copycat frameworks (TIBER-EU (EU), iCAST (HK), AASE (Sing.) and CORIE (Aus.)), which have had varying levels of success. It’s interesting to see this evolve and more sectors adopting equivalent ‘*BESTs’. The report itself is especially interesting if you work in the financial services sector.
BSAM: Bluetooth Security Assessment Methodology by Tarlogic
BSAM (Bluetooth Security Assessment Methodology) is an open and collaborative methodology developed to standardise the security evaluation of devices using Bluetooth technology.
The BSAM methodology defines all the necessary Bluetooth security controls to provide manufacturers, security researchers, software developers, enthusiasts, and cybersecurity professionals with a guide for conducting security assessments on devices with Bluetooth communications.
This is a well-constructed framework and methodology, which may be useful for those building or testing devices that utilise Bluetooth.
Former Security Engineer For International Technology Company Pleads Guilty To Hacking Two Decentralized Cryptocurrency Exchanges by U.S. Attorney's Office, Southern District of New York
The press release reports that Shakeeb Ahmed has pled guilty to hacking two decentralised cryptocurrency exchanges, including Nirvana Finance. This case represents the first conviction for hacking a smart contract. Ahmed has agreed to forfeit over $12.3 million, part of which was fraudulently obtained cryptocurrency.
U.S. Attorney Damian Williams highlighted Ahmed's sophisticated methods used in the theft of over $12 million. In July 2022, Ahmed exploited smart contract vulnerabilities in two exchanges. He fraudulently generated $9 million in one attack and profited approximately $3.6 million from the other, leading to Nirvana Finance's shutdown. Ahmed laundered the stolen funds using various techniques, including cryptocurrency mixers.
Ahmed's conviction for computer fraud carries a maximum sentence of five years in prison. He has also agreed to pay restitution of over $5 million. Sentencing is scheduled for the 13th of March 2024. The investigation involved Homeland Security Investigations and the Internal Revenue Service – Criminal Investigation.
Smart contracts and crypto-based software products remain a juicy target for adversaries, and the Wild West in terms of security levels (and regulation). If you want to understand the journey we’re about to see with ‘AI’ (and 10x3¹⁵ startups with one or no security person), crypto is likely a useful analogue.
New Microsoft Incident Response team guide shares best practices for security teams and leaders by Microsoft Incident Response
Microsoft’s Incident Response team has released a new guide to help organisations develop effective incident response strategies. The guide, titled "Navigating the Maze of Incident Response," focuses on the human elements and processes critical to a successful incident response. It is designed to assist security teams and senior stakeholders during the crucial hours following a breach's detection.
The guide explains that incident response is a shared responsibility, emphasising the importance of assembling a comprehensive team beyond just technical staff. This includes leadership, communication, and regulatory support, ensuring a holistic approach to managing incidents. The guide suggests a command structure to define workstreams, roles, and responsibilities, acknowledging that senior stakeholders often lack a clear understanding of the impact and risk of cybersecurity incidents due to poor communication.
The guide details key activities, responsibilities, potential challenges, and common pitfalls for each workstream. It also addresses the importance of understanding roles and responsibilities, shift planning for long responses, and preventing team burnout. Specific processes for each workstream are outlined, including situation reports, evidence requirements for on-premises and cloud data, and the establishment of secure communication channels. The guide aims to provide detailed, actionable information for effectively responding to and limiting the impact of cybersecurity incidents.
This is a really useful guide. I like that they’re considering a wider view of incident response and how it integrates into an organisation. It’s key to ensure broad engagement in incident response plans, and ideally, clear integration with business continuity planning, crisis management and prioritisation linked to business impact assessments.
The article examines the challenges in creating effective security products and the crucial role of skilled security professionals in cybersecurity. It argues that technological advancements, like AI, are accessible globally to both defenders and attackers, thus neutralising any significant advantage. The core message is that buying security products is essentially outsourcing security, with the effectiveness largely dependent on the quality of the security practitioners employed by the vendor.
The article further highlights that security products, like endpoint detection and response (EDR) tools, often provide generic solutions, which may not be fully effective due to the diverse nature of customer environments. This leads to the potential for false positives or negatives.
The post underscores the limitations of assembling multiple specialised security tools, as sophisticated attacks often span across different technological segments. Ross advocates for evidence-based security over promise-based approaches, emphasising the necessity for companies to build their own skilled security teams. The key to overcoming cyber adversaries, he suggests, is prioritising skilled personnel over tools.
I agree with a lot of the points Ross makes in this article. I feel that ensuring expertise within a business is the crux of the post. Observing the most successful startups, and the genesis of new sub-markets, it’s clear that the ones that last are driven by people who’ve lived the problems they solve. The key advantage startups have (over larger organisations) in solving these problems (IMHO) is a stronger linkage between the business decision-maker and the problem. Having tried to develop ‘innovative solutions’ within businesses of varying sizes, I’ve found that the more disconnected the solution’s vision is from the problem, the more diluted the outcome. The challenge is often compounded by competing priorities within stakeholder groups, further constraining a unified mission.
Spera Security to join forces with Okta to advance Identity-powered security by Arnab Bose (Okta)
Okta has announced its plan to acquire Spera Security (for between $100-130mil), a company specialising in identity threat detection and security posture management. This acquisition, set to close in the first quarter of 2024, aims to enhance Okta's capabilities in identity threat detection and response (ITDR) and security posture management. Spera Security's integration will offer customers improved insights and technology to manage identity security risks more effectively. The collaboration is expected to bolster Okta's existing ITDR features and assist customers in dealing with the complexities and risks associated with cloud apps and services.
Big deal of the week! Okta has been beset by breach woes of late. Its share price has looked like a rollercoaster ride over the last six months, but this acquisition will hopefully rally it.
OpenAI buffs safety team and gives board veto power on risky AI by Devin Coldewey (Tech Crunch)
The article highlights OpenAI's recent enhancements to its AI safety measures. A new safety advisory group has been established to provide guidance on AI risks to the leadership, and the board now has veto power over potentially risky AI projects. These changes follow a leadership overhaul and increasing concerns about AI risks.
The central element of OpenAI's approach is its updated "Preparedness Framework," designed to identify and manage catastrophic risks in AI development. Risks are categorised into areas like cybersecurity, disinformation, and model autonomy, with a focus on preventing economic damage or harm to individuals. Models posing high risks will not be deployed, and those with critical risks won't be developed further.
The Safety Advisory Group, distinct from the technical team, will evaluate AI risks and make recommendations. While the leadership team makes initial decisions on AI deployment, the board has the authority to overturn these decisions. The article also questions the transparency and effectiveness of the board's oversight in this new safety structure.
It’s great to see OpenAI introduce additional oversight. They’re going to need to spend a lot of money (although, they’re probably good for it) keeping up with inbound international regulation. I think it’ll be interesting to see how their model develops as GPT becomes the backbone of a lot of platforms. The impact on third-party risk management and data security cannot be overstated. If you thought it was hard to pinpoint where your data resides and how it’s used in typical cloud storage, LLMs say ‘hold my beer’.
What to do about disinformation by Eliot Higgins (Bellingcat, for the FT)
Eliot Higgins, founder of Bellingcat, addresses the issue of online disinformation, advocating for education over regulation as the solution. He observes how social media platforms, particularly post-changes at Twitter, have contributed to the spread of false information, undermining public trust in them as reliable news sources. Higgins cites examples from the Israel-Palestine conflict, where misinformation and misused images have manipulated public opinion. He emphasises the role of collaborative knowledge in combating disinformation, as demonstrated by Bellingcat's open-source investigations. The article suggests incorporating open-source investigation and critical thinking into educational curricula to empower people, especially the youth, to discern truth online. Higgins calls for a united approach from policymakers, educators, and tech leaders to create an informed society, highlighting the pivotal role of education in addressing the challenges of disinformation in the digital age.
I agree broadly with Eliot’s position, and I think there’s good evidence for this from countries like Finland. However, as with all things, a balanced approach is needed. The dis/misinformation war reminds me a lot of phishing. You can tell a person a million times how to spot a phishing email, but a juicy pretext and some skilful design can catch out almost anyone. To be successful in mitigation, you need to introduce external controls, as the human subconscious does what it does.
Latio Application Security Tester by Latio Tech
LAST is an open-source SAST scanner that uses OpenAI to scan your code for security issues from the CLI. It requires you to bring your own OpenAI token.
I cannot attest to the efficacy of this software, but from a quick exploration of the repo, it seems to be a more advanced attempt to utilise GPTs to do SAST.
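For the curious, the general pattern such tools follow can be sketched in a few lines. This is a hypothetical illustration (not LAST’s actual code): split a source file into chunks small enough to fit a model’s context window, and build a review prompt per chunk, ready to send to an LLM API with your own token. The file name `app.py` and prompt wording are invented for the example.

```python
# Hypothetical sketch of LLM-driven SAST plumbing: chunk the source,
# then wrap each chunk in a security-review prompt.

def chunk_source(source: str, max_lines: int = 40):
    """Split source code into line-based chunks small enough to prompt with."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def build_prompt(chunk: str, filename: str) -> str:
    """Wrap a code chunk in a review instruction for the model."""
    return (f"Review the following excerpt of {filename} for security "
            f"vulnerabilities (injection, hard-coded secrets, unsafe "
            f"deserialisation). Reply with findings only.\n\n{chunk}")

# Stand-in for a real 100-line source file.
source = "\n".join(f"line {n}" for n in range(1, 101))
prompts = [build_prompt(c, "app.py") for c in chunk_source(source)]
print(len(prompts))  # 100 lines at 40 per chunk -> 3 prompts
```

The interesting engineering is in what happens after this step: de-duplicating findings across chunks and filtering the model’s false positives.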
Introducing YARA-Forge by Florian Roth
The aim of YARA Forge is to produce user-friendly YARA rule sets sourced from various public repositories. Roth's experience in creating over 17,000 YARA rules and related tools like yarGen and Panopticon has informed its development. The tool offers three rule sets - core, extended, and full - to cater to different needs, balancing accuracy, performance, and breadth of threat detection. YARA Forge also provides feedback to rule authors to help them improve their rules.
If you use YARA, this is a useful tool to support a boost in the efficacy of your rules.
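One step any rule aggregator like this has to perform is de-duplication: YARA requires unique rule names within a compiled set, and the same rule often appears in several public repositories. The sketch below is a hypothetical simplification (tracking names only, where a real tool would slice out and score full rule bodies); the sample rules are fabricated.

```python
# Hypothetical sketch: merge rule files from several repositories,
# keeping the first copy of each rule name encountered.
import re

# Matches 'rule Name' (optionally 'private rule Name') at line start.
RULE_NAME = re.compile(r"^\s*(?:private\s+)?rule\s+(\w+)", re.MULTILINE)

def merge_rule_sets(sources):
    """Return unique rule names across all sources, first occurrence wins."""
    seen, kept = set(), []
    for text in sources:
        for match in RULE_NAME.finditer(text):
            name = match.group(1)
            if name not in seen:
                seen.add(name)
                kept.append(name)
    return kept

repo_a = "rule SUSP_Packed_EXE { condition: true }"
repo_b = ("rule SUSP_Packed_EXE { condition: false }\n"
          "rule MAL_Loader { condition: true }")
print(merge_rule_sets([repo_a, repo_b]))  # ['SUSP_Packed_EXE', 'MAL_Loader']
```

YARA Forge goes much further (quality scoring, performance testing, rule rewriting), but name collisions are the first hurdle any merged set must clear.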
The article reports that cloud engineer Miklos Daniel Brody was sentenced to two years in prison and ordered to pay $529,000 in restitution for deleting the code repositories of his former employer, First Republic Bank, as retaliation for being fired. First Republic Bank, a commercial bank in the U.S., was closed and sold to JPMorgan Chase in May 2023.
Brody's employment at First Republic Bank in San Francisco was terminated on March 11, 2020, due to a policy violation involving inappropriate use of a USB drive. Following his dismissal, he used his still-valid account to access the bank's computer network and cause damages exceeding $220,000. His actions included deleting the bank's code repositories, running a script to erase logs, inserting taunts in the code, impersonating other employees, and emailing proprietary bank code to himself.
Brody initially falsely reported his work laptop stolen and maintained this story even after his arrest in March 2021. However, in April 2023, he pleaded guilty to lying about the laptop and to charges of violating the Computer Fraud and Abuse Act. Alongside the prison term, Brody will also undergo three years of supervised release.
Yikes! This shows the impact of a disgruntled employee in full force. The incident demonstrates the importance of a good JML (Joiners, Movers, Leavers) process, which links to technical controls and regular user audits. Even in 2020, who kept their ‘personal files’ on a USB stick anyway?
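One of the technical controls a JML process should feed is a recurring cross-check of the HR leavers list against accounts that are still enabled; Brody’s still-valid account is exactly what such a check catches. The sketch below is a hypothetical illustration with fabricated data; a real check would pull from your HR system and identity provider.

```python
# Hypothetical leaver-audit sketch: flag accounts belonging to people
# who have left the organisation but whose access is still enabled.

def find_orphaned_accounts(leavers, active_accounts):
    """Return accounts that belong to leavers but are still enabled."""
    leaver_ids = {l["employee_id"] for l in leavers}
    return [a for a in active_accounts
            if a["employee_id"] in leaver_ids and a["enabled"]]

leavers = [{"employee_id": "E100"}, {"employee_id": "E200"}]
active_accounts = [
    {"employee_id": "E100", "username": "mbrody", "enabled": True},   # missed!
    {"employee_id": "E300", "username": "asmith", "enabled": True},   # current staff
    {"employee_id": "E200", "username": "jdoe",   "enabled": False},  # correctly disabled
]

for account in find_orphaned_accounts(leavers, active_accounts):
    print(f"still enabled after leaving: {account['username']}")  # mbrody
```

The value is in running this on a schedule (and on every termination event), not as a one-off.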
Is It Raining Risk? What Data says about Cyber Risk in the Cloud (A Talk at FAIRCON) by Wade Baker (Cyentia Institute)
In the talk, Wade Baker discusses cyber risk in cloud environments compared to on-premises (on-prem) setups, using insights from the FAIR Institute's reports. He aims to determine whether there's a measurable difference in cyber risk between the two environments, relating it to the FAIR framework.
Baker finds that a slightly higher proportion of organisations face more security exposures in the cloud, but there's no evidence suggesting that cloud environments are inherently less secure than on-prem. The differences in security levels are likely due to organisational characteristics and risk management capabilities. Choosing a suitable cloud provider is a crucial decision for managing cyber risk.
Organisations with a heavy reliance on cloud architecture generally report higher resilience outcomes. However, the transition phases of moving to the cloud might temporarily reduce this resilience. Baker also notes that attack paths compromising critical assets in the cloud tend to be shorter and have fewer control points compared to on-prem environments. Slides from the presentation are available here.
This is quite a useful talk for technical and non-technical people. Wade shares some interesting data relating to cloud risks, although the upshot is fairly unremarkable (cloud has a similar risk profile to on-prem, but can be more secure if well configured).
By the same token: How adversaries infiltrate AWS cloud accounts by Thomas Gardner and Cody Betsworth
The post discusses how adversaries exploit AWS’s Security Token Service (STS) to access cloud assets illicitly. The article explains that while traditional access methods like malware and phishing remain constant, in cloud environments attackers focus more on identities and APIs. AWS STS issues short-term access tokens, which are less prone to theft than long-term credentials but can still be misused by adversaries.
The authors describe how attackers compromise long-term IAM keys (AKIA) through various means like malware, public repositories, and phishing. Once these keys are compromised, attackers use them to create short-term STS tokens (ASIA) for persistence and evasion. They detail the process of token generation and abuse, emphasising that adversaries often create additional IAM users with long-term keys for backup.
The post highlights the need for defenders, especially those working with AWS, to understand the mechanisms of STS abuse, including the generation and misuse of both long-term AKIA keys and short-term ASIA tokens. It suggests monitoring CloudTrail event data, detecting role chaining events, and building queries to identify chained credentials as effective strategies for staying ahead of such threats.
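The role-chaining signal described above is straightforward to express in code: look for `AssumeRole` calls where the calling identity is itself using a temporary (ASIA-prefixed) credential. The sketch below is an illustrative simplification, not the authors’ actual queries; it operates on CloudTrail-shaped dicts, and the sample events and key IDs are fabricated.

```python
# Illustrative sketch: flag potential role chaining in CloudTrail data,
# i.e. AssumeRole calls made with already-temporary (ASIA) credentials.

def find_chained_assume_roles(events):
    """Return CloudTrail events where a short-term (ASIA) key calls AssumeRole."""
    flagged = []
    for event in events:
        key_id = event.get("userIdentity", {}).get("accessKeyId", "")
        if event.get("eventName") == "AssumeRole" and key_id.startswith("ASIA"):
            flagged.append(event)
    return flagged

sample_events = [
    # Long-term IAM user key (AKIA) calling AssumeRole: routine.
    {"eventName": "AssumeRole",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLE0000001"}},
    # Temporary STS credential (ASIA) calling AssumeRole: possible chaining.
    {"eventName": "AssumeRole",
     "userIdentity": {"accessKeyId": "ASIAEXAMPLE0000002"}},
    # Unrelated API call with a temporary key: not flagged here.
    {"eventName": "GetCallerIdentity",
     "userIdentity": {"accessKeyId": "ASIAEXAMPLE0000003"}},
]

print(len(find_chained_assume_roles(sample_events)))  # 1 suspicious event
```

In practice you would run the equivalent logic in your SIEM over CloudTrail, and enrich hits with the originating identity to separate legitimate automation from abuse.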
This is a very technical post, aimed at those responsible for AWS security within an enterprise. It’s really well written and highlights some key details that are not well documented. It was nice to see AWS support an update of this post, clarifying some of the details and purpose behind default behaviours.
How the EU AI Act regulates artificial intelligence: What it means for cybersecurity by Andrada Fiscutean
The EU AI Act, agreed upon by European Union lawmakers on December 8, 2023, is a significant law regulating artificial intelligence. The act, designed to protect consumer rights and foster innovation, carries substantial implications for cybersecurity, especially for tech giants and AI startups. The AI Act requires critical infrastructure and high-risk organisations to conduct AI risk assessments and comply with cybersecurity standards.
The act categorises AI systems into unacceptable risk, high-risk, and limited and minimal risk. It bans certain uses of AI, such as social scoring systems and real-time biometric identification, which are deemed invasive or discriminatory. For high-risk systems, robust cybersecurity measures are mandated, including sophisticated security features to protect against attacks and vulnerabilities in AI systems and the underlying ICT infrastructure.
Entities violating these rules could face penalties up to 35 million euros or 7% of global turnover. The bill awaits adoption by the Parliament and Council and is expected to come into effect no earlier than 2025.
I believe there’s still some way to go on this and it will be interesting to see how regulation evolves for AI.
CyberSprinters (Game) by NCSC
CyberSprinters is a collection of interactive online security resources for 7-11 year olds, empowering them to make smart decisions about staying secure online.
The digital game can be played on phone, tablet and desktop, and is supported by a suite of activities to be led by educational practitioners working with 7-11 year olds. Parents and carers can also try the CyberSprinter puzzles with their children at home.
If you’ve got children or nieces and nephews, this is quite a nice way to get them involved and interested in their online security.
Thanks for reading ‘Briefly Briefed:’ - To receive the newsletter on a weekly basis, please subscribe below.