Don’t panic.
This is week #12 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read. I’ve shifted the weekly release to a day earlier. This is to avoid some cross-over with other newsletters and to experiment with a good time to hit your inboxes. Please let me know if this doesn’t work.
My ‘if you only read two’ recommendations for the week are:
The OpenAI / Sam Altman saga (see below).
Ransomware gang files SEC complaint against company that refused to negotiate by CSO Online.
So long, and thanks for all the fish.
Lawrence
Meme of the Week:
OpenAI fires co-founder and CEO Sam Altman for allegedly lying to company board by Blake Montgomery and Dani Anguiano
This was the big story of last week (and the weekend, and this week). I’ve been following closely, and compiled a quick timeline (the dates are UK time zone) for reference.
17/11 – The media report that Sam Altman has been fired by the board of OpenAI, accused of “being not consistently candid in his communications.” Greg Brockman (co-founder) is removed from the board, but retained his role in the company as President (although he subsequently resigned). CTO Mira Murati is named interim CEO.
17/11 - Theories about why this happened flood the Internet.
18/11 – Investors scramble to try and reinstate Altman, with the board seemingly agreeing to take him back and potentially resigning themselves.
20/11 - Altman and Brockman are announced (by Satya Nadella) as joining Microsoft, to lead Microsoft’s new advanced AI research team.
20/11 – Murati is out (of the interim CEO role) and Emmett Shear is in (ex-CEO of Twitch).
20/11 - OpenAI staff demand the OpenAI board resign after sacking of Altman. They claim Microsoft has assured them that there are jobs for all OpenAI staff if they want to join the company.
20/11 – Altman may still believe that the door may re-open at OpenAI.
21/11 – Microsoft CEO, Satya Nadella, dodges the question of whether Altman will return to OpenAI, but states that ostensibly, Sam Altman will be running the show.
22/11 – OpenAI rehires Sam Altman as CEO. A new ‘initial’ board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo has been instated.
So What?
The situation is pretty crazy, it’s quite shocking how to see how unstable OpenAI was, and the apparent naivety of the board. I don’t believe the drama is over yet, keep the popcorn ready. The situation has raised some interesting questions on the governance structure of OpenAI. The board's power stemmed from bylaws established in 2016, which allowed board members to elect and remove directors without formal meetings. The reason behind the ‘ousting’ is still unclear, although the Internet is awash with theories. I think the most likely root cause is a rift in ethos between Sam and the board. As of this morning (22/11), Sam is back and with a new board to boot. Do we all just act like nothing happened now? Will Emmett Shear include this role on his LinkedIn profile? Let’s see.
New ways to pay for research could boost scientific progress by The Economist
The article explains how alternative funding methods for scientific research could invigorate progress in the field. It posits that the current system, dominated by grants, may stifle innovation and is increasingly competitive, making it difficult for researchers to secure funding. The piece explores several new approaches, such as the 'golden ticket' method allowing for backing unorthodox ideas, and a lottery system for grant allocation, currently being trialled in various countries. Other suggestions include establishing new research institutions and adopting models like the DARPA approach, which has been influential in various research fields. The article also highlights the success of the Howard Hughes Medical Institute, which funds individuals for long-term research, encouraging more risk-taking and potentially leading to ground-breaking discoveries.
So What?
I have a number of friends who’re academics. They spend an inordinate amount of time applying for grants and various stages of funding, rather than focused on research. In an extreme case, a close friend (a Professor of Neuroscience at UCL) spent a year as a project manager overseeing the install of an MRI scanner. This meant he had little time to spend on key research. It’s great to see efforts to change the model and boost research outputs through streamlining processes. In order to move quickly, researchers must not be over-burdened by other tasks. The DIANA accelerator is a great example (with elements of Cyber focus) of an effort tackling this problem, led by NATO.
If you’re interested in this topic, you may find the following papers interesting on the subject:
"Are Ideas Getting Harder to Find?" (2020) by Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb
Published in: American Economic Review, April 2020, Volume 110, Issue 4, Pages 1104-44
Abstract: The paper discusses the concept of diminishing returns in scientific progress, examining whether the effort required to generate new ideas and knowledge is increasing.
"Scientific Grant Funding" (2022) by Pierre Azoulay and Danielle Li
Published in: "Innovation and Public Policy"
Abstract: This chapter discusses grant funding in early-stage, exploratory science, focusing on the design of grant programs, peer review processes, and incentives for risk-taking.
"Scientific prizes and the extraordinary growth of scientific topics" (2021) by Ching Jin, Yifang Ma, and Brian Uzzi
Published in: Nature Communications, October 5, 2021
Abstract: The study examines the impact of scientific prizes on the growth of scientific topics, finding that prizewinning topics produce more papers, citations, retain more scientists, and attract more new entrants and star scientists compared to non-prizewinning topics.
"Incentives and Creativity: Evidence from the Academic Life Sciences" by Pierre Azoulay and colleagues
Published in: National Bureau of Economic Research
Abstract: This paper explores the impact of different funding streams within the academic life sciences on scientific creativity, particularly comparing the careers of investigators from the Howard Hughes Medical Institute (HHMI) and grantees from the National Institutes of Health (NIH).
Papers by Kyle Myers:
"Estimating Spillovers from Publicly Funded R&D: Evidence from the US Department of Energy"
Abstract: The paper quantifies R&D spillovers from grants to small firms by the US Department of Energy, highlighting the broader impact of these grants beyond direct recipients.
"The Elasticity of Science"
Abstract: This paper investigates the extent to which scientists are willing to change the direction of their work in response to targeted funding opportunities, emphasizing the large switching costs of science.
"Unblock research bottlenecks with non-profit start-ups" by Adam Marblestone and colleagues
Published in: Nature
Overview: This work discusses the concept of Focused Research Organisations (FROs) and their potential to address research bottlenecks in science.
Running Signal Will Soon Cost $50 Million a Year by Andy Greenberg
The article explains the financial challenges facing Signal, an encrypted messaging app, operating as a non-profit. Signal's president, Meredith Whittaker, has disclosed its operating costs to demonstrate the contrast with for-profit surveillance business models. Signal's costs are around $40 million this year and expected to rise to $50 million by 2025. Main expenses include infrastructure and staff salaries for about 50 employees. Initially funded by the US government's Open Technology Fund, Signal now depends on donations. While small in-app donations have grown, substantial increases are necessary for Signal's sustainability. Charging users isn't an option, as Signal strives to offer private communication free from the pressures typical in for-profit tech firms.
So What?
I’m a big fan of Signal. It occupies an important place in the world of digital privacy, as one of the few vendors with high levels of security combined with a clear code of privacy-focused ethics. It’s worth considering funding this project, if you’re a user of the platform.
Responsible AI at Google Research: Adversarial testing for generative AI safety by Kathy Meier-Hellstern
The post explains Google Research's approach to ensuring the responsible use of AI, specifically in the realm of generative AI (GenAI). The Responsible AI and Human-Centered Technology (RAI-HCT) team, along with the BRAIDS (Building Responsible AI Data and Solutions) team, are focusing on integrating responsible AI practices into GenAI applications. This involves a comprehensive risk assessment, internal governance, and the development of tools to identify and mitigate ethical risks. A key part of this strategy is adversarial testing, which tests AI models against a range of potentially harmful inputs to understand and address safety concerns. This includes scaled adversarial data generation, automated test set evaluation, and community engagement to identify unforeseen risks, as well as emphasising rater diversity to ensure evaluations consider a wide range of human perspectives. This approach is crucial for managing the transformative but potentially risky nature of GenAI, ensuring it remains inclusive and safe for diverse user communities.
So What?
There are some really useful tools in these projects. If you’re involved in the organisation of development efforts for GenAI, the frameworks will provide some good support.
Multi-source analysis of Top MITRE ATT&CK Techniques by Cyentia Institute
In a recent meta-study that analysed 22 Cyber threat reports, Cyentia Institute and Tidal Cyber examined the breadth of visibility and reporting across the MITRE ATT&CK matrix. They used Version 12.11 for the study, which defines a total of 193 techniques.
The sources they analysed reported sightings of only124 (64%) ATT&CK techniques. Over one-third (36%) of all techniques were not reported by any of the 22 sources reviewed. Just over half (52%) of ATT&CK techniques were seen by three or more sources, and less than a quarter (23%) of them were reported by at least five sources.
So What?
This demonstrates that visibility varies widely across different sources - and why it's important to draw from multiple reporting sources to achieve broad coverage of TTPs in Cyber Threat Intelligence. Moreover, it stresses the importance of prioritisation in Detection Engineering, as 46% of techniques weren’t cited in any of the 22 sample reports. I can think of lots of use cases where these data are applicable.
Applying Context to Control Adversaries (Part 1) by John Fitzpatrick
The article explains the importance of a context-centric approach in cybersecurity. It argues that traditional threat-centric strategies are insufficient, as they often involve playing catch-up with evolving adversarial tradecraft. The post presents a model for tailored cyber defence, combining general, threat-informed, and context-centric approaches. The article posits that the key is to understand how adversaries are likely to apply their tactics in specific environments. This context-centric approach aims to dictate how adversaries must operate within a given environment, effectively putting defenders in control. The article proposes that merely improving existing strategies is not enough; a tailored cyber defence that takes into account the unique aspects of each environment is crucial for effective security.
So What?
I agree with John’s point in this post, and I think the proposed model is interesting. It’s common that we lack important context in Cybersecurity. Often, this is because it’s hard to define, or the tools/frameworks we use lack orientation to our environments.
A good example of where this problem occurs is CVSS as a vulnerability descriptor. Organisations (and vendors) often prioritise based on the ‘base score’ calculation. This lacks contextual information about the environment, and data pertaining to observations of exploitation in the wild. The impact is that effort is misspent remediating issues that are lower risk.
In order to make the types of models [John presents] viable, there is a need to scale processing, application of metadata and analysis (i.e. you need to automate the automatable). The reason that inferior (or sometimes counterproductive) models become popular, is their ease of use and availability. If you spend more time modelling, than you would have done just working in series on a full dataset, what’s the point?
Privileged Identity Management (PIM) – Common Microsoft 365 Security Mistakes Series by Ru Campbell
The article discusses common security mistakes in using Microsoft 365's Privileged Identity Management (PIM). It highlights the importance of PIM in controlling and monitoring access to critical resources in Microsoft 365 environments. The post emphasises the need for organisations to properly configure and manage PIM to avoid security vulnerabilities. The post points out that neglecting PIM can lead to unnecessary risks and exposure to potential security breaches. The article serves as a guide to understanding the critical role of PIM in enhancing the security posture of Microsoft 365 implementations. The blog covers the following five common mistakes:
‘Require Azure MFA’ probably isn’t giving you the security you think it is
Not using authentication context
Not appropriately requiring approval to activate
No mitigation against role lockouts
Not protecting non-Entra or non-Azure resources with PIM for Groups
So What?
This is a useful resource for those engaged in securing M365 environments.
3 Ways We’ve Made the CIS Controls More Automation-Friendly by the ‘Center for Internet Security’ (CIS)
The article discusses updates to the CIS Critical Security Controls to enhance their compatibility with automation in compliance efforts. The Center [sic] for Internet Security has made three changes: removing the "Intersects With" relationship to reduce ambiguity, emphasising shared efforts in control implementation, and adding a page of unmapped CIS safeguards. These updates aim to simplify compliance by making it easier for machines to understand and process the data. The goal is to move away from manual comparisons of frameworks towards an automated future, making compliance more efficient and less labour-intensive.
So What?
Welcome to the 21st Century CIS! It’s great to see CIS support the ‘how’ as well as the ‘what’. CIS controls and benchmarks have been a staple of the technical security community for a long time. Historically, we’ve relied upon vendors or open source projects to operationalise the outputs. I hope that this marks a step-change in their approach, and to see more efforts from CIS in this direction.
Ransomware gang files SEC complaint against company that refused to negotiate by CSO Online
The BlackCat ransomware gang is exploiting new US Securities and Exchange Commission (SEC) rules by filing complaints against companies that refuse to pay ransoms. The group filed a complaint against MeridianLink, alleging failure to disclose a significant breach. This tactic leverages upcoming SEC regulations requiring companies to report material breaches within four business days. The case raises questions about the new rules' effectiveness in combating cybercrime and their potential misuse by ransomware gangs. This development signals a new phase in cyber extortion, emphasising the need for robust cybersecurity defences beyond mere compliance.
So What?
The cheek, the nerve, the gall, the audacity and the gumption!
This is a big lesson to governments, regulators and commercial organisations regarding threat models. A number of nation states are currently planning to ban paying ransoms in ransomware scenarios. This shows the levels of creativity of cybercriminals, who’ve made vast sums via this vector. They’re not going to let it go without a fight. That said, the SEC were probably quite pleased by this outcome!
A 12 Lesson course teaching everything you need to know to start building Generative AI applications by Microsoft
The training supports learning the fundamentals of building Generative AI applications. Each lesson covers a key aspect of Generative AI principles and application development. Throughout this course you build your own Generative AI start-up, so you can get an understanding of what it takes to launch your ideas.
So What?
I did a couple of the training courses within this already, and it’s really well constructed. Nice job!
PrivateGPT: A Production-Ready AI Project for Offline Use by Iván Martínez
PrivateGPT is an AI project designed for querying documents privately using Large Language Models (LLMs), functioning entirely offline. It offers an API for building private, context-aware AI applications, extending the OpenAI API standard. The API has two parts: a high-level API simplifying document ingestion and chat completions, and a low-level API for advanced users to create complex pipelines. PrivateGPT also provides a Gradio UI client and various tools for usability. This project caters to privacy concerns in industries like healthcare and legal, offering an offline solution for using LLMs while maintaining data control.
So What?
I’ve been waiting for a well-maintained open source private alternative to the various commercial options. Here it is. I can’t attest to the efficacy of the claims (as to privacy, safety etc.), but from initial experiments, this looks pretty good.
UK's National Cyber Security Centre Submits First RFC to IETF by Phil Muncaster
The UK's National Cyber Security Centre (NCSC) has submitted its first Request for Comments (RFC) to the Internet Engineering Task Force (IETF), focusing on indicators of compromise (IoCs). RFC9424, a result of three years of collaboration with industry experts, aims to provide a comprehensive reference for IoCs, detailing their lifecycle and usage in cybersecurity. It includes real examples and discusses the 'pyramid of pain', a concept illustrating how some IoCs pose greater challenges for attackers to modify and evade detection. This initiative highlights the importance of involving cybersecurity experts in the design and development of internet standards.
So What?
It’s nice to see an attempt to standardise IoCs. Kudos to those involved.
NIST AI Risk Management Framework Development
NIST, in partnership with private and public sectors, has created the AI Risk Management Framework (AI RMF) to address risks associated with artificial intelligence (AI). Launched on January 26, 2023, the AI RMF is designed for voluntary use, aiming to enhance trustworthiness in AI design, development, usage, and evaluation. The development process was consensus-driven, open, and collaborative, involving public input, drafts, and workshops. The framework is aligned with other AI risk management efforts. Additionally, NIST has released a companion AI RMF Playbook to accompany the framework.
So What?
I’m not sure how I missed the launch of this framework (back in Jan), but it’s a pretty useful reference if you’re engaged in policymaking for AI.