Briefly Briefed: Newsletter #13 (29/11/23)
Hello, Sweetie.
This is week #13 of the ‘Briefly Briefed:’ newsletter. A big welcome to new subscribers, and many thanks to those who continue to read.
My two ‘if you only read two’ recommendations for the week are:
A video from C-SPAN about a social engineering attack utilising AI to replicate a family member’s voice, via @notcapnamerica (on ‘X’).
Getting into AWS cloud security research as a n00bcake by Daniel Grzelak.
Allons-y!
Lawrence
Meme of the Week:
Getting into AWS cloud security research as a n00bcake by Daniel Grzelak (Plerion)
The article provides an insightful guide for beginners in AWS cloud security research. Daniel shares his personal journey and lessons learned, emphasising the importance of hands-on experience: build and break things within AWS to understand its complexities and vulnerabilities. The article covers several key areas: using practice environments like 'flAWS' and 'CloudGoat' for real-world testing, the significance of writing and sharing knowledge, learning from experts in the field, and maintaining ethical standards in research. The post also discusses the value of networking with engineers and of engaging in research consistently, and highlights specific focus areas, such as examining AWS documentation, decomposing SDKs, identifying API discrepancies, finding undocumented APIs, targeting open-source integrations, and exploring the AWS shared responsibility model.
So What?
I think there’s real value in these types of posts, especially given the skills gap - kudos to Daniel. The post is pretty well written and the signposting of resources alone is worth your time if you’re starting out, or continuing your journey into the clouds.
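As a small illustration of the 'decompose the SDKs' angle (my example, not one from the article), botocore (the library underpinning boto3) ships JSON models describing every service operation. Enumerating them programmatically, and diffing the output between SDK releases, is a cheap way to spot newly added or sparsely documented APIs:

```python
# A minimal sketch of SDK decomposition, assuming botocore is installed
# (pip install botocore). Each service model lists every API operation
# the SDK knows about; diffing these across releases surfaces new APIs.
import botocore.session

session = botocore.session.get_session()

# Count the operation surface of every service the SDK models.
for service in sorted(session.get_available_services()):
    model = session.get_service_model(service)
    print(f"{service}: {len(model.operation_names)} operations")

# Drill into one service to inspect raw HTTP methods and request URIs.
s3 = session.get_service_model("s3")
for op_name in sorted(s3.operation_names)[:5]:
    op = s3.operation_model(op_name)
    print(op_name, op.http.get("method"), op.http.get("requestUri"))
```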
Haveibeensquatted domain squatting analyser
Not porn. The site (a nod to ‘haveibeenpwned’) generates domain-squatting permutations for a given domain. You get a limited number of results without an account, but enrolment is free.
So What?
This could be useful as part of a proactive domain-ownership strategy: identifying potentially squattable domains to acquire (although there are services that will do this for you). Additionally, you could use the output as a seed for a block list of sender domains in your email filtering. However, you’d want to be fairly confident of a domain’s reputation before blocking it outright.
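For a flavour of what such a service does under the hood, here’s a toy permutation generator (a hypothetical sketch, not how haveibeensquatted actually works); real tools also cover homoglyphs, keyboard-adjacency typos, bit-flips and TLD swaps:

```python
# A toy domain-squatting permutation generator (illustrative only).
def squat_permutations(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    candidates = set()

    for i in range(len(name)):
        # Character omission: exmple.com
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
        # Character duplication: exaample.com
        candidates.add(name[:i] + name[i] + name[i:] + "." + tld)

    # Adjacent-character swap: examlpe.com
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)

    # Hyphenation: ex-ample.com
    for i in range(1, len(name)):
        candidates.add(name[:i] + "-" + name[i:] + "." + tld)

    candidates.discard(domain)  # drop the original if it sneaks in
    return candidates

print(len(squat_permutations("example.com")), "candidates generated")
```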
A video from C-SPAN about a social engineering attack utilising AI to replicate a family member’s voice via @notcapnamerica (on ‘X’)
This six-and-a-half-minute video tells a story, recounted by the victim (a U.S.-based attorney), in which he received a distressing call from his ‘son’ (using a deepfaked voice). The attackers created a pretext whereby his son had been involved in a traffic accident and had injured a pregnant woman whilst under the influence. They asked for money ($9k) for his son’s bail as part of a complex and procedurally accurate scenario involving a Bitcoin ATM.
So What?
The video emphasises the emotional impact of scams as much as the terrifying use of AI. I’d highly recommend sharing this with friends and family, as it highlights the vicious creativity of scammers and the capability of modern AI. Moreover, it provides a sombre reminder to those of us who simulate attacks (especially those utilising social engineering) that there is a human impact on targets and victims. Select your pretexts wisely.
Biden's AI Executive Order: What it says, and what it means for security teams by Joseph Thacker (Wiz)
The article explains the implications of Executive Order 14110 on AI, issued by President Biden in 2023, focusing on its impact on security teams at companies using AI. The order establishes new standards for AI safety, security, and privacy protection, stressing that AI systems must be safe, secure, and trustworthy, with extensive testing required before public release. The article highlights the need for security teams to begin preparing for compliance with these standards, emphasising extensive red-team testing, privacy protection, and fairness in AI applications. The order has significant implications for AI use in various sectors, including healthcare and criminal justice, and requires security teams to adapt their practices to ensure AI systems are ethical and compliant with the new standards.
So What?
This is a great write-up, and provides a useful lens on how the latest Executive Order in the U.S. will impact us as an industry. It’s encouraging to see an emphasis on efficacy assurance (red team assessments etc.). One of the greatest threats to security is the complacency of compliance: it’s really important to understand your controls and the appropriate way to assess their effectiveness.
Independent Review of University Spin-out Companies by The Department of Science, Innovation and Technology (UK government)
The article reviews the role of spin-out companies, which are start-ups created from university research, in contributing to the UK's ambition of becoming a science and technology superpower. The review identifies best practices from successful university spin-out ecosystems globally and within the UK, aiming to support spin-outs in gaining more investment and faster growth. The UK's unique opportunity lies in leveraging strengths across various academic disciplines, including humanities and arts, to build a leading innovation ecosystem. The review outlines key elements for a successful spin-out ecosystem, including a diverse pool of academic founders, anchor institutions like universities, service providers, accessible investment capital, partnerships with large corporations, talented early employees, and supportive infrastructure. It highlights the success of ecosystems in the US and the UK's 'golden triangle' and presents recommendations for the UK to enhance its spin-out environment.
A summary of the ten recommendations:
"Accelerate towards innovation-friendly university policies that all parties, including investors, should adhere to where they are underpinned by guidance co-developed between investors, founders, and universities."
"More data and transparency on spin-outs through a national register of spin-outs, and universities publishing more information about their typical deal terms."
"HEIF should be used to reduce the need for universities to cover the costs of technology transfer offices (TTOs) from spin-out income."
"Create shared TTOs to help build scale and critical mass in the spin-out space for smaller research universities."
"Government should increase funding for proof-of-concept funds to develop confidence in the concept prior to spinning-out."
"In developing the ‘engagement & impact’ and ‘people & culture’ elements of REF 2028, the four Higher Education Funding Bodies should ensure that the guidance and criteria strongly emphasise the importance of research commercialisation, spin-outs, and social ventures as a form of research impact."
"Founders need access to support from individuals and organisations with experience of operating successful high-tech start-ups, regardless of the region founders are based in or sector they operate in."
"UK Research and Innovation (UKRI) should ensure that all PhD students they fund have a voluntary option of attending high-quality entrepreneurship training."
"Recognising the important role that university-affiliated funds have played in helping spin-outs from some regions access finance, universities considering working with new affiliated investment funds should continue to ensure they are still able to attract a wider set of investors and encourage competition when agreeing such deals."
"We welcome ongoing reforms to support scale-up capital, such as changes to pensions regulation and encourage the government to accelerate these efforts."
So What?
This may be interesting to people based in the UK who are involved in the cyber startup sector.
Startups should consider hiring fractional AI officers by Raphael Ouzan (TechCrunch)
The article discusses the idea that startups, particularly those with limited resources, should consider hiring fractional AI officers: AI professionals engaged on a part-time or contract basis, providing startups with the expertise they need without the full-time cost. It explores the benefits of this strategy, such as cost-effectiveness, access to specialised skills, and flexibility, as well as how startups can integrate these fractional officers into their teams and make the most of their expertise in developing AI capabilities.
So What?
The concept of fractional or virtual executives is not new (especially for CTOs and CISOs). However, having a deep understanding of AI is becoming ever more important. I believe there could be some benefit to smaller organisations in this approach, as these types of resources can be scarce and cost-prohibitive. That said, smaller organisations aren’t really going to benefit from generic or high-level advice. The challenge with a fractional executive is that they’ll only be able to provide broad strokes, and understanding the organisation’s strategy part-time will be slow (a killer for market agility). I’d posit that startups would likely get more benefit from a point-in-time consulting engagement focused on the opportunity for, and benefits of, utilising AI. Not everything needs AI/ML; shocking, I know. Alternatively, it could be worth prioritising that ‘sweet-sweet’ VC money on a full-time resource, if that’s core to your mission.
Trusted by Millions, Yet So Wrong - Password Strength Tools by Eddie Zhang
The article discusses the inaccuracy of popular password strength tools. Zhang reveals that four out of the top ten search results for these tools provide questionable security advice. Despite one tool estimating that an example password would take two million years to crack, he demonstrates it could be cracked in just eight seconds using a $650 graphics card. The discrepancy arises because these tools employ a simplistic approach to estimating password strength, failing to consider human patterns in password creation. Zhang suggests using multi-factor authentication, not reusing passwords, using a password manager, and creating longer, less predictable passwords. He also highlights the need for improved user education in cybersecurity and questions the effectiveness of regulatory frameworks in reducing cyber misinformation.
So What?
Solid advice and quite a nice write-up.
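The gap Zhang describes is easy to reproduce with back-of-the-envelope maths. The password, hash rate and guess counts below are my own illustrative assumptions, not figures from the article:

```python
# Why naive strength meters mislead: they score raw keyspace, while a
# cracker tries human patterns first. All figures here are assumptions.
ASSUMED_GPU_RATE = 10_000_000_000  # ~1e10 fast-hash guesses/sec (assumed)
SECONDS_PER_YEAR = 31_557_600

def naive_crack_years(password: str) -> float:
    """What a simplistic meter computes: charset_size ** length."""
    charset = 0
    if any(c.islower() for c in password): charset += 26
    if any(c.isupper() for c in password): charset += 26
    if any(c.isdigit() for c in password): charset += 10
    if any(not c.isalnum() for c in password): charset += 33
    return charset ** len(password) / ASSUMED_GPU_RATE / SECONDS_PER_YEAR

pw = "P@ssw0rd2023!"  # one dictionary word + common substitutions + year
print(f"naive meter: {naive_crack_years(pw):,.0f} years")

# A rule-based attack (dictionary word, l33t substitutions, appended
# year and symbol) might need only a few billion guesses in practice.
ASSUMED_RULE_GUESSES = 5_000_000_000
print(f"pattern-aware: {ASSUMED_RULE_GUESSES / ASSUMED_GPU_RATE:.1f} seconds")
```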
Active Directory Canaries (Tool) by Airbus Protect
The repo presents "Active Directory Canaries," a detection tool for Active Directory enumeration techniques. The tool utilises the concept of DACL backdoors, first introduced by Andy Robbins and Will Schroeder in their 2017 white paper "An ACE Up the Sleeve." The main purpose of the project is to offer, and continuously update, a PowerShell script that simplifies the deployment of the required Active Directory objects, enhancing detection and monitoring of potential enumeration attacks within Active Directory environments.
So What?
Canaries are a really great tool, complementing XDR and other detection-focused capabilities. From regularly speaking with capable red teamers, I know that canaries are one of the things they really hate: they make their jobs harder by giving SOCs a clear signal of their activities. Canaries are especially useful because the signal-to-noise ratio is very high, given there is no legitimate use case for interacting with them. I’d highly recommend experimenting with this tool, with commercial options, or with some of the built-in honeytoken capabilities within Azure.
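For those curious about the mechanics: with auditing enabled on a decoy object, any read access raises Windows Security event 4662, and detection largely reduces to matching the canary object’s GUID. A rough Python sketch of that alerting step (the GUID, field handling and log source are my assumptions, not Airbus Protect’s implementation):

```python
# Hypothetical alerting logic for an AD canary: flag any 4662
# object-access event whose ObjectName matches a deployed decoy.
CANARY_OBJECT_GUIDS = {
    "{1b3c4d5e-0000-4a6b-9c8d-aabbccddeeff}",  # hypothetical decoy GUID
}

def check_event(event: dict) -> None:
    """Alert if a 4662 event touches a canary object."""
    if event.get("EventID") != 4662:
        return
    object_name = event.get("ObjectName", "").lower()
    if object_name in CANARY_OBJECT_GUIDS:
        print(f"[ALERT] canary {object_name} read by "
              f"{event.get('SubjectDomainName')}\\{event.get('SubjectUserName')}")

# Example with a fabricated event, e.g. parsed from a log shipper's JSON.
check_event({
    "EventID": 4662,
    "ObjectName": "{1b3c4d5e-0000-4a6b-9c8d-aabbccddeeff}",
    "SubjectUserName": "jdoe",
    "SubjectDomainName": "CORP",
})
```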
Top 5 reasons why OpenAI was probably never really worth $86 billion by Gary Marcus
The article by Gary Marcus argues that OpenAI's estimated $86 billion valuation was likely unrealistic. It explains that the true value of OpenAI resides in its staff, not its intellectual property, data, customer list, or infrastructure. This perspective is supported by the ease with which other companies replicated OpenAI's achievements.
The post points out the unresolved issue of hallucinations in AI, a problem acknowledged by OpenAI’s leadership. It further discusses OpenAI’s unclear business model and the high cost of running advanced systems like GPT-4: the revenue generated is mostly from testing rather than sustained usage, making projections speculative.
Additionally, the post critiques OpenAI's hybrid non-profit model, highlighting the tension between safety and financial return. Finally, it questions the ability of large language models to address the alignment problem, essential for AI safety and reliability. The article concludes that OpenAI's high valuation was more based on promise than tangible results, with its non-profit mission clashing with profit-driven goals.
So What?
There are some fair points in this analysis, based on traditional ways of valuing companies. Moreover, Gary makes some great points about the specific shortcomings of the platforms themselves. However, OpenAI stole a march on EVERYONE and catapulted AI from a future promise in projects like DeepMind to government departments’ top priority. I know grandparents who use ChatGPT. It’s inherently difficult to value a company at this stage of its life, as so much of the value is entrenched in its potential. I believe the valuation reflects both the level of investment in OpenAI and the astronomical disruption they’ve caused to the software market. The question for me is whether OpenAI can continue to innovate and retain their ‘lead’, with everyone else playing catch-up.
Security Planning Workbook by CISA
The Security Planning Workbook is a comprehensive resource that can assist critical infrastructure owners and operators with the development of a foundational security plan. The workbook is designed to be flexible and scalable to suit the needs of most facilities.
It is intended for individuals involved with an organisation’s security planning efforts, including individuals or groups with varying degrees of security expertise, charged with the safety and security of facilities and people. This product also provides descriptions of critical elements of security planning information, offers a multitude of resources, and includes fillable fields to guide a stakeholder’s planning efforts.
So What?
This is a great resource for SMEs, or for people new in role who are taking stock.
The RULER Project by Phill Moore
The RULER project is an initiative aimed at enhancing forensic investigations through the detailed study of application logs. It highlights the challenge of understanding logs from different applications and seeks to provide a structured approach to identify crucial forensic information. The project primarily compiles data from various sources, crediting those who have contributed significantly to this field.
The project does not focus on recommending what should be logged; instead, it emphasises understanding what is typically logged by default. The project currently concentrates on endpoint information relevant to investigations but is open to expanding into other log categories.
The roadmap for RULER includes incorporating a wider range of logs such as mail and web server logs, storing data in formats like YAML or databases for tool integration, and contributing to the DFIR Artefact museum.
So What?
This is a useful resource for Red and Blue teams. The description of what it is and how it works is a bit vague, but essentially, it’s a catalogue of what software (including EDR and other interesting tools) logs by default.
Cybersecurity firm executive pleads guilty to hacking hospitals By Sergiu Gatlan
The article reports on Vikas Singla, the former chief operating officer of Securolytics, pleading guilty to hacking two Gwinnett Medical Center (GMC) hospitals. The attacks occurred in September 2018, targeting GMC hospitals in Duluth and Lawrenceville, and Singla was indicted in 2021. He disrupted phone and printer services and stole patient data from a mammogram machine’s digitising device. He also caused printers at the Duluth hospital to output stolen patient information and threatening messages. These actions were part of a strategy to boost Securolytics’ business, with Singla promoting the hack on Twitter and the company mentioning it in client outreach.
The cyberattacks caused over $817,000 in damages. Singla has agreed to pay this amount as restitution. Despite facing 17 counts of intentional computer damage and one count of obtaining information, prosecutors recommend a 57-month probation sentence due to his serious health conditions. Sentencing is scheduled for February 15, 2024.
So What?
Yikes! There’s not much to say about this, other than to note the rising media profile of cyber-related insider threats.
SBOM Hall of Fame by communitysec
This GitHub repo seeks to highlight organisations that are doing SBOMs ‘right’ (in contributors’ prevailing views), praising those working hard on the challenges and sharing lessons from their success.
So What?
Echooo, echooo! I’m in two minds about this sort of effort. On one hand, I like that they’re celebrating success and signposting what good looks like. Conversely, who made these folks (or whoever contributes) the arbiters of SBOM implementation, and will this yield enough detail to be useful? I guess we’ll find out if anyone ever adds an organisation to the table.