2023 was a year of relentless evolution in the cybersecurity landscape. As the attack surface expanded with emerging technologies and interconnected systems, so did the sophistication and frequency of cyber threats. 

Let’s take a look at a few critical security happenings from last year, including notable data breaches, valuable report findings, and key themes. 

No review of 2023 would be complete without mentioning the explosion of AI tools like ChatGPT and Copilot into the public eye. Beyond the hype lies immense value and impact, but realizing it requires skilled AI professionals. Read on for the insights and takeaways I curated to stay informed of emerging threats and opportunities. 

5 Notable Data Breaches

Here’s a quick overview of five significant breaches in 2023. These breaches demonstrate the diverse attack vectors and potential consequences cybersecurity threats pose. By understanding these cases and implementing industry best practices, organizations can build stronger defenses and better navigate the constantly evolving threat landscape.


Clorox

What happened: Hackers infiltrated Clorox’s IT systems, disrupting operations and causing temporary production slowdowns. The company suspected ransomware involvement but did not confirm data exfiltration.

How it happened: The exact attack vector remains unclear, but experts speculate social engineering or a software vulnerability could be responsible.

Result: Disrupted production led to product shortages and a 23-28% loss in net sales for Q1 2024. The company estimated total damages at $356 million.

Main takeaways: With the root cause not publicly known, specific recommendations cannot be drawn. However, social engineering is a common tactic, so continuously improving security awareness and education can decrease the effectiveness of social engineering attacks. Security education should emphasize a security mindset and critical thinking, so that individuals across all organizational activities assess the security consequences and potential risks before taking action.


MOVEit

What happened: This widespread attack targeted the MOVEit file transfer software used by numerous organizations. The Cl0p ransomware group compromised servers running the software, potentially exposing the data of millions of users across various industries.

How it happened: A critical SQL injection vulnerability (CVE-2023-34362) in MOVEit Transfer allowed attackers to gain access to servers and steal data at scale.

Result: Estimates suggest over 60 million individuals were affected, with potential exposure of personal and financial information. Known victim organizations crossed the 1,000 milestone. Specific financial losses for impacted organizations remain unclear.

Main takeaways: Organizations using MOVEit should review their implementation and apply patches immediately; slow remediation greatly increases the risk of breach. Perform regular security evaluations of vendors and acquired software to strengthen organizational security posture and improve third-party risk management.
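As a minimal sketch of the patch-management discipline described above, the snippet below flags hosts running a component older than the version that contains a security fix. The inventory, host names, and version numbers are all illustrative assumptions, not real MOVEit data.

```python
# Minimal sketch: flag installed components that predate the version
# containing the security fix. Inventory and versions are hypothetical.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2023.0.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed: str, fixed_in: str) -> bool:
    """True if the installed version predates the version with the fix."""
    return parse_version(installed) < parse_version(fixed_in)

# Hypothetical inventory of file-transfer software across a fleet.
inventory = {
    "server-a": "2023.0.1",
    "server-b": "2023.0.4",
}
FIXED_IN = "2023.0.4"  # assumed version containing the patch

to_patch = [host for host, ver in inventory.items() if needs_patch(ver, FIXED_IN)]
print(to_patch)  # server-a still needs the patch
```

A real program would pull the inventory from asset management and the fixed version from the vendor advisory, but the comparison logic is the same.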


Okta

What happened: Hackers gained access to Okta’s customer support system, potentially impacting numerous downstream organizations that relied on Okta for identity and access management (IAM). The threat actor ran and downloaded a report that contained the names and email addresses of all users in the Okta customer support system.

How it happened: The unauthorized access to Okta’s customer support system leveraged a service account stored in the system itself. This service account was granted permission to view and update customer support cases. During the investigation into suspicious use of this account, Okta Security identified that an employee had signed in to their personal Google profile on the Chrome browser of their Okta-managed laptop. The username and password of the service account had been saved into the employee’s personal Google account. The most likely avenue for exposure of this credential is the compromise of the employee’s personal Google account or personal device. 

Result: Though the full scope remains unclear, the breach affected almost all Okta customers and highlighted the potential risks associated with third-party vendors managing sensitive data.

Main takeaways: Monitor and alert on abnormal or unusual account activity, such as logins from unexpected locations and suspicious downloads. Regularly review identity and access, including service accounts, with particular scrutiny of privileged and high-risk accounts. Provide an easy and secure method for employees to conveniently manage organizational passwords without syncing data to their personal accounts, where it can more easily get stolen.
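The monitoring advice above can be sketched as a simple baseline check: alert whenever an account signs in from a location it has never been seen in before. The event records, account names, and baseline are invented for illustration; a real detector would use richer signals (device, time of day, download volume).

```python
# Minimal sketch: flag sign-ins from locations outside an account's known set.
# Events and the baseline are illustrative, not from a real log.

def flag_unusual_logins(events, known_locations):
    """Return events whose location is outside the account's known set."""
    alerts = []
    for event in events:
        baseline = known_locations.get(event["account"], set())
        if event["location"] not in baseline:
            alerts.append(event)
    return alerts

known_locations = {"svc-support": {"US"}}  # hypothetical service account baseline
events = [
    {"account": "svc-support", "location": "US", "action": "view_case"},
    {"account": "svc-support", "location": "RU", "action": "run_report"},
]

for alert in flag_unusual_logins(events, known_locations):
    print(f"ALERT: {alert['account']} from {alert['location']} ({alert['action']})")
```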


MGM Resorts

What happened: Hackers breached MGM’s network, stole sensitive customer data, and encrypted over a hundred hypervisors running critical virtual machines. This caused widespread IT system outages that disrupted a broad range of business operations for several days. 

How it happened: A large-scale social engineering attack by the ransomware group Scattered Spider is the likely entry point, although the full attack vector remains under investigation.

Result: The availability of the main website, online reservation systems, and in-casino services like slot machines, credit card terminals, and ATMs was impacted. In addition to an estimated $100 million hit to earnings, MGM incurred less than $10 million in one-time expenses for risk remediation, legal fees, third-party advisory services, and incident response. 

Main takeaways: Similar to the Clorox breach above, the root cause is not publicly known. However, a thoroughly tested and well-practiced backup and recovery plan can significantly reduce the impact of a ransomware attack. Creating and storing regular backups of critical systems is key. When needed, an organization can then restore from a trusted backup after an attack to minimize the disruption to its operations.
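One small but essential piece of the backup practice described above is integrity verification: record a checksum when the backup is taken and verify it before restoring, so a tampered or corrupted backup is caught before it is trusted. This sketch uses Python's standard hashlib with an illustrative in-memory payload.

```python
# Minimal sketch: verify a backup against the digest recorded at creation,
# so corrupted or tampered backups are rejected before a restore.
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of a backup payload."""
    return hashlib.sha256(data).hexdigest()

def verify_backup(data: bytes, recorded_digest: str) -> bool:
    """True only if the backup still matches the digest recorded at creation."""
    return checksum(data) == recorded_digest

backup = b"vm-image-snapshot"   # illustrative payload standing in for a VM image
digest = checksum(backup)       # stored alongside the backup at creation time

print(verify_backup(backup, digest))       # True: intact backup
print(verify_backup(b"tampered", digest))  # False: modified backup
```

In practice the digest would be stored separately from the backup itself (and ideally signed), so an attacker who alters the backup cannot also alter the record used to verify it.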


23andMe

What happened: Hackers accessed the personal information of nearly 7 million 23andMe users, including names, birthdates, locations, and some genetic data.

How it happened: Threat actors leveraged credential stuffing, a tactic in which hackers use stolen login information from one account to gain access to other accounts with the same passwords, to access and scrape personal data. 
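Credential stuffing has a recognizable signature: many failed logins spread across many different accounts from a single source. The sketch below counts failures per source IP against a threshold; the threshold, IPs, and events are illustrative assumptions, not derived from the 23andMe incident.

```python
# Minimal sketch: a counter-based detector for credential stuffing, which
# shows up as many failed logins across many accounts from one source.
# The threshold and event data are hypothetical.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 3  # assumed per-source alerting threshold

def stuffing_suspects(failed_logins):
    """Return source IPs whose failed-login count meets the threshold."""
    counts = Counter(ip for ip, _account in failed_logins)
    return [ip for ip, n in counts.items() if n >= FAILED_LOGIN_THRESHOLD]

failed_logins = [
    ("203.0.113.5", "alice"),
    ("203.0.113.5", "bob"),
    ("203.0.113.5", "carol"),
    ("198.51.100.7", "dave"),  # a single failure, likely just a typo
]
print(stuffing_suspects(failed_logins))  # ['203.0.113.5']
```

Rate limiting or CAPTCHA challenges for flagged sources, combined with MFA, blunt this class of attack even when passwords have already leaked elsewhere.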

Result: The breach likely eroded trust in 23andMe for many users and highlighted the privacy concerns associated with genetic data and the need for robust security measures in DNA testing companies.

Main takeaways: Encourage employees and users to improve cyber hygiene by applying best practices such as using a password manager, using longer/randomized passwords, enabling multi-factor authentication (MFA), and not re-using passwords. Enable the use of passkeys for more advanced or tech-savvy users. Establish a breach communication plan with clear guidelines on how to communicate a breach to customers and maintain trust as much as possible. Be transparent with privacy policies and provide resources to customers on actions they can take to secure their data. 
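The password-hygiene practices above (long, randomized, never reused) are exactly what a password manager automates. As a small sketch, Python's standard secrets module can generate a cryptographically random password; the length and character set here are illustrative choices.

```python
# Minimal sketch: generate a long, random password with the stdlib
# `secrets` module, the kind of hygiene a password manager automates.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
print(len(password))  # 20
```

Note that `secrets` is the right tool here rather than `random`, which is not suitable for security-sensitive values.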

Disclaimer: Specific details of some breaches may remain unclear or under investigation.


6 Valuable Security Reports and Key Findings

Sonatype 9th Annual State of the Software Supply Chain

  • 96% of known-vulnerable open source downloads had a fixed version available.
  • 2023 saw twice as many software supply chain attacks as 2019-2022 combined.
  • Only 11% of open source projects are actively maintained.
  • 135% increase in the adoption of AI and ML components within corporate environments over the last year. 
  • Nearly half of the respondents—47% of DevOps and 57% of SecOps—reported that by using AI, they saved more than six hours a week.
  • Among the 97% of DevOps and SecOps leaders who confirmed they currently employ AI to some degree in their workflows, most said they were using two or more tools daily. ChatGPT topped the list at 86%, followed by GitHub Copilot at 70%.

GitHub Octoverse 2023: The state of open source and rise of AI

  • 65,000 public generative AI projects created in 2023 with 248% year-over-year growth.
  • 38% year-over-year growth in private projects accounting for more than 80% of all activity on GitHub.
  • TypeScript overtook Java for the first time as the third-most popular language across OSS projects on GitHub with 37% growth of its user base.
  • 169% increase in automation on public projects with GitHub Actions.
  • 300+ AI-powered GitHub Actions in the marketplace.
  • Open source developers merged 60% more automated Dependabot pull requests for vulnerable packages than in 2022.
  • Generative AI-based OSS projects, like langchain-ai/langchain and AUTOMATIC1111/stable-diffusion-webui, rose to the top 10 projects by contributor count on GitHub. More developers are building LLM applications with pre-trained AI models and customizing AI apps to user needs.

Google Cloud 2023 State of DevOps

  • Teams with generative cultures, in which people feel included and feel that they belong, have 30% higher organizational performance than teams without such a culture.
  • Teams that focus on the user have 40% higher organizational performance than teams that don’t.
  • High-quality documentation leads to 25% higher team performance relative to low-quality documentation.
  • Underrepresented respondents report 24% more burnout than those who are not underrepresented.
  • Using a public cloud, for example, leads to a 22% increase in infrastructure flexibility relative to not using the cloud. This flexibility, in turn, leads to teams with 30% higher organizational performance than those with inflexible infrastructures.
  • Teams with faster code reviews have 50% higher software delivery performance.
  • A majority of respondents incorporate at least some AI into technical tasks.

GitLab 2023 Global DevSecOps Report Series

  • 83% of those surveyed said that implementing AI in their software development processes is essential to avoid falling behind; however, 79% noted they are concerned about AI tools having access to private information and IP.
  • 40% of all respondents cited security as a key benefit of AI, but 40% of security professionals surveyed were concerned that AI-powered code generation will increase their workload.
  • 90% of participants reported using AI in software development or plan to, while 81% said they need more training to successfully use AI in their work.
  • Only 25% of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers’ day-to-day work.
  • 95% of senior technology executives said they prioritize privacy and protection of intellectual property when selecting an AI tool.
  • Only 7% of developers’ time is spent identifying and mitigating security vulnerabilities and 11% is spent on testing code.
  • Developers (48%) were significantly more likely than security professionals (38%) to identify faster cycle times as a benefit of AI.
  • Despite 75% of respondents saying their organization provides training and resources for using AI, a roughly equal proportion also said they are finding resources on their own, suggesting that the available resources and training may be insufficient.
  • 65% who use, or are planning to use, AI for software development said their organization hired or will hire new talent to manage AI implementation.

Building Security In Maturity Model (BSIMM) 14 Report

  • Automated, event-driven security testing increased by 200% over the last two years.
  • Organizations are embracing modern toolchain technology that allows security testing in the QA stage to be automated – leading to a 10% growth in several related security activities.
  • Automation has led to a 68% growth in mandatory code review in the last five years.
  • Organizations are increasingly building Software Bills of Materials (SBOMs), with a 22% increase in SBOM creation from last year.
  • Identifying and controlling open source risk increased by just under 10% from last year.
  • Security testing and training are considerable weak points for several verticals, like Financial, Healthcare, and Insurance.

IDC Business Value of AI Survey

  • 92% of AI deployments are taking 12 months or less. 
  • 40% of organizations had implementation times of less than 6 months.
  • Organizations are realizing a return on their AI investments within 14 months. 
  • For every $1 a company invests in AI, it is realizing an average of $3.5 in return. 
  • 52% report that a lack of skilled workers is their biggest barrier to implementing and scaling AI.

10 Key Trends and Lessons

Navigating the complex terrain of cybersecurity requires constant vigilance and adaptation. As 2023 unfolded, we witnessed both sophisticated attacks and the growing potential of AI in strengthening our defenses. Here are 10 key takeaways to consider as you advance your AI and security efforts.

1. Beyond AI Hype, Towards Operationalization: While the potential of AI has been widely discussed, 2023 saw a shift towards concrete implementation. Organizations are moving beyond proof-of-concept pilots and integrating AI into core software development processes like code generation, automated testing, and real-time anomaly detection. Organizations should focus on tasks where AI excels, while ensuring human oversight for critical decisions.

2. Innersource as a Collaborative Shield: Beyond open-source, innersource is gaining traction. By sharing code internally with a wider developer community, organizations can leverage collective expertise for efficiency gains, vulnerability detection, and security improvements. This collaborative approach strengthens internal security postures while benefiting from the open-source spirit.

3. AI Threat to IP and Privacy: The rise of generative AI projects brings excitement and concerns. While automation and code generation offer benefits, AI posing a threat to intellectual property and privacy in software security is a valid concern. Addressing these concerns through transparency, responsible development practices, and robust access controls is crucial to ensure ethical and secure AI adoption.

4. Security Testing Shifts Left and Right: Traditionally, security testing happened late in the development cycle. However, the industry is witnessing a shift left, with security considerations integrated into earlier stages of development. A shift right, with continuous monitoring and threat detection throughout the software lifecycle, is also gaining momentum. This holistic approach ensures comprehensive security throughout the software development and deployment journey.
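A concrete example of shifting left is a pre-commit check that scans source text for likely hardcoded credentials before they ever reach the repository. The patterns below are a small illustrative subset, not an exhaustive secret-detection ruleset.

```python
# Minimal sketch: a shift-left check that scans source text for likely
# hardcoded credentials. Patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def find_secrets(text: str):
    """Return the lines that match any secret-like pattern."""
    return [
        line for line in text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

sample = 'db_user = "app"\npassword = "hunter2"\n'
print(find_secrets(sample))  # ['password = "hunter2"']
```

Wired into a pre-commit hook or CI step, a check like this blocks the commit (exit non-zero) when any line matches, catching leaks minutes after they are written instead of months later in an audit.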

5. Government Regulations and the Rise of SBOMs: Proactively prepare for emerging government regulations around software composition, including mandatory Software Bills of Materials (SBOMs). SBOMs provide transparency into the components used in software, aiding vulnerability management and compliance. Understanding these regulations and adopting efficient SBOM practices is vital for responsible software development.
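To make the SBOM idea concrete, here is a sketch that reads a CycloneDX-style SBOM (a common JSON format for SBOMs) and lists the components it declares. The SBOM content is an invented stub; real SBOMs carry far more fields (licenses, hashes, package URLs) that feed vulnerability matching.

```python
# Minimal sketch: list the components declared in a CycloneDX-style SBOM.
# The SBOM below is an illustrative stub, not a real bill of materials.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.12"},
    {"type": "library", "name": "zlib", "version": "1.3"}
  ]
}
"""

def list_components(sbom_text: str):
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

print(list_components(sbom_json))  # [('openssl', '3.0.12'), ('zlib', '1.3')]
```

With the component list in hand, each (name, version) pair can be checked against vulnerability databases, which is precisely the transparency regulators are pushing SBOMs to provide.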

6. Risk Management of the Software Supply Chain: Open source vulnerabilities remain a challenge, as highlighted in Sonatype’s report. Collaborative management of the software supply chain, involving vendors, developers, and security professionals, is key to mitigating risks. Sharing threat intelligence, promoting secure coding practices, and adopting rigorous patch management are essential for a collective defense against vulnerabilities. 

7. AI Transforming Software Practices: While AI’s revolution in software is undeniable, it’s not a replacement for human expertise. AI excels at automating tasks and processing vast datasets, but human judgment and strategic decision-making remain vital. The future lies in collaboration between humans and AI, leveraging each other’s strengths for comprehensive software solutions.

8. Talent Gap and the Quest for AI Expertise: The lack of skilled AI personnel, as highlighted by IDC’s survey, presents a significant hurdle. Investing in training, attracting talent with AI expertise in software security, and fostering a culture of continuous learning are critical to bridge this gap and leverage AI’s full potential.

9. Building a Culture of Security and Trust: As revealed in Google Cloud’s report, fostering a collaborative and inclusive work environment promotes ownership and responsibility for security. Encourage open communication, incident reporting, and proactive threat identification within your organization. By building a culture of security, organizations can empower individuals to contribute to a more secure software development process.

10. User Privacy and Responsible AI: Privacy concerns, as exemplified by the 23andMe breach, demand responsible AI development and deployment. Implementing ethical AI practices, ensuring transparency and user control over personal data usage, and adhering to data privacy regulatory guidelines are crucial for building customer trust and ensuring sustainable AI-powered solutions.

What’s Next?

Looking back, 2023 offered many insightful findings on the dynamic cybersecurity landscape and the infusion of AI into software practices. As AI plays an increasingly prominent role, staying informed, prioritizing vulnerability management, fostering a culture of security awareness, training for critical skills, and embracing responsible AI development are key to navigating the digital future with confidence. As we press on in 2024, keep in mind that proactive cybersecurity efforts, continuous learning, and collaboration are the cornerstones of building resilient software and protecting your organization from potential threats.

How Coveros Can Help

Looking to make application security or software supply chain security a priority? We’d love to chat about your current challenges, opportunities, and how our experts can help.
