Vulnerabilities in the open source software (OSS) supply chain loom large in the cybersecurity landscape, and threats and attacks such as SolarWinds, 3CX, Log4Shell, and now XZ Utils show how catastrophic these breaches can be. Capterra research reveals that from April 2022 to April 2023, software supply chain attacks affected nearly two-thirds (61%) of all U.S. businesses.
We expect attacks on the open source software supply chain to accelerate as attackers automate attacks on common open source software projects and package managers. Many CISOs and DevSecOps teams are not prepared to implement controls in their existing build systems to mitigate these threats. In 2024, DevSecOps teams will move away from a shift-left security model and opt for a “shift down” that uses AI to automate security from developer workflows.
Here, we discuss the factors driving the rise in software supply chain attacks and the role of AI in helping developers work more efficiently while writing more secure code.
Attacks on the open source software supply chain are accelerating
Open source libraries and languages are the basis for more than 90% of the world's software. A U.S. survey of nearly 300 IT and IT security professionals found that 94% of companies use open source software and 57% have adopted multiple open source platforms. Half of respondents say their threat level is "high" or "extreme," and a further 41% consider their threat level "moderate." At the time of this writing, details of the backdoor embedded in the XZ library and several other OSS packages have just been made public. The prevalence of open source around the world is one of the key factors driving the rise in supply chain attacks.
Data governance and data supply chain become key issues
Security professionals must also consider how security vulnerabilities propagate through the data supply chain. Organizations typically integrate externally developed software through their software supply chain, but the data supply chain often lacks comparably clear mechanisms for understanding and contextualizing data. In contrast to the structured systems and functionality of software, data is often unstructured or semi-structured and subject to different regulatory standards.
Many companies are building AI or ML systems on huge pools of data drawn from disparate sources. ML models published in model zoos often come with minimal insight into the code and content used to create them. Software engineers must treat these models and data as carefully as they would the code that goes into the software they are creating, paying close attention to their origins.
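One concrete way to pay attention to an artifact's origin is to pin and verify a cryptographic digest for every model file pulled from an external source, just as build systems do for package dependencies. The sketch below is a minimal illustration of that idea; the file name and digest are stand-ins, and a production setup would typically rely on signed manifests or a tool such as Sigstore rather than hand-maintained hashes.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against a pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Stand-in for a model file fetched from a model zoo; the digest is
# pinned at the time the model is vetted and reviewed.
artifact = Path("model.bin")
artifact.write_bytes(b"example model weights")
PINNED = hashlib.sha256(b"example model weights").hexdigest()

assert verify_artifact(artifact, PINNED)       # untampered artifact passes
artifact.write_bytes(b"tampered weights")      # simulate a supply chain swap
assert not verify_artifact(artifact, PINNED)   # mismatch is rejected
```

Refusing to load any artifact whose digest no longer matches the vetted value turns "pay attention to origins" into an automatic, enforceable check in the build pipeline.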
DevSecOps teams must assess their data usage responsibilities, especially when building LLMs to train AI tools. This requires careful data management around the model to prevent sensitive data from being accidentally sent to third parties such as OpenAI.
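One lightweight guardrail against accidental leakage is to scan and mask sensitive tokens before any text leaves the organization for a third-party API. The patterns below are purely illustrative assumptions; a real deployment would use a vetted PII and secrets scanner rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only; real scanners cover far more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive-looking tokens before text is sent to an external service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact alice@example.com, key sk-abcdefgh1234567890"
print(redact(prompt))
```

Running such a filter at the egress point, rather than trusting every developer workflow to remember it, is an example of pushing a security decision down into an automated layer of the stack.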
Organizations should adopt strict policies outlining the approved uses of AI-generated code. Additionally, when incorporating third-party platforms for AI, a thorough due diligence assessment must be conducted to ensure that the data is not used to train or fine-tune AI/ML models.
AI security automation helps organizations move from “shifting left” to “shifting down”
Ten years ago, the industry adopted the concept of shifting left to address security flaws early in the software development lifecycle and enhance developer workflows. Defenders have long been at a disadvantage, and AI has the potential to level the playing field. As DevSecOps teams navigate the complexities of data governance, they must also assess how the evolving shift-left paradigm affects their organization's security posture.
Companies will move beyond shifting left, deploying AI to fully automate security processes and removing them from developer workflows. This is called "shifting down" because it pushes security into automated, low-level functions within the technology stack rather than putting the burden of complex and difficult decisions on developers.
GitLab's Global DevSecOps Report: The State of AI in Software Development found that developers spend only 25% of their time generating code. AI can improve output by optimizing the remaining 75% of their workload. This is one way to leverage AI to solve specific technical problems and improve efficiency and productivity throughout the software development lifecycle.
We expect 2024 to be remembered as a year in which threats to the OSS ecosystem increased, negatively impacting the global software supply chain and driving significant changes in cybersecurity strategies, including greater reliance on AI to protect digital infrastructure. The cybersecurity landscape is already changing, with a focus on reducing supply chain vulnerabilities, strengthening data governance, and incorporating AI into security efforts. This transformation should move DevSecOps teams toward a software development process that puts both efficiency and security at the forefront.