Modern software is collaborative, fast, and global. It’s assembled from open source libraries, internal modules, vendor packages, and transitive dependencies. Like any physical supply chain, it carries inherent risk. But unlike physical supply chains, most organizations don’t inspect the source.
They trust it instead, and that trust is increasingly misplaced: in upstream packages, unknown contributors, and abandoned repos. The good news? You don’t need to control every line of code to manage risk. You just need to see clearly.
This is a modern framework for understanding sociotechnical risk: the combined human and technical factors that impact your software supply chain. It’s about context, not just exposure. It’s about trust you can verify.
Redefining CVEs: From Exposure to Context
Most security programs still focus on published vulnerabilities (CVEs), which are lagging indicators. Sophisticated attackers target overlooked areas: poorly governed packages, unmonitored contributor changes, and default configurations.
CVE scanning is necessary, but it doesn’t prevent a software supply chain attack or help you mitigate a zero-day. Modern attackers don’t wait around for CVE disclosures. They look for behavioral weaknesses, package manager quirks, or misconfigurations that no scanner will catch. Organizations need to shift from simply tracking exposures to understanding their context. So let’s flip the model and think like an attacker by considering context:
- Which risks are actively being exploited? Scan and correlate the Exploit Prediction Scoring System (EPSS) and CISA’s Known Exploited Vulnerabilities (KEV) catalog to prioritize what matters (a correlation sketch follows this list).
- Which code paths matter most to the business? Watch for signs of backdoors, unexpected install scripts, telemetry leaks, or unusual file access patterns.
- Which parts of the software supply chain lack transparency or accountability? Assume one of your transitive dependencies will be compromised, and ask yourself, “What does my detection surface look like?”
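As a concrete starting point, here is a minimal Python sketch that correlates a scanner’s CVE findings with EPSS scores and the CISA KEV catalog. The endpoint URLs, JSON field names (epss, cveID, vulnerabilities), and the example CVE IDs are assumptions based on the public EPSS API and KEV feed; verify them against the current documentation before relying on the output.

```python
# correlate_cves.py - rank scanner findings by real-world exploitability.
import requests

EPSS_API = "https://api.first.org/data/v1/epss"
KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")


def epss_scores(cves):
    """Fetch exploit-probability scores for a batch of CVE IDs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cves)}, timeout=30)
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}


def kev_cves():
    """Fetch the set of CVE IDs CISA has confirmed as exploited in the wild."""
    resp = requests.get(KEV_FEED, timeout=30)
    resp.raise_for_status()
    return {v["cveID"] for v in resp.json()["vulnerabilities"]}


def prioritize(findings):
    """Sort findings: confirmed exploitation first, then by exploit probability."""
    scores, kev = epss_scores(findings), kev_cves()
    ranked = [(cve, scores.get(cve, 0.0), cve in kev) for cve in findings]
    return sorted(ranked, key=lambda r: (r[2], r[1]), reverse=True)


if __name__ == "__main__":
    # Hypothetical scanner output; substitute the CVE list from your SCA tool.
    for cve, score, in_kev in prioritize(["CVE-2021-44228", "CVE-2020-0601"]):
        print(f"{cve}  EPSS={score:.3f}  KEV={in_kev}")
```

Anything flagged as being in the KEV catalog deserves attention first; a high EPSS score tells you where exploitation is most likely next.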
Maintaining an attacker’s mindset, constantly red teaming your codebase, and automating context are how you move from reactive to resilient.
Know Who’s Behind the Code
The most critical risks aren’t always in the code itself; they’re in who writes it, how it’s reviewed, and when things quietly change. Modern software supply chain security requires more than static analysis or periodic audits. It requires continuous red teaming of your codebase: ongoing, proactive scrutiny of the people, behaviors, and changes behind the systems your business runs on. That, in turn, requires contextual threat intelligence that gives you visibility into:
- Contributor identity: Is the author of the code known, verified, and accountable? Unknown or unaudited contributors introduce risk by default, whether inside or outside the company. In the Codecov breach, for example, no software vulnerability was exploited. A trusted integration was modified silently, affecting thousands of companies. (A commit-level sketch follows this list.)
- Behavioral drift: Is the contributor acting within expected norms, or has something shifted? A change in commit patterns, skipped reviews, or unexpected access timing could signal compromise or malicious intent. The XZ Utils incident started not with malware but with behavioral red flags: gradual consolidation of power, reduced visibility, and subtle obfuscation.
- Review hygiene: Is peer review consistently enforced? When a single contributor can bypass review and push to a production branch, you lose one of your last systemic guardrails. Continuous red teaming includes enforcing and monitoring review flows for deviations.
- Ownership changes: Who controls your dependencies, and has that changed recently? When a popular npm package changed hands in the event-stream attack, the new maintainer inserted a targeted backdoor. No alert, no governance. This is a solvable problem if you’re watching (see the maintainer-watch sketch after this list).
- Account security: Are contributor credentials properly protected? No amount of good intent matters if access can be impersonated. Continuous monitoring should flag contributors who lack MFA or use risky credentials before attackers do.
- Affiliations and influence: Who are you relying on, and what’s their risk context? Some packages powering core infrastructure are maintained by a single entity, sometimes in high-risk jurisdictions. EasyJSON is one such example. It’s not about blame; it’s about visibility.
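To make this concrete, here is a minimal sketch of commit-level contributor intelligence against the GitHub REST API. It flags unverified (unsigned) commits, first-time authors, and odd-hour pushes as naive stand-ins for real baselining, and lists org members without MFA. The endpoint paths, response field names, the 2fa_disabled filter, and the GITHUB_TOKEN environment variable are assumptions based on GitHub’s public documentation; adapt them to your platform.

```python
# contributor_signals.py - naive contributor-identity and drift checks.
import os
from collections import Counter
from datetime import datetime, timezone

import requests

API = "https://api.github.com"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}  # assumed token


def recent_commits(owner, repo, pages=3):
    """Pull the latest few hundred commits for a repository."""
    commits = []
    for page in range(1, pages + 1):
        r = requests.get(f"{API}/repos/{owner}/{repo}/commits", headers=HEADERS,
                         params={"per_page": 100, "page": page}, timeout=30)
        r.raise_for_status()
        commits.extend(r.json())
    return commits


def drift_signals(commits, newest=30):
    """Flag unverified commits, first-time authors, and odd-hour pushes."""
    findings = []
    history = Counter(c["commit"]["author"]["email"] for c in commits)
    for c in commits[:newest]:
        meta = c["commit"]
        email = meta["author"]["email"]
        when = datetime.fromisoformat(meta["author"]["date"].replace("Z", "+00:00"))
        if not meta.get("verification", {}).get("verified", False):
            findings.append(f"{c['sha'][:10]} unverified (unsigned) commit by {email}")
        if history[email] <= 1:
            findings.append(f"{c['sha'][:10]} first-time author {email}")
        if when.astimezone(timezone.utc).hour in range(1, 5):  # arbitrary "odd hours"
            findings.append(f"{c['sha'][:10]} pushed at an unusual hour ({when})")
    return findings


def members_without_2fa(org):
    """List org members with MFA disabled (visible to org owners only)."""
    r = requests.get(f"{API}/orgs/{org}/members", headers=HEADERS,
                     params={"filter": "2fa_disabled", "per_page": 100}, timeout=30)
    r.raise_for_status()
    return [m["login"] for m in r.json()]


if __name__ == "__main__":
    for finding in drift_signals(recent_commits("example-org", "example-repo")):
        print(finding)
```

None of these signals proves compromise on its own; the point is to surface deviations early enough for a human to look.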
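Ownership changes can be watched the same way. The sketch below diffs each dependency’s current npm maintainer list against a baseline file you keep in your repo; the registry URL, the maintainers field, and the maintainer-baseline.json path are assumptions for illustration.

```python
# maintainer_watch.py - alert when an npm dependency's maintainers change.
import json
import pathlib

import requests

BASELINE = pathlib.Path("maintainer-baseline.json")  # hypothetical file kept in-repo


def current_maintainers(package):
    """Fetch the package's maintainer usernames from the public npm registry."""
    r = requests.get(f"https://registry.npmjs.org/{package}", timeout=30)
    r.raise_for_status()
    return {m["name"] for m in r.json().get("maintainers", [])}


def check(packages):
    """Diff today's maintainers against the committed baseline, then update it."""
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    for pkg in packages:
        now = current_maintainers(pkg)
        before = set(baseline.get(pkg, []))
        added, removed = now - before, before - now
        if before and (added or removed):
            print(f"{pkg}: maintainers changed (+{sorted(added)} -{sorted(removed)})")
        baseline[pkg] = sorted(now)
    BASELINE.write_text(json.dumps(baseline, indent=2))


if __name__ == "__main__":
    check(["event-stream", "lodash"])  # replace with your direct dependencies
```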
Traditional red teaming focuses on penetration testing and network compromise. But many of today’s most effective attackers insert risk upstream, in your code pipeline, long before deployment. A continuous red teaming model for your software supply chain is essential. It’s not a replacement for trust. It makes trust sustainable at scale by treating third-party code and the people who wrote it as equal parts of your organization.
Scorecard Hygiene and the Health of a Project
Once you have confidence in the people, or at least know who they are, it’s time to look at the project’s habits. This is where the Open Source Security Foundation (OpenSSF) Scorecard and similar repo-hygiene tools come in handy.
You don’t need perfection. You need signals. These are the signals that actually help:
- Signed commits: Can you trace authorship and trust that the code wasn’t tampered with? Signed commits verify the identity of whoever made a change and let you detect whether the code was altered in transit. Without them, anyone can spoof a contributor and push malicious code under a trusted name. If you’re not requiring signed commits, you’re flying blind on authorship and audit integrity. That’s a liability.
- Branch protection rules: Is “force push to main” a thing? It shouldn’t be. Branch protection rules are guardrails on your source code repositories (e.g., GitHub, GitLab) that prevent mistakes or malicious changes to your most critical branches, usually main or production. Think of them as a firewall for your source code: they control who can change what, how, and when. The best part is that branch protection is easier to configure than any actual firewall you’ve ever had to work with. If you allow force pushing to main, you’re saying: “Anyone with access can rewrite production history. Quietly. Permanently.” That’s not just risky; in security-conscious orgs it can be considered operationally and legally negligent. (A configuration sketch follows this list.)
- CI/CD pipelines and fuzzing: Does the project proactively catch bugs? Modern development isn’t just about building fast; it’s about catching risks early and automatically. CI/CD (continuous integration and continuous delivery) pipelines should enforce testing, linting, vulnerability scanning, and, ideally, fuzzing. If you’re not automating checks, bugs and security flaws slip through by default. Proactive is cheaper than reactive and pays dividends over time. (A minimal fuzz harness follows this list.)
- Pinned dependencies: Is the project locking dependency versions, or could it auto-pull a compromised sub-dependency at any moment? Loose versioning (^, ~, or no version at all) is a silent killer. It allows projects to unknowingly pull in malicious or broken packages at any time, on any day. Pinning versions means locking down what you trust, ensuring reproducibility, and giving your team time to respond to upstream risks. If it’s not pinned, it’s not predictable, and it’s a less secure source for your builds. (A version-pinning check follows this list.)
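For branch protection specifically, the settings are one API call away. The sketch below enables a baseline policy on main (required reviews, required status checks, no force pushes, no deletions) via GitHub’s REST branch-protection endpoint; the payload fields and the ci/tests status-check name are assumptions drawn from the public API docs, so confirm them for your platform and plan before use.

```python
# protect_main.py - apply a baseline branch protection policy to main.
import os

import requests

HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}  # assumed token


def protect_main(owner, repo):
    """Require reviews and status checks on main; forbid force pushes and deletion."""
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/main/protection"
    policy = {
        "required_status_checks": {"strict": True,
                                   "contexts": ["ci/tests"]},  # hypothetical check name
        "enforce_admins": True,                   # admins follow the same rules
        "required_pull_request_reviews": {        # review hygiene, enforced in the tool
            "required_approving_review_count": 2,
            "dismiss_stale_reviews": True,
        },
        "restrictions": None,                     # no push allow-list
        "allow_force_pushes": False,              # nobody quietly rewrites history
        "allow_deletions": False,
    }
    r = requests.put(url, json=policy, headers=HEADERS, timeout=30)
    r.raise_for_status()


if __name__ == "__main__":
    protect_main("example-org", "example-repo")  # hypothetical repository
```

Note the enforce_admins flag: a policy with admin exceptions is exactly the gap an attacker with a stolen admin credential will use.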
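Fuzzing doesn’t have to be a research project either. Here is a minimal coverage-guided harness using Google’s Atheris as one possible tool; the parse_config target is a hypothetical stand-in, so point the harness at whatever code in your project handles untrusted input and wire it into CI on whatever cadence you can afford.

```python
# fuzz_config.py - minimal coverage-guided fuzz harness (pip install atheris).
import sys

import atheris

with atheris.instrument_imports():
    import json  # stand-in; import the module in your project that parses untrusted input


def parse_config(raw):
    """Hypothetical function under test: parses untrusted JSON configuration."""
    return json.loads(raw.decode("utf-8", errors="replace"))


def TestOneInput(data):
    try:
        parse_config(data)
    except ValueError:
        pass  # malformed input is expected; crashes and hangs are the real findings


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```

Run the script directly and Atheris drives generated inputs at the target until it finds a crash or you stop it.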
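Pinning is also easy to check mechanically. The sketch below fails a build when package.json declares floating ranges; the exact-version rule is deliberately strict and illustrative, and in practice a committed lockfile plus npm ci (or your ecosystem’s equivalent) is the primary control, with a check like this as a backstop.

```python
# check_pins.py - fail the build when package.json declares floating ranges.
import json
import re
import sys

EXACT = re.compile(r"^\d+\.\d+\.\d+$")  # anything else is treated as floating


def floating_deps(manifest_path="package.json"):
    """Return dependency specs that are not pinned to an exact version."""
    with open(manifest_path) as fh:
        manifest = json.load(fh)
    offenders = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if not EXACT.match(spec):
                offenders.append(f"{section}: {name}@{spec}")
    return offenders


if __name__ == "__main__":
    bad = floating_deps()
    for line in bad:
        print("floating version:", line)
    sys.exit(1 if bad else 0)
```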
If a project isn’t doing any of this, it might still be usable, but only if you have stronger compensating controls downstream. That means shifting from blind trust to active verification: analyzing the code, who wrote it, how it’s changing, and where it might be quietly compromised. With contributor intelligence, runtime integrity checks, and deep software composition analysis (SCA) visibility, you gain context on the actual risk in your software. Hygiene doesn’t guarantee security, but bad hygiene is almost always a red flag for organizations and a green light for attackers.
Conclusion: Trust in Open Source, But Build with Eyes Wide Open
Open source is a miracle-worker. It’s the reason our industry moves fast, builds things people love, and solves impossible problems with small teams.
Security doesn’t mean distrusting open source. It means honoring its true importance in the ecosystem enough to protect it while protecting yourself and other community members.
It doesn’t matter whether you’re shifting left or right; attackers will hide in the gap and pick the timing that suits them. Those debates are the ways of the past. Let’s look instead at who’s writing the code, how it’s maintained, and whether anyone’s trying to break it.
The open source ecosystem is strong, but only as secure as our collective attention. The data is out there to secure what matters most. So let’s put it to work.