A recent analysis by cloud security firm Wiz has revealed that a significant number of the world’s leading AI companies have inadvertently exposed sensitive secrets on GitHub, potentially compromising training data, proprietary models, and organizational information.
The study focused on companies listed in the Forbes AI 50, highlighting the growing cybersecurity risks in the rapidly expanding artificial intelligence sector. Wiz found that 65% of the listed companies with a GitHub presence had leaked verified secrets; together, the affected organizations are valued at over $400 billion.
How the Leaks Occurred
While many leaks are caught by GitHub's built-in scanners or internal company audits, Wiz adopted a more in-depth approach. Its analysis included:
- Full commit history scans
- Forked and deleted repository histories
- Workflow logs and gists
- Contributions from individual organization members
This comprehensive method uncovered less common secret types often overlooked by conventional scanning tools, including API keys, tokens, and credentials for platforms like Google API, Weights & Biases, Flickr, Infura, ElevenLabs, and Hugging Face.
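The key difference from a surface-level scan is searching the full commit history rather than only the current files: a credential deleted in a later commit remains recoverable from earlier ones. A minimal sketch of that idea, with a few illustrative regex rules (the patterns and helper names here are examples, not Wiz's actual tooling, which uses far larger rule sets and verifies matches against the issuing service):

```python
import re
import subprocess

# Illustrative patterns for a few well-known token formats.
SECRET_PATTERNS = {
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched string) pairs found in the text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

def scan_commit_history(repo_path: str) -> list[tuple[str, str]]:
    """Scan every commit on every ref, not just the current tree."""
    # `git log -p --all` emits the full diff of every reachable commit,
    # surfacing secrets that were later deleted from HEAD.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    return scan_text(log)
```

Production scanners such as gitleaks or truffleHog follow the same shape but also cover forks, gists, and CI workflow logs, as Wiz's methodology did.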
Some of the leaked secrets had the potential to reveal private AI models, sensitive training data, and internal organizational structures, making the disclosures particularly concerning for companies operating in highly competitive AI markets.
Industry Response
Affected firms were notified of the leaks. Companies such as ElevenLabs and LangChain were commended for responding quickly. However, Wiz noted that nearly half of its disclosures either failed to reach the affected company or went unanswered, reflecting a gap in formal vulnerability disclosure channels.
The study also highlighted contrasting approaches to secret management. For instance, a company with only a dozen members and no public repositories still leaked secrets, whereas another firm with 60 public repositories and 28 members had no exposure, demonstrating the effectiveness of structured secrets management practices.
Recommendations for AI Companies
Wiz has outlined key strategies to prevent future secret leaks, which are relevant across industries:
- Enable public version control secret scanning to catch potential leaks early.
- Establish formal disclosure channels for third parties to report vulnerabilities.
- Prioritize detection for proprietary secret types unique to the organization’s technology stack.
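The last point is the one off-the-shelf scanners miss: they ship rules for common providers, but an organization's own token format must be added by hand. One lightweight way to do that is a pre-commit hook that rejects staged changes containing the proprietary format. A minimal sketch, assuming a hypothetical key prefix `acme_sk_` (the prefix and helper names are illustrative, not from the Wiz report):

```python
import re
import subprocess
import sys

# Hypothetical organization-specific token format; replace with your own.
PROPRIETARY_PATTERN = re.compile(r"acme_sk_[A-Za-z0-9]{32}")

def find_secrets(diff: str) -> list[str]:
    """Return all proprietary-format tokens found in a diff string."""
    return PROPRIETARY_PATTERN.findall(diff)

def staged_diff() -> str:
    """The staged changes a pre-commit hook inspects before each commit."""
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    leaks = find_secrets(staged_diff())
    if leaks:
        print(f"Blocked commit: {len(leaks)} proprietary secret(s) detected")
        sys.exit(1)  # non-zero exit aborts the commit
```

Installed as `.git/hooks/pre-commit`, this catches leaks before they ever reach a public repository, complementing server-side scanning rather than replacing it.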
As AI adoption grows globally, the Wiz report underscores the urgent need for companies to strengthen cybersecurity practices, safeguard intellectual property, and ensure sensitive data remains protected from unauthorized access.