#gitguardian
Text
The state of secrets security: 7 action items for better managing risk

The exposure of development secrets is a growing epidemic, driven by software supply chain complexity. Over the past four years, the number of exposed secrets has quadrupled.
Read more: https://jpmellojr.blogspot.com/2024/04/the-state-of-secrets-security-7-action.html
0 notes
Text
Python's PyPI Reveals Its Secrets

Source: https://thehackernews.com/2024/04/gitguardian-report-pypi-secrets.html
Report: https://www.gitguardian.com/files/the-state-of-secrets-sprawl-report-2024
2 notes
Link
Artificial intelligence is driving a massive shift in enterprise productivity, from GitHub Copilot's code completions to chatbots that mine internal knowledge bases for instant answers. Each new agent must authenticate to other services, quietly swelling the population of non-human identities (NHIs) across corporate clouds. That population is already overwhelming the enterprise: many companies now juggle at least 45 machine identities for every human user. Service accounts, CI/CD bots, containers, and AI agents all need secrets, most commonly in the form of API keys, tokens, or certificates, to connect securely to other systems and do their work.
GitGuardian's State of Secrets Sprawl 2025 report reveals the cost of this sprawl: over 23.7 million secrets surfaced on public GitHub in 2024 alone. And instead of making the situation better, repositories with Copilot enabled leaked secrets 40 percent more often.
NHIs Are Not People
Unlike human beings logging into systems, NHIs rarely have policies that mandate credential rotation, tightly scoped permissions, or the decommissioning of unused accounts. Left unmanaged, they weave a dense, opaque web of high-risk connections that attackers can exploit long after anyone remembers those secrets exist.
The adoption of AI, especially large language models and retrieval-augmented generation (RAG), has dramatically increased the speed and volume at which this risk-inducing sprawl can occur.
Consider an internal support chatbot powered by an LLM. When asked how to connect to a development environment, the bot might retrieve a Confluence page containing valid credentials. The chatbot can unwittingly expose secrets to anyone who asks the right question, and the logs can easily leak this info to whoever has access to them. Worse yet, in this scenario, the LLM is telling your developers to use this plaintext credential. The security issues can stack up quickly.
The situation is not hopeless, though. In fact, if proper governance models are implemented around NHIs and secrets management, developers can actually innovate and deploy faster.
Five Actionable Controls to Reduce AI-Related NHI Risk
Organizations looking to control the risks of AI-driven NHIs should focus on these five actionable practices:
Audit and Clean Up Data Sources
Centralize Your Existing NHI Management
Prevent Secrets Leaks in LLM Deployments
Improve Logging Security
Restrict AI Data Access
Let's take a closer look at each of these areas.
Audit and Clean Up Data Sources
The first LLMs were bound to the specific data sets they were trained on, making them novelties with limited capabilities. Retrieval-augmented generation (RAG) changed this by allowing LLMs to access additional data sources as needed. Unfortunately, if secrets are present in those sources, the related identities are now at risk of being abused.
Data sources such as the project management platform Jira, communication platforms like Slack, and knowledge bases such as Confluence weren't built with AI or secrets in mind. If someone adds a plaintext API key, there are no safeguards to alert them that this is dangerous. A chatbot can easily become a secrets-leaking engine with the right prompting.
The only surefire way to prevent your LLM from leaking those internal secrets is to eliminate the secrets present, or at least revoke any access they carry. An invalid credential carries no immediate risk from an attacker. Ideally, you can remove these instances of any secret altogether before your AI can ever retrieve it. Fortunately, there are tools and platforms, like GitGuardian, that can make this process as painless as possible.
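For example, here is a minimal sketch of auditing an exported knowledge base with GitGuardian's ggshield CLI before the documents are indexed for retrieval. The export directory is a hypothetical local dump, and the script assumes a GitGuardian API key is already available in the environment:

```bash
#!/usr/bin/env bash
# Audit a knowledge-base export for hardcoded credentials before RAG ingestion.
# Assumes GITGUARDIAN_API_KEY is set in the environment (never hardcode it).
set -euo pipefail

EXPORT_DIR=./confluence-export   # hypothetical local dump of wiki pages

# Recursively scan every exported page for secrets; ggshield exits non-zero on hits.
if ! ggshield secret scan path --recursive "$EXPORT_DIR"; then
  echo "Secrets found in the export; remove them at the source and revoke them." >&2
  exit 1
fi

echo "Export is clean; safe to index for retrieval."
```

Any hits should be fixed in the source system itself, and the leaked credentials revoked or rotated, so that future exports stay clean too.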
Centralize Your Existing NHI Management
The quote "If you can not measure it, you can not improve it" is most often attributed to Lord Kelvin, and it holds very true for non-human identity governance. Without taking stock of all the service accounts, bots, agents, and pipelines you currently have, there is little hope of applying effective rules and scopes to the new NHIs associated with your agentic AI.
The one thing all of those types of non-human identities have in common is that each one has a secret. However you define NHI, the authentication mechanism is the same: the secret. When we focus our inventories through this lens, the problem collapses into the proper storage and management of secrets, which is far from a new concern. Plenty of tools can make this achievable, such as HashiCorp Vault, CyberArk, or AWS Secrets Manager. Once all secrets are centrally managed and accounted for, we can move from a world of long-lived credentials toward one where rotation is automated and enforced by policy.
Prevent Secrets Leaks in LLM Deployments
Model Context Protocol (MCP) servers are the new standard for how agentic AI accesses services and data sources. Previously, if you wanted to configure an AI system to access a resource, you needed to wire it together yourself, figuring it out as you went. MCP introduced a protocol through which AI can connect to a service provider over a standardized interface. This simplifies things and lessens the chance that a developer will hardcode a credential just to get an integration working.
In one of the more alarming papers GitGuardian's security researchers have released, they found that 5.2% of all the MCP servers they could find contained at least one hardcoded secret. That is notably higher than the 4.6% occurrence rate of exposed secrets observed across all public repositories.
Just like with any other technology you deploy, an ounce of safeguards early in the software development lifecycle can prevent a pound of incidents later on. Catching a hardcoded secret while it is still in a feature branch means it can never be merged and shipped to production. Adding secrets detection to the developer workflow via Git hooks or code editor extensions can mean plaintext credentials never even make it to the shared repos.
Improve Logging Security
LLMs are black boxes that take requests and give probabilistic answers. While we can't tune the underlying vectorization, we can tell them whether the output is as expected. AI engineers and machine learning teams log everything, from the initial prompt through the retrieved context to the generated response, in order to tune the system and improve their AI agents. If a secret is exposed in any one of those logged steps, you now have multiple copies of the same leaked secret, most likely in a third-party tool or platform. Most teams store logs in cloud buckets without tunable security controls.
The safest path is to add a sanitization step before the logs are stored or shipped to a third party. This does take some engineering effort to set up, but tools like GitGuardian's ggshield are here to help, with secrets scanning that can be invoked programmatically from any script. If the secret is scrubbed, the risk is greatly reduced.
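As a rough illustration, here is a minimal sketch of such a gate, using ggshield as the programmatic secrets check before logs leave your infrastructure. The log directory and bucket name are hypothetical, and the script assumes GITGUARDIAN_API_KEY is set:

```bash
#!/usr/bin/env bash
# Gate LLM interaction logs (prompts, retrieved context, responses) on a
# secrets scan before they are shipped to a third-party analytics bucket.
set -euo pipefail

LOG_DIR=./llm-logs               # hypothetical local log staging area

# Block the upload if any logged step contains a detectable secret.
if ! ggshield secret scan path --recursive "$LOG_DIR"; then
  echo "Secrets detected in logs; scrub or revoke them before shipping." >&2
  exit 1
fi

# Only clean logs leave the building.
aws s3 sync "$LOG_DIR" s3://example-llm-log-archive/
```

Blocking on detection is the simplest policy; a more sophisticated pipeline could redact the flagged spans instead of failing the whole shipping job.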
Restrict AI Data Access
Should your LLM have access to your CRM? This is a tricky question, and highly situational. If it is an internal sales tool locked down behind SSO that can quickly search notes to improve delivery, it might be OK. For a customer service chatbot on the front page of your website, the answer is a firm no.
Just as we follow the principle of least privilege when setting permissions, we must apply a similar principle of least access to any AI we deploy. The temptation to grant an AI agent full access to everything in the name of speeding things along is very great, as we don't want to box in our ability to innovate too early. But granting too little access defeats the purpose of RAG models, while granting too much invites abuse and a security incident.
Raise Developer Awareness
While not on the list we started from, all of this guidance is useless unless it reaches the right people. The folks on the front line need guidance and guardrails to help them work more efficiently and safely. While we wish there were a magic tech solution to offer here, the truth is that building and deploying AI safely at scale still requires humans getting on the same page with the right processes and policies.
If you are on the development side of the world, we encourage you to share this article with your security team and get their take on how to securely build AI in your organization. If you are a security professional reading this, we invite you to share it with your developers and DevOps teams to further the conversation: AI is here, and we need to be safe as we build it and build with it.
Securing Machine Identity Equals Safer AI Deployments
The next phase of AI adoption will belong to organizations that treat non-human identities with the same rigor and care as they do human users. Continuous monitoring, lifecycle management, and robust secrets governance must become standard operating procedure. By building a secure foundation now, enterprises can confidently scale their AI initiatives and unlock the full promise of intelligent automation, without sacrificing security.
0 notes
Text
What is DevSecOps? Integrating Security into the DevOps Pipeline

In today’s fast-paced digital landscape, delivering software quickly isn’t just a competitive advantage — it’s a necessity. Enter DevOps: the fusion of development and operations, aimed at streamlining software delivery through automation, collaboration, and continuous improvement. But as we build faster, we must also build safer. That’s where DevSecOps comes in.
What is DevSecOps?
DevSecOps stands for Development, Security, and Operations. It’s an evolution of the DevOps philosophy that embeds security practices directly into the DevOps pipeline — from planning to production. Instead of treating security as a final step or a separate process, DevSecOps makes it an integral part of the development lifecycle.
In short: DevSecOps = DevOps + Continuous Security.
Why DevSecOps Matters
Traditional security models often acted as bottlenecks, kicking in late in the software lifecycle, causing delays and costly rework. In contrast, DevSecOps:
Shifts security left — addressing vulnerabilities early in development.
Promotes automation of security checks (e.g., static code analysis, dependency scanning).
Encourages collaboration between developers, security teams, and operations.
The result? Secure, high-quality code delivered at speed.
Key Principles of DevSecOps
Security as Code: Just like infrastructure can be managed through code (IaC), security rules and policies can be codified, versioned, and automated (a minimal sketch follows this list).
Continuous Threat Modeling: Teams assess risk and architecture regularly, adapting to changes in application scope or external threats.
Automated Security Testing: Security tools are integrated into CI/CD pipelines to scan for vulnerabilities, misconfigurations, or compliance issues.
Culture of Shared Responsibility: Security isn’t just the InfoSec team’s job. Everyone in the pipeline — from devs to ops — has a role in maintaining secure systems.
Monitoring and Incident Response: Real-time logging, monitoring, and alerting help detect suspicious behavior before it becomes a breach.
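To make "security as code" concrete, here is a minimal sketch of a codified policy check: a script that lives in the repository, gets diffed and reviewed like any other code, and runs automatically in CI. The file name and the two rules are illustrative assumptions, not a standard:

```bash
#!/usr/bin/env bash
# security-policy.sh -- illustrative "security as code": the policy itself is
# version-controlled, so changes to it are reviewed like feature code.
set -euo pipefail

# Rule 1: no secret-bearing files (.env, private keys) may be tracked by git.
if git ls-files | grep -E '\.(env|pem|p12)$'; then
  echo "Policy violation: secret-bearing file is tracked in git." >&2
  exit 1
fi

# Rule 2: every Dockerfile must drop root by declaring a USER.
for f in $(git ls-files '*Dockerfile*'); do
  grep -q '^USER ' "$f" || { echo "Policy violation: $f never sets USER." >&2; exit 1; }
done

echo "All codified security policies passed."
```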
How to Integrate DevSecOps into Your Pipeline
Here’s a high-level roadmap to start embedding security into your DevOps process (a sketch of an automated security gate follows the list):
Plan Securely: Include security requirements and threat models during planning.
Develop Secure Code: Train developers in secure coding practices. Use linters and static analysis tools.
Build with Checks: Integrate SAST (Static Application Security Testing) and SCA (Software Composition Analysis) into your build process.
Test Continuously: Run DAST (Dynamic Application Security Testing), fuzzing, and penetration testing automatically.
Release with Confidence: Use automated security gates to ensure only secure builds go to production.
Monitor Proactively: Enable real-time monitoring, anomaly detection, and centralized logging.
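As one hedged sketch of what the automated gate can look like, the script below chains the scan types named above into a single pass/fail CI job. The tool choices (ggshield, Snyk, SonarQube's scanner, OWASP ZAP) are drawn from the tool list below, the staging URL is a placeholder, and each tool is assumed to be installed and authenticated (API keys, sonar-project.properties, and so on) separately:

```bash
#!/usr/bin/env bash
# ci-security-gate.sh -- run the pipeline's security checks as one gate.
# set -e makes any failing scan fail the whole job, blocking the release.
set -euo pipefail

# Secrets detection on the commits being built (needs GITGUARDIAN_API_KEY).
ggshield secret scan ci

# SCA: check dependencies for known-vulnerable open-source components.
snyk test

# SAST: static analysis of the source tree.
sonar-scanner

# DAST: passive baseline scan of the deployed staging environment
# (zap-baseline.py ships with the OWASP ZAP docker image).
zap-baseline.py -t https://staging.example.com

echo "All security gates passed; build may be promoted."
```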
Popular DevSecOps Tools
SAST: SonarQube, Checkmarx, Fortify
DAST: OWASP ZAP, Burp Suite
SCA: Snyk, WhiteSource, Black Duck
Secrets Detection: GitGuardian, TruffleHog
Container Security: Aqua Security, Prisma Cloud, Clair
Final Thoughts
DevSecOps is not just about tools — it’s a mindset shift. It breaks down silos between development, operations, and security teams, making security a shared, continuous responsibility. By baking security into every stage of your pipeline, you ensure your applications are not only fast and reliable — but also secure by design.
WEBSITE: https://www.ficusoft.in/devops-training-in-chennai/
0 notes
Text
GitGuardian launches multi-vault integration to combat secrets sprawl
http://securitytc.com/TGs0mq
0 notes
Text
Defending Your Commits From Known CVEs With GitGuardian SCA And Git Hooks
The Hacker News: All developers want to create secure and dependable software. They should feel proud to release their code with full confidence that they did not introduce any weaknesses or anti-patterns into their applications. Unfortunately, developers are mostly not writing their own code these days. 96% of all software contains some open-source components, and open-source components make… http://dlvr.it/T77sp7 Posted by: Mohit Kumar (Hacker)
0 notes
Quote
GitHub users accidentally exposed 12.8 million credentials and other sensitive secrets across more than 3 million public repositories during 2023, the majority of which remained valid even after five days. That is according to cybersecurity experts at GitGuardian, which sent 1.8 million free email alerts to the people who had exposed secrets; of those contacted, only 1.8% took prompt action to correct the error. The exposed secrets included account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and other sensitive data, which could give outside attackers unrestricted access to a wide range of private resources and services, potentially leading to data breaches and financial damage. A 2023 Sophos report highlighted that compromised credentials were the root cause of 50% of all attacks recorded in the first half of that year, followed by vulnerability exploitation as the attack method in 23% of cases. According to GitGuardian, secrets exposure on GitHub, the world's most popular code hosting and collaboration platform, has been trending in the wrong direction since 2020.
Over 12 million authentication secrets and keys were leaked on GitHub in 2023
1 note
Text
How to Secure Your IaC and Configuration Management Tools with GitGuardian’s Honeytoken - Security Boulevard
📣 StatesOne — https://news.google.com/rss/articles/CBMifWh0dHBzOi8vc2VjdXJpdHlib3VsZXZhcmQuY29tLzIwMjMvMDcvaG93LXRvLXNlY3VyZS15b3VyLWlhYy1hbmQtY29uZmlndXJhdGlvbi1tYW5hZ2VtZW50LXRvb2xzLXdpdGgtZ2l0Z3VhcmRpYW5zLWhvbmV5dG9rZW4v0gGBAWh0dHBzOi8vc2VjdXJpdHlib3VsZXZhcmQuY29tLzIwMjMvMDcvaG93LXRvLXNlY3VyZS15b3VyLWlhYy1hbmQtY29uZmlndXJhdGlvbi1tYW5hZ2VtZW50LXRvb2xzLXdpdGgtZ2l0Z3VhcmRpYW5zLWhvbmV5dG9rZW4vYW1wLw?oc=5&utm_source=dlvr.it&utm_medium=tumblr
0 notes
Text
Big Tech Vendors Object to US Gov SBOM Mandate
By Ryan Naraine on December 07, 2022. The U.S. government’s mandates around the creation and delivery of SBOMs (software bills of materials) to help mitigate supply chain attacks have run into strong objections from big-name technology vendors. A lobbying outfit representing big tech is calling on the federal government’s…
#amazon #amplify #big tech #Chainguard #coa #coa parser #containers #Git #gitguardian #github #Google #Intel #iti #javascript #malware #MFA #Microsoft #npm #package manager #rc #rc configuration loader #sbom #secrets #secrets sprawl #sequoia #slsa #supply chain
0 notes
Text
GitGuardian’s honeytokens in codebase to fish out DevOps intrusion
http://dlvr.it/SmJSPG t.ly/m_Jb
0 notes
Text
Insider threat: developers leaked 10 million credentials and passwords in 2022
The rate at which developers leaked critical software secrets, such as passwords and API keys, rose by half to reach 5.5 of every 1,000 commits to GitHub repositories. That is according to a report published this week by secrets management firm GitGuardian. Although the percentage seems small, overall the company detected at least 10 million…

0 notes
Photo
Code security platform @GitGuardian releases its annual ‘State of Secrets Sprawl Report’ 🦉 It measures the exposure of secrets within #GitHub, #Docker, and internal repositories, and how it evolves over time. Find out all this, & much more, in the ungated report: https://t.co/sGNXosEpRP https://t.co/Ms8QhrTLmP (via Twitter https://twitter.com/TheHackersNews/status/1531564228674433026)
0 notes
Text
GitGuardian Visual Studio Code extension helps developers protect their sensitive information
http://securitytc.com/TFK5WT
0 notes
Text
Python's PyPI Reveals Its Secrets
The Hacker News: GitGuardian is famous for its annual State of Secrets Sprawl report. In their 2023 report, they found over 10 million passwords, API keys, and other credentials exposed in public GitHub commits. The takeaways in their 2024 report highlighted not just 12.8 million new secrets exposed on GitHub, but also a number found in the popular Python package repository PyPI. PyPI,… http://dlvr.it/T5NCZl Posted by: Mohit Kumar (Hacker)
0 notes
Text
[Media] ggshield
ggshield Protect your secrets with GitGuardian ggshield is a CLI application that runs in your local environment or in a CI environment to help you detect more than 300 types of secrets, as well as other potential security vulnerabilities or policy breaks. https://github.com/GitGuardian/ggshield
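A few representative commands, assuming ggshield is installed (e.g., via `pip install ggshield`) and you have a GitGuardian account to authenticate against:

```bash
# Authenticate ggshield against your GitGuardian workspace
# (alternatively, set the GITGUARDIAN_API_KEY environment variable).
ggshield auth login

# Scan the full history of the current repository for secrets.
ggshield secret scan repo .

# Scan staged changes; wire this into .git/hooks/pre-commit so leaked
# credentials are caught before they ever reach a shared remote.
ggshield secret scan pre-commit
```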

0 notes