GitHub is putting more security checks inside the tools developers now use to write code with AI. The company says secret scanning through the GitHub MCP Server is now generally available, while dependency scanning through the same server has entered public preview.
The practical change is simple: an AI coding agent can be asked to inspect a branch for leaked credentials or vulnerable packages before a developer commits or opens a pull request. That moves some security feedback from the CI queue into the actual coding session.
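In practice the agent host (an IDE, Copilot, or a CLI) wires up the MCP Server, but a standalone client shows the shape of the interaction. Below is a minimal sketch using the official `mcp` Python SDK and the Dockerized GitHub MCP Server; the tool name and argument shape are assumptions modeled on the server's existing secret scanning tools, not a confirmed interface.

```python
# Minimal sketch: an MCP client asking the GitHub MCP Server for secret
# scanning results. Assumes Docker and a GITHUB_TOKEN environment variable;
# the tool name and arguments below are illustrative, not a documented API.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm",
          "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
          "ghcr.io/github/github-mcp-server"],
    env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical call: surface secret scanning alerts for a repo so
            # leaks can be fixed before a pull request is opened.
            result = await session.call_tool(
                "list_secret_scanning_alerts",
                arguments={"owner": "my-org", "repo": "my-repo"},
            )
            print(result.content)

asyncio.run(main())
```

The point of the sketch is the control flow, not the specific tool: the agent calls a server-side check and gets structured results back in the same session where it is generating code.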
Security shifts left again
Secret scanning in the GitHub MCP Server has been in public preview since March and is now generally available for repositories with GitHub Secret Protection enabled. GitHub says the tools now honor existing push protection customization, which matters because organizations do not want agent-driven checks to behave differently from their repository rules.
The dependency scanning preview works for repositories with Dependabot alerts enabled. When a developer asks an agent to check newly added dependencies, the MCP Server can use its Dependabot toolset and the GitHub Advisory Database to return affected packages, severities, and recommended fixed versions. For deeper checks, GitHub says the toolset can also run the Dependabot CLI locally to compare dependency graphs.
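To make the kind of lookup concrete, here is an illustrative sketch that queries the GitHub Advisory Database directly through the public REST endpoint (`GET /advisories`). The package name and version are placeholders, and the real toolset call happens inside the MCP Server rather than in developer code; response field names follow the documented advisory object but should be treated as an assumption.

```python
# Illustrative: look up advisories affecting a newly added dependency via the
# GitHub Advisory Database REST endpoint. Returns GHSA IDs, severity, and the
# first patched versions, which is roughly what an agent would report back.
import requests

def advisories_for(package: str, version: str, ecosystem: str = "pip") -> list[dict]:
    resp = requests.get(
        "https://api.github.com/advisories",
        params={"ecosystem": ecosystem, "affects": f"{package}@{version}"},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "ghsa_id": adv["ghsa_id"],
            "severity": adv["severity"],
            "patched": [
                v.get("first_patched_version")
                for v in adv.get("vulnerabilities", [])
            ],
        }
        for adv in resp.json()
    ]

# Example: check an intentionally old version of a common package.
print(advisories_for("requests", "2.19.0"))
```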
This is not a flashy model launch. It is plumbing. But for software teams, plumbing is where AI agents either become useful or become another source of unmanaged risk.
Why it matters for agentic coding
AI coding tools are getting better at generating larger changes. That raises the cost of waiting until a pull request, a CI run, or a security review to discover that a new package is vulnerable or a key has slipped into a file.
Putting security checks into the agent loop gives developers a chance to fix obvious problems while the context is still fresh. It also gives security teams a cleaner control point: existing GitHub protections can be surfaced through the assistant rather than bolted on after the assistant has already produced code.
GitHub’s separate public preview for enterprise-managed Copilot CLI plugins points in the same direction. Administrators can define plugins, hooks, and MCP configurations for enterprise users, making agent setup less dependent on each developer remembering the right local configuration.
The catch
These tools only help where the underlying protections are enabled. Secret scanning requires GitHub Secret Protection. Dependency scanning requires Dependabot alerts. Teams also need developers to ask the agent to run the checks, or administrators to make those checks part of a standard workflow.
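For a single repository you administer, the prerequisites can be switched on through the REST API. The sketch below is a hedged example assuming a token with repository admin rights; organization-wide rollout would normally go through organization security settings or policies instead.

```python
# Hedged sketch: enable the prerequisites for one repository via the REST API.
# Assumes GITHUB_TOKEN has admin rights on the repo.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

def enable_prerequisites(owner: str, repo: str) -> None:
    # Dependabot alerts, required for the dependency scanning preview.
    requests.put(
        f"{API}/repos/{owner}/{repo}/vulnerability-alerts",
        headers=HEADERS, timeout=10,
    ).raise_for_status()

    # Secret scanning and push protection, part of GitHub Secret Protection
    # and required for the secret scanning tools.
    requests.patch(
        f"{API}/repos/{owner}/{repo}",
        headers=HEADERS,
        json={"security_and_analysis": {
            "secret_scanning": {"status": "enabled"},
            "secret_scanning_push_protection": {"status": "enabled"},
        }},
        timeout=10,
    ).raise_for_status()

enable_prerequisites("my-org", "my-repo")
```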
There is another limitation: pre-commit scanning catches known classes of mistakes. It does not prove that an AI-generated change is secure, correct, or maintainable. A vulnerable dependency is easier to flag than a subtle authorization bug introduced by a plausible-looking refactor.
What to watch next
The important question is whether agent security checks become ambient. If every AI coding session can automatically apply organization policy, scan the diff, and explain the fix in the same interface where code is generated, security review becomes less of a separate gate and more of a continuous constraint.
That is the right direction. AI-assisted development is moving too fast for security to live only at the end of the pipeline. GitHub’s MCP updates are an early sign that the next control plane for software risk may be the coding agent itself.



