This post is part of our session recap series from the Descope Global MCP Hackathon Launch Party. You can catch the recording of all sessions here.
MCP (Model Context Protocol) adoption is accelerating at an unprecedented pace—faster than long-standing security practices can keep up with. Developers and organizations are deploying MCP servers left and right, but they’re making basic mistakes that leave AI-connected tools and data vulnerable.
Case in point: In July 2025, Knostic researchers scanned the internet for exposed MCP servers and found nearly 2,000. They manually verified a portion of these and found every single one granted access to internal tool listings without any form of authentication.
This is the still-maturing environment that Trupti Shiralkar, Advisory Board Member at Backslash Security, addressed at the Descope Global MCP Hackathon Launch Party. Rather than detailing exotic attacks or sophisticated exploits only AI agents need to fear, Shiralkar underscored the fundamentals: day-one security basics that are often forgotten (or unknown) when development moves this quickly.
This article draws from Shiralkar’s presentation to explore:
Why the accessibility of vibe coding creates systemic security gaps in the AI ecosystem
How default MCP configurations (and vibe coder blind spots) expose devs to attack
What to look for when embedding security into MCP and AI development workflows
Lowering the barrier for entry leads to basic security mistakes
For both experienced and novice developers, vibe coding promises unparalleled speed. Simply describe what you want to an AI, let the LLM (Large Language Model) build it, then ship. But while LLMs can produce functional code at the speed of conversation, vibe coders who don’t know security fundamentals aren’t going to catch fatal flaws in the output.
“Within a couple of hours, I produced a somewhat functional application,” Shiralkar recounted. Yet, even with her 18 years of experience in software security, the ease with which the AI produced code introduced some tunnel vision: “While I was doing that, I completely forgot about security. It is that addictive.”
If security professionals can lose track of the fundamentals when vibe coding, what happens when developers with minimal experience start vibe coding?
They may not have taken a security class when pursuing their computer science degrees, or they may not have a degree at all “because all I need to know is how to do smart English prompts,” Shiralkar noted.

Research on AI-generated code paints a pretty dire picture. The GitHub 2024 survey found that 97% of developers had used AI tools, which demonstrates nearly universal buy-in. But as Veracode discovered in their GenAI Code Security Report, nearly half (45%) of AI-produced code includes exploitable security flaws.
Meanwhile, despite writing unsafe code, coders who relied on AI were more likely to believe their applications were secure. Here, willingness to scrutinize AI outputs plays a crucial role: Stanford research found that the less devs trust their AI assistants, the more secure their code.
Bottom line? AI makes code easier to write, but it doesn’t make it easier to secure. And when that code includes MCP server configurations, the stakes compound exponentially.
Bad configurations are prime attack vectors
Shiralkar asked the room a straightforward question: How many have seen MCP servers configured to bind to 0.0.0.0? Several hands went up.
For context, 0.0.0.0 is a common default binding for development convenience. This setting tells the server to listen on all network interfaces, which eliminates connection troubleshooting during local development. The problem is that not every network offers a safe environment for MCP development.
“We think the MCP server is running locally,” Shiralkar explained. “But when we are connected to a [public WiFi network], let’s say I’m at a Starbucks or at the airport vibe coding, that’s when a bad actor can connect to your laptop and perform lateral movements.”

The fix is trivial: change the bind address to 127.0.0.1, also known as localhost (your own machine). However, trivial fixes require awareness, and many novice developers never learn why this matters. The default works during development, so it ships to production unchanged.
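The difference between the two bind addresses can be made concrete with plain stdlib sockets. This is a minimal sketch, not a real MCP SDK: a helper that refuses the all-interfaces default and binds to loopback instead.

```python
# Sketch: refuse the all-interfaces default and bind to loopback only.
import socket

SAFE_BIND = "127.0.0.1"   # loopback: reachable only from this machine
UNSAFE_BIND = "0.0.0.0"   # all interfaces: reachable by anyone on the network

def make_listener(host: str = SAFE_BIND, port: int = 0) -> socket.socket:
    """Create a TCP listener, refusing the all-interfaces bind."""
    if host == UNSAFE_BIND:
        raise ValueError("refusing to bind to 0.0.0.0; use 127.0.0.1 for local dev")
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, port))  # port 0 asks the OS for any free port
    sock.listen()
    return sock
```

A guard like this turns the insecure default into a loud failure instead of a silent exposure, which is exactly the kind of frictionless check a vibe coder benefits from.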
Shiralkar underscored another common configuration misstep: “As I was vibe coding, I found that by default, MCP servers could have a lot of excessive permissions.”
She highlighted the importance of least-privilege access, or only extending the permissions that are absolutely necessary for a task. Overscoping is a frequent (bad) practice among developers of AI agents, and Shiralkar raised reasonable concerns over the use of these sweeping permissions.
With excessive permissions, “bad actors can execute OS command injection,” she explained. “They can also attempt path traversal vulnerabilities where they eventually get to sensitive information.” This combination of configuration vulnerabilities—network exposure and overscoping—results in significantly increased risk.
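The path traversal risk Shiralkar describes is straightforward to guard against. Here is a minimal sketch that confines file access to a sandbox directory; `ALLOWED_ROOT` is a hypothetical workspace path, not anything the MCP spec defines.

```python
# Sketch: confine an MCP tool's file access to one sandbox directory.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/mcp-workspace").resolve()  # hypothetical sandbox

def resolve_safe(requested: str) -> Path:
    """Resolve a requested path, rejecting escapes from the sandbox."""
    candidate = (ALLOWED_ROOT / requested).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path traversal blocked: {requested}")
    return candidate
```

A request for `notes.txt` resolves inside the sandbox, while `../../etc/passwd` normalizes to a path outside it and is rejected before any file is touched.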
Backslash Security’s June 2025 research found hundreds of MCP servers bound to all interfaces (0.0.0.0), many with configurations that would allow arbitrary command execution. Their report noted, “When network exposure meets excessive permissions, you get the perfect storm. Anyone on the same network can take full control of the host machine running the MCP server—no login, no authorization, no sandbox.”
Also read: Top 6 MCP Vulnerabilities (and How to Fix Them)
The unvetted MCP ecosystem
After discussing the problem of misconfiguration, Shiralkar turned to the issue of publicly available, malicious (or simply insecure) MCP servers. She directed the audience to Backslash’s catalog of roughly 15,000 MCP servers.
Most of these aren’t from official sources. Many have insecure configurations. And unlike the npm or PyPI ecosystems, where years of supply chain attacks have trained developers to be wary, the MCP community has yet to develop that skepticism.
An audience member brought up an example with an unofficial Docker MCP server and tested it using Backslash’s scanner, which detected excessive permissions. The audience member noted that the MCP server appeared to be masquerading as an official version. “Typosquatting is still a thing,” they said. “You probably want to look at the official one and not this one.”

There’s an inherent trust issue at the core of how MCP servers are disseminated throughout the community. Novel attacks surface constantly, with several patterns becoming well-documented:
Tool poisoning is the most common vector. In a typical MCP use case, tool descriptions (which the user doesn’t typically see, but the LLM does) are ingested into the LLM’s context. Tool poisoning means inserting malicious instructions into those unseen descriptions.
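One lightweight defense is to scan tool descriptions for instruction-like text aimed at the LLM rather than at the user. The patterns below are hypothetical heuristics for illustration; a real scanner would be far more thorough.

```python
# Sketch: flag tool descriptions that carry LLM-directed instructions.
import re

SUSPICIOUS = [  # hypothetical heuristic patterns, not an exhaustive list
    r"ignore (all|previous) instructions",
    r"do not (tell|mention|reveal) (this|it) to the user",
    r"read .*(\.ssh|\.env|credentials|api[_ ]?key)",
]

def looks_poisoned(description: str) -> bool:
    """Return True if a tool description matches a known poisoning pattern."""
    text = description.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS)
```

A benign description like "Adds two numbers and returns the sum." passes, while one that smuggles in "read the file at ~/.ssh/id_rsa first" gets flagged before it ever reaches the model's context.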
Server spoofing is much like the unofficial Docker MCP example above. Adversaries publish a malicious MCP server with a name practically indistinguishable from an official or trusted one. This fools the user or AI agent into using it, unaware of its potentially malicious nature.
Rug-pull updates occur when a new version of a server contains malicious content that previous versions didn’t. Whether it’s a trusted developer “going bad” or simply an unintentional exploit in the latest commit, rug-pull updates are especially insidious because MCP clients don’t alert users to changes by default.
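Because clients don't surface changes by default, pinning a server by content hash is one way to make a silent update loud. This is a sketch of the idea with hypothetical names; package managers apply the same lockfile principle.

```python
# Sketch: pin an MCP server's package by content hash so a silent
# update ("rug pull") fails verification before the client loads it.
import hashlib

def digest(content: bytes) -> str:
    """SHA-256 hex digest of a server package's bytes."""
    return hashlib.sha256(content).hexdigest()

def verify_pin(name: str, content: bytes, pinned: dict) -> None:
    """Refuse to load a server whose content no longer matches its pin."""
    expected = pinned.get(name)
    if expected is None:
        raise RuntimeError(f"no pin recorded for {name}")
    if digest(content) != expected:
        raise RuntimeError(f"{name} changed since it was pinned; review the update")
```

The check doesn't judge whether an update is malicious; it simply forces a human review step whenever the bytes change, which is what rug-pull attacks rely on skipping.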
As Shiralkar put it, “A vibe coder who's very eager to bring ideas to life doesn’t have awareness about any visibility or governance.” They’re plugging MCP servers into their clients and building their own with crippled security, overconfident and bound for a rude awakening.
Practical security for the vibe coding era
The underlying challenge isn’t that vibe coders are careless. It’s that security teams can’t scale to review every vibe-coded application, and waiting for extensive security review undermines the promise of rapid development.
“By the time you’re deep into a vibe-coded application, you realize you have millions of lines of code,” Shiralkar observed. “Too late to reach out to the security team. The security team already has a lot of work to do.”
The answer here isn’t more gatekeeping and review limbo. It’s what Shiralkar called “frictionless, developer-centric security”: policies and tools that prevent issues from entering production without requiring shifts into security review mode.

This begins with the principle of least privilege. Shiralkar returned to the concept repeatedly: “If somebody doesn’t have any business connecting to your MCP server with write privileges, they shouldn’t be. Be very frugal, very stingy about what kind of privileges you are assigning to entities.”
Andre Langraf made a similar point in his session at the hackathon: “A support center LLM should not have access to a database, not even read-only access.” Vercel SVP of Product Aparna Sinha echoed these sentiments at the same event, describing the platform’s decision to make their MCP server read-only because of its remote nature.
Notably, when agents can discover scope requirements up front, they’re able to request only the necessary permissions per tool or feature. Progressive scoping enables agents to request the minimum necessary privileges for the specific task at hand, significantly reducing the risk surface.
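Progressive scoping can be sketched as a simple mapping from each tool to the minimum scopes it needs, so the agent requests only the union of scopes for the tools a task actually calls. The tool and scope names below are illustrative assumptions, not part of the MCP spec.

```python
# Sketch: each tool declares its minimal scopes; the agent requests
# only the union of scopes for the tools a given task will use.
TOOL_SCOPES = {  # hypothetical tool -> minimal-scope mapping
    "read_ticket":  {"tickets:read"},
    "post_comment": {"tickets:read", "comments:write"},
    "close_ticket": {"tickets:write"},
}

def scopes_for_task(tools_needed: list[str]) -> set[str]:
    """Union of the minimum scopes for just the tools a task will call."""
    requested: set[str] = set()
    for tool in tools_needed:
        requested |= TOOL_SCOPES[tool]
    return requested
```

A read-only task ends up requesting only `tickets:read`, rather than a blanket grant covering every tool the server happens to expose.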
While least-privilege policies can decrease the “blast radius” of AI going off the rails, tool-level authorization offers much more granular protection. This controls what tools are available to the AI acting on behalf of users, not just what data they can access. Simply put, it extends role-based access control (RBAC) to the MCP layer.
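Extending RBAC to the MCP layer amounts to gating each tool invocation on the caller's role. Here is a minimal sketch with a hypothetical role table; a real deployment would load this from policy rather than hardcode it.

```python
# Sketch: role-based tool authorization at the MCP layer.
ROLE_TOOLS = {  # hypothetical role -> allowed-tool table
    "support_agent": {"search_kb", "read_ticket"},
    "admin":         {"search_kb", "read_ticket", "delete_ticket"},
}

def authorize_tool_call(role: str, tool: str) -> bool:
    """Allow a tool invocation only if the caller's role grants that tool."""
    return tool in ROLE_TOOLS.get(role, set())
```

The key point is that the check runs per tool call, not per connection: a support-role agent can still search the knowledge base while being denied destructive tools outright.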
Beyond tool-specific controls, organizations need visibility into their AI-assisted footprint: which users are working with which agents, which MCP servers are in use (local vs. remote), credential configurations, and potential vulnerabilities. Tools that can observe and trace these relationships surface problems before they reach production.
This visibility imperative maps directly to identity infrastructure. Organizations need to know which AI agents are connecting to which MCP servers with which scopes. It’s the same sort of lifecycle management that traditional IAM/CIAM provides for human users, but purpose-built for the agentic context.
Shipping secure MCP servers without specialized expertise
Shiralkar closed her presentation with a variation of the familiar line from 2002’s Spider-Man: “Great power comes with great responsibility.” MCP has the power to transform software development, but that evolution comes with the burden of ensuring the output is safe for users. Yet, developers can’t be expected to become security experts on top of everything else.
Security isn’t the core competency of most development teams, especially vibe coders, and it shouldn’t have to be. What organizations need is resilient infrastructure that handles authentication, authorization, and governance with minimal effort or expertise. This allows developers to focus on building while security disappears into the background—still robust and flexible, but a solved problem that doesn’t distract.
Descope’s Agentic Identity Hub was built to solve these specific problems. It provides the identity infrastructure layer that helps organizations:
Protect MCP servers with spec-compliant OAuth 2.1, PKCE, DCR, user consent flows, and tool-level scoping
Enable AI agents to easily connect with 50+ third-party solutions without worrying about token management or storage
Provision and manage AI agent identities alongside user identities
Whether you’re shipping your own MCP server or connecting agents to external systems, Descope will handle the auth infrastructure so you can focus on building the next big thing.
Sign up for a Free Forever Descope Account to start mapping enterprise-grade MCP auth flows, or book a demo to see how Descope can get your agentic architecture from proof of concept into production.

