Integrating AI into business operations and systems has become an enterprise dev team’s biggest headache. For most of modern AI’s lifespan, every data source required a custom implementation, and the work multiplied across every combination of LLM (Large Language Model) and tool.
This “NxM problem” was the core challenge holding enterprise devs back—until the Model Context Protocol (MCP) arrived in late 2024. Yet, more than six months after MCP’s big debut, enterprise adoption remains tentative.
CTOs and IT leaders face the tough reality of moving from proof-of-concept to enterprise deployment, which can expose critical security gaps. Much of the AI security discourse revolves around model vulnerabilities like prompt injection and tool poisoning.
However, authentication and authorization—which are highlighted in 5 out of 10 threats in OWASP’s Top 10 for GenAI—remain dangerously under-emphasized, despite being fundamental to securing AI deployments.
This analysis examines why enterprise MCP adoption often stalls and the auth challenges organizations need to address before becoming production-ready.
Main points
Remote vs. local: Most official MCP servers are currently local-only (using stdio transport), which creates security and operational hurdles when enterprises move to more scalable, remote implementations.
Auth isn’t built-in: Current authentication and authorization approaches for MCP lack enterprise-grade features like OAuth compliance, SSO integration, and granular permission management.
Gaps in production: Performance overhead, multi-tenancy complications, and data governance gaps can quickly snowball if left unchecked before production.
Challenge #1: Availability and operational architecture
The availability problem with enterprise MCP adoption comes down to a fundamental choice: you can either use existing MCP servers, like those listed by MCP’s maintainers, or you can build one yourself.
Option 1: Use official MCP servers
The MCP repo contains a number of reference and official third-party servers, which have been built or vetted by the team at Anthropic. These include servers for popular tools like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer. These integrations have been tested and are maintained by the API owners, making them attractive for rapid deployment.
However, most official MCP servers use stdio transport, meaning they run as local processes that communicate with the MCP client through standard input/output rather than as remote network services. This transport mechanism presents immediate issues for enterprise architecture:
Single-user auth model: Most local servers’ authentication works for one user—which is not scalable for multiple users with different permissions
Constrained deployment: MCP servers must run on the same machine as your AI client, preventing distributed deployment across your infrastructure
Security boundaries: You can’t implement network-level policies (e.g., restricting specific agents/MCP clients from accessing the server/specific tools) because servers and clients share the same process space
Operational limitations: No support for load balancing, failover, or horizontal scaling of MCP server instances
Infrastructure misalignment: Impossible to separate MCP servers into different security partitions or apply different resource allocation policies
Even when the MCP server connects to cloud services (like Google Drive APIs), the server process itself must run locally alongside your AI client. This means you have no centralized visibility into what the MCP server is doing and no way to audit tool execution. Conversely, a remote server lets you log these data streams to a central observability tool.
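To make the transport distinction concrete, here is a minimal client-side sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the server package, command, and URL are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const client = new Client({ name: "enterprise-client", version: "1.0.0" });

// Local (stdio): the client spawns the server as a child process on the same
// machine. Auth is typically an API key in the child's environment, meaning
// one identity for every user and no network boundary to enforce policy on.
const localTransport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/data"],
});

// Remote (HTTP/SSE): the client connects over the network, so the server can
// sit behind a gateway that enforces TLS, network policy, and central logging.
const remoteTransport = new SSEClientTransport(
  new URL("https://mcp.example.com/sse") // placeholder endpoint
);

await client.connect(localTransport); // or: await client.connect(remoteTransport)
```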
For enterprises, there’s also the fundamental issue of supply chain security. Using untested, community-built MCP servers (e.g., those listed on MCP.so, mcpservers.org) introduces risks that most organizations can’t accept for production deployments, unless there’s an extensive vetting process first.
Option 2: Build your own
Organizations absolutely can build their own MCP servers; the technical barrier isn’t prohibitive. However, the resource commitment and ongoing operational burden could prove more extensive than anticipated.
Building enterprise-ready MCP servers requires the following (a minimal server skeleton follows this list):
Protocol knowledge: Understanding MCP specifications, which are still evolving rapidly
Transport familiarity: Supporting both stdio for dev and HTTP/SSE for production deployment
Security implementation: OAuth 2.1 compliance, token management, and enterprise authentication integration
Performance optimization: Handling spikes in request load, managing connection states, and optimizing for enterprise-scale workflows
Ongoing maintenance: Keeping pace with protocol updates, security patches, and integration ecosystem changes
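For a sense of scale, here is what the bare skeleton might look like with the official TypeScript SDK; the CRM tool and its logic are illustrative, and everything on the list above still sits on top of this:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "crm-mcp", version: "0.1.0" });

// One illustrative tool; real servers multiply this by every API surface they
// wrap, each needing its own input schema, error handling, and scopes.
server.tool(
  "read_customer",
  { customerId: z.string().describe("CRM customer ID") },
  async ({ customerId }) => {
    // Placeholder for a call to your internal CRM API.
    return { content: [{ type: "text", text: `Customer ${customerId}: ...` }] };
  }
);

// stdio for local development; production would swap in an HTTP transport
// plus the OAuth compliance, scaling, and maintenance work listed above.
await server.connect(new StdioServerTransport());
```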
Instead of a full DIY project, another option some companies pursue is to fork an existing MCP server and build on top of it. While this approach has a number of benefits—decreased dev burden, faster time-to-market, a lower barrier to entry—forks usually suffer from maintainability issues. “Augmented” MCP servers like these often incur unanticipated maintenance that offsets the initial advantages.
Recent reports show that companies contend with over 400 APIs, and even with MCP to “wrap” AI integration, the developer workload to operationalize just a fraction of these could be considerable. In short, the availability challenge isn’t so much whether enterprises can DIY or easily deploy existing MCP servers; it’s more about the operational overhead and risk trade-offs involved in each approach.
Challenge #2: Security
Security for MCP deployments extends far beyond simple API authentication. Enterprises need protection at multiple layers: external API access, internal employee permissions, and the MCP infrastructure itself.
The OAuth 2.1 compliance deficit
Companies need OAuth 2.1-compliant APIs to secure their systems against unauthorized external access, but most existing MCP servers lack comprehensive OAuth 2.1 support. The MCP authorization specification incorporates a subset of OAuth 2.1 “with appropriate security measures for both confidential and public clients.”
While this may sound simple in theory, the reality involves shoring up significant gaps that MCP’s auth spec doesn’t address by default:
SDKs and reference implementations often assume the MCP server is also the authorization server, making third-party integration trickier. This exposes a fundamental problem: enterprises want to leverage existing identity providers (IdPs) like Okta or Azure AD, but MCP’s architecture pushes them toward hosting separate authorization infrastructure.
The specification mandates that clients MUST use PKCE (Proof Key for Code Exchange) for all authorization code flows, a mechanism many organizations have never implemented before. While PKCE prevents code interception attacks, it requires familiarity that internal development teams often lack.
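The mechanism itself is compact. Here is a sketch of the S256 verifier/challenge pair from RFC 7636 in Node.js, with a placeholder authorization server:

```typescript
import { randomBytes, createHash } from "node:crypto";

// The client generates a one-time secret (code_verifier) and sends only its
// hash (code_challenge) with the authorization request. When exchanging the
// authorization code for tokens, it reveals the verifier, proving it started
// the flow -- so an intercepted authorization code is useless on its own.
const codeVerifier = randomBytes(32).toString("base64url");
const codeChallenge = createHash("sha256").update(codeVerifier).digest("base64url");

const authUrl = new URL("https://auth.example.com/authorize"); // placeholder
authUrl.searchParams.set("response_type", "code");
authUrl.searchParams.set("client_id", "enterprise-mcp-client");
authUrl.searchParams.set("code_challenge", codeChallenge);
authUrl.searchParams.set("code_challenge_method", "S256");
```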
The MCP authorization spec also recommends using Dynamic Client Registration (DCR), a protocol that allows clients to receive credentials (i.e., a client ID and potentially secrets) at runtime instead of being manually pre-registered. Many identity providers still do not actively support DCR, and it often requires initial access tokens or admin privileges. Thus, many enterprises bypass this by pre-registering trusted clients—which is far from scalable.
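Where an IdP does support it, DCR boils down to a single registration call (RFC 7591). A sketch, with the endpoint and initial access token as placeholders:

```typescript
// Hypothetical registration endpoint; many IdPs gate this behind an initial
// access token or admin credentials, as noted above.
const res = await fetch("https://auth.example.com/oauth2/register", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <initial-access-token>", // placeholder
  },
  body: JSON.stringify({
    client_name: "Enterprise MCP Client",
    redirect_uris: ["https://app.example.com/callback"],
    grant_types: ["authorization_code"],
    token_endpoint_auth_method: "none", // public client; PKCE still required
  }),
});

const { client_id } = await res.json(); // credentials issued at runtime
console.log(`Registered dynamically as ${client_id}`);
```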
Token management gaps
Typical approaches to token management in enterprise MCP deployments often break down in three key areas:
Policy definition: Organizations need systematic ways to determine whether users have the required tokens and whether those tokens grant appropriate access to specific MCP servers, in addition to the tools within them. Most handle this ad hoc, which creates security vulnerabilities.
Permission translation: Enterprise APIs often aren’t OAuth-compliant. This makes it essential for your MCP server to be protected with proper OAuth roles and permissions mapped to the keys and tokens used in your legacy APIs. A user with read access to a database shouldn’t inherit write access simply because they’re using an MCP server with blanket permissions.
Validation: The spec mandates complex token mapping and tracking when using third-party authorization, which significantly increases the development burden. Organizations face a choice between local validation (faster, but requires key distribution across infrastructure) and remote validation (more secure, but introduces latency that affects performance).
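To illustrate the validation trade-off, here is a sketch of both options using the jose library; the issuer, audience, and introspection endpoint are assumptions:

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// Local validation: verify the JWT signature against cached JWKS keys.
// Fast (no network hop once keys are cached), but every server in your
// fleet now depends on correct key distribution and rotation.
const jwks = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json")
);

async function validateLocally(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://auth.example.com",
    audience: "mcp-server",
  });
  return payload;
}

// Remote validation: ask the authorization server directly (RFC 7662 token
// introspection). Catches revocation immediately, but adds latency to every
// tool call.
async function validateRemotely(token: string) {
  const res = await fetch("https://auth.example.com/oauth2/introspect", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token }),
  });
  const info = await res.json();
  if (!info.active) throw new Error("Token revoked or expired");
  return info;
}
```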
Other threats, like tool poisoning and prompt injection attacks, pose less significant challenges to enterprises when using their own or a trusted MCP server. However, these vectors are still evolving, and enterprise security teams should regularly review updates to their MCP server, the underlying infrastructure (like tool instructions), and tool access logs.
Challenge #3: Scopes and permissions
The permission model of enterprise MCP deployments requires a fundamental shift from API-level access controls to function-level granularity—while at the same time supporting both user and tenant-level permissions across multi-tenant environments. This is an area that can quickly become mired in complexity during enterprise MCP development.
Function-level permission requirements
Traditional API security operates at endpoint granularity: users either have access to an endpoint or they don’t. MCP requires much finer control, since individual tools within an MCP server may need different permissions (i.e., scopes).
When developing MCP servers, your organization must define scopes for each tool action rather than relying on existing API-level permissions. A CRM MCP server might expose tools to “read customer data,” “update customer records,” and “export customer lists.” Each requires a different scope even though they all access the same underlying API.
Most enterprises haven’t designed their permissions and scopes around the function level. Your existing RBAC policies, while useful for existing use cases, likely grant access to entire applications or data sets, not individual AI tool capabilities.
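A sketch of what function-level scoping might look like for the CRM example above; the scope names and tool names are illustrative:

```typescript
// Each tool gets its own scope, even though all three hit the same CRM API.
const toolScopes: Record<string, string> = {
  read_customer_data: "crm:customers:read",
  update_customer_record: "crm:customers:write",
  export_customer_list: "crm:customers:export",
};

function assertToolAllowed(tool: string, grantedScopes: string[]): void {
  const required = toolScopes[tool];
  if (!required || !grantedScopes.includes(required)) {
    throw new Error(`Missing scope for tool ${tool}`);
  }
}

// A token scoped only for reads cannot trigger the export tool:
assertToolAllowed("read_customer_data", ["crm:customers:read"]); // passes
// assertToolAllowed("export_customer_list", ["crm:customers:read"]); // would throw
```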
Progressive scoping
Just-in-time or runtime conditional access represents a major uplift for enterprise AI integration. Rather than granting broad “forever scopes,” organizations can implement dynamic permission models that request minimal access initially and expand scopes only when specific tools or tasks require them.
Progressive scoping focuses on intent: what the tool is trying to accomplish, and whether it has the permissions to do so. When scope requirements are made discoverable up front based on intent, AI agents and MCP-equipped LLMs request only the necessary permission grants, which reduces overscoping and token vulnerability.
However, progressive scoping calls for sophisticated token management that can cache and serve different permission sets based on context, user role, and specific tool combinations—which makes it a significant developer challenge without the appropriate expertise.
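A minimal sketch of the pattern, assuming scope requirements are declared per tool and an incremental consent flow exists (the helper below is a stub):

```typescript
interface Session {
  grantedScopes: Set<string>;
}

// Declared up front so the client can discover requirements by intent.
const requiredScope: Record<string, string> = {
  read_calendar: "calendar:read",
  send_email: "mail:send",
};

// Stub for illustration: a real implementation would walk the user through
// an incremental OAuth consent flow for just this one additional scope.
async function requestAdditionalScope(session: Session, scope: string) {
  session.grantedScopes.add(scope);
}

async function ensureScope(session: Session, tool: string) {
  const scope = requiredScope[tool];
  if (!scope || session.grantedScopes.has(scope)) return;
  await requestAdditionalScope(session, scope);
}

// The agent starts with calendar:read only; mail:send is requested at the
// moment the send_email tool is first invoked, never before.
const session: Session = { grantedScopes: new Set(["calendar:read"]) };
await ensureScope(session, "send_email");
```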
Multi-tenancy and tenant-level scopes
Enterprises face a fundamental architectural challenge that most documentation doesn’t address: how to handle multi-tenancy in cloud-hosted AI applications where the frontend is stateless and the AI logic runs server-side.
Most MCP guidance assumes single-user scenarios where one person runs an AI client that talks directly to MCP servers. But in enterprise SaaS applications, the architecture is profoundly different: your cloud backend acts as the MCP client on behalf of multiple users and tenants.
This exposes a number of permission challenges:
Application-level tenant isolation: Your cloud application must manage which users within each tenant can access which MCP tools, since MCP servers themselves don’t understand your tenant model. A user from Company A shouldn’t be able to trigger MCP tools that access Company B’s data, even though both requests come from the same cloud application.
Identity delegation: Your cloud backend needs to authenticate to MCP servers using service credentials while maintaining user-level permissions and audit trails. This often requires building custom flows where your application proves to MCP servers which specific user is making a request, without exposing tenant-specific auth details (see the sketch after this list).
Hierarchical user and admin scoping: Similarly to delegation, enterprise environments require support for both admin-defined, baseline scopes and user-granted scopes. Admins might provide blanket scopes necessary for all AI agents within the organization (like read access to company directories or non-sensitive content), while individual users need the ability to grant more scopes for specific tools or elevated access (like write permissions for their personal CRM account).
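One way to sketch the delegation piece: the backend mints a short-lived token that carries the acting user and tenant, signed with a key the MCP server trusts. The claim names and signing scheme here are assumptions, not part of the MCP spec:

```typescript
import { SignJWT } from "jose";

// Shared-secret sketch; production setups would likely use asymmetric keys.
const signingKey = new TextEncoder().encode(process.env.DELEGATION_KEY!);

// The stateless backend authenticates with its own service identity but
// embeds which user and tenant this request acts for, so the MCP server can
// enforce tenant isolation and write a per-user audit trail.
async function mintDelegationToken(userId: string, tenantId: string) {
  return new SignJWT({ tenant_id: tenantId, act: "cloud-backend" })
    .setProtectedHeader({ alg: "HS256" })
    .setSubject(userId)
    .setIssuer("https://app.example.com") // placeholder backend identity
    .setAudience("mcp-server")
    .setExpirationTime("5m") // short-lived: one request, one token
    .sign(signingKey);
}
```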
Challenge #4: Single sign-on (SSO) integration
In the same vein as multi-tenancy, enterprise MCP deployments must integrate with existing identity providers—unfortunately, the current standard lacks native single sign-on (SSO) support. Organizations will either need to build an SSO solution in-house or seek an SSO provider able to bridge the gap with their MCP server.
Internal employee authentication
Internal employees using MCP clients can greatly benefit from authenticating through your organization’s existing IdP, whether it’s Okta, Microsoft Entra ID (formerly Azure AD), or another enterprise provider. Not only is SSO much more convenient, it also enables consistent security policies, audit trails, and compliance—essentially, easier and better orchestration.
External users (contractors, customers, and partners)
Enterprise environments typically include external parties of some kind, whether they are contractors, B2B customers, or partners. These external users need conditional access to MCP-enabled tools, but they may not exist in the primary user directory. The obvious solution is SSO, but MCP doesn’t support it in its “vanilla” state.
User consent management
MCP complicates consent mechanisms: flows that work within existing IdP workflows don’t map one-to-one onto AI scenarios. When an AI agent needs to access sensitive data through MCP servers, the consent flow must:
Integrate with the organization’s existing OAuth authorization server
Present clear, understandable consent screens that explain which tools and data the AI agent will access
Allow admins to pre-approve certain tool combinations while requiring user consent for others
Maintain audit logs that integrate with existing data streaming and Security Information and Event Management (SIEM) systems
SSO implementation hurdles
The technical challenge lies in MCP’s flexible but potentially inconsistent authorization model. MCP servers are capable of reading JWT claims and making authorization decisions beyond basic scope validation, which means organizations can build more robust logic if they wish.
However, this poses a number of obstacles for enterprises:
No standardized enterprise claims: The MCP spec doesn’t define standard claims for organizational context, user roles, or policy constraints
DIY custom authorization logic: Organizations must build custom logic into each MCP server to interpret their specific token claims and policies (see the sketch after this list)
Interoperability issues: Without standardized claim formats, MCP servers can’t reliably work across different IdPs or tenant structures
Integration complexity: The burden falls on individual MCP server developers to create from-scratch organization-specific authorization patterns
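A sketch of what that hand-rolled logic tends to look like; the org_id and roles claim names are assumptions about one particular IdP, which is exactly the interoperability problem:

```typescript
import type { JWTPayload } from "jose";

// Because no standard enterprise claims exist, every server invents its own.
// This one assumes its IdP issues `org_id` and `roles` claims; a server
// written against a different IdP's claim names would reject the same user.
function authorizeToolCall(claims: JWTPayload, tool: string): boolean {
  const roles = (claims.roles as string[] | undefined) ?? [];
  const orgId = claims.org_id as string | undefined;

  if (orgId !== "acme-corp") return false; // organizational context, by hand
  if (tool.startsWith("export_") && !roles.includes("data-admin")) {
    return false; // policy constraint encoded per server, per IdP
  }
  return true;
}
```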
Challenge #5: Visibility and control
The final challenge enterprise MCP deployments must overcome is comprehensive visibility and control: observability into AI agent activities, and granular management for user permissions.
Auditing and activity tracking
Organizations need detailed, centralized visibility into AI agent behavior through MCP servers. When AI agents access sensitive data or execute tasks on behalf of users, enterprises will need comprehensive audit trails that capture which users accessed which tools, what data was retrieved or modified, and the context that drove those actions. This ensures security policies are met, compliance regulations are satisfied, and resources aren’t wasted.
MCP doesn’t have audit logging built in, but simply plugging in standard middleware or a proxy lets you listen to all the traffic it generates. The real challenge lies in standardization and consistency. Each organization must contend with the fact that MCP events are relatively novel, which makes them difficult to integrate and disambiguate in existing SIEM systems. The question isn’t “How can we monitor MCP behavior?” Instead, it’s “How will we characterize these events and understand why they happened?”
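A sketch of the middleware idea: wrap each tool invocation, emit a structured event, and forward it to whatever your SIEM ingests. The event shape is an assumption, since no standard MCP audit schema exists yet:

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Wraps a tool handler so every call emits a structured audit event,
// whether it succeeds or fails.
function withAudit(tool: string, userId: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      emitAuditEvent({ tool, userId, args, outcome: "success", ms: Date.now() - start });
      return result;
    } catch (err) {
      emitAuditEvent({ tool, userId, args, outcome: "error", ms: Date.now() - start });
      throw err;
    }
  };
}

// Stub: in practice this would forward to your SIEM pipeline. The hard part,
// as noted above, is agreeing on a schema the SIEM can disambiguate.
function emitAuditEvent(event: Record<string, unknown>) {
  console.log(JSON.stringify({ type: "mcp.tool.call", ...event }));
}
```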
User access management and revocation
Enterprises will require visibility into which users are connected to MCP clients and the ability to manage access at scale. Critical management capabilities that current, non-enterprise servers often lack include:
User enumeration: Seeing all users currently connected to MCP servers and their active sessions
Permission oversight: Understanding which users have access to which MCP tools across the organization
Bulk management: Applying permission changes across multiple users or organizational units consistently and efficiently
Emergency controls: Immediately terminating all MCP access for compromised accounts or during security incidents
Selective revocation: Removing specific token permissions or tool access without affecting all user capabilities (a minimal revocation sketch follows this list)
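For the revocation bullets, the building block on the authorization-server side is standard OAuth token revocation (RFC 7009). A sketch, with placeholder endpoint and client credentials:

```typescript
// Revokes a single token at the authorization server. Emergency response
// would loop this over every active token for the compromised account,
// which is why user and session enumeration (above) matter so much.
async function revokeToken(token: string) {
  const res = await fetch("https://auth.example.com/oauth2/revoke", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization:
        "Basic " + Buffer.from("client-id:client-secret").toString("base64"), // placeholders
    },
    body: new URLSearchParams({ token, token_type_hint: "access_token" }),
  });
  if (!res.ok) throw new Error(`Revocation failed: ${res.status}`);
}
```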
Without these management features, enterprises will struggle to maintain the operational oversight necessary for secure, compliant AI agent deployments at scale. The challenge is that, ultimately, most MCP servers focus on enabling access rather than controlling it, leaving enterprises without the tools to enforce policy and respond to problems. Thus, they must build the tools themselves or find a third-party solution.
Addressing enterprise MCP challenges with Descope
The path to production-ready MCP deployment lies in solutions that address these fundamental challenges without sacrificing the protocol’s core benefits. Organizations need a solid foundation for auth that can bridge the gap between MCP’s technical elegance and enterprise requirements.
Descope Agentic Identity Hub provides comprehensive solutions for enterprise MCP adoption, including:
Inbound Apps, which can turn any application into an OAuth provider. Your existing enterprise systems can securely expose APIs to AI agents with user consent, solving the availability challenge by making your internal tools agent-ready.
Outbound Apps, which provide scalable ways to connect AI agents with external tools and enterprise systems. Rather than managing tokens and permissions manually across dozens of integrations, Outbound Apps handle the complex auth flows that enterprise MCP deployments need.
MCP Auth SDKs and APIs, which help developers building remote MCP servers implement enterprise-grade authorization controls while extending functionality through multiple OAuth-based services. These tools address the security, permission management, and SSO integration challenges that block MCP from reaching production.
Trying to bring an enterprise MCP server online? Reach out to our auth experts to learn how Descope can help you secure AI-enabled user journeys. Just looking to tinker with Descope’s MCP SDKs? Sign up for a Free Forever Account and see how seamless MCP security can be.