Ever since we implemented the HTTP/SSE transports, we've been concerned about the lack of authentication between the MCP clients and the MCP server. So we implemented simple token authentication between them, independent of the Box API authentication.
Let’s first take a look at the available community MCP server parameters:
```
options:
  -h, --help            show this help message and exit
  --transport {stdio,sse,streamable-http}
                        Transport type (default: stdio)
  --host HOST           Host for SSE/HTTP transport (default: 0.0.0.0)
  --port PORT           Port for SSE/HTTP transport (default: 8001)
  --box-auth {oauth,ccg}
                        Authentication type for Box API (default: oauth)
  --no-mcp-server-auth  Disable authentication (for development only)
```
We now have an extra parameter, `--no-mcp-server-auth`, that turns off MCP client to MCP server authentication. This is not recommended.
Note that if you are using the stdio transport, this authentication is ignored.
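To make the mechanism concrete: for the HTTP-based transports the server wraps the underlying ASGI app in a small auth middleware that checks the bearer token on every request (you can see this middleware being set up in the startup logs below). Here is a minimal sketch of the idea using Starlette; this is an illustration of the technique, not the project's actual code, and the environment variable name in the comment is hypothetical:

```python
import os

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse


class BearerAuthMiddleware(BaseHTTPMiddleware):
    """Reject any request whose Authorization header does not carry the expected token."""

    def __init__(self, app, token: str):
        super().__init__(app)
        self.token = token

    async def dispatch(self, request, call_next):
        auth = request.headers.get("authorization", "")
        if auth != f"Bearer {self.token}":
            # Wrong or missing token: stop here with a 401
            return JSONResponse({"error": "unauthorized"}, status_code=401)
        # Token matches: let the request through to the MCP app
        return await call_next(request)


# Hypothetical wiring: read the shared secret from the environment
# app.add_middleware(BearerAuthMiddleware, token=os.environ["MCP_SERVER_API_KEY"])
```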
So, to your question: how would this work, specifically with n8n?
You start by generating a strong token (essentially a strong password). There are plenty of online services to do this, for example this API Key Generator.
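If you prefer not to use an online service, you can generate an equivalent token locally from the command line, for example:

```sh
# Either of these produces a strong random token
openssl rand -base64 32
python3 -c "import secrets; print(secrets.token_urlsafe(32))"
```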
Once you have an API Key, you add it to your .env file. For example:
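For illustration, the entry could look like the line below; note that `BOX_MCP_SERVER_API_KEY` is a placeholder name here, so check the project's README for the exact variable the server reads:

```
# Placeholder variable name; check the server's README for the exact key it expects
BOX_MCP_SERVER_API_KEY=your-generated-token-here
```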
In n8n you create a credential of type Bearer Auth, and paste your API Key.
Then you adjust the configuration of your MCP client in n8n, switching the authentication to Bearer Auth, and selecting the credential you just created:
Run your MCP server using something like:
```
uv run src/mcp_server_box.py --transport sse --box-auth oauth
```
You get:
```
INFO:middleware:Setting up auth middleware wrapper for transport: sse
INFO:middleware:Wrapped sse_app method
Starting Box Community MCP SSE Server on 0.0.0.0:8001
INFO:middleware:wrapped_sse_app called with mount_path=None
INFO:middleware:Adding middleware to app: 4394996736
INFO:middleware:Added OAuth discovery endpoint
INFO:middleware:Middleware added. App middleware count: 1
INFO:     Started server process [24150]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
```
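You can quickly check that the token is being enforced with curl. Assuming the defaults above, and that the SSE endpoint is mounted at `/sse` (a common default for SSE transports, but an assumption here), a request without the token should be rejected, while one with the correct bearer token should connect:

```sh
# Without the token: expect an authentication error (e.g. HTTP 401)
curl -i http://localhost:8001/sse

# With the token from your .env file: expect the SSE stream to open
curl -i -H "Authorization: Bearer your-generated-token-here" http://localhost:8001/sse
```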
Now you can interact with n8n, for example:
Hope this helps clarify the current authentication setup.
To your other point about the Docker container: you should be able to expose whatever host port you desire and map it to the port inside the container. From there it should be transparent to the MCP client.
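For example, if the server listens on port 8001 inside the container, you could publish it on host port 9001 like this (the image name is hypothetical):

```sh
# Map host port 9001 to container port 8001; the MCP client then connects to host:9001
docker run -p 9001:8001 --env-file .env mcp-server-box
```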