Pedro Santos

MCP Client with LangChain4j: Connecting an Agent to Multiple Services

In the previous post, I turned each microservice into an MCP server. Now let's connect an AI agent to all of them. The agent will have access to 12+ tools across 4 services, and the LLM will decide which ones to call at runtime.

The Client Configuration

LangChain4j provides McpToolProvider, which connects to one or more MCP servers and exposes their tools to the agent. Here's my config:

import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;

import lombok.RequiredArgsConstructor;

@Configuration
@RequiredArgsConstructor
public class McpClientConfig {

    @Value("${mcp.order-service-url}")
    private String orderServiceUrl;
    @Value("${mcp.payment-service-url}")
    private String paymentServiceUrl;
    @Value("${mcp.inventory-service-url}")
    private String inventoryServiceUrl;
    @Value("${mcp.product-validation-url}")
    private String productValidationUrl;

    @Bean
    public McpToolProvider mcpToolProvider() {
        return McpToolProvider.builder()
            .mcpClients(List.of(
                buildClient(orderServiceUrl),
                buildClient(paymentServiceUrl),
                buildClient(inventoryServiceUrl),
                buildClient(productValidationUrl)
            ))
            .build();
    }

    private McpClient buildClient(String sseUrl) {
        return new DefaultMcpClient.Builder()
            .transport(new HttpMcpTransport.Builder()
                .sseUrl(sseUrl)
                .logResponses(true)
                .logRequests(true)
                .build())
            .build();
    }
}

The URLs come from application.yml:

mcp:
  order-service-url:      ${ORDER_MCP_URL:http://localhost:3000/sse}
  payment-service-url:    ${PAYMENT_MCP_URL:http://localhost:8091/sse}
  inventory-service-url:  ${INVENTORY_MCP_URL:http://localhost:8092/sse}
  product-validation-url: ${PRODUCT_VALIDATION_MCP_URL:http://localhost:8090/sse}

Environment variables for production, localhost defaults for development. Standard Spring Boot pattern.
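The `${ENV_VAR:default}` placeholders are resolved by Spring at startup. The same behavior can be mimicked in a few lines of plain Java — an illustrative helper, not Spring's actual resolver:

```java
public class PlaceholderResolution {
    // Mirrors Spring's ${ENV_VAR:default} behavior: take the environment
    // variable if set and non-blank, otherwise fall back to the default.
    static String resolve(String envVar, String fallback) {
        String value = System.getenv(envVar);
        return (value == null || value.isBlank()) ? fallback : value;
    }

    public static void main(String[] args) {
        // With ORDER_MCP_URL unset, the localhost default wins.
        System.out.println(resolve("ORDER_MCP_URL", "http://localhost:3000/sse"));
    }
}
```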

Building an Agent with MCP Tools

Once the McpToolProvider bean exists, wiring it into an agent is one line:

DataAnalystAgent agent = AiServices.builder(DataAnalystAgent.class)
    .chatModel(primaryChatModel)
    .toolProvider(mcpToolProvider)      // all 12+ tools from 4 services
    .maxSequentialToolsInvocations(5)   // safety limit
    .build();

The toolProvider replaces .tools(...). Instead of passing specific tool instances, you pass a provider that dynamically resolves tools from the MCP servers. The agent sees all tools from all connected servers.
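Under the hood, AiServices generates the implementation of the agent interface at runtime. A stdlib-only sketch of that idea using a dynamic proxy — the handler here just echoes, where the real one would call the chat model and route tool calls through the provider (all names illustrative):

```java
import java.lang.reflect.Proxy;

public class ProxySketch {
    // The agent interface; in LangChain4j, AiServices generates the implementation.
    interface DataAnalystAgent {
        String analyze(String question);
    }

    // Hand-rolled stand-in for what AiServices does: a dynamic proxy whose
    // invocation handler would normally call the LLM and dispatch any
    // functionCalls it returns. Here it just echoes the question.
    static DataAnalystAgent newEchoAgent() {
        return (DataAnalystAgent) Proxy.newProxyInstance(
            DataAnalystAgent.class.getClassLoader(),
            new Class<?>[] {DataAnalystAgent.class},
            (proxy, method, args) -> "analyzed: " + args[0]);
    }

    public static void main(String[] args) {
        System.out.println(newEchoAgent().analyze("stock for COMIC_BOOKS?"));
    }
}
```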

maxSequentialToolsInvocations caps how many tool calls the agent can make in a single turn. Without this, a confused LLM could loop forever calling tools. I set it to 5 for the DataAnalystAgent. The OperationsAgent uses 3 because it only needs RAG context, no MCP calls.

What the LLM Sees

When the agent starts, LangChain4j calls tools/list on each MCP server. It collects all tool schemas and sends them to the LLM as functionDeclaration objects. The LLM sees something like:

Available tools:
- getOrderById(orderId: string) - Returns order details
- listRecentEvents(limit: integer) - Returns recent saga events
- getPaymentStatus(transactionId: string) - Returns payment status
- getFraudRiskScore(totalAmount: number, clientType: string, hourOfDay: integer) - Calculates fraud risk
- getStockByProduct(productCode: string) - Returns available stock
- getLowStockAlert(threshold: integer) - Returns low-stock products
... (12 total)

The LLM reads the descriptions and decides which tool to call based on the user's question. Ask "what's the stock for COMIC_BOOKS?" and the LLM picks getStockByProduct. Ask "list recent failed sagas" and it picks listRecentEvents.
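Each entry in that list is backed by a JSON Schema collected from the server's tools/list response. Roughly this shape, with values taken from the examples above:

```json
{
  "tools": [
    {
      "name": "getStockByProduct",
      "description": "Returns available stock",
      "inputSchema": {
        "type": "object",
        "properties": {
          "productCode": { "type": "string" }
        },
        "required": ["productCode"]
      }
    }
  ]
}
```

The description field does the heavy lifting: it's the only signal the LLM has for choosing between tools.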

The Agent Loop

Here's what happens when a user asks a question:

User: "Is there enough stock for COMIC_BOOKS?"
  ↓
LLM reads the question + all 12 tool descriptions
  ↓
LLM generates: functionCall(getStockByProduct, {productCode: "COMIC_BOOKS"})
  ↓
LangChain4j intercepts the functionCall
  ↓
McpToolProvider routes it to inventory-service's MCP server
  ↓
HTTP POST to http://localhost:8092/mcp/message → tools/call
  ↓
inventory-service runs inventoryService.findByProductCode("COMIC_BOOKS")
  ↓
Returns: "available=600"
  ↓
LangChain4j sends the result back to the LLM as functionResponse
  ↓
LLM generates: "Yes, COMIC_BOOKS has 600 units available."

The agent doesn't know which service owns which tool. It doesn't know the URLs. It just calls tools by name and the McpToolProvider handles the routing.
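That routing step boils down to name-based dispatch. A minimal stdlib-only sketch, where the handlers stand in for real MCP clients and the return values are illustrative:

```java
import java.util.Map;
import java.util.function.Function;

public class ToolRouting {
    // Stand-ins for MCP clients: each handler pretends to be the service
    // that registered the tool.
    static final Map<String, Function<String, String>> TOOLS_BY_NAME = Map.of(
        "getStockByProduct", productCode -> "available=600",
        "getPaymentStatus", transactionId -> "status=SUCCESS");

    // Name-based dispatch: the caller never knows which service owns the tool.
    static String dispatch(String toolName, String argument) {
        return TOOLS_BY_NAME.get(toolName).apply(argument);
    }

    public static void main(String[] args) {
        // The LLM's functionCall carries only the tool name and its arguments.
        System.out.println(dispatch("getStockByProduct", "COMIC_BOOKS"));
    }
}
```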

Multi-Tool Chains

The interesting cases involve multiple tools. When you ask "list the 5 most recent failed sagas and assess their fraud risk," the agent needs to:

  1. Call listRecentEvents(15) on order-service
  2. Filter for FAIL status
  3. For each failed saga, call getOrderById() on order-service
  4. For each order, call getFraudRiskScore() on payment-service

That's 11 tool calls (1 + 5 + 5) across 2 services in a single question. The LLM chains them automatically; each tool call returns data that informs the next one.

This only works because I set maxSequentialToolsInvocations(5) high enough for the workflow; set it too low and the chain gets cut short mid-analysis. For simpler agents that only need one or two lookups, I keep it at 3.

Virtual Threads Matter

Each MCP tool call is an HTTP request, and each request blocks a thread while it waits. With platform threads that blocking is expensive; with virtual threads it's cheap, so LangChain4j can run independent tool calls in parallel instead of paying the full latency of each one in sequence.

spring:
  threads:
    virtual:
      enabled: true

One line in application.yml. In my tests, a 5-tool chain dropped from ~8 seconds to ~3 seconds. The calls that don't depend on each other's results run in parallel.
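The effect is easy to demonstrate with plain JDK 21 code. Two independent "tool calls" (stand-ins for blocking HTTP requests, with simulated latency) are submitted together on a virtual-thread executor, so they overlap instead of running back to back:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCalls {
    // Each Callable stands in for one blocking MCP tool call.
    static List<String> runAll() {
        List<Callable<String>> calls = List.of(
            () -> { Thread.sleep(100); return "stock=600"; },
            () -> { Thread.sleep(100); return "risk=LOW"; });
        // One virtual thread per task: blocking in Thread.sleep (or an HTTP
        // client) parks the virtual thread cheaply, so both calls overlap.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<String> results = new ArrayList<>();
            for (Future<String> f : executor.invokeAll(calls)) {
                results.add(f.get());   // invokeAll preserves submission order
            }
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        runAll().forEach(System.out::println);
    }
}
```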

Error Handling

MCP servers can fail. Network timeouts, service restarts, tool exceptions. The McpToolProvider handles most of this transparently. If a tool call fails, the result sent back to the LLM is an error message. The LLM usually adapts by trying a different tool or reporting that the data is unavailable.

For critical failures (MCP server completely down), the agent fails when trying to initialize the tool list. I handle this at the service level:

public String runAgent(String userQuestion) {
    try {
        DataAnalystAgent agent = createAgent();
        return agent.analyze(userQuestion);
    } catch (Exception e) {
        e.printStackTrace();
        return "Agent failed: " + e.getMessage();
    }
}

Not elegant, but functional. The agent never crashes the application. The worst case is a failed query with an error message.

@Tool vs McpToolProvider: When I Use Each

In my project, MCP handles everything that crosses service boundaries. But I still use @Tool in one place: the SagaComposerAgent doesn't need MCP tools. It only needs the DataAnalystAgent as a sub-tool (agent-calling-agent). For that, I register the sub-agent as a local @Tool:

var sagaComposerAgent = AiServices.builder(SagaComposerAgent.class)
    .chatModel(primaryChatModel)
    .maxSequentialToolsInvocations(3)   // no MCP tools needed
    .build();

Rule of thumb: same JVM, use @Tool. Different service, use MCP.
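The agent-calling-agent wiring reduces to ordinary delegation. A stdlib-only sketch — the interface and class are stand-ins for the real AiServices proxies, and the lambda in main stubs out the sub-agent:

```java
// Sub-agent interface; in the real project this is an AiServices proxy.
interface DataAnalystAgent {
    String analyze(String question);
}

public class SagaComposerAgent {
    private final DataAnalystAgent analyst;

    public SagaComposerAgent(DataAnalystAgent analyst) {
        this.analyst = analyst;
    }

    // In the real setup this method would carry LangChain4j's @Tool
    // annotation, so the composer's LLM could invoke the sub-agent by name.
    public String composeReport(String question) {
        return "Report based on: " + analyst.analyze(question);
    }

    public static void main(String[] args) {
        // Stub sub-agent stands in for the real DataAnalystAgent proxy.
        var composer = new SagaComposerAgent(q -> "12 failed sagas found");
        System.out.println(composer.composeReport("list failed sagas"));
    }
}
```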

What's Next

The client and servers are connected. But how do you debug when the agent calls the wrong tool or gets unexpected results? In the next post, I'll cover testing and debugging MCP: manual curl testing, log analysis, and the mistakes I made with tool descriptions that caused silent failures.

The repo: github.com/pedrop3/saga-orchestration
