TL;DR
This hands-on tutorial shows you how to enhance Typhoon using the Model Context Protocol (MCP). You'll:
- Learn what MCP is and why it matters
- Build a weather-aware trip consultant powered by Typhoon and real-time data
- Use tools like LangChain, LangGraph, and Typhoon’s own MCP server
- Run everything locally or in a Colab notebook
- Explore built-in prompt templates for tasks like brainstorming, email drafting, and more
- Try it all instantly via the Typhoon Playground—no setup required

If you're looking to build smarter LLM apps or automate real-world workflows with Typhoon, this guide is for you.
What MCP Is And Why It Matters
Model Context Protocol (MCP) is a new open standard introduced by Anthropic that allows large language models (LLMs) to seamlessly interact with tools, prompts, and resources through a unified API. With Typhoon 2’s enhanced long-context and tool-calling features, all models in the Typhoon family—including the latest Typhoon 2.1—can now seamlessly connect to any MCP-compliant server.
Connecting to an MCP server offers a range of benefits. It allows models to access prebuilt prompt templates, dynamically retrieve relevant data from databases or documents, and invoke external tools like APIs and calculators in real time. This integration significantly enhances task performance, improves accuracy, and enables richer, more interactive experiences—all without requiring users to handcraft complex prompts or manage tool integrations manually.
In this article, we’ll show you how to unlock Typhoon’s full potential using MCP. You’ll learn how to build a weather-aware trip consultant that demonstrates Typhoon’s ability to interact with real-world tools. We’ll also introduce our MCP server, which comes preloaded with prompt templates for common use cases—helping you get started with Typhoon quickly and effectively. Let’s dive in!
Typhoon Trip Consultant
Planning a trip can be complex—and weather is a key consideration. Since LLMs don’t natively access real-time data, generating reliable trip plans directly from the model has limitations. Fortunately, MCP enables Typhoon to connect to external services like weather APIs.
In this tutorial, we'll build a lightweight application that uses Typhoon to generate personalized itineraries, factoring in current weather conditions. This setup enables the model to query real-time weather forecasts and tailor trip recommendations—like suggesting indoor activities during rain or packing tips for sunny getaways.
We’ll use `typhoon-v2.1-12b-instruct`, a 12‑billion‑parameter version of Typhoon 2.1, accessible via the free Typhoon API.
Typhoon 2.1 is our latest release, built on Gemma 3. It's designed to outperform larger models while remaining cost-effective. It features improved Thai language alignment, a controllable “thinking mode” for long-form reasoning, and enhanced code-switching capabilities for Thai–English use cases—making it ideal for real-world applications.
Here is the list of tools that we will be using in this tutorial:
- `uv` – A lightweight Python package manager that simplifies dependency management and project setup. Think of it as a sleek alternative to `pip` and `virtualenv`.
- LangChain – A powerful framework for building applications with large language models. It helps you connect models to tools, prompts, memory, and more.
- LangGraph – An extension of LangChain designed for building agent-style workflows using a graph-based architecture. It helps manage tool calls and reasoning steps.
- `langchain-mcp-adapters` – A utility that makes it easy to connect LangChain agents to MCP servers using standard protocols.
- `python-dotenv` – A small tool for loading environment variables from a `.env` file, keeping your API keys and configs secure and manageable.
Step-by-Step Tutorial
You can either follow along with this tutorial using our Colab notebook or run everything locally. If you prefer to run it locally, follow the setup guide below.
Environment Setup
First, let’s set up the project using `uv`, a Python package manager.
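A minimal setup might look like the following (the project name here is just a placeholder):

```bash
# create and enter a new project (pick any name you like)
uv init typhoon-trip-consultant
cd typhoon-trip-consultant
```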
Next, install the necessary dependencies:
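```bash
# the packages listed above, plus langchain-openai, which provides
# the ChatOpenAI class we'll use to talk to the Typhoon API
uv add langchain langchain-openai langgraph langchain-mcp-adapters python-dotenv
```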
We'll use LangChain to interact with Typhoon 2.1. LangChain provides a high-level interface for building applications that use LLMs. Notably, `langchain-mcp-adapters` simplifies connecting to MCP servers, and `langgraph` allows us to build an agent that interacts with those tools.
Next, create a `.env` file in your project's root directory and add:
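```
# .env (the variable name is an assumption; use the same name your code reads)
TYPHOON_API_KEY=<your-api-key>
```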
You can obtain the Typhoon API key from Typhoon Playground. Once you have the API key, add it to the `.env` file.
Let’s create some boilerplate code.
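A minimal version might look like this; we'll save it as main.py:

```python
import asyncio

from dotenv import load_dotenv

# Load environment variables (e.g. TYPHOON_API_KEY) from the .env file
load_dotenv()


async def main():
    pass  # we'll fill this in step by step


if __name__ == "__main__":
    asyncio.run(main())
```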
This code loads environment variables from the `.env` file we just created. Note that we make our `main()` function `async` to prepare for streaming LLM interactions.
System and Assistant Prompts
Next, let’s prepare some system prompts to help Typhoon become a travel consultant!
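The exact wording is up to you; the prompt below is an illustrative sketch rather than the definitive text:

```python
# An illustrative system prompt; adjust the wording to taste
SYSTEM_PROMPT = (
    "You are a friendly travel consultant. Use the get_weather tool to check "
    "the forecast before making recommendations, and tailor your trip "
    "suggestions (activities, packing tips) to the expected weather."
)
```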
Main Application Logic
Now, let’s focus on the main function. First, we will connect to our MCP server. Our remote MCP server is available at https://typhoon-mcp-server-311305667538.asia-southeast1.run.app, and it communicates over the SSE transport (the SSE endpoint lives at /sse).
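Here's a connection sketch using `MultiServerMCPClient` from `langchain-mcp-adapters` (the constructor shape below follows recent versions of the library):

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# inside main(): register the Typhoon MCP server under the name
# "weather", connecting over the SSE transport
client = MultiServerMCPClient(
    {
        "weather": {
            "url": "https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse",
            "transport": "sse",
        }
    }
)
```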
We named the connection to our MCP server `"weather"` and specified the `"transport"` as `"sse"`. Now that we have the `client`, let’s list the tools available on our server. We can fetch them using the client's `get_tools()` method and print the result:
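```python
# inside main(): get_tools() is awaitable in recent versions
# of langchain-mcp-adapters
tools = await client.get_tools()
print(tools)
```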
Run your program with:
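```bash
# assuming your code is saved as main.py
uv run main.py
```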
You should see output listing the available tools. The exact formatting depends on your library versions, but it will look something like this:
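```text
[StructuredTool(name='get_weather',
                description='Get weather forecast information for a specific location on a given date.',
                args_schema={'location': ..., 'target_date': ...}, ...)]
```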
This output shows that we have one available tool, named `get_weather()`, which accepts two parameters: `location` and `target_date`. The description indicates that we can retrieve weather forecast information for a specific location on a given date. Great! Now we know we can connect to our MCP server.
Next, let’s ensure we can connect to our LLM.
Connecting Typhoon LLM with MCP Server
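A sketch of the model setup (the `TYPHOON_API_KEY` variable name follows the `.env` file we created earlier):

```python
import os

from langchain_openai import ChatOpenAI

# inside main(): point the OpenAI-compatible client at the Typhoon API
llm = ChatOpenAI(
    base_url="https://api.opentyphoon.ai/v1",
    model="typhoon-v2.1-12b-instruct",
    api_key=os.environ["TYPHOON_API_KEY"],  # key name is our own convention
)
```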
We create a `ChatOpenAI` instance and specify `base_url` as `"https://api.opentyphoon.ai/v1"` so we can interact with the Typhoon models available in the API. To use Typhoon 2.1 12B, we set the model name to `"typhoon-v2.1-12b-instruct"`. Let’s test the connection by asking a simple question:
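```python
# inside main(): a quick sanity check that we can reach the Typhoon API
response = await llm.ainvoke("Hello! Briefly introduce yourself.")
print(response.content)
```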
You should see a short self-introduction from the model; note that your actual response may vary. Nice! We can connect to the LLM. Now, let’s create an agent using both our defined LLM and the available tools. We’ll use a ReAct agent, which lets the model think, call tools, and observe the results to plan its responses. LangGraph’s `create_react_agent()` function handles the tool calling and parsing automatically:
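```python
from langgraph.prebuilt import create_react_agent

# inside main(): the ReAct agent decides when to call the MCP tools,
# feeds the results back to the model, and repeats until it can answer
agent = create_react_agent(llm, tools)
```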
Next, let’s build a simple chat loop so we can interact with our agent in the CLI.
Build a Simple Chat Loop
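Below is a minimal sketch of such a loop. Two assumptions worth flagging: `stream_mode="messages"` follows recent LangGraph versions, and the `"agent"` node-name filter relies on `create_react_agent()`'s default node names:

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

# inside main(): global chat history, seeded with our system prompt
chat_history = [SystemMessage(content=SYSTEM_PROMPT)]

print("Typhoon Trip Consultant (type 'exit' to quit)")
while True:
    user_input = input("\nYou: ").strip()
    if user_input.lower() in {"exit", "quit"}:
        break
    chat_history.append(HumanMessage(content=user_input))

    print("Typhoon: ", end="", flush=True)
    reply = ""
    # stream_mode="messages" yields (chunk, metadata) pairs as tokens arrive;
    # we keep only chunks emitted by the model node, skipping raw tool output
    async for chunk, metadata in agent.astream(
        {"messages": chat_history}, stream_mode="messages"
    ):
        if metadata.get("langgraph_node") == "agent" and chunk.content:
            print(chunk.content, end="", flush=True)
            reply += chunk.content
    print()
    chat_history.append(AIMessage(content=reply))
```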
Wow, that’s quite a bit of code! The main idea is to prepare a chat loop so users can interact with Typhoon. We maintain a global chat history for context and stream responses as soon as content is generated.
Full Source Code for This Tutorial
Here's the complete code.
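The sketch below assembles the snippets from this tutorial into one file; it reflects the assumptions noted above rather than a verbatim copy of the original code.

```python
# main.py: Typhoon trip consultant (a sketch assembling the steps above)
import asyncio
import os

from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

SYSTEM_PROMPT = (
    "You are a friendly travel consultant. Use the get_weather tool to check "
    "the forecast before making recommendations, and tailor your trip "
    "suggestions (activities, packing tips) to the expected weather."
)

MCP_SERVER_URL = "https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse"


async def main():
    # Connect to the Typhoon MCP server over SSE and discover its tools
    client = MultiServerMCPClient(
        {"weather": {"url": MCP_SERVER_URL, "transport": "sse"}}
    )
    tools = await client.get_tools()

    # Typhoon 2.1 12B via the OpenAI-compatible Typhoon API
    llm = ChatOpenAI(
        base_url="https://api.opentyphoon.ai/v1",
        model="typhoon-v2.1-12b-instruct",
        api_key=os.environ["TYPHOON_API_KEY"],
    )

    # ReAct agent: the model reasons, calls tools, and observes the results
    agent = create_react_agent(llm, tools)

    chat_history = [SystemMessage(content=SYSTEM_PROMPT)]
    print("Typhoon Trip Consultant (type 'exit' to quit)")
    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() in {"exit", "quit"}:
            break
        chat_history.append(HumanMessage(content=user_input))

        print("Typhoon: ", end="", flush=True)
        reply = ""
        async for chunk, metadata in agent.astream(
            {"messages": chat_history}, stream_mode="messages"
        ):
            if metadata.get("langgraph_node") == "agent" and chunk.content:
                print(chunk.content, end="", flush=True)
                reply += chunk.content
        print()
        chat_history.append(AIMessage(content=reply))


if __name__ == "__main__":
    asyncio.run(main())
```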
Let's Try It Out
Now that we have the agent set up, let’s run it and see it in action! Execute:
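```bash
uv run main.py
```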
You should see a greeting like:
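```text
Typhoon Trip Consultant (type 'exit' to quit)

You:
```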
Now, let’s ask Typhoon about the weather tomorrow in Bangkok with this prompt: "How's the weather tomorrow in Bangkok?"
Typhoon called the `get_weather` tool behind the scenes and responded with a summary of tomorrow's forecast. It seems like tomorrow evening will be slightly rainy and quite hot. I don’t want to get wet. Let’s ask Typhoon where I should go to avoid the rain with this prompt: "I don't want to get wet in the evening! Where should I go?"
This time, Typhoon suggested indoor activities so I could avoid the rain.
Great! It seems like I can stay dry tomorrow evening. That wraps up this tutorial. I can’t wait to see how you expand Typhoon’s capabilities with an MCP server to solve everyday problems!
Typhoon MCP Server
The MCP server used in the previous section is one we developed ourselves. However, the weather tool is not the only feature it offers!
Templates
We’ve prepared a list of prompt templates for common use cases, such as:
- Brainstorming — Ask Typhoon to generate ideas for solving any problem you might have, like arranging a room, choosing a new hairstyle, or finding recommended books on a subject.
- Email drafting — Draft professional emails in just a minute by providing the necessary details.
- Grammar correction — Get your writing proofread and polished.
- And so much more!
Instead of crafting these prompts manually, you can now select the use case you need and provide minimal input. The server will return an optimized prompt, ready for Typhoon or any other MCP‑compatible LLM. This helps you build faster, reduces manual prompt engineering, and scales across multiple tasks.
How to Connect
Connect to the same MCP server at:
https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse
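Once connected, you can discover the available templates programmatically. Here's a sketch that lists the server's prompts using `langchain-mcp-adapters` together with the standard MCP client API (the `session()` helper shown below follows recent versions of the adapter library):

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient


async def list_templates():
    client = MultiServerMCPClient(
        {
            "typhoon": {
                "url": "https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse",
                "transport": "sse",
            }
        }
    )
    # Open a raw MCP session so we can call list_prompts(),
    # which is part of the standard MCP client API
    async with client.session("typhoon") as session:
        result = await session.list_prompts()
        for prompt in result.prompts:
            print(f"{prompt.name}: {prompt.description}")


asyncio.run(list_templates())
```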
Bonus: Typhoon Playground Now Integrated With the Typhoon MCP Server
Exciting news! You can now try all of this with zero setup directly in the playground at https://t1.opentyphoon.ai. Just open the playground, pick a model, choose a use case, and start chatting! Check out the demo video.
Wrapping Up
With the Model Context Protocol and tool-calling, Typhoon models become dramatically more powerful. Whether you're building advanced applications or just beginning your LLM journey, MCP reduces complexity and expands what’s possible.
Try the Typhoon API and MCP Playground today, and share what you build—we’d love to see it!
Join our Discord server to show off your projects or get help from other developers!