SCB 10X’s Typhoon Thai-Optimized LLMs Now Available for Enterprise-Ready AI Deployment with NVIDIA NIM

Oravee (Orn) Smithiphol
June 11, 2025

Table of Contents

● Why This Matters
● Meet Typhoon 2 on NIM
● Built for Thais and Enterprise AI Workflows
● Try It Today

Today, we’re proud to share that we are bringing the Typhoon 2 family of LLMs to NVIDIA NIM — unlocking enterprise-ready, Thai-optimized AI for rapid deployment across public cloud, on-premise, and hybrid environments.

Unveiled at NVIDIA GTC Paris, the NVIDIA universal LLM NIM microservice offers what we believe is a transformative leap in enterprise AI. Designed to streamline secure, scalable deployment of a broad range of large language models across industries, it lets enterprises integrate their choice of open and specialized LLMs faster than ever.

Why This Matters

Enterprises across Southeast Asia face growing demand for intelligent systems that speak their customers’ language. The Typhoon 2 model family offers best-in-class Thai comprehension, code-switching fluency, and robust instruction-following in both Thai and English. With NIM, these capabilities are now just minutes away from production.

Meet Typhoon 2 on NIM

The Typhoon 2 family includes pre-trained and instruct-tuned models ranging from 1B to 70B parameters. These models are optimized for high performance in bilingual (English and Thai), multi-domain enterprise tasks — from document processing to conversational AI and AI agent orchestration.

Now available as NVIDIA NIM microservices, Typhoon models benefit from:

● Fast, standards-based deployment using Docker and REST APIs (see the sketch after this list)

● Enterprise-grade reliability with NVIDIA AI Enterprise support

● Cross-environment flexibility across data centers, public clouds, and hybrid stacks

● Support for agentic workflows through integration with NVIDIA NeMo microservices for model customization, evaluation, guardrails, and RAG
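As a rough illustration of the Docker-and-REST workflow above, here is a minimal sketch that calls a locally running NIM through its OpenAI-compatible chat completions endpoint. The container image, port, and model identifier are illustrative placeholders rather than the exact values for the Typhoon NIM; refer to the NIM documentation at build.nvidia.com for the actual names.

```python
# A minimal sketch, not an official deployment guide. It assumes a Typhoon NIM
# container is already running locally, started with something like:
#   docker run --gpus all -p 8000:8000 -e NGC_API_KEY=$NGC_API_KEY <typhoon-nim-image>
# The URL and model id below are placeholders.

import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # NIM exposes an OpenAI-compatible REST API
MODEL_ID = "scb10x/typhoon-2-8b-instruct"              # placeholder; check the running NIM for the real id

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "system", "content": "You are a helpful bilingual Thai/English assistant."},
        {"role": "user", "content": "สรุปประโยชน์ของ NVIDIA NIM ให้หน่อย"},  # "Summarize the benefits of NVIDIA NIM"
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API convention, existing SDKs and agent frameworks that speak that protocol can usually be pointed at the NIM base URL with no other changes.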

Built for Thais and Enterprise AI Workflows

Typhoon’s open-source models are among the top performers on key benchmarks such as ThaiExam, O-NET, TGAT, M3Exam, MT-Bench, and Function Calling Accuracy, affirming their position as the go-to models for Thai-language enterprises and bilingual applications.

With up to 128K token context and high instruction-following precision, Typhoon is ideal for structured tasks, retrieval workflows, and AI agents that power real-world automation.
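To make the structured-task and retrieval point concrete, the sketch below places retrieved Thai passages in the prompt and asks for an answer in a fixed JSON shape. It reuses the same placeholder endpoint and model id as the example above; in a real pipeline the passages would come from your retriever and the reply would be parsed and validated.

```python
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"   # same placeholder endpoint as above
MODEL_ID = "scb10x/typhoon-2-8b-instruct"                # placeholder model id

# Passages a retriever would normally return; hard-coded here for illustration.
retrieved_passages = [
    "Typhoon 2 รองรับบริบทยาวสูงสุด 128K โทเคน",
    "Typhoon 2 มีโมเดลขนาดตั้งแต่ 1B ถึง 70B พารามิเตอร์",
]

prompt = (
    "Answer using only the context below, and reply as JSON with keys "
    '"answer" and "source_passage".\n\n'
    "Context:\n" + "\n".join(f"- {p}" for p in retrieved_passages) + "\n\n"
    "Question: What is the maximum context length of Typhoon 2?"
)

payload = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0.0,   # deterministic output helps when you expect a fixed JSON shape
    "max_tokens": 200,
}

reply = requests.post(NIM_URL, json=payload, timeout=60).json()
# Print the raw JSON string from the model; production code would parse and validate it.
print(reply["choices"][0]["message"]["content"])
```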

Try It Today

Experience the power of enterprise-ready AI — customized for Thailand and deployable in under five minutes.

● Learn more about our models at opentyphoon.ai

● Check out NVIDIA’s blog: Simplify LLM Deployment and AI Inference with Unified NVIDIA NIM Workflow

● Explore the universal LLM NIM microservice at build.nvidia.com

Note: The Typhoon model currently tested and validated with NIM is Typhoon-2-8b-instruct. If you're interested in deploying a different Typhoon model with NIM, don’t hesitate to reach out. You can explore the full model lineup on our website.


© 2025 SCB 10X Co., Ltd. All rights reserved.