
Customize LLMs to Fit Your Needs: Join Our Hands-On Bootcamp in Bangkok

Tags: Event, Bootcamp, LLM, Finetuning, Float16, NVIDIA, Typhoon
Oravee (Orn) Smithiphol
May 28, 2025

Table of Contents

  • Why Open-Source LLMs Like Typhoon Matter
  • Why Finetuning
  • From Finetuning to Deployment: Your End-to-End AI Toolkit
  • 🎓 What You’ll Get
  • 📍 Event Details
  • Who Should Join?
  • 📝 How to Apply

Large Language Models (LLMs) are transforming the way we work, build, and create. You may have already used LLMs to power your applications. But to truly unlock the power of AI, you need ownership and control — and that’s where open-source LLMs like Typhoon come in.

Why Open-Source LLMs Like Typhoon Matter

Unlike closed APIs, open-source models give you the freedom to customize, finetune, and deploy your own versions — tailored to your use case, industry, or product. You’re not stuck hoping that a generic prompt will give you the right output. You shape the model itself.

Why Finetuning

Pre-trained LLMs are trained on massive, general-purpose datasets. While they’re powerful, they often lack the domain specificity and tone required for real-world use cases — whether that’s legal, medical, financial, or startup-specific applications.

Finetuning Typhoon means you can:

  • Improve relevance and reduce hallucinations

  • Align tone, format, or domain-specific knowledge

  • Optimize for performance and cost

It’s a next-level skill — and one that makes you way more valuable as a developer, ML engineer, or AI product builder.
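
To give a feel for what this looks like in practice, here is a minimal LoRA finetuning sketch using the Hugging Face transformers/peft/trl stack. It is illustrative only, not the bootcamp curriculum: the model id, data file, and hyperparameters are placeholders, and exact argument names vary slightly between library versions.

```python
# Minimal LoRA finetuning sketch (illustrative only, not the bootcamp curriculum).
# Assumes the Hugging Face datasets/peft/trl stack; swap in your own model id,
# data, and hyperparameters.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: a JSONL file where each record has a "text" field
# holding one training example from your domain (legal, medical, finance, ...).
dataset = load_dataset("json", data_files="my_domain_data.jsonl", split="train")

# LoRA trains small adapter matrices instead of the full model,
# which keeps GPU memory and cost manageable.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical for Llama/Mistral-style models
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="scb10x/typhoon-7b",  # placeholder: pick the Typhoon checkpoint you need
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="typhoon-lora",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
trainer.save_model("typhoon-lora")  # saves only the small LoRA adapter weights
```

Because only the adapter weights are trained, a run like this fits on a single GPU and produces a checkpoint small enough to version and share alongside your application code.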

From Finetuning to Deployment: Your End-to-End AI Toolkit

Getting your model to perform well in a notebook is great — but deploying it into production is what actually creates value. Knowing how to efficiently serve your finetuned model using GPU infrastructure is a must-have skill for modern AI practitioners.

In this bootcamp, you’ll learn how to deploy your models using Float16’s ServerlessGPU platform powered by NVIDIA, giving you hands-on experience with production-grade tooling and infrastructure — with free GPU credits provided.
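
Float16's ServerlessGPU workflow is exactly what the bootcamp covers hands-on. Purely as a generic illustration of what "serving a finetuned model" means, the sketch below uses vLLM's OpenAI-compatible server and a placeholder checkpoint path; this is an assumption for illustration, not the platform used at the event.

```python
# Generic serving sketch (illustrative only; the bootcamp uses Float16's
# ServerlessGPU platform, which has its own deployment workflow).
# Assumes a merged finetuned checkpoint served with vLLM's
# OpenAI-compatible server, e.g.:
#
#   vllm serve ./my-finetuned-typhoon --port 8000
#
# The client below then talks to it like any hosted LLM API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="./my-finetuned-typhoon",  # placeholder: the path or name served above
    messages=[{"role": "user", "content": "Summarize this loan agreement in plain language."}],
)
print(response.choices[0].message.content)
```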

🎓 What You’ll Get

  • Hands-on learning with real tools and real GPU credits

  • A digital certificate upon completion

  • Access to $30 in GPU credits

  • New connections in Thailand’s growing AI ecosystem

  • 100% free, thanks to support from SCB 10X and Float16

📍 Event Details

  • Location: DistrictX, One FYI Center, Bangkok

  • Date: July 4, 2025

  • Language: Thai

  • Cost: Free (Limited spots available)

Who Should Join?

  • Machine learning engineers and data scientists

  • AI researchers and university students

  • Developers eager to deploy custom LLMs

  • Startup founders exploring AI products

📝 How to Apply

Spots are limited, and applications will be reviewed to ensure participants get the most out of the experience.

Apply now at: https://lu.ma/i2nst25l
Applications close on June 16, 2025.
