Gabriel Chua

🔮 Data Scientist / LLM Whisperer

At GovTech, I research and build tooling for Responsible AI, specialising in safety testing and guardrails. I also develop LLM solutions to fight online scams. Outside work, I occasionally organise AI community events.

Research

🌏

RabakBench

Benchmarking LLM safety robustness in Singapore's multilingual setting (Singlish, Malay, Mandarin, Tamil)

Under review
Gabriel Chua, Leanne Tan, Ziyu Ge, Roy Ka-Wei Lee
๐ŸŽ

MinorBench

Benchmarking AI safety for children in educational settings

ICLR 2025 Workshop - AI for Children
Shaun Khoo, Gabriel Chua, Rachel Shong
🛡️

Off-Topic Guardrail

Lightweight guardrail to detect off-topic LLM queries

ICLR 2025 Workshop - Building Trust in LLMs and LLM Applications
Gabriel Chua, Chan Shing Yee, Shaun Khoo

Tinkering

🧧 Gongxi Guru

Practice your Chinese New Year greetings, powered by OpenAI's Realtime API

๐ŸŽ™๏ธ Open Notebook LM

Convert any PDF into a podcast episode, using open-source AI models

📰 Daily AI Papers

Paper summaries auto-generated from Hugging Face's Daily Papers using Gemini and GitHub Actions

๐Ÿ” RAGxplorer

Visualise your RAG documents

Selected Writings

OpenAI Agents SDK: First Thoughts

Early observations and experiences building with OpenAI's newly released Agents SDK, including insights on agent handoffs, guardrails, and production considerations.

Mar 2025

Eliciting Toxic Singlish from r1

We discovered that, with just standard prompt-engineering best practices, r1 could generate highly toxic and realistic Singlish content.

Jan 2025

From Risk to Resilience: Adding LLM Guardrails From Day 1

7+1 technical tips for getting started with LLM guardrails

Dec 2024

Building Responsible AI - Why Guardrails Matter

In this post, we discuss why LLM guardrails are essential and how we think about designing and implementing them at GovTech.

Nov 2024

Community