NGUYEN
QUANG DUNG
Specializing in Agentic AI, LLM/SLM Fine-tuning, and Physics-Informed ML. Building intelligent systems for complex industrial and linguistic challenges.

About
Results-oriented AI/ML Engineer specializing in Agentic AI, Large/Small Language Models, and Physics-Informed Machine Learning. Proven expertise in architecting scalable multi-agent systems and deploying Small Language Models for domain-specific applications.
Proficient in integrating scientific constraints into neural networks through Physics-Informed ML, enhancing model interpretability and performance in complex production environments. Published researcher with work accepted at SOICT 2025.
Professional Experience
Koidra AI
AI Engineer · Remote
- Engineered robust greenhouse control systems by integrating PID controllers with gradient-based anomaly detection, improving operational stability and reducing manual interventions.
- Refactored the core control module using the State Machine design pattern, improving system reliability and reducing maintenance costs.
- Developed a Physics-Informed ML model of transpiration dynamics, improving predictive accuracy over the baseline by 90% while enforcing strict physical consistency.
A-Star Group
AI Engineer (Promoted from Intern) · Hanoi, Vietnam
- Startup Incubation & R&D: Spearheaded technical R&D for Web3 AI initiatives, translating abstract business requirements into functional MVPs and scalable architectures for flagship portfolio products.
- Architected a Web3 multi-agent platform on the Model Context Protocol (MCP), automating yield-optimization strategies and serving as the core engine for a flagship product.
- Developed a crypto-wallet classification microservice with FastAPI, managing the full ML lifecycle from data curation to production inference.
- Implemented domain-specific RAG and NER pipelines tailored for blockchain data, reducing LLM hallucination rates and significantly improving autonomous agent reliability.
NLP Lab — BKAI (HUST)
Undergraduate Research Assistant · Hanoi, Vietnam
- Optimized pre-training and fine-tuning pipelines for Small Language Models, significantly enhancing Information Retrieval capabilities for low-resource domains.
- Standardized NLP workflows for NER and Text Classification, achieving substantial improvements in data processing efficiency and evaluation consistency.
Research & Publications
ViLexCPO: A Multi-Task and Preference-Aligned Framework for Legal Question Answering
Quang-Dung Nguyen (First Author), Duc-Dung Nguyen, Huu-Tri-Dung Vo, Thanh-Huong Le · Hanoi University of Science and Technology
A two-stage training framework for Vietnamese legal QA combining Multi-task Supervised Fine-Tuning across three complementary tasks with Contrastive Preference Optimization (CPO) to align model outputs with high-quality legal reasoning. Built on Qwen3-1.7B, demonstrating that well-trained compact models can achieve competitive performance in complex legal reasoning.
Training Pipeline
Key Results — VLSP LegalSLM Benchmark
| Model | ACC (MCQ) | ACC (UC) | ACC (avg) | % Imp. |
|---|---|---|---|---|
| Qwen3-1.7B Raw | 76.02% | 50.00% | 63.01% | — |
| Qwen3-1.7B Pretrain | 87.00% | 63.33% | 75.16% | +19.3% |
| SFT-2T + CPO | 86.30% | 88.00% | 87.15% | +38.1% |
| SFT-3T + CPO (Ours) | 87.00% | 96.01% | 91.49% | +44.4% |
Results averaged over 3 runs with seeds {42, 84, 126}.
Technical Expertise
Languages
AI/ML Domains
Frameworks & Libraries
MLOps & Engineering
Honors & Awards
Silver Medal
National Physics Olympiad (VPhO), 2025
Top 6 Globally
Trustworthy NeuroSymbolic & XAI Workshop @ IJCNN 2025
Top 5 Finalist
DataFlow 2025 — National Data Analysis Hackathon
Education
Hanoi University of Science and Technology (HUST)
B.Sc. in Computer Science
2023 — Present