Working in AI research and engineering has a funny way of humbling you. The deeper you go, the more you realize that most of the real learning does not happen in textbooks or lecture slides. It happens when a training run silently diverges after 12 hours, when a seemingly trivial hyperparameter choice destroys training stability, or when a single blog post from a stranger on the internet saves you days of debugging.
Over time, I started noticing a pattern: many of the most useful insights in machine learning are scattered across papers, GitHub issues, personal blogs, and half-documented experiments. Valuable knowledge exists everywhere, but it is rarely consolidated in a way that is easy to digest.
This blog is my attempt to contribute back to that ecosystem.
I work at the intersection of Agentic AI, large language model fine-tuning, and Physics-Informed Machine Learning. These areas represent three different but increasingly connected directions of modern AI systems: reasoning agents, adaptable language models, and machine learning systems grounded in physical laws. Throughout my work, I constantly find interesting ideas, practical lessons, and engineering patterns that are worth documenting.
Rather than letting these notes disappear inside private notebooks or internal documentation, I decided to share them publicly.
What This Blog Will Cover
The content here will revolve around a few core themes that I spend most of my research and engineering time on.
Agentic AI and Multi-Agent Systems
Agent-based systems are quickly becoming one of the most exciting paradigms in AI. Instead of treating models as static predictors, we increasingly design autonomous systems that plan, reason, and interact with tools or other agents.
I plan to explore topics such as:
- Architectural patterns for agentic systems
- Multi-agent coordination and communication
- Orchestration frameworks and tool integration
- Practical challenges when moving from demos to production
Many tutorials focus on simple toy examples, but real systems introduce subtle problems—state management, tool reliability, hallucinated plans, and evaluation. These are the areas I’m particularly interested in unpacking.
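To make those problems concrete, here is a minimal sketch of the kind of agent loop they show up in. This is pure Python with no framework; the names `run_agent` and `scripted_llm` are illustrative stand-ins, not any particular library's API.

```python
def run_agent(llm, tools, task, max_steps=5):
    """Minimal agent loop: the model proposes an action, we execute the
    matching tool, and feed the observation back until it answers."""
    history = [f"Task: {task}"]  # naive state management: a growing transcript
    for _ in range(max_steps):
        name, arg = llm("\n".join(history))  # e.g. ("lookup", "...") or ("final", text)
        if name == "final":
            return arg
        if name not in tools:  # tool reliability: reject hallucinated tool calls
            history.append(f"Error: unknown tool {name!r}")
            continue
        history.append(f"Observation: {tools[name](arg)}")
    return None  # step budget exhausted without a final answer

# Toy stand-ins for a model and a tool, just to exercise the loop
def scripted_llm(prompt):
    if "Observation" not in prompt:
        return ("lookup", "capital of France")
    return ("final", "Paris")

tools = {"lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}
```

Even this toy version surfaces the production questions: how much history to keep, what to do when the model names a tool that does not exist, and when to give up. Real systems layer planning, retries, and evaluation on top of exactly this skeleton.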
Fine-Tuning Large and Small Language Models
While foundation models receive most of the attention, fine-tuning remains one of the most powerful techniques for adapting models to specific domains.
I will share practical notes on:
- Dataset construction and curation strategies
- Instruction tuning and preference optimization
- Techniques such as DPO, CPO, and other alignment methods
- Efficient training approaches for smaller models
A lot of the discussion around LLMs focuses on scale. However, with the right training pipeline, small and mid-sized models can achieve impressive performance on specialized tasks.
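As a taste of the preference-optimization topics above, here is a minimal sketch of the DPO objective for a single (chosen, rejected) pair, written in plain Python. The helper name and its inputs are illustrative; in a real pipeline the log-probabilities come from the language model's token logits.

```python
import math

def dpo_loss(policy_logps, ref_logps, beta=0.1):
    """DPO loss for one (chosen, rejected) completion pair. Each argument
    is (logp_chosen, logp_rejected): summed token log-probabilities."""
    pi_w, pi_l = policy_logps    # under the model being trained
    ref_w, ref_l = ref_logps     # under the frozen reference model
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

The loss drops below log 2 once the policy prefers the chosen completion more strongly than the reference model does, and rises above it otherwise, which is the entire mechanism: no separate reward model, just relative log-probabilities scaled by `beta`.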
Physics-Informed Machine Learning
One of the most fascinating directions in modern machine learning is the integration of scientific knowledge directly into neural networks.
Physics-Informed Neural Networks (PINNs) offer a framework where models are trained not only on data but also on physical constraints derived from differential equations. This makes them particularly useful in domains like fluid dynamics, heat transfer, and scientific simulation.
Instead of purely fitting observations, the model learns solutions that remain consistent with known physical laws. In many scientific applications, this dramatically improves generalization and stability.
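Here is a minimal, dependency-free sketch of that combined loss, using a central finite difference in place of automatic differentiation. The function names and the toy ODE are illustrative; a real PINN would minimize this objective over network parameters with autodiff.

```python
import math

def physics_informed_loss(u, xs, data, residual, h=1e-4):
    """Sum a data-fit term and an ODE-residual term: the core idea
    behind physics-informed training."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    # central finite difference stands in for autodiff of the model
    phys_loss = sum(
        residual(u, x, (u(x + h) - u(x - h)) / (2 * h)) ** 2 for x in xs
    ) / len(xs)
    return data_loss + phys_loss

# Toy problem: u'(x) = -u(x) with u(0) = 1, whose solution is exp(-x).
ode_residual = lambda u, x, du_dx: du_dx + u(x)  # zero for a true solution
collocation = [i / 10 for i in range(1, 20)]     # unlabeled "physics" points
observations = [(0.0, 1.0)]                      # the initial condition as data
```

The exact solution `exp(-x)` drives both terms to (numerically) zero, while an arbitrary candidate such as `1 - x` leaves a large residual. Note the division of labor: the data term only sees the initial condition, and the differential equation supervises everywhere else, which is exactly why PINNs can generalize from very little labeled data.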
I will write about both the theoretical foundations and practical implementation challenges of these approaches.
MLOps and ML Engineering
Research ideas are only half the story. Turning them into reliable systems requires a significant amount of engineering.
Another focus of this blog will be the infrastructure side of machine learning, including:
- Designing production-ready ML services
- Building training pipelines and experiment tracking
- Containerization and deployment strategies
- CI/CD workflows for ML systems
Good engineering practices often determine whether a promising idea becomes a usable system or remains a research prototype.
Research Notes and Paper Insights
A large part of my time is spent reading papers. Some are groundbreaking, others are incremental, but almost all of them contain at least one idea worth remembering.
Occasionally, I will write structured notes and summaries of papers that I find particularly insightful. These posts will focus less on repeating the abstract and more on extracting practical takeaways, implementation details, and potential research directions.
Why Write Publicly?
Writing forces clarity.
It is easy to feel like you understand a concept until you try to explain it. The process of organizing ideas into a coherent explanation often reveals gaps in your own thinking.
Public writing adds another dimension: accountability. When you share something openly, you are motivated to verify assumptions, revisit sources, and structure your thoughts more carefully.
Another reason is simple: the machine learning community thrives on shared knowledge. Many of the techniques I use today came from someone else’s blog post, GitHub comment, or research note.
If even one article here helps someone debug a model faster, understand a paper more clearly, or discover a new research direction, then writing it was worth the effort.
A Thought That Resonates
One quote that has stayed with me throughout my work in computing comes from Richard Hamming:
“The purpose of computation is insight, not numbers.”
It is easy to get lost in benchmarks, metrics, and leaderboard scores. But ultimately, the goal of computation—and of machine learning itself—is deeper understanding.
Stay Connected
I plan to publish new articles regularly, covering experiments, lessons learned, and research ideas.
If you’re interested in Agentic AI, LLM fine-tuning, or physics-informed learning, this blog will likely overlap with topics you are already exploring.
You can reach me via email at nqdung.work@gmail.com or connect with me on LinkedIn. I’m always happy to exchange ideas, discuss research, or learn about interesting problems others are working on.
Thanks for reading!