Thanh Le-Cong

Research

Our research focuses on trustworthy automated programming for building reliable and secure software systems. We work at the intersection of software engineering and artificial intelligence, combining symbolic reasoning with modern machine learning techniques to make software development safer and more productive.

Research Areas

Automated Debugging

Developing AI-driven techniques for automated bug detection and repair, enabling systems to identify errors, analyze their causes, and generate fixes with minimal human intervention.
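The detect-analyze-fix loop described above can be sketched as a classic generate-and-validate repair cycle. The toy program, tests, and candidate patches below are purely illustrative, not taken from any of our tools:

```python
# Illustrative generate-and-validate repair loop: detect a failure with tests,
# enumerate candidate patches, and keep the first one that passes validation.
# All names and patches here are toy examples, not from any real project.

def buggy_max(a, b):
    return a if a < b else b  # bug: comparison is inverted

# Detect: a tiny test suite acting as the correctness oracle.
TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def failing_tests(fn):
    """Return the (args, expected) pairs on which fn is wrong."""
    return [(args, want) for args, want in TESTS if fn(*args) != want]

# Generate: candidate patches, modeled as alternative implementations.
CANDIDATES = [
    lambda a, b: a if a <= b else b,  # still wrong
    lambda a, b: a if a > b else b,   # correct fix
    lambda a, b: b,                   # still wrong
]

def repair(fn):
    """Validate each candidate against the tests; return the first that passes."""
    if not failing_tests(fn):
        return fn  # nothing to fix
    for patch in CANDIDATES:
        if not failing_tests(patch):
            return patch
    return None  # no plausible patch found

fixed = repair(buggy_max)
```

Real repair systems replace the hand-written candidate list with learned or search-based patch generation, but the validation loop has the same shape.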

AI Security

Investigating security risks in AI systems and defenses against them, including backdoor attacks and their detection, adversarial robustness, and membership inference attacks.

Projects

FLAMES — Memory-Efficient LLM Program Repair

Active

Semantic-guided patch generation with memory-efficient large language models. Reduces GPU memory footprint while maintaining repair quality on standard benchmarks.

PatchGuru — Patch Oracle Inference

Active

Infers patch correctness oracles from natural-language artifacts (issue reports, commit messages) using LLMs, enabling automated patch validation without formal specifications.

FormalBench — LLM Formal Specification Inference

Active

Comprehensive benchmark and evaluation framework for assessing LLMs' ability to reason about program semantics and infer formal specifications.