Wang Bill Zhu
I work on natural language processing in cross-disciplinary domains (multimodal, embodied, healthcare, and social science), with an emphasis on building verifiable LLM systems that solve real-world problems.
My research has two coupled directions:
- Evaluation methods and benchmarks that pinpoint where and why current LLMs fail.
- Methods for adapting LLMs to domain-specific tasks, including fine-tuning, structured generation, and the use of formal or programmatic representations.
I will complete my Ph.D. in April 2026 at the University of Southern California, co-advised by Jesse Thomason and Robin Jia. Earlier, I worked with Fei Sha on vision-language navigation and with Greg Mori and Oliver Schulte during my undergrad at Simon Fraser University and Zhejiang University.
I’m open to research collaborations and student mentorship.
News
- Apr 2026 Invited talk at UC Berkeley (Host: Serina Chang).
- Apr 2026 [New preprint] PDDL-Mind: Large Language Models are Capable of Belief Reasoning with Reliable State Tracking.
- Apr 2026 [New preprint] Precise Debugging Benchmark: Is Your Model Debugging or Regenerating?
- Apr 2026 [New preprint] Self-Evolving LLM Memory Extraction Across Heterogeneous Tasks.
- Mar 2026 Two papers accepted to Findings of ACL 2026: PDB and PDDLego+.
- Mar 2026 Passed my Ph.D. defense.
- Mar 2026 Invited talk at the EleutherAI Planning Reading Group (Host: Alexander Spangher).
- Jan 2026 PSALM-V accepted to ICRA 2026.
- Jan 2026 Invited talk at the USC ISI NLP Seminar (Host: Jonathan May).
- Jan 2026 Cancer-Myth and Zebra-CoT accepted to ICLR 2026.
- Oct 2025 Awarded an NSF Computing Grant.
- Sep 2025 VisualLens and Think or Not Think accepted to NeurIPS 2025.