
Jiawei Han
[intermediate] Structure-Guided, Theme-Based Knowledge Discovery with Large Language Models
Summary
Large language models (LLMs) rely on retrieval-augmented generation (RAG) to incorporate external knowledge and enhance their reasoning, query answering, and problem solving. However, RAG does not always perform effectively: retrieval may return irrelevant or unstructured information, or miss critical information, undermining LLM reasoning and knowledge discovery. We introduce LLM-enhanced methods for mining knowledge structures for theme-centered retrieval and for constructing theme-specific knowledge graphs to facilitate structure-augmented generation. We show that such theme-centered, structure-augmented generation enhances knowledge discovery and multi-hop reasoning with LLMs. Case studies demonstrate the methods and the power of structures in LLM reasoning.
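The core idea above — feed the LLM a theme-specific knowledge graph rather than raw retrieved text — can be illustrated with a toy sketch. This is not the tutorial's actual implementation (e.g., not SARG); the triples, entity names, and helper functions are hypothetical, and the sketch only shows how graph triples can be indexed, traversed for multi-hop paths, and linearized into structured context for a prompt.

```python
# Toy sketch of structure-augmented context building (assumed, illustrative):
# a theme-specific knowledge graph as (head, relation, tail) triples,
# multi-hop path retrieval, and linearization into prompt context.
from collections import defaultdict

# Hypothetical triples for a small healthcare-flavored theme.
TRIPLES = [
    ("metformin", "treats", "type 2 diabetes"),
    ("type 2 diabetes", "risk_factor_for", "cardiovascular disease"),
    ("metformin", "inhibits", "hepatic gluconeogenesis"),
]

def build_graph(triples):
    """Index triples by head entity for fast neighbor lookup."""
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

def multi_hop_paths(graph, start, max_hops=2):
    """Enumerate entity-relation paths of up to max_hops from a seed entity."""
    paths, frontier = [], [[start]]
    for _ in range(max_hops):
        next_frontier = []
        for path in frontier:
            for rel, tail in graph.get(path[-1], []):
                new_path = path + [rel, tail]
                paths.append(new_path)
                next_frontier.append(new_path)
        frontier = next_frontier
    return paths

def to_prompt_context(paths):
    """Linearize graph paths into structured text for the LLM prompt."""
    return "\n".join(" -> ".join(p) for p in paths)

graph = build_graph(TRIPLES)
paths = multi_hop_paths(graph, "metformin")
print(to_prompt_context(paths))
```

The two-hop path (metformin -> treats -> type 2 diabetes -> risk_factor_for -> cardiovascular disease) is exactly the kind of chain that plain text retrieval tends to miss but that a theme-specific graph makes explicit for multi-hop reasoning.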
Syllabus
- An Introduction to Text Representation Learning, LLMs, and their Implementations
- Retrieval for RAG (Retrieval-Augmented Generation): RL-Based Retrieval and Structured Retrieval
- Structuring: Entity Structure Mining, Relation Extraction, and Knowledge Graph Construction
- Structure-Guided Reasoning for LLMs
- Integration of Retrieval Augmented Generation and Agentic AI for Application Exploration
References
Pengcheng Jiang, Cao Xiao, Minhao Jiang, Parminder Bhatia, Taha Kass-Hout, Jimeng Sun, Jiawei Han, “Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval”, ICLR 2025.
Pengcheng Jiang, Xueqiang Xu, Jiacheng Lin, Jinfeng Xiao, Zifeng Wang, Jimeng Sun, Jiawei Han, “s3: You Don’t Need That Much Data to Train a Search Agent via RL”, EMNLP 2025.
Pengcheng Jiang, et al., “Adaptation of Agentic AI”, ArXiv: 2512.16301, 2025.
Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, Jiawei Han, “Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning”, COLM 2025.
Priyanka Kargupta, Runchu Tian, Jiawei Han, “Beyond True or False: Retrieval-Augmented Hierarchical Analysis of Nuanced Claims”, ACL 2025.
Priyanka Kargupta, Ishika Agarwal, Tal August, Jiawei Han, “Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis”, ACL 2025.
Tanay Komarlu, Minhao Jiang, Xuan Wang, Jiawei Han, “OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing”, KDD 2024.
Yu Meng, Jiaxin Huang, Guangyuan Wang, Zihan Wang, Chao Zhang, Yu Zhang, Jiawei Han, “Discriminative Topic Mining via Category-Name Guided Text Embedding”, WWW 2020.
Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, Jianfeng Gao, “Large Language Models: A Survey”, ArXiv: 2402.06196.
Siru Ouyang, Jun Yan, I Hsu, Yanfei Chen, Ke Jiang, Zifeng Wang, Rujun Han, Long T Le, Samira Daruki, Xiangru Tang, Vishy Tirumalashetty, George Lee, Mahsan Rofouei, Hangfei Lin, Jiawei Han, Chen-Yu Lee, Tomas Pfister, “ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory”, ArXiv: 2509.25140.
Jash Parekh, Pengcheng Jiang, Jiawei Han, “SARG: Structure Augmented Reasoning Generation”, ArXiv: 2506.08364.
Yunyi Zhang, Ruozhen Yang, Xueqiang Xu, Jinfeng Xiao, Jiaming Shen, Jiawei Han, “TELEClass: Taxonomy Enrichment and LLM-Enhanced Hierarchical Text Classification with Minimal Supervision”, WWW 2025.
Pre-requisites
Basic knowledge of machine learning, data mining, NLP, large language models, and retrieval-augmented generation.
Short bio
Jiawei Han is Michael Aiken Chair Professor, Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign. He is a Fellow of ACM and a Fellow of IEEE, with over 1,000 research publications. He received the ACM SIGKDD Innovation Award (2004), IEEE Computer Society Technical Achievement Award (2005), IEEE W. Wallace McDowell Award (2009), Excellence in Graduate and Professional Teaching Award at UIUC (2012), and Japan’s Funai Achievement Award (2018), and was elected a Fellow of the Royal Society of Canada (2022). His research interests include data mining, text mining, machine learning, and large language model applications.
