BIO
Wanyun Cui is an associate professor at the School of Computing and Artificial Intelligence, Shanghai University of Finance and Economics, where he leads the SCALE Lab (SUFE Cognitive AI & Language Exploration Lab). His research interests lie in large language model inference. He has published 20+ papers at NeurIPS, ICLR, SIGMOD, PVLDB, IJCAI, AAAI, ACL, and EMNLP. He has been recognized as an AI 2000 Most Influential Scholar Honorable Mention (2012-2021, 2013-2022), and has won the ACM China Outstanding Doctoral Dissertation Nomination Award (top 4 in China) and the ACM Shanghai Outstanding Doctoral Dissertation Award (top 2 in Shanghai).
Current Research Projects
LLM-based Complex Reasoning. We explore techniques to enhance LLMs' reasoning capabilities.
Efficient LLM Inference. We optimize the efficiency of LLM inference, including long-context LLMs and parameter quantization.
LLMs for Domain Applications (e.g., Finance and Education). We develop specialized LLM applications based on FinChat, such as automated financial analysis, research report generation, retrieval-augmented QA, and expert role-playing.
🔴 For students interested in my research: I am recruiting students who are passionate about artificial (general) intelligence and possess strong programming skills (experience with open-source projects is a plus).
What we offer:
- Learning: Customized LLM training plans tailored to your development, plus GPU resources
- Research: Opportunities to explore cutting-edge LLM problems and publish at top AI conferences
- Practice: Hands-on experience with real LLM applications and potential internships at leading tech companies (Alibaba, Ant Group, ByteDance, etc.)
 
🔴 If you are interested, please do not hesitate to contact me.
🔥 News 🔥
- 2025/10. I was selected as a NeurIPS 2025 Top Reviewer.
- 2025/09. One paper on long-context inference for large language models accepted to NeurIPS 2025.
- 2024/09. One paper on large language model quantization accepted to NeurIPS 2024.
- 2024/09. One paper on instruction generation by large language models accepted to EMNLP 2024 Findings.
- 2024/03. We released the Technology and Security White Paper on the Application of Large Language Models in Finance with Ant Group and BCTC. pdf
- 2023/12. One paper accepted to AAAI 2024.
- 2023/07. We released the large language model FinChat, which outperforms ChatGPT (3.5) on C-Eval. Congrats to our research team! FinChat intro
- 2023/06. Interested in training your own large language model chatbot? Check out the dataset collection we have gathered: LLMDataHub
- One paper accepted to ACL 2023.
- One paper accepted to ACL Findings 2023.
- One paper accepted to NeurIPS 2022.
- I was recognized as an AI 2000 Most Influential Scholar Honorable Mention (2012-2022) by AMiner, ranked 31 in the database field.
 
📝 PUBLICATIONS
Preprint
- Shuyang Cai, Wanyun Cui, Evade ChatGPT Detectors via A Single Space, paper
 
Published
- Wanyun Cui, Mingwei Xu, Homogeneous Keys, Heterogeneous Values: Exploiting Local KV Cache Asymmetry for Long-Context LLMs, (NeurIPS 2025, CCF A)
- Wanyun Cui, Qianle Wang, Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models, (NeurIPS 2024, CCF A)
- Wanyun Cui, Qianle Wang, Ada-Instruct: Adapting Instruction Generators for Complex Reasoning, (EMNLP Findings 2024, CCF B)
- Wanyun Cui, Linqiu Zhang, Modeling Knowledge Graphs with Composite Reasoning, (AAAI 2024, CCF A)
- Wanyun Cui, Xingran Chen, Free Lunch for Efficient Textual Commonsense Integration in Language Models, (ACL 2023, CCF A)
- Wanyun Cui, Xingran Chen, Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction, (ACL Findings 2023, CCF A)
- Wanyun Cui, Xingran Chen, Instance-based Learning for Knowledge Base Completion, (NeurIPS 2022, CCF A) paper code 中文版
- Wanyun Cui, Xingran Chen, Enhancing Natural Language Representation with Large-Scale Out-of-Domain Commonsense, (ACL Findings 2022, CCF A) paper code
- Wanyun Cui, Xingran Chen, Open Rule Induction, (NeurIPS 2021, CCF A) paper code slides
- Wanyun Cui, Sen Yan, Isotonic Data Augmentation for Knowledge Distillation, (IJCAI 2021, CCF A) paper code
- Wanyun Cui, Guangyu Zheng, Wei Wang, Zero-shot Domain Adaptation for Natural Language Inference by Projecting Superficial Words Out, (Knowledge-Based Systems 2021, JCR Q1)
- Wanyun Cui, Guangyu Zheng, Wei Wang, Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning, (EMNLP 2020, CCF B, oral) paper code
- Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, Wei Wang, Transfer Learning for Sequences via Learning to Collocate, (ICLR 2019) paper code
- Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang, KBQA: Learning Question Answering over QA Corpora and Knowledge Bases, (PVLDB 2017, CCF A)
- Wanyun Cui, Yanghua Xiao, Wei Wang, KBQA: An Online Template Based Question Answering System over Freebase, (IJCAI 2016, CCF A), demo
- Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang, Verb Pattern: A Probabilistic Semantic Representation on Verbs, (AAAI 2016, CCF A)
- Wanyun Cui, Yanghua Xiao, Haixun Wang, Wei Wang, Local Search of Communities in Large Graphs, (SIGMOD 2014, CCF A)
- Wanyun Cui, Yanghua Xiao, Haixun Wang, Yiqi Lu, Wei Wang, Online Search of Overlapping Communities, (SIGMOD 2013, CCF A)
- Wanyun Cui, Yanghua Xiao, Chapter 14: Question Answering over Knowledge Graphs, Knowledge Graph: Concepts and Techniques (book chapter)
- Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, Yanghua Xiao, CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System, (IEA/AIE 2017)
- Yanghua Xiao, Ji Hong, Wanyun Cui, Zhenying He, Wei Wang, Guodong Feng, Branch Code: An Efficient Labeling Scheme for Query Answering on Trees, (ICDE 2012, CCF A)
 
🎓 ALUMNI
- Qianle Wang, ByteDance
- Jian Sun, Baidu
- Junhao Zhao, Huawei
- Xingran Chen, University of Michigan, master
- Sen Yan, University of Colorado Boulder, PhD
- Guangyu Zheng, Ant Group
- Le Wang, Ant Group
- Lingyu Cai, Alibaba
 
🏆 AWARDS
- 2022 The AI 2000 Most Influential Scholar Annual List by AMiner, ranked 31 in the database field
- 2017 ACM China Outstanding Doctoral Dissertation Nomination Award, top 4 in China
- 2017 ACM Shanghai Outstanding Doctoral Dissertation Award, top 2 in Shanghai
- 2017 Fudan Academic Star
- 2013 Fudan Outstanding Undergraduate Award, top 10 among Fudan undergraduates
- 2011, 2009 ACM/ICPC Beijing/Shanghai Region Invitation Contest, two gold medals
 
