Publications
(# indicates equal contribution)
2025
- APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training
Jun Rao, Zepeng Lin, Xuebo Liu, Lian Lian, Dong Jin, Shengjun Cheng, Jun Yu, Min Zhang
in Proc. of ACL 2025 (Findings)
- Orchestrating Prompt Expertise: Enhancing Knowledge Distillation via Expert-Guided Tuning
Xv Meng#, Jun Rao#, Shuhan Qi, Xuebo Liu, Lei Wang, Min Zhang, Xuan Wang
in TALLIP 2025
2024
- CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions
Jun Rao, Xuebo Liu, Lian Lian, Shengjun Cheng, Yunjie Liao, Min Zhang
in Proc. of EMNLP 2024
- Harnessing the Power of Prompt Experts: Efficient Knowledge Distillation for Enhanced Language Understanding
Xv Meng#, Jun Rao#, Shuhan Qi, Lei Wang, Jing Xiao, Xuan Wang
in Proc. of ECML-PKDD 2024
- Curriculum Consistency Learning for Conditional Sentence Generation
Liangxin Liu, Xuebo Liu, Lian Lian, Shengjun Cheng, Jun Rao, Tengfei Yu, Hexuan Deng, Min Zhang
in Proc. of EMNLP 2024
2023
- Finetuning Language Models for Multimodal Question Answering
Xin Zhang, Wen Xie, Ziqi Dai, Jun Rao, Haokun Wen, Xuan Luo, Meishan Zhang, Min Zhang
in Proc. of MM 2023
- Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining?
Fei Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding
in ToMM 2023
- What Is the Limitation of Multimodal LLMs? A Deeper Look into Multimodal LLMs through Prompt Probing
Shuhan Qi, Zhengying Cao, Jun Rao, Lei Wang, Jing Xiao, Xuan Wang
in IPM 2023 (Corresponding Author)
- Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Xuebo Liu, Min Zhang, Dacheng Tao
in TMM 2023
- Dynamic Contrastive Distillation for Image-Text Retrieval
Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao
in TMM 2023
2022
- Where Does the Performance Improvement Come From? A Reproducibility Concern about Image-Text Retrieval
Jun Rao, Fei Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, Dacheng Tao
in Proc. of SIGIR 2022