Kainan Liu

AI Researcher & Engineer

Education

The Hong Kong University of Science and Technology (Guangzhou)
M.Sc. in Data-Centric Artificial Intelligence
Zhongnan University of Economics and Law
B.Sc. in Data Science and Big Data Technology

Publications

Astra: Activation-Space Tail-Eigenvector Low-Rank Adaptation of Large Language Models
ACL 2026 Findings
Kainan Liu*, Yong Zhang*, Ning Cheng, Yun Zhu, Yanmeng Wang, Shaojun Wang, Jing Xiao
Proposes a novel PEFT method that leverages tail eigenvectors of the activation covariance matrix — estimated from a task-specific calibration set — to construct task-adaptive low-rank adapters. Achieves faster convergence and improved downstream performance with significantly fewer parameters compared to standard LoRA approaches.
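The core idea — taking the tail (smallest-eigenvalue) eigenvectors of an activation covariance estimated on a calibration set and using them to initialize low-rank adapter factors — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function name, shapes, and zero-initialization of `B` are assumptions.

```python
import numpy as np

def tail_eigen_adapter(activations, weight, rank):
    """Sketch: build LoRA-style factors from tail eigenvectors of the
    activation covariance (illustrative assumption, not Astra itself)."""
    # activations: (n_samples, d_in) collected on a task-specific calibration set
    X = activations - activations.mean(axis=0, keepdims=True)
    cov = X.T @ X / (X.shape[0] - 1)             # (d_in, d_in) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    tail = eigvecs[:, :rank]                     # tail (lowest-variance) directions
    # A projects inputs onto the tail subspace; B starts at zero so the
    # adapted layer initially matches the frozen weight exactly.
    A = tail.T                                   # (rank, d_in)
    B = np.zeros((weight.shape[0], rank))        # (d_out, rank)
    return A, B

# Hypothetical usage: adapted forward pass is y = x @ W.T + x @ A.T @ B.T
acts = np.random.randn(256, 64)   # calibration activations (synthetic)
W = np.random.randn(32, 64)       # frozen linear weight
A, B = tail_eigen_adapter(acts, W, rank=4)
```

Because `np.linalg.eigh` returns orthonormal eigenvectors, the rows of `A` form an orthonormal basis of the chosen tail subspace.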
GRASP: Replace Redundant Layers with Adaptive Singular Parameters for Efficient Model Compression
EMNLP 2025 Main Conference
Kainan Liu*, Yong Zhang*, Ning Cheng, Zhitao Li, Shaojun Wang, Jing Xiao
A gradient-guided compression framework that identifies functionally redundant transformer layers and replaces them with learnable singular value parameters. Preserves model performance while substantially reducing parameters and inference cost, enabling efficient deployment of LLMs.
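The replacement step in the description — swapping a redundant layer's weight for a truncated SVD whose singular values remain trainable — can be sketched as below. The gradient-guided redundancy scoring is omitted; `compress_layer` and the rank choice are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def compress_layer(W, rank):
    """Sketch: replace a dense weight with frozen SVD factors and a small
    vector of learnable singular values (illustrative, not GRASP itself)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vt_r = U[:, :rank], S[:rank], Vt[:rank]  # rank-r truncation
    # U_r and Vt_r would be frozen; S_r is the learnable parameter vector.
    params_before = W.size
    params_after = U_r.size + S_r.size + Vt_r.size
    return (U_r, S_r, Vt_r), params_before, params_after

W = np.random.randn(512, 512)                       # hypothetical layer weight
(U_r, S_r, Vt_r), before, after = compress_layer(W, rank=32)
W_approx = (U_r * S_r) @ Vt_r                       # compressed replacement
```

For a 512x512 layer at rank 32, the replacement stores roughly 1/8 of the original parameters while keeping the dominant structure of the mapping.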
Detecting and dissecting anomalous anatomic regions in spatial transcriptomics with STANDS
Nature Communications
Kaichen Xu, Yan Lu, Suyang Hou, Kainan Liu, Yihang Du, Mengqian Huang, Hao Feng, Hao Wu, Xiaobo Sun†
A GAN-based multi-task deep learning framework that integrates spatial information and gene expression profiles to detect and dissect anomalous tissue domains in spatial transcriptomics data at unprecedented resolution.

* Co-first author    † Corresponding author

Experience

Ping An Technology (Shenzhen) Co., Ltd.
AI Researcher & Engineer
  • Training and optimizing LLMs for financial risk control and corporate banking analysis, including domain-adaptive fine-tuning and prompt optimization
  • Developed Astra (ACL 2026 Findings): an activation-space low-rank adaptation method for efficient LLM fine-tuning in financial domains
  • Building evaluation and monitoring pipelines to track model performance across financial NLP benchmarks
Ping An Technology (Shenzhen) Co., Ltd.
Research Intern
  • Conducted research on model compression techniques, focusing on identifying and replacing redundant transformer layers with parameter-efficient alternatives
  • Developed GRASP (EMNLP 2025 Main Conference), a gradient-guided compression framework for large language models
  • Implemented and benchmarked various pruning and compression methods on models up to 7B parameters
Vipshop (China)
Recommendation Algorithm Intern
  • Processed and analyzed large-scale user-item interaction data and behavioral sequences for recommendation system enhancement
  • Deployed XGBoost models for recommendation ranking optimization, improving key engagement metrics
  • Developed feature engineering pipelines for extracting user behavior patterns from historical interaction data

Technical Skills

Programming: Python (primary), NumPy, Pandas
Frameworks: PyTorch, PEFT, Transformers, vLLM, XGBoost, scikit-learn
Research Areas: Low-rank adaptation (LoRA, DoRA, Astra), Model compression, AI Agents, Agent data synthesis, LLM evaluation
Languages: Chinese (native), English (professional proficiency)