MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning

Published in AAAI, 2025

MTL-LoRA enhances LoRA by improving how large language models adapt to multiple tasks simultaneously. It combines task-specific low-rank transformation matrices with multiple shared up-projection matrices to capture both task-specific and task-agnostic information.
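A minimal NumPy sketch of this idea, assuming a forward pass of the form `h = W0 x + sum_i softmax(w_t)_i * B_i @ Lambda_t @ A @ x`: a shared down-projection `A`, a per-task transformation `Lambda_t`, and several up-projections `B_i` mixed by task-specific weights. All names, dimensions, and initializations here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_tasks, n_up = 16, 4, 3, 2  # hypothetical sizes

W0 = rng.normal(size=(d, d))                   # frozen pretrained weight
A = 0.01 * rng.normal(size=(r, d))             # shared down-projection (as in LoRA)
# Task-specific low-rank transformation matrices Lambda_t
Lam = [np.eye(r) + 0.01 * rng.normal(size=(r, r)) for _ in range(n_tasks)]
B = [np.zeros((d, r)) for _ in range(n_up)]    # up-projections, zero-init as in LoRA
w = rng.normal(size=(n_tasks, n_up))           # task-specific mixing logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mtl_lora_forward(x, task_id):
    """Sketch: h = W0 x + sum_i softmax(w_t)_i * B_i Lambda_t A x."""
    z = Lam[task_id] @ (A @ x)        # shared down-proj, then task-specific transform
    weights = softmax(w[task_id])     # task-specific mixture over up-projections
    delta = sum(wi * (Bi @ z) for wi, Bi in zip(weights, B))
    return W0 @ x + delta

x = rng.normal(size=d)
h = mtl_lora_forward(x, task_id=1)
# With B_i zero-initialized, the adapter contributes nothing at the start of training
assert np.allclose(h, W0 @ x)
```

Because every `B_i` starts at zero, the module initially reproduces the frozen model's output, matching standard LoRA initialization practice.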

Authors: Yaming Yang*, Dilxat Muhtar*, Yelong Shen, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Weiwei Deng, Qi Zhang, Yunhai Tong

Download paper here