MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning

Under Review

MTL-LoRA enhances LoRA by improving the adaptation of large language models to multiple tasks simultaneously. It leverages task-specific transformation matrices and multiple up-projection matrices to effectively extract both task-specific and task-agnostic information.
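The described forward pass can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the shapes, the zero initialization of the up-projections, and the softmax-weighted combination of up-projections are assumptions inferred from the summary above; all variable names (`A`, `Lambdas`, `Bs`, `w`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, n_tasks, n_up = 16, 4, 3, 2  # hidden size, LoRA rank, tasks, up-projections

W = rng.standard_normal((d, d)) * 0.02        # frozen pretrained weight (assumed square)
A = rng.standard_normal((r, d)) * 0.02        # shared (task-agnostic) down-projection
Lambdas = [np.eye(r) for _ in range(n_tasks)] # task-specific r x r transformation matrices
Bs = [np.zeros((d, r)) for _ in range(n_up)]  # multiple up-projections (zero init, as in LoRA)
w = np.zeros((n_tasks, n_up))                 # per-task logits over up-projections (assumed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mtl_lora_forward(x, task_id):
    """y = W x + sum_i alpha_i * B_i (Lambda_t (A x)) -- sketch of the MTL-LoRA update."""
    h = Lambdas[task_id] @ (A @ x)            # task-specific low-rank code
    alpha = softmax(w[task_id])               # mixing weights over up-projections
    delta = sum(a * (B @ h) for a, B in zip(alpha, Bs))
    return W @ x + delta

x = rng.standard_normal(d)
y = mtl_lora_forward(x, task_id=1)            # with zero-init Bs this equals W @ x
```

With the up-projections initialized to zero, the adapter contributes nothing at the start of training, so the adapted model initially reproduces the frozen base model's output.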


Authors: Yaming Yang, Dilxat Muhtar, Yelong Shen, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Weiwei Deng, Qi Zhang, Yunhai Tong.