Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

A new faster first order iterative scheme for sparsity-based multitask learning


Abstract

Multitask learning methods learn multiple related tasks jointly and improve results compared with schemes that treat each task independently. To incorporate the information shared across tasks, various regularizers have been integrated into pre-existing techniques. In this paper, we study convex formulations of multitask learning with sparsity-inducing regularizers. The main contribution of this paper is a novel first-order iterative procedure (MTL-FIBM), which we prove converges faster than previously existing methods. Our method belongs to the class of proximal gradient techniques, in which the loss function is assumed to be smooth while the regularization function may be non-smooth. We performed extensive experiments on synthetic data as well as two real datasets, the School and Parkinson Telemonitoring datasets, and show that the experimental results agree with the theoretical analysis of our algorithm. The results demonstrate the efficacy of the method and its improvement in terms of speed and accuracy. © 2016 IEEE.
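To illustrate the class of methods the abstract describes, the following is a minimal sketch of a generic accelerated proximal gradient (FISTA-style) solver for multitask least squares with the sparsity-inducing ℓ2,1 regularizer. This is an assumption-laden illustration of the general technique, not the authors' MTL-FIBM algorithm; the function names, step-size choice, and problem setup are all hypothetical.

```python
# Sketch: accelerated proximal gradient for multitask learning with an
# l2,1 (row-sparsity) regularizer. Generic FISTA-style method, NOT the
# paper's MTL-FIBM; all names and choices here are illustrative.
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: row-wise soft thresholding.
    Rows of W correspond to features shared across tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def accel_prox_grad_mtl(X, Y, lam, n_iter=200):
    """Solve min_W 0.5 * ||X W - Y||_F^2 + lam * ||W||_{2,1}.
    X: (n, d) shared design matrix, Y: (n, T) targets for T tasks,
    W: (d, T) weight matrix whose rows are jointly sparse across tasks."""
    d, T = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    W = np.zeros((d, T))
    Z = W.copy()                       # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)       # gradient of the smooth loss at Z
        W_next = prox_l21(Z - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)  # momentum step
        W, t = W_next, t_next
    return W
```

The smooth loss is handled by a gradient step and the non-smooth ℓ2,1 term by its closed-form proximal map, which is what makes proximal gradient schemes attractive for this family of problems; the momentum extrapolation is what yields the faster first-order convergence rate the paper is concerned with.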
