Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

A new accelerated proximal gradient technique for regularized multitask learning framework

dc.contributor.author: Verma M.; Shukla K.K.
dc.date.accessioned: 2025-05-24T09:29:43Z
dc.description.abstract: Multitask learning can be defined as the joint learning of related tasks using shared representations, such that each task helps the others perform better. One of the various multitask learning frameworks is the regularized convex minimization problem, for which many optimization techniques are available in the literature. In this paper, we consider solving the non-smooth convex minimization problem with sparsity-inducing regularizers for the multitask learning framework, which can be solved efficiently using proximal algorithms. Due to the slow convergence of traditional proximal gradient methods, a recent trend is to introduce acceleration into these methods, which increases the speed of convergence. In this paper, we present a new accelerated gradient method for the multitask regression framework, which not only outperforms its non-accelerated counterpart and the traditional accelerated proximal gradient method but also improves prediction accuracy. We also prove the convergence and stability of the algorithm under a few specific conditions. To demonstrate the applicability of our method, we performed experiments on several real multitask learning benchmark datasets. Empirical results show that our method outperforms previous methods in terms of convergence, accuracy, and computational time. © 2017 Elsevier B.V.
dc.identifier.doi: https://doi.org/10.1016/j.patrec.2017.06.013
dc.identifier.uri: http://172.23.0.11:4000/handle/123456789/16196
dc.relation.ispartofseries: Pattern Recognition Letters
dc.title: A new accelerated proximal gradient technique for regularized multitask learning framework
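The "traditional accelerated proximal gradient method" that the abstract compares against is commonly instantiated as FISTA (Beck and Teboulle, 2009). As context for readers of this record, here is a minimal sketch of that standard baseline applied to a single-task lasso problem; it is an illustration of the generic technique only, not the paper's new method, and the problem setup (NumPy, a least-squares loss with an L1 regularizer) is an assumption for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    # Standard FISTA sketch for: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # (Generic baseline, not the accelerated variant proposed in the paper.)
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0  # momentum sequence
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)       # extrapolation step
        x, t = x_new, t_new
    return x
```

The extrapolation step is what distinguishes the accelerated method from plain ISTA: iterating from `y` rather than `x` improves the worst-case convergence rate from O(1/k) to O(1/k²) on this class of non-smooth convex problems.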
