Is Meta Embedding better than pre-trained word embedding to perform Sentiment Analysis for Dravidian Languages in Code-Mixed Text?
| dc.contributor.author | Chanda S.; Singh R.P.; Pal S. | |
| dc.date.accessioned | 2025-05-23T11:27:28Z | |
| dc.description.abstract | This paper describes the IRlab@IITBHU system for the Dravidian-CodeMix - FIRE 2021: Sentiment Analysis for Dravidian Languages task, covering the pairs Tamil-English (TA-EN), Kannada-English (KN-EN), and Malayalam-English (ML-EN) in code-mixed text. We report the outputs of three models, although only one model was submitted for sentiment analysis on all code-mixed datasets. Run-1 used FastText embeddings with multi-head attention, Run-2 used meta-embedding techniques, and Run-3 used the Multilingual BERT (mBERT) model to produce the results. Run-2 outperformed Run-1 and Run-3 for all language pairs. © 2021 Copyright for this paper by its authors. | |
| dc.identifier.doi | DOI not available | |
| dc.identifier.uri | http://172.23.0.11:4000/handle/123456789/11444 | |
| dc.relation.ispartofseries | CEUR Workshop Proceedings | |
| dc.title | Is Meta Embedding better than pre-trained word embedding to perform Sentiment Analysis for Dravidian Languages in Code-Mixed Text? |