
Deep Classification, Embedding & Text Generation (E1) - Ram et al 2019 #33

HyunkuKwon opened this issue Apr 7, 2020 · 2 comments

Comments

@HyunkuKwon

  1. Pathak, Ajeet Ram, Basant Agarwal, Manjusha Pandey & Siddharth Rautaray. 2019. “Application of Deep Learning Approaches for Sentiment Analysis.” Deep Learning-Based Approaches for Sentiment Analysis. Algorithms for Intelligent Systems. Springer, Singapore.

liu431 commented May 22, 2020

A question I have when using sentiment analysis in projects is that a single sentiment classification (or normalized score) doesn't tell the whole story of the person who wrote the content. For example, when leaving a review on Amazon, a consumer might spend half of the paragraph on good things and the rest on complaints. The overall sentiment score might be neutral (0), but the consumer actually has mixed opinions about the product, closer to <0.8, -0.8>. So I am wondering how we can customize sentiment analysis algorithms for this specific application?
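One common workaround is to score sentiment below the document level, e.g. per sentence or per aspect, and report the range rather than the mean. A minimal sketch of the per-sentence idea, using a tiny made-up lexicon (the word lists and the example review are illustrative placeholders, not a real sentiment model):

```python
# Sketch: sentence-level scoring to surface mixed opinions that a single
# document-level score would average away to "neutral".
# POS/NEG are toy lexicons for illustration only.
POS = {"great", "love", "excellent", "fast"}
NEG = {"broke", "terrible", "slow", "disappointed"}

def sentence_score(sentence):
    """Score one sentence in [-1, 1] by counting lexicon hits."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def mixed_sentiment(review):
    """Return (most positive, most negative, mean) over the review's sentences."""
    sentences = [s for s in review.replace("!", ".").split(".") if s.strip()]
    scores = [sentence_score(s) for s in sentences]
    return max(scores), min(scores), sum(scores) / len(scores)

review = ("The battery life is great and I love the screen. "
          "Sadly the charger broke and support was terrible.")
print(mixed_sentiment(review))  # -> (1.0, -1.0, 0.0)
```

Here the mean score is 0.0 (neutral), but the (max, min) pair of (1.0, -1.0) exposes the mixed opinion, which is exactly the <0.8, -0.8> situation described above.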


DSharm commented May 22, 2020

I have a question about the evaluation metrics used for sentiment analysis. We see all of the usual metrics, e.g. accuracy, F1, precision, and recall. However, all of these assume there is a "true" sentiment. Given the incredible subjectivity of sentiment, what are the best practices for getting "ground truth" sentiment labels to compare predictions against? Is this something that is often done by crowdsourcing?
