Add VERBO: Voice Emotion Recognition database in Portuguese Lang #19

Merged: 1 commit, merged on Jun 16, 2022
README.md: 3 additions & 0 deletions
@@ -26,6 +26,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
| <sub>[ANAD]</sub> | <sub>2018</sub> | <sub>1384 recordings by multiple speakers.</sub> | <sub>3 emotions: angry, happy, surprised.</sub> | <sub>Audio</sub> | <sub>~2 GB</sub> | <sub>Arabic</sub> | <sub>[Arabic Natural Audio Dataset] </sub> | <sub>Open access</sub> | <sub>[CC BY-NC-SA 4.0] </sub> |
| <sub>[EmoSynth]</sub> | <sub>2018</sub> | <sub>144 audio files labelled by 40 listeners.</sub> | <sub>Emotion (no speech) defined in terms of valence and arousal.</sub> | <sub>Audio</sub> | <sub>103.4 MB</sub> | <sub>--</sub> | <sub>[The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results] </sub> | <sub>Open access</sub> | <sub>[CC BY 4.0] </sub> |
| <sub>[CMU-MOSEI]</sub> | <sub>2018</sub> | <sub>65 hours of annotated video from more than 1000 speakers and 250 topics.</sub> | <sub>6 emotions (happiness, sadness, anger, fear, disgust, surprise) + Likert scale.</sub> | <sub>Audio, Video</sub> | <sub>--</sub> | <sub>English</sub> | <sub>[Multi-attention Recurrent Network for Human Communication Comprehension] </sub> | <sub>Open access</sub> | <sub>[CMU-MOSEI License] </sub> |
| <sub>[VERBO]</sub> | <sub>2018</sub> | <sub>14 different phrases by 12 speakers (6 female + 6 male) for a total of 1167 recordings.</sub> | <sub>7 emotions: Happiness, Disgust, Fear, Neutral, Anger, Surprise, Sadness</sub> | <sub>Audio</sub> | <sub>--</sub> | <sub>Portuguese</sub> | <sub>[VERBO: Voice Emotion Recognition dataBase in Portuguese Language]</sub> | <sub>Restricted access</sub> | <sub>Available for research purposes only</sub> |
| <sub>[CMU-MOSI]</sub> | <sub>2017</sub> | <sub>2199 opinion utterances with annotated sentiment.</sub> | <sub>Sentiment annotated from very negative to very positive on a seven-step Likert scale.</sub> | <sub>Audio, Video</sub> | <sub>--</sub> | <sub>English</sub> | <sub>[Multi-attention Recurrent Network for Human Communication Comprehension] </sub> | <sub>Open access</sub> | <sub>[CMU-MOSI License] </sub> |
| <sub>[MSP-IMPROV]</sub> | <sub>2017</sub> | <sub>20 sentences by 12 actors.</sub> | <sub>4 emotions: angry, sad, happy, neutral (plus other and without agreement)</sub> | <sub>Audio, Video</sub> | <sub> -- </sub> | <sub>English</sub> | <sub>[MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception]</sub> | <sub>Restricted access</sub> | <sub>Available under an Academic License & Commercial License </sub> |
| <sub>[CREMA-D]</sub> | <sub>2017</sub> | <sub>7442 clips of 12 sentences spoken by 91 actors (48 males and 43 females).</sub> | <sub>6 emotions: angry, disgusted, fearful, happy, neutral, and sad</sub> | <sub>Audio, Video</sub> | <sub> -- </sub> | <sub>English</sub> | <sub>[CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset]</sub> | <sub>Open access</sub> | <sub>Available under the [Open Database License & Database Content License] </sub> |
@@ -86,6 +87,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
[MELD]: https://affective-meld.github.io/
[ShEMO]: https://github.com/mansourehk/ShEMO
[DEMoS]: https://zenodo.org/record/2544829
[VERBO]: https://sites.google.com/view/verbodatabase/home
[AESDD]: http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/
[Emov-DB]: https://mega.nz/#F!KBp32apT!gLIgyWf9iQ-yqnWFUFuUHg!mYwUnI4K
[RAVDESS]: https://zenodo.org/record/1188976#.XrC7a5NKjOR
@@ -152,6 +154,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
[BEHAVIOURAL FINDINGS FROM THE TORONTO EMOTIONAL SPEECH SET]: https://www.semanticscholar.org/paper/BEHAVIOURAL-FINDINGS-FROM-THE-TORONTO-EMOTIONAL-SET-Dupuis-Pichora-Fuller/d7f746b3aee801a353b6929a65d9a34a68e71c6f/figure/2
[CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4313618/
[DEMoS: An Italian emotional speech corpus. Elicitation methods, machine learning, and perception]: https://link.springer.com/epdf/10.1007/s10579-019-09450-y?author_access_token=5pf0w_D4k9z28TM6n4PbVPe4RwlQNchNByi7wbcMAY5hiA-aXzXNbZYfsMDDq2CdHD-w5ArAxIwlsk2nC_26pSyEAcu1xlKJ1c9m3JZj2ZlFmlVoCZUTcG3Hq2_2ozMLo3Hq3Y0CHzLdTxihQwch5Q%3D%3D
[VERBO: Voice Emotion Recognition dataBase in Portuguese Language]: https://thescipub.com/pdf/jcssp.2018.1420.1430.pdf
[A Parameterized and Annotated Spoken Dialog Corpus of the CMU Let’s Go Bus Information System]: http://www.lrec-conf.org/proceedings/lrec2012/pdf/333_Paper.pdf
[Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions]: https://drive.google.com/file/d/0B2V_I9XKBODhNENKUnZWNFdVXzQ/view
[Multimodal Emotion Recognition]: http://personal.ee.surrey.ac.uk/Personal/P.Jackson/pub/ma10/HaqJackson_MachineAudition10_approved.pdf
src/index.rst: 2 additions & 0 deletions
@@ -58,6 +58,7 @@ However, we cannot guarantee that all listed links are up-to-date. Read more in
.. _`MELD`: https://affective-meld.github.io/
.. _`ShEMO`: https://github.com/mansourehk/ShEMO
.. _`DEMoS`: https://zenodo.org/record/2544829
.. _`VERBO`: https://sites.google.com/view/verbodatabase/home
.. _`AESDD`: http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/
.. _`Emov-DB`: https://mega.nz/#F!KBp32apT!gLIgyWf9iQ-yqnWFUFuUHg!mYwUnI4K
.. _`RAVDESS`: https://zenodo.org/record/1188976#.XrC7a5NKjOR
@@ -116,6 +117,7 @@ However, we cannot guarantee that all listed links are up-to-date. Read more in
.. _`EMOTIONAL SPEECH SYNTHESIS USING SUBSPACE CONSTRAINTS IN PROSODY`: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.420.8899&rep=rep1&type=pdf
.. _`Naturalistic emotional speech collection paradigm with online game and its psychological and acoustical assessment`: https://www.jstage.jst.go.jp/article/ast/33/6/33_E1175/_pdf
.. _`EMOVO Corpus: an Italian Emotional Speech Database`: https://core.ac.uk/download/pdf/53857389.pdf
.. _`VERBO: Voice Emotion Recognition dataBase in Portuguese Language`: https://thescipub.com/pdf/jcssp.2018.1420.1430.pdf
.. _`The eNTERFACE’05 Audio-Visual Emotion Database`: http://poseidon.csd.auth.gr/papers/PUBLISHED/CONFERENCE/pdf/Martin06a.pdf
.. _`Arabic Natural Audio Dataset`: https://data.mendeley.com/datasets/xm232yxf7t/1
.. _`Introducing the Geneva Multimodal Expression Corpus for Experimental Research on Emotion Perception`: https://www.researchgate.net/publication/51796867_Introducing_the_Geneva_Multimodal_Expression_Corpus_for_Experimental_Research_on_Emotion_Perception
src/ser-datasets.csv: 1 addition & 0 deletions
@@ -21,6 +21,7 @@
"`ANAD`_","2018","1384 recording by multiple speakers.","3 emotions: angry, happy, surprised.","Audio","2 GB","Arabic","`Arabic Natural Audio Dataset`_","Open","`CC BY-NC-SA 4.0`_"
"`EmoSynth`_","2018","144 audio file labelled by 40 listeners.","Emotion (no speech) defined in regard of valence and arousal.","Audio","0.1034 GB","--","`The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results`_","Open","`CC BY 4.0`_"
"`CMU-MOSEI`_","2018","65 hours of annotated video from more than 1000 speakers and 250 topics.","6 Emotion (happiness, sadness, anger,fear, disgust, surprise) + Likert scale.","Audio, Video","--","English","`Multi-attention Recurrent Network for Human Communication Comprehension`_","Open","`CMU-MOSEI License`_"
"`VERBO`_","2018","14 different phrases by 12 speakers (6 female + 6 male) for a total of 1167 recordings.","7 emotions: Happiness, Disgust, Fear, Neutral, Anger, Surprise, Sadness","Audio","--","Portuguese","`VERBO: Voice Emotion Recognition dataBase in Portuguese Language`_","Restricted","Available for research purposes only"
"`CMU-MOSI`_","2017","2199 opinion utterances with annotated sentiment.","Sentiment annotated between very negative to very positive in seven Likert steps.","Audio, Video","--","English","`Multi-attention Recurrent Network for Human Communication Comprehension`_","Open","`CMU-MOSI License`_"
"`MSP-IMPROV`_","2017","20 sentences by 12 actors.","4 emotions: angry, sad, happy, neutral, other, without agreement","Audio, Video","--","English","`MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception`_","Restricted","Academic License & Commercial License"
"`CREMA-D`_","2017","7442 clip of 12 sentences spoken by 91 actors (48 males and 43 females).","6 emotions: angry, disgusted, fearful, happy, neutral, and sad","Audio, Video","--","English","`CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset`_","Open","`Open Database License & Database Content License`_"
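For quick local checking of the updated table, here is a minimal sketch that reads `src/ser-datasets.csv` with Python's standard `csv` module and lists the Portuguese-language entries, which after this change should include VERBO. The column names below are assumptions inferred from the row layout shown in this diff; the file's actual header row sits above the hunk and may be named differently.

```python
import csv

# Hypothetical column names, inferred from the field order visible in this diff.
COLUMNS = ["Dataset", "Year", "Content", "Emotions", "Format",
           "Size", "Language", "Paper", "Access", "License"]


def load_rows(path="src/ser-datasets.csv"):
    """Read the dataset table, skip the (assumed) header row, and map
    each record onto the assumed column names above."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # drop the header line
        return [dict(zip(COLUMNS, row)) for row in reader if row]


if __name__ == "__main__":
    for row in load_rows():
        if "Portuguese" in row["Language"]:
            print(row["Dataset"], "-", row["Emotions"])
```

Using `csv.reader` rather than splitting on commas keeps the quoted fields intact, which matters here because the emotion lists themselves contain commas.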