diff --git a/README.md b/README.md
index 4a5fcad..f7af936 100644
--- a/README.md
+++ b/README.md
@@ -26,6 +26,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
| [ANAD] | 2018 | 1384 recording by multiple speakers. | 3 emotions: angry, happy, surprised. | Audio | ~2 GB | Arabic | [Arabic Natural Audio Dataset] | Open access | [CC BY-NC-SA 4.0] |
| [EmoSynth] | 2018 | 144 audio file labelled by 40 listeners. | Emotion (no speech) defined in regard of valence and arousal. | Audio | 103.4 MB | -- | [The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results] | Open access | [CC BY 4.0] |
| [CMU-MOSEI] | 2018 | 65 hours of annotated video from more than 1000 speakers and 250 topics. | 6 Emotion (happiness, sadness, anger,fear, disgust, surprise) + Likert scale. | Audio, Video | -- | English | [Multi-attention Recurrent Network for Human Communication Comprehension] | Open access | [CMU-MOSEI License] |
+| [VERBO] | 2018 | 14 different phrases spoken by 12 speakers (6 female + 6 male) for a total of 1167 recordings. | 7 emotions: happiness, disgust, fear, neutral, anger, surprise, sadness. | Audio | -- | Portuguese | [VERBO: Voice Emotion Recognition dataBase in Portuguese Language] | Restricted access | Available for research purposes only |
| [CMU-MOSI] | 2017 | 2199 opinion utterances with annotated sentiment. | Sentiment annotated between very negative to very positive in seven Likert steps. | Audio, Video | -- | English | [Multi-attention Recurrent Network for Human Communication Comprehension] | Open access | [CMU-MOSI License] |
| [MSP-IMPROV] | 2017 | 20 sentences by 12 actors. | 4 emotions: angry, sad, happy, neutral, other, without agreement | Audio, Video | -- | English | [MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception] | Restricted access | Available under an Academic License & Commercial License |
| [CREMA-D] | 2017 | 7442 clip of 12 sentences spoken by 91 actors (48 males and 43 females). | 6 emotions: angry, disgusted, fearful, happy, neutral, and sad | Audio, Video | -- | English | [CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset] | Open access | Available under the [Open Database License & Database Content License] |
@@ -86,6 +87,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
[MELD]: https://affective-meld.github.io/
[ShEMO]: https://github.com/mansourehk/ShEMO
[DEMoS]: https://zenodo.org/record/2544829
+[VERBO]: https://sites.google.com/view/verbodatabase/home
[AESDD]: http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/
[Emov-DB]: https://mega.nz/#F!KBp32apT!gLIgyWf9iQ-yqnWFUFuUHg!mYwUnI4K
[RAVDESS]: https://zenodo.org/record/1188976#.XrC7a5NKjOR
@@ -152,6 +154,7 @@ The table can be browsed, sorted and searched under https://superkogito.github.i
[BEHAVIOURAL FINDINGS FROM THE TORONTO EMOTIONAL SPEECH SET]: https://www.semanticscholar.org/paper/BEHAVIOURAL-FINDINGS-FROM-THE-TORONTO-EMOTIONAL-SET-Dupuis-Pichora-Fuller/d7f746b3aee801a353b6929a65d9a34a68e71c6f/figure/2
[CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4313618/
[DEMoS: An Italian emotional speech corpus. Elicitation methods, machine learning, and perception]: https://link.springer.com/epdf/10.1007/s10579-019-09450-y?author_access_token=5pf0w_D4k9z28TM6n4PbVPe4RwlQNchNByi7wbcMAY5hiA-aXzXNbZYfsMDDq2CdHD-w5ArAxIwlsk2nC_26pSyEAcu1xlKJ1c9m3JZj2ZlFmlVoCZUTcG3Hq2_2ozMLo3Hq3Y0CHzLdTxihQwch5Q%3D%3D
+[VERBO: Voice Emotion Recognition dataBase in Portuguese Language]: https://thescipub.com/pdf/jcssp.2018.1420.1430.pdf
[A Parameterized and Annotated Spoken Dialog Corpus of the CMU Let’s Go Bus Information System]: http://www.lrec-conf.org/proceedings/lrec2012/pdf/333_Paper.pdf
[Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions]: https://drive.google.com/file/d/0B2V_I9XKBODhNENKUnZWNFdVXzQ/view
[Multimodal Emotion Recognition]: http://personal.ee.surrey.ac.uk/Personal/P.Jackson/pub/ma10/HaqJackson_MachineAudition10_approved.pdf
diff --git a/src/index.rst b/src/index.rst
index bc8964e..2109d70 100644
--- a/src/index.rst
+++ b/src/index.rst
@@ -58,6 +58,7 @@ However, we cannot guarantee that all listed links are up-to-date. Read more in
.. _`MELD`: https://affective-meld.github.io/
.. _`ShEMO`: https://github.com/mansourehk/ShEMO
.. _`DEMoS`: https://zenodo.org/record/2544829
+.. _`VERBO`: https://sites.google.com/view/verbodatabase/home
.. _`AESDD`: http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/
.. _`Emov-DB`: https://mega.nz/#F!KBp32apT!gLIgyWf9iQ-yqnWFUFuUHg!mYwUnI4K
.. _`RAVDESS`: https://zenodo.org/record/1188976#.XrC7a5NKjOR
@@ -116,6 +117,7 @@ However, we cannot guarantee that all listed links are up-to-date. Read more in
.. _`EMOTIONAL SPEECH SYNTHESIS USING SUBSPACE CONSTRAINTS IN PROSODY`: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.420.8899&rep=rep1&type=pdf
.. _`Naturalistic emotional speech collectionparadigm with online game and its psychological and acoustical assessment`: https://www.jstage.jst.go.jp/article/ast/33/6/33_E1175/_pdf
.. _`EMOVO Corpus: an Italian Emotional Speech Database`: https://core.ac.uk/download/pdf/53857389.pdf
+.. _`VERBO: Voice Emotion Recognition dataBase in Portuguese Language`: https://thescipub.com/pdf/jcssp.2018.1420.1430.pdf
.. _`The eNTERFACE’05 Audio-Visual Emotion Database`: http://poseidon.csd.auth.gr/papers/PUBLISHED/CONFERENCE/pdf/Martin06a.pdf
.. _`Arabic Natural Audio Dataset`: https://data.mendeley.com/datasets/xm232yxf7t/1
.. _`Introducing the Geneva Multimodal Expression Corpus for Experimental Research on Emotion Perception`: https://www.researchgate.net/publication/51796867_Introducing_the_Geneva_Multimodal_Expression_Corpus_for_Experimental_Research_on_Emotion_Perception
diff --git a/src/ser-datasets.csv b/src/ser-datasets.csv
index 6cbce0a..1f22b43 100644
--- a/src/ser-datasets.csv
+++ b/src/ser-datasets.csv
@@ -21,6 +21,7 @@
"`ANAD`_","2018","1384 recording by multiple speakers.","3 emotions: angry, happy, surprised.","Audio","2 GB","Arabic","`Arabic Natural Audio Dataset`_","Open","`CC BY-NC-SA 4.0`_"
"`EmoSynth`_","2018","144 audio file labelled by 40 listeners.","Emotion (no speech) defined in regard of valence and arousal.","Audio","0.1034 GB","--","`The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results`_","Open","`CC BY 4.0`_"
"`CMU-MOSEI`_","2018","65 hours of annotated video from more than 1000 speakers and 250 topics.","6 Emotion (happiness, sadness, anger,fear, disgust, surprise) + Likert scale.","Audio, Video","--","English","`Multi-attention Recurrent Network for Human Communication Comprehension`_","Open","`CMU-MOSEI License`_"
+"`VERBO`_","2018","14 different phrases spoken by 12 speakers (6 female + 6 male) for a total of 1167 recordings.","7 emotions: happiness, disgust, fear, neutral, anger, surprise, sadness","Audio","--","Portuguese","`VERBO: Voice Emotion Recognition dataBase in Portuguese Language`_","Restricted","Available for research purposes only"
"`CMU-MOSI`_","2017","2199 opinion utterances with annotated sentiment.","Sentiment annotated between very negative to very positive in seven Likert steps.","Audio, Video","--","English","`Multi-attention Recurrent Network for Human Communication Comprehension`_","Open","`CMU-MOSI License`_"
"`MSP-IMPROV`_","2017","20 sentences by 12 actors.","4 emotions: angry, sad, happy, neutral, other, without agreement","Audio, Video","--","English","`MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception`_","Restricted","Academic License & Commercial License"
"`CREMA-D`_","2017","7442 clip of 12 sentences spoken by 91 actors (48 males and 43 females).","6 emotions: angry, disgusted, fearful, happy, neutral, and sad","Audio, Video","--","English","`CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset`_","Open","`Open Database License & Database Content License`_"