The goal of the Emo-Net project is to train a computer to categorize emotion from "vocal bursts": non-speech vocal sounds that humans make to express emotion. (Think oohs, aahs, sighs, laughter, etc.) Research has shown that humans can identify as many as 24 distinct emotional categories from these sounds, but so far computers are unable to match that performance.
If you can categorize these sounds as expressing amusement, confusion, and disgust, you're doing better than most computers.
To improve computer performance in this area, we're asking people to record themselves making these sounds (like the samples above). If you choose to participate, you'll be asked to record these sounds using the microphone on your computer or mobile device via this Emo-Net website. The audio recordings will be collected in a publicly available dataset that will be used for the Emo-Net project, and potentially other projects.
Your submissions will be completely anonymous: although we ask for your e-mail address during the collection phase, the dataset will not include it.
We'll ask you to make 3 different recordings for each of the 30 emotion categories; each recording need only be a few seconds long. You may skip any category you wish, but it really helps the project if you provide samples for all 30 categories.
Interested in helping computers become more emotionally intelligent? Please register below, and thanks!
(BTW, our mascot is Emo-Gator; he's really into My Chemical Romance and Jimmy Eat World -- well, at least their earlier stuff.)
To get started, please enter your e-mail address in the field below and click the Login button.