This book captures the current challenges in the automatic recognition of emotion in spontaneous speech and makes an effort to explain, elaborate on, and propose possible solutions. Intelligent human–computer interaction (iHCI) systems rely on several technologies, such as automatic speech recognition (ASR), speaker identification, language identification, image and video recognition, and affect/mood/emotion analysis and recognition, to name a few. Given the importance of spontaneity in any human–machine conversational speech, reliable recognition of emotion from naturally spoken, spontaneous speech is crucial. While emotions explicitly demonstrated by an actor are easy for a machine to recognize, the same is not true of day-to-day, naturally spoken spontaneous speech. The book explores several reasons behind this, one of the main ones being that people, especially non-actors, do not explicitly demonstrate their emotions when they speak, making it difficult for machines to distinguish one emotion from another in their speech. This short book, based on some of the authors' previously published work in the area of audio emotion analysis, identifies the practical challenges in analysing emotions in spontaneous speech and puts forward several possible solutions that can assist in robustly determining the emotions expressed in spontaneous speech.