Automatic detection of emotions with music files
Online publication date: Thu, 21-Aug-2014
by Angelina A. Tzacheva; Dirk Schlingmann; Keith J. Bell
International Journal of Social Network Mining (IJSNM), Vol. 1, No. 2, 2012
Abstract: The number of music files available on the internet is constantly growing, as is access to recordings. Music is now so readily accessible in digital form that personal collections can easily exceed the practical limits of the time we have to listen to them. Today, the problem of building music recommendation systems, including systems that can automatically detect the emotions associated with music files, is of great importance. In this work, we present a new strategy for the automatic detection of emotions in musical instrument recordings. We use Thayer's model to represent emotions, extract timbre-related acoustic features, and train and test two classifiers. The results show good recognition accuracy.
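The abstract outlines the pipeline only at a high level: timbre-related features are extracted from instrument recordings and fed to classifiers that predict a region of Thayer's valence-arousal model. The sketch below illustrates one way such a pipeline might look; the specific feature set (MFCCs, spectral centroid, spectral rolloff, zero-crossing rate) and the two classifiers (k-nearest neighbours and an SVM) are assumptions for illustration only, not the authors' actual choices.

# Minimal sketch, not the authors' implementation: timbre-related feature
# extraction plus two classifiers predicting Thayer-model emotion quadrants.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assumed label set: the four quadrants of Thayer's valence-arousal plane.
THAYER_QUADRANTS = ["exuberance", "anxiety", "contentment", "depression"]

def timbre_features(path):
    """Summarise a recording with timbre-related descriptors (assumed set)."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)
    # Collapse the frame-wise descriptors into per-recording statistics.
    frames = np.vstack([mfcc, centroid, rolloff, zcr])
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

def train_and_evaluate(paths, labels):
    """Train two classifiers on labelled recordings and report accuracy."""
    X = np.array([timbre_features(p) for p in paths])
    y = np.array(labels)  # each label is an index into THAYER_QUADRANTS
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(type(clf).__name__, "accuracy:", round(acc, 3))

In practice the choice of summary statistics and classifier hyperparameters would be tuned on a held-out set; the sketch simply shows where the timbre features and the two classifiers fit in the workflow described in the abstract.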