Dataset · Oct 23, 2020

Introduction

Music Emotion Recognition (MER) has recently received considerable attention. To support MER research, which requires large music content libraries, we present the PMEmo dataset, containing emotion annotations for 794 songs together with simultaneously recorded electrodermal activity (EDA) signals. A music emotion experiment involving 457 subjects was carefully designed to collect a high-quality affect-annotated music corpus. The dataset is publicly available to the research community and is foremost intended for benchmarking in music emotion retrieval and recognition. To allow direct evaluation of methods for music affective analysis, it also includes pre-computed audio, text, and physiological feature sets. In addition, manually selected chorus excerpts of the songs (compressed as MP3) are provided to facilitate chorus-related research. In our article, "The PMEmo Dataset for Music Emotion Recognition", we describe in detail the resource acquisition, subject selection, experiment design, and annotation collection procedures, as well as the dataset content and data reliability analysis. We also illustrate its use in some simple music emotion recognition tasks, which demonstrate the dataset's suitability for MER work. Compared with other homogeneous datasets, PMEmo is novel in how the recruited annotators were organized and managed, and it is characterized by its large amount of music with simultaneous physiological signals.
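As an illustration only: the dataset's static emotion labels are obtained by aggregating per-subject valence/arousal annotations for each song. A minimal sketch of that aggregation step might look like the following (the inline data and column names here are hypothetical; check the actual CSV files shipped with the dataset for the real schema):

```python
import csv
import io
from statistics import mean

# Hypothetical excerpt of per-subject static annotations.
# The real PMEmo annotation files are CSVs whose column names may differ.
raw = """musicId,valence,arousal
1,0.70,0.60
1,0.60,0.50
2,0.30,0.80
2,0.40,0.90
"""

# Group annotations by song.
by_song = {}
for row in csv.DictReader(io.StringIO(raw)):
    by_song.setdefault(row["musicId"], []).append(
        (float(row["valence"]), float(row["arousal"]))
    )

# Average each song's annotations into one static (valence, arousal) label.
static_labels = {
    song: (mean(v for v, _ in anns), mean(a for _, a in anns))
    for song, anns in by_song.items()
}

print({s: (round(v, 2), round(a, 2)) for s, (v, a) in static_labels.items()})
# → {'1': (0.65, 0.55), '2': (0.35, 0.85)}
```

A simple mean is used here for brevity; in practice, annotation aggregation may also involve outlier filtering or inter-rater reliability checks, as analyzed in the paper.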

More details:
View example code on GitHub:

Download

  • Download from Baidu Netdisk:

Link to the 2018 original version: extraction code: xsad

Link to the 2019 updated version: extraction code: huav

  • Download from Google Drive:

[2018 version]

[2019 version]

License & co

Please cite our paper if you use our code or data.

@inproceedings{zhang2018pmemo,
	author = {Zhang, Kejun and Zhang, Hui and Li, Simeng and Yang, Changyuan and Sun, Lingyun},
	title = {The PMEmo Dataset for Music Emotion Recognition},
	booktitle = {Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval},
	series = {ICMR '18},
	year = {2018},
	isbn = {978-1-4503-5046-4},
	location = {Yokohama, Japan},
	pages = {135--142},
	numpages = {8},
	url = {},
	doi = {10.1145/3206025.3206037},
	acmid = {3206037},
	publisher = {ACM},
	address = {New York, NY, USA},
	keywords = {dataset, eda, experiment, music emotion recognition},
}



Zhang Hui

