EmotiW Challenge Winners by Year: Results and Papers
| Year | Baseline paper | Champion paper | Model & features | Test acc. (%) |
| --- | --- | --- | --- | --- |
| 2013 | [1] | [2] | EmoNets | 41.03 |
| 2014 | [3] | [4] | HOG, DSIFT, DCNN | 50.37 |
| 2015 | [5] | [6] | AU-aware facial features, CNN | 53.8 |
| 2016 | [7] | [8] | CNN-LSTM, C3D | 59.02 |
| 2017 | [9] | [10] | SSE | 60.34 |
| 2018 | [11] | [12] | MTCNN, VGG16, LMED | 61.87 |
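As an illustration of the CNN-LSTM pattern listed for the 2016 winner [8] (per-frame CNN features aggregated over time by a recurrent network, then classified into the seven AFEW emotion categories), here is a minimal PyTorch sketch. The layer sizes, the small stand-in CNN, and all class and variable names are assumptions chosen for brevity; this is not any team's actual implementation.

```python
# Minimal, illustrative sketch of a CNN-LSTM video emotion classifier.
# NOT the winning system from [8]; shapes and layers are assumptions.
import torch
import torch.nn as nn


class CnnLstmEmotionClassifier(nn.Module):
    def __init__(self, num_classes: int = 7, hidden_size: int = 128):
        super().__init__()
        # Small frame-level CNN standing in for the pretrained face CNNs
        # (e.g. VGG16-style backbones) used by the actual entries.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W) cropped face frames
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)  # (B, T, 32)
        _, (h_n, _) = self.lstm(feats)   # final hidden state summarises the clip
        return self.fc(h_n[-1])          # (B, num_classes) emotion logits


if __name__ == "__main__":
    model = CnnLstmEmotionClassifier()
    dummy = torch.randn(2, 16, 3, 64, 64)  # 2 clips of 16 face frames each
    print(model(dummy).shape)              # torch.Size([2, 7])
```

In the actual 2016 entry, predictions from a CNN-RNN branch like this were fused with a C3D branch at score level; the sketch above covers only the recurrent branch.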
References:
[1]. Dhall A, Goecke R, Joshi J, et al. Emotion recognition in the wild challenge 2013[C]//Proceedings of the 15th ACM on International conference on multimodal interaction. ACM, 2013: 509-516.
[2]. Kahou S E, Pal C, Bouthillier X, et al. Combining modality specific deep neural networks for emotion recognition in video[C]//Proceedings of the 15th ACM on International conference on multimodal interaction. ACM, 2013: 543-550.
[3]. Dhall A, Goecke R, Joshi J, et al. Emotion recognition in the wild challenge 2014: Baseline, data and protocol[C]//Proceedings of the 16th international conference on multimodal interaction. ACM, 2014: 461-466.
[4]. Liu M, Wang R, Li S, et al. Combining multiple kernel methods on riemannian manifold for emotion recognition in the wild[C]//Proceedings of the 16th International Conference on multimodal interaction. ACM, 2014: 494-501.
[5]. Dhall A, Ramana Murthy O V, Goecke R, et al. Video and image based emotion recognition challenges in the wild: Emotiw 2015[C]//Proceedings of the 2015 ACM on international conference on multimodal interaction. ACM, 2015: 423-426.
[6]. Yao A, Shao J, Ma N, et al. Capturing au-aware facial features and their latent relations for emotion recognition in the wild[C]//Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. ACM, 2015: 451-458.
[7]. Dhall A, Goecke R, Joshi J, et al. Emotiw 2016: Video and group-level emotion recognition challenges[C]//Proceedings of the 18th ACM International Conference on Multimodal Interaction. ACM, 2016: 427-432.
[8]. Fan Y, Lu X, Li D, et al. Video-based emotion recognition using CNN-RNN and C3D hybrid networks[C]//Proceedings of the 18th ACM International Conference on Multimodal Interaction. ACM, 2016: 445-450.
[9]. Dhall A, Goecke R, Ghosh S, et al. From individual to group-level emotion recognition: EmotiW 5.0[C]//Proceedings of the 19th ACM International Conference on Multimodal Interaction. ACM, 2017.
[10]. Hu P, Cai D, Wang S, et al. Learning supervised scoring ensemble for emotion recognition in the wild[C]//Proceedings of the 19th ACM international conference on multimodal interaction. ACM, 2017: 553-560.
[11]. Dhall A, Kaur A, Goecke R, et al. Emotiw 2018: Audio-video, student engagement and group-level affect prediction[C]//Proceedings of the 2018 on International Conference on Multimodal Interaction. ACM, 2018: 653-656.
[12]. Liu C, Tang T, Lv K, et al. Multi-Feature Based Emotion Recognition for Video Clips[C]//Proceedings of the 2018 on International Conference on Multimodal Interaction. ACM, 2018: 630-634.