Multitask sparse Learning based Facial Expression Classification
Pratik Nimbal, Gopal Krishna Shyam
Section: Research Paper, Product Type: Journal Paper
Volume-7, Issue-6, Page no. 197-202, Jun-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7i6.197202
Online published on Jun 30, 2019
Copyright © Pratik Nimbal, Gopal Krishna Shyam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: Pratik Nimbal, Gopal Krishna Shyam, “Multitask sparse Learning based Facial Expression Classification,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.6, pp.197-202, 2019.
MLA Style Citation: Pratik Nimbal, Gopal Krishna Shyam "Multitask sparse Learning based Facial Expression Classification." International Journal of Computer Sciences and Engineering 7.6 (2019): 197-202.
APA Style Citation: Pratik Nimbal, Gopal Krishna Shyam (2019). Multitask sparse Learning based Facial Expression Classification. International Journal of Computer Sciences and Engineering, 7(6), 197-202.
BibTex Style Citation:
@article{Nimbal_2019,
author = {Pratik Nimbal and Gopal Krishna Shyam},
title = {Multitask sparse Learning based Facial Expression Classification},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {6 2019},
volume = {7},
Issue = {6},
month = {6},
year = {2019},
issn = {2347-2693},
pages = {197-202},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4531},
doi = {https://doi.org/10.26438/ijcse/v7i6.197202},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v7i6.197202
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=4531
TI - Multitask sparse Learning based Facial Expression Classification
T2 - International Journal of Computer Sciences and Engineering
AU - Nimbal, Pratik
AU - Shyam, Gopal Krishna
PY - 2019
DA - 2019/06/30
PB - IJCSE, Indore, INDIA
SP - 197-202
IS - 6
VL - 7
SN - 2347-2693
ER -
Abstract
Facial expression recognition is a challenging and fascinating problem in AI and pattern recognition, with applications in developmental psychology and human-machine interfaces. Designing classifiers with high reliability is a significant step in this research. This paper presents a framework for person-dependent expression recognition that combines multiple facial feature representations by means of multiple kernel learning (MKL) in support vector machines (SVMs). We study the impact of MKL on learning the kernel weights and empirically evaluate the results on the six basic expressions together with the neutral expression. In our experiments we combine two popular components: facial features extracted with the dlib library and a multikernel SVM with a polynomial kernel. Our experimental results on the Cohn-Kanade face database, as well as a manually collected database, demonstrate that this framework outperforms state-of-the-art conventional techniques and a straightforward MKL-based multiclass SVM for facial expression recognition.
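The pipeline the abstract describes, a multiclass SVM over a weighted combination of polynomial kernels, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors stand in for dlib landmark features, the kernel degrees and weights are fixed by hand (a full MKL method would learn the weights), and the 7 classes mimic the six basic expressions plus neutral.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.default_rng(0)

# Synthetic stand-in for dlib landmark features: 7 expression classes
# (6 basic + neutral), 20 samples each, as well-separated clusters.
n_classes, per_class, dim = 7, 20, 136
centers = rng.normal(scale=3.0, size=(n_classes, dim))
y_train = np.repeat(np.arange(n_classes), per_class)
X_train = centers[y_train] + rng.normal(scale=0.3, size=(n_classes * per_class, dim))
y_test = np.arange(n_classes)
X_test = centers[y_test] + rng.normal(scale=0.3, size=(n_classes, dim))

def combined_kernel(A, B, degrees=(2, 3), weights=(0.5, 0.5)):
    """Fixed-weight sum of polynomial kernels of different degrees.
    An MKL method would learn these weights from the data instead."""
    K = np.zeros((A.shape[0], B.shape[0]))
    for d, w in zip(degrees, weights):
        K += w * polynomial_kernel(A, B, degree=d)
    return K

# Multiclass SVM on the precomputed combined kernel.
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(X_train, X_train), y_train)
pred = clf.predict(combined_kernel(X_test, X_train))
print("accuracy:", (pred == y_test).mean())
```

Precomputing the kernel matrix is what lets several base kernels be mixed before the SVM ever sees the data; swapping in learned weights turns this sketch into a genuine MKL classifier.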
Key-Words / Index Term
Facial Expression Recognition, Multikernel, Support Vector Machines