Human identification at a distance is a challenging task that has long been a popular research topic in computer vision. The gait patterns of different people can be highly distinctive, which makes gait an important body characteristic for human identification. In this lecture, I will first introduce the history of gait-based human identification and outline the challenges in this field, such as cross-view and cross-walking-condition gait recognition. Then I will present a comprehensive survey of the different modules of a gait-based human identification system. Specifically, I will summarize both traditional approaches and advanced deep-learning-based approaches to gait-based human identification. In particular, such deep learning models can achieve an average accuracy of 98% under identical-view conditions and 91% in cross-view scenarios on a database of more than 4,000 people, substantially better than previously reported results. Afterwards, I will discuss applications of gait recognition at a distance in various visual tasks. Finally, I will offer some suggestions for employing gait recognition in practice and indicate potential future directions for this area.
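Among the traditional approaches the survey covers, a classic silhouette-based representation is the gait energy image (Han and Bhanu, listed in the references below), which averages aligned binary silhouettes over a gait cycle into a single template. A minimal sketch in NumPy, assuming the silhouettes have already been segmented, size-normalized, and centered:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes into one template.

    silhouettes: array of shape (T, H, W) with values in {0, 1},
    assumed already size-normalized and horizontally centered.
    Bright pixels mark body regions that stay static across the cycle;
    mid-gray pixels mark the moving limbs.
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    return frames.mean(axis=0)  # pixel-wise average over the gait cycle

# Toy example: two 4x4 "silhouettes" with partially overlapping bodies
seq = np.zeros((2, 4, 4))
seq[0, 1:3, 1:3] = 1.0
seq[1, 1:3, 0:2] = 1.0
gei = gait_energy_image(seq)  # overlap -> 1.0, motion regions -> 0.5
```

The resulting template can then be fed to a classifier or, as in the deep-learning approaches discussed in the lecture, used as the input to a CNN.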
Dr. Liang Wang received the B.Eng. and M.Eng. degrees from Anhui University in 1997 and 2000, respectively, and the PhD degree from the Institute of Automation, Chinese Academy of Sciences (CAS) in 2004. From 2004 to 2010, he worked as a Research Assistant at Imperial College London, United Kingdom, and Monash University, Australia; a Research Fellow at the University of Melbourne, Australia; and a Lecturer at the University of Bath, United Kingdom. Currently, he is a full Professor of the Hundred Talents Program at the National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P. R. China.
His major research interests include machine learning, pattern recognition, and computer vision. He has published widely in highly ranked international journals such as IEEE TPAMI and IEEE TIP, and in leading international conferences such as CVPR, ICCV, and ICDM. He has received several honors and awards, including the Special Prize of the Presidential Scholarship of the Chinese Academy of Sciences. He is a Senior Member of the IEEE and a Fellow of the IAPR, as well as a member of the BMVA. He serves as an associate editor of IEEE Transactions on Cybernetics and IEEE Transactions on Information Forensics and Security.
- Z. Wu, Y. Huang, L. Wang, X. Wang, and T. Tan, “A comprehensive study on cross-view gait based human identification with deep CNNs,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016.
- C. Wang, J. Zhang, L. Wang, J. Pu, X. Yuan, “Human identification using temporal information preserving gait templates”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(11), pp 2164-2176, 2012.
- L. Wang, T. Tan, H. Ning, and W. Hu, “Silhouette analysis based gait recognition for human identification”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2003, 25(12): 1505-1518.
- L. Wang (Lead Guest Editor), G. Y. Zhao, N. Rajpoot, and M. Nixon, Special issue on new advances in video-based gait analysis and applications: challenges and solutions, IEEE Transactions on Systems, Man and Cybernetics, Part-B (TSMC-B), 2010, 40(4).
- W. Kusakunniran, Q. Wu, J. Zhang, H. Li, L. Wang, “Recognizing gaits across views through correlated motion co-clustering”, IEEE Transactions on Image Processing (TIP), 23(2), pp 696-709, 2014.
- L. Wang, T. Tan, W. Hu and H. Ning, “Automatic gait recognition based on statistical shape analysis”, IEEE Transactions on Image Processing (TIP), 2003, 12(9):1120-1131.
- P. Larsen, E. Simonsen, and N. Lynnerup, “Gait analysis in forensic medicine,” Journal of Forensic Sciences, vol. 53, pp. 1149–1153, 2008.
- I. Bouchrika, M. Goffredo, J. Carter, and M. S. Nixon, “On using gait in forensic biometrics,” Journal of Forensic Sciences, vol. 56(4), pp. 882–889, 2011.
- D. Weinland, R. Ronfard, and E. Boyer, “Free viewpoint action recognition using motion history volumes,” Computer Vision and Image Understanding, vol. 104(2-3), pp. 249–257, 2006.
- A. Farhadi and M. K. Tabrizi, “Learning to recognize activities from the wrong view point,” in ECCV, 2008.
- J. Han and B. Bhanu, “Individual recognition using gait energy image,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28(2), pp. 316–322, 2006.
- S. Yu, D. Tan, and T. Tan, “A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition,” in ICPR, 2006.
- G. Zhao, G. Liu, H. Li, and M. Pietikainen, “3D gait recognition using multiple cameras,” in Int. Conf. Automatic Face and Gesture Recognition, 2006.
- G. Ariyanto and M. Nixon, “Model-based 3D gait biometrics,” in Int. Joint Conf. Biometrics, 2011.
- M. Goffredo, I. Bouchrika, J. Carter, and M. Nixon, “Self-calibrating view-invariant gait biometrics,” IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 40(4), pp. 997–1008, 2010.
- W. Kusakunniran, Q. Wu, J. Zhang, Y. Ma, and H. Li, “A new view invariant feature for cross-view gait recognition,” IEEE Trans. Information Forensics and Security, vol. 8(10), pp. 1642–1653, 2013.
- Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, “Gait recognition using a view transformation model in the frequency domain,” in ECCV, 2006.
- Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” International Symposium on Circuits and Systems, 2010.
- Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification,” in CVPR, 2014.
- A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.
- P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “OverFeat: Integrated recognition, localization and detection using convolutional networks,” arXiv:1312.6229, 2013.
- C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Learning hierarchical features for scene labeling,” IEEE Trans. Pattern Analysis and Machine Intelligence, 2013.
- A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. Li, “Large-scale video classification with convolutional neural networks,” in CVPR, 2014.
- S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in CVPR, 2005.
- M. Hu, Y. Wang, Z. Zhang, J. Little, and D. Huang, “View-invariant discriminative projection for multi-view gait-based human identification,” IEEE Trans. Information Forensics and Security, vol. 8(12), pp. 2034–2045, 2013.
- H. Iwama, M. Okumura, Y. Makihara, and Y. Yagi, “The OU-ISIR gait database: Comprising the large population dataset and performance evaluation of gait recognition,” IEEE Trans. Information Forensics and Security, vol. 7(5), pp. 1511–1521, 2012.
- C. Wang, J. Zhang, J. Pu, X. Yuan, and L. Wang, “Chrono-gait image: A novel temporal template for gait recognition,” in ECCV, 2010.
- T. Lam, K. Cheung, and J. Liu, “Gait flow image: A silhouette-based gait representation for human identification,” Pattern Recognition, vol. 44(4), pp. 973–987, Apr. 2010.
- W. Kusakunniran, Q. Wu, H. Li, and J. Zhang, “Multiple views gait recognition using view transformation model based on optimized gait energy image,” in Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences (THEMIS), 2009.
- W. Kusakunniran, Q. Wu, J. Zhang, and H. Li, “Support vector regression for multi-view gait recognition based on local motion feature selection,” in CVPR, 2010.
- ——, “Gait recognition under various viewing angles based on correlated motion regression,” IEEE Trans. Circuits and Systems for Video Technology, vol. 22(6), pp. 966–980, 2012.
- K. Bashir, T. Xiang, and S. Gong, “Cross-view gait recognition using correlation strength,” in BMVC, 2010.
- H. Hu, “Enhanced Gabor feature based classification using a regularized locally tensor discriminant model for multi-view gait recognition,” IEEE Trans. Circuits and Systems for Video Technology, vol. 23(7), pp. 1274–1286, 2013.