Recognition of Hand Gesture Using CNN for American Sign Language
Abstract
In this paper we propose a hand gesture recognition system for American Sign Language alphabets, aimed at making communication easier for people with hearing impairment. Because not everyone understands sign language, progress in automatic recognition would have a great social impact, letting such users convey their thoughts easily and interact with the wider world. The system works in four modules: hand tracking and segmentation, feature extraction, gesture recognition, and the application interface. HSV (Hue, Saturation, Value) color segmentation together with the CamShift method is used for hand tracking and segmentation, and a CNN (Convolutional Neural Network) is implemented for gesture recognition. The proposed system is inexpensive and easy to use, and it handles single-hand gestures, so a greater number of hearing-impaired people should be able to communicate easily with others. The paper is organized as follows: the introduction comes first, followed by the literature review, then the methodology and discussion of results, and lastly the conclusion with future scope.
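To make the segmentation stage concrete, the sketch below illustrates the kind of HSV thresholding the pipeline relies on before CamShift tracking. It is a minimal, pure-Python illustration, not the authors' implementation: the hue and saturation bounds are hypothetical placeholder values (real systems tune them per camera and lighting), and a practical version would use OpenCV's vectorized `cv2.inRange` instead of per-pixel loops.

```python
import colorsys

# Hypothetical skin-tone bounds in HSV (placeholder values for illustration;
# a deployed system would calibrate these per lighting setup).
H_MAX = 50 / 360.0          # upper hue bound (reddish/orange tones)
S_MIN, S_MAX = 0.23, 0.68   # saturation band typical of skin

def is_skin_pixel(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin via HSV thresholds."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= H_MAX and S_MIN <= s <= S_MAX

def segment(image):
    """Return a binary mask (1 = hand/skin) for rows of (r, g, b) tuples."""
    return [[1 if is_skin_pixel(*px) else 0 for px in row] for row in image]

# Toy 1x2 image: a skin-like tone next to a saturated blue background pixel.
mask = segment([[(220, 170, 140), (10, 20, 200)]])
print(mask)  # -> [[1, 0]]
```

The resulting binary mask is what a tracker such as CamShift would consume: it iteratively shifts and resizes a search window toward the densest skin-colored region, yielding the hand's location for the feature-extraction and CNN stages.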