Wednesday, April 3, 2013

HandiCom

HandiCom: Handheld Deaf and Dumb Communication Device Based on Gesture to Voice and Speech to Image/Word Translation with SMS Sending and Language Teaching Ability

Rationale:

There are millions of people in the world who suffer from hearing loss (deafness) or speech loss (muteness), conditions that may be present from birth or arise later in life. They cannot be cured by medicine because they are not diseases caused by a virus, so those affected must depend on science and technology to innovate solutions that help them live a better life.

Fundamental Problem:

Deaf and dumb people often communicate via sign language, a representation of words through hand and finger positions. But it has serious limitations: it is not easy for a hearing listener on the other side to understand, and to make things worse, few people in the world know sign language at all. It is also difficult to represent every word of a plain language like English with a sign language symbol, and even where such a symbol exists, learning and using it is tough and cumbersome.

Previous Efforts:

People have previously worked on sign language translating devices such as gesture-sensing gloves, but these are not the right solution because the sign language method itself has the serious drawbacks mentioned above. We therefore propose a new form of communication mechanism that aims to eliminate these drawbacks with the help of the latest available technologies.

Abstract:

Our project aims to build a handheld device that helps deaf and dumb people communicate with others in an everyday spoken language such as English. The project is divided into four modules.

The Four Working Modules:

The first is the Gesture to Voice translating module. It performs touchscreen-based gesture recognition on a 65K-color TFT touchscreen display: the device decodes a swipe gesture made on the touchscreen and speaks the corresponding word, letter, or numeral in a synthesized human voice through an MP3 audio decoder. The user can form sentences this way quickly and easily, and the color display helps by rendering an onscreen swipe keypad layout on which the user inputs gestures.
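
Below is a minimal sketch in C of how this gesture-to-voice flow could work. The touch_get_point() and audio_play_file() hooks, the 4x10 keypad grid, and the /voice/X.mp3 file naming are illustrative assumptions, not the project's actual driver API.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical hooks standing in for the project's SPI touchscreen
 * controller driver and MP3 audio decoder driver. */
extern int  touch_get_point(uint16_t *x, uint16_t *y); /* 1 while pressed */
extern void audio_play_file(const char *path);         /* speak one clip  */

/* Map a display point to a key on an assumed 4x10 onscreen keypad
 * (320x240 panel, 32x60-pixel cells). */
static char keypad_lookup(uint16_t x, uint16_t y)
{
    static const char *rows[4] = {
        "QWERTYUIOP", "ASDFGHJKL ", "ZXCVBNM.  ", "0123456789"
    };
    if (x > 319) x = 319;
    if (y > 239) y = 239;
    return rows[y / 60][x / 32];
}

/* Track one swipe, then speak the character where the finger lifted off. */
void gesture_to_voice(void)
{
    uint16_t x = 0, y = 0;
    while (touch_get_point(&x, &y))
        ;                             /* follow the finger until release */

    char path[24];
    snprintf(path, sizeof path, "/voice/%c.mp3", keypad_lookup(x, y));
    audio_play_file(path);            /* the decoder speaks the key */
}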

The second is the Speech to Image translating module, which combines an advanced speech recognition unit with the color display. It recognizes the words spoken by a hearing person and converts this voice input into an image or text displayed on the device's screen. A FAT-32 formatted microSD memory card gives the device the large storage space needed to hold all the images.
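
As an illustration, here is a C sketch of that flow using the FatFs calls named in the software list below (f_open, f_read, f_close). The uart_read_line() and lcd_draw_chunk() hooks and the /img/<word>.bin file layout are assumptions, and the card is taken to be already mounted with f_mount().

#include <stdint.h>
#include <stdio.h>
#include "ff.h"                       /* FatFs FAT-32 file system library */

/* Hypothetical hooks for the recognizer's UART link and the TFT driver. */
extern int  uart_read_line(char *buf, int maxlen);     /* recognized word */
extern void lcd_draw_chunk(const uint8_t *px, UINT n); /* stream pixels   */

/* Show the stored picture matching one recognized word, e.g. "water"
 * maps to /img/water.bin on the microSD card. */
void speech_to_image(void)
{
    char word[32], path[48];
    if (uart_read_line(word, sizeof word) <= 0)
        return;                       /* nothing was recognized */

    snprintf(path, sizeof path, "/img/%s.bin", word);

    FIL f;
    if (f_open(&f, path, FA_READ) != FR_OK)
        return;                       /* no picture stored for this word */

    uint8_t buf[512];
    UINT n;
    while (f_read(&f, buf, sizeof buf, &n) == FR_OK && n > 0)
        lcd_draw_chunk(buf, n);       /* stream the image to the display */
    f_close(&f);
}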

The third module sends SMS to mobile phones. Deaf and dumb users also need to communicate over long distances, so the device includes a built-in GSM module for sending SMS. Using the touchscreen display, the user enters the text and the mobile number just as on a normal mobile phone.
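
A GSM modem of this kind is normally driven with standard AT commands over UART. The sketch below shows the usual text-mode SMS sequence (AT+CMGF=1, then AT+CMGS, then the message body terminated by Ctrl+Z); gsm_puts() and gsm_wait_for() are hypothetical stand-ins for the project's GSM modem driver.

#include <stdio.h>

extern void gsm_puts(const char *s);       /* write a string to the modem */
extern int  gsm_wait_for(const char *tok); /* block until reply, 0 = ok   */

/* Send one SMS with the number and text the user typed on the touchscreen. */
int send_sms(const char *number, const char *text)
{
    char cmd[40];

    gsm_puts("AT+CMGF=1\r");               /* select SMS text mode */
    if (gsm_wait_for("OK") != 0)
        return -1;

    snprintf(cmd, sizeof cmd, "AT+CMGS=\"%s\"\r", number);
    gsm_puts(cmd);
    if (gsm_wait_for(">") != 0)            /* modem prompts for the body */
        return -1;

    gsm_puts(text);
    gsm_puts("\x1A");                      /* Ctrl+Z terminates the SMS */
    return gsm_wait_for("OK");
}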

The fourth module is the language learning mode, in which deaf and dumb users can learn letters, numbers, and words, displayed as pictures on the color display.
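
A lesson in this mode can be as simple as a flashcard loop over the stored pictures. In this sketch, show_image() and wait_for_tap() are hypothetical wrappers over the display and touchscreen drivers, and the /learn/X.bin naming is assumed.

#include <stdio.h>

extern void show_image(const char *path); /* draw a picture from microSD */
extern void wait_for_tap(void);           /* block until screen is tapped */

/* A simple alphabet lesson: show each letter's picture card in turn and
 * advance on a tap. Number and word lessons would follow the same loop. */
void alphabet_lesson(void)
{
    for (char c = 'A'; c <= 'Z'; ++c) {
        char path[24];
        snprintf(path, sizeof path, "/learn/%c.bin", c);
        show_image(path);
        wait_for_tap();
    }
}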

Project Advantages:
·         Serves as a mobile phone for the deaf and dumb.
·         Works whether the user is deaf, dumb, or both.
·         Supports both short-distance and long-distance communication.
·         The touchscreen gesture method eliminates complex hand gestures.
·         It therefore removes the need for hand-movement sensing systems, which are large, complex, expensive, and slow.
·         The image translation feature supports users who cannot read.
·         The language learning mode helps such users learn English words.
·         More advanced users can use word translation instead of images.
·         Large onboard memory stores image and voice files.
·         The audio decoder module generates high-quality voice output.
·         Can be extended to support multiple languages.

Microcontroller Used:

The device is designed and developed around the LPC1313, a low-power, high-performance 32-bit ARM Cortex-M3 microcontroller from NXP Semiconductors.

Software Tools Used:
·         Programming Language:          Embedded C
·         Development Tool:                  LPCXpresso IDE (Eclipse based)

Embedded Protocols Used:
·         I2C, SPI, UART (a minimal UART write sketch follows the library list below)
Software Libraries Used:
·         Graphics Library
·         Touchscreen Controller Driver via SPI protocol
·         FatFs FAT-32 File System Library
·         Micro-SD Card Driver Library via SPI
·         Audio File Decoder Software
·         Speech Recognition Unit Driver Software via UART
·         GSM Modem Driver Software via UART protocol
·         Cortex-M3 Peripheral Device Driver Library
·         CMSIS from ARM
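
Several of these drivers sit on top of a simple UART layer. As a minimal sketch, a blocking byte write on the LPC1313 could look like the following, assuming NXP's LPC13xx.h CMSIS device header and a UART that has already been clocked and configured.

#include <stdint.h>
#include "LPC13xx.h"   /* NXP's CMSIS device header for the LPC13xx family */

/* Blocking write of one byte on the LPC1313's UART. */
void uart_putc(uint8_t c)
{
    while ((LPC_UART->LSR & (1 << 5)) == 0)
        ;              /* wait for THRE: transmit holding register empty */
    LPC_UART->THR = c;
}

/* Send a NUL-terminated string, e.g. an AT command to the GSM modem. */
void uart_puts(const char *s)
{
    while (*s)
        uart_putc((uint8_t)*s++);
}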
