About Us

In today's digital age, a significant gap exists: few tools enable real-time conversion of sign language and voice to text, which creates everyday communication challenges for the deaf and mute communities.
Sign It, developed by Mohammad Alkhabaz, Abdulmutaleb Almuslimani, Estabraq Altararwah, and Kawthar Webdan, is designed to facilitate seamless communication among deaf, mute, and hearing individuals.

Features

Real-time Sign Language Translation

This feature enables immediate translation of sign language gestures into text, allowing for seamless communication between deaf individuals and non-signers.

Voice-to-Text Conversion

With this feature, spoken words are converted into written text in real-time, providing a communication bridge for those who cannot sign.

Search for Sign Gestures

Users can search for specific sign gestures within the application, making it easier to learn new signs or find alternatives for existing ones.

Impact

Impact on Society

Our app has a positive impact on society by fostering social cohesion and integration. It promotes equal opportunities for all individuals while increasing awareness and empathy towards the deaf and hard-of-hearing community.

Impact on Individuals

The app significantly impacts individuals by improving communication, increasing confidence, fostering independence, and enhancing access to essential services.

Impact on Organizations

Our app benefits organizations by creating an inclusive workspace, enhancing diversity, and strengthening compliance and reputation, while also improving customer service and contributing to cost efficiency.

ASL Translation App Core Development Focus

This section provides insights into the core programming languages used in our real-time ASL translation application, specifically highlighting the use of Dart/Flutter and Kotlin to ensure robust and versatile functionality.

  • Dart/Flutter: 64.7%
  • Kotlin (Java): 35.3%

Tech Explained

  • In the realm of computer vision and real-time interaction, the Mediapipe framework stands out for its robust capabilities in tracking human body dynamics. In particular, the Pose and Gesture models within Mediapipe are pivotal for applications requiring precise recognition of human movement and gestures. Both models provide detailed 3D spatial data (x, y, z coordinates) for each tracked point: the Pose model identifies 33 unique landmark points across the body, while the hand tracking that underpins gesture recognition identifies 21 landmark points per hand.

    These landmarks cover crucial joints and contours of the body and hands, offering a comprehensive mapping of human poses and gestures. This detailed spatial data enables developers to create interactive, responsive applications that interpret human movements with high accuracy. The Pose model captures the whole body's dynamics, while the Gesture model focuses more granularly on hand movements; both are critical for applications ranging from augmented reality to advanced accessibility tools like sign language interpreters (a minimal sketch of this landmark representation appears after this list).

  • The "Sign it" app integrates OpenAI's cutting-edge Whisper model to enhance its voice-to-text translation features, particularly excelling in noisy environments. Whisper's robust design allows it to filter out background noise and focus on the primary speech, enabling clear and accurate text translations even in less than ideal auditory conditions. This capability is crucial for ensuring that users can communicate effectively regardless of surrounding distractions.

    Currently, the app supports English language translation, tapping into Whisper's extensive training on diverse accents and dialects to provide a seamless user experience. The Whisper API offers a powerful toolset for developers, featuring capabilities that handle various audio qualities and linguistic nuances, making it an ideal choice for real-time communication aids like "Sign it."

  • To adapt the "Sign it" app specifically for American Sign Language (ASL), our team retrained Mediapipe's Gesture Landmark model using a dataset of over 12,000 images. This extensive training enabled the model to accurately recognize ASL gestures, a crucial feature for effective communication within the deaf and mute communities.

    Additionally, we developed a pose detection model that uses the Euclidean distance formula to measure the difference between a real-time pose and pre-stored pose data, enhancing the model's ability to accurately detect ASL poses (see the distance-matching sketch after this list). This was a vital step in creating our pose model.

    The app further employs a custom-built model that integrates the outputs of the gesture and pose models. By combining these outputs, the custom model can interpret not just individual signs but also sequences of gestures and poses that form complete expressions or sentences in ASL (a toy fusion example appears after this list).

    Overall, "Sign it" harnesses three models: Pose, Gesture, and a custom model to create a comprehensive solution that translates real-time sign language into text, enhancing communication possibilities for its users.

YouTube Channel
