AI-powered app helps speech-impaired kids speak
Source: Chronicle News Service
Imphal, July 06 2025:
A ground-breaking AI-powered communication tool called 'My First Voice' is helping non-verbal children in Manipur and across the world speak for the first time in voices that sound like their own, marking a transformative moment in disability inclusion and assistive technology.
Developed by Monks India in collaboration with the Centre for Community Initiative (CCI), a not-for-profit organisation based in Churachandpur district, the tool, launched in late April this year, uses artificial intelligence (AI) to replicate the vocal textures, tones and regional accents of non-verbal children.
By converting natural vocalisations such as grunts, hums and fragmented sounds into personalised speech, My First Voice allows children to express themselves with individuality and cultural identity intact.
The project is part of a broader initiative by CCI, a disability advocacy organisation and a partner of UNDP and UN Volunteers, to address deep-rooted stigma and isolation surrounding speech impairment and non-verbal conditions.
CCI's founder and director, Pauzagin Tonsing, said that the journey began with the realisation that many children in remote towns remain undiagnosed due to social stigma and lack of awareness.
The organisation was born out of the collective hope of parents, including Pauzagin himself, whose own son was born with a disability, to create a space where children with special needs could learn and grow like their peers.
The emotional toll on families is often high.
Parents frequently face social exclusion and struggle with the inability to communicate with their children.
Many withdraw from public life out of shame or discomfort.
A mother from Churachandpur shared that when her son Patrick made sounds in public, people would stare and point, leading her to avoid taking him out altogether.
It is these day-to-day challenges that My First Voice seeks to alleviate.
The development of the tool involved a hyper-personalised approach.
Unlike the clean recordings that standard AI voice systems rely on, the voice samples used in this project were often unclear or unstructured.
Monks India collected recordings of each child's natural sounds, supplemented by speech data from siblings and parents.
The team used advanced open-source AI models such as Coqui TTS for speech synthesis, RVC-2 for tone refinement and ElevenLabs for accent modelling.
This layered approach helped overcome the limitations of conventional AI models, which tend to generalise speech patterns.
The developers also fine-tuned their methods through iterative testing, allowing them to reduce the voice-training time.
This significantly improved scalability while maintaining the authenticity and individuality of each child's voice.
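The layered pipeline described above can be sketched in outline. The function names and data structures below are illustrative assumptions, not the actual Monks India implementation; in practice the speech synthesis, tone refinement and accent modelling stages would be carried out by the respective models (Coqui TTS, RVC-2 and ElevenLabs).

```python
# Illustrative sketch of a layered voice-building pipeline.
# Stage names mirror the article; the logic is a stand-in only.

def collect_samples(child_sounds, family_speech):
    """Pool the child's natural sounds with sibling/parent speech data."""
    return {"child": list(child_sounds), "family": list(family_speech)}

def synthesize_base_voice(samples):
    """Stand-in for Coqui TTS speech synthesis from the pooled samples."""
    trained_on = len(samples["child"]) + len(samples["family"])
    return {"voice": "base", "trained_on": trained_on}

def refine_tone(voice, samples):
    """Stand-in for RVC-2 tone refinement toward the child's own timbre."""
    return dict(voice, tone="refined")

def model_accent(voice, region):
    """Stand-in for regional accent modelling (ElevenLabs in the article)."""
    return dict(voice, accent=region)

def build_voice_profile(child_sounds, family_speech, region):
    """Run the three layered stages in sequence to build one profile."""
    samples = collect_samples(child_sounds, family_speech)
    voice = synthesize_base_voice(samples)
    voice = refine_tone(voice, samples)
    return model_accent(voice, region)

profile = build_voice_profile(["hum", "grunt"], ["mother_clip"], "Churachandpur")
```

The point of the layering is that each stage compensates for what the previous one cannot recover from sparse, unstructured samples, which is why a generic single-model approach tends to wash out the child's individuality.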
The resulting interface allows children with motor or cognitive challenges to communicate through a simplified platform using preset buttons and text-to-speech functions.
Just 30 seconds of sound samples are enough to create a working voice profile.
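A simplified preset-button interface of the kind described could be structured roughly as follows. The class, button names and the way the 30-second minimum is enforced are assumptions for illustration, not the app's actual code; only the 30-second figure comes from the article.

```python
# Hypothetical sketch of a preset-button communication board.
MIN_SAMPLE_SECONDS = 30  # per the article, 30 seconds suffices for a profile

class VoiceBoard:
    """Maps large preset buttons to phrases spoken via text-to-speech
    in the child's personalised voice (playback is a stand-in here)."""

    def __init__(self, sample_seconds, presets):
        if sample_seconds < MIN_SAMPLE_SECONDS:
            raise ValueError("need at least 30 seconds of sound samples")
        self.presets = dict(presets)

    def press(self, button):
        # Stand-in for synthesising the phrase in the child's voice.
        return self.presets[button]

board = VoiceBoard(30, {"water": "I want water", "play": "Can I play?"})
```

Keeping the interface to a handful of preset buttons is what makes it usable by children with motor or cognitive challenges, while free text-to-speech remains available for those who can type.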
Feedback from families has been overwhelmingly positive.
In one case, a mother described how her son, Tyson Lungousang, was overjoyed when he used the app for the first time.
He eagerly demonstrated how it worked, making his family feel hopeful and connected in a way they hadn't experienced before.
So far, 10 children from Churachandpur have benefited from the pilot rollout of My First Voice.
Parents reported that the tool has improved communication at home and increased their children's participation in social and educational settings.
Some children have used the tool to share basic needs, recite poems and even take part in classroom interactions.
Endorsed by speech therapists and featured in numerous media outlets, the tool is being recognised as a scalable and culturally adaptive model for inclusive technology in India.