A Dramatically Different Approach
Up to 99% Accuracy
You may have heard the adage that 70% of human communication is non-verbal. We can't ignore the fact that visual communication is just as important as voice communication. To address this, we use movie-grade avatars that act as brand ambassadors and greatly increase user satisfaction and trust. Our avatar interfaces run on everything from wristwatches to huge 8K displays.
Our software uses Wikipedia and other online sources to answer questions, and it works just as well with unstructured company data. It turns out that our ability to understand people as they speak naturally also allows us to help them locate facts and figures faster. In a pilot effort with Amazon, we improved the accuracy of their search by 6X!
SapientX can sound like anyone, and our voices run online or off. That said, it's quite economical to use off-the-shelf voices from several commodity synthetic-voice (TTS) providers. We will be happy to audition options for you.
ChatScript analyzes words the way your 7th-grade English teacher taught you to. From this, it extracts meaning like no other AI system. It also allows us to discern the user's emotional state so that we can adapt our responses to it. Our avatars can even change their own emotional expression.
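To make the idea of adapting to a user's emotional state concrete, here is a minimal sketch of a rule-based pass that tags an utterance with a coarse mood from its word choice. The function name, word lists, and mood labels are all hypothetical illustrations, not SapientX's actual implementation.

```python
# Toy emotion tagger: score an utterance against small mood lexicons.
# Lexicons and labels are illustrative assumptions only.
EMOTION_LEXICON = {
    "frustrated": {"broken", "useless", "again", "stuck", "annoying"},
    "pleased": {"great", "thanks", "perfect", "love", "awesome"},
}

def detect_emotion(utterance: str) -> str:
    """Return the mood whose lexicon overlaps the utterance most."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    scores = {mood: len(words & lexicon)
              for mood, lexicon in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("This is broken again, so annoying!"))  # frustrated
print(detect_emotion("Thanks, that was perfect."))           # pleased
```

A production system would of course weigh grammar, prosody, and context rather than bare keywords, but the response-adaptation loop follows the same shape: classify the state, then pick a reply style to match.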
Machine vision is a form of AI that uses a video camera to identify people and objects in front of our characters. This helps us greet people, tell who's speaking, and even compliment you on your new hat. We do this while maintaining your privacy.
We have our own speech recognition system that converts spoken words to text, and it runs online or off. We also use third-party speech recognizers (ASR) when appropriate. For instance, when we run on an iPhone, we might as well use the ASR built into that phone.
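The fallback logic described above can be sketched as a simple selection rule: prefer the platform's built-in recognizer when one exists, otherwise use our own engine, which runs online or off. The platform names and engine labels below are hypothetical placeholders, not real product identifiers.

```python
# Illustrative ASR selection: built-in recognizer when the platform has
# one, otherwise an in-house engine. All names are assumed for the sketch.
BUILTIN_ASR = {
    "ios": "platform-builtin-asr",      # e.g. the ASR shipped with the phone
    "android": "platform-builtin-asr",
}

def pick_recognizer(platform: str, online: bool) -> str:
    """Choose a speech recognizer for the given deployment."""
    if platform in BUILTIN_ASR:
        return BUILTIN_ASR[platform]
    # No built-in option: fall back to the in-house engine,
    # which works with or without a network connection.
    return "in-house-asr-online" if online else "in-house-asr-offline"

print(pick_recognizer("ios", online=True))     # platform-builtin-asr
print(pick_recognizer("kiosk", online=False))  # in-house-asr-offline
```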
In installations where users may speak more than one language, we can let each user select the language of their choice. Even better, our system can detect the language they are speaking and switch to it.
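As a rough illustration of language detection and switching, the sketch below scores a transcript against tiny stopword lists and switches the session to the best match. Real systems use acoustic or statistical language identification; the word lists here are a toy heuristic invented for this example.

```python
# Toy language ID: count stopword overlap per language and switch the
# session to the winner. Stopword lists are illustrative assumptions.
STOPWORDS = {
    "en": {"the", "is", "and", "where", "what"},
    "es": {"el", "la", "es", "donde", "que"},
    "de": {"der", "die", "ist", "und", "wo"},
}

def detect_language(transcript: str) -> str:
    """Return the language code whose stopwords best match the transcript."""
    words = set(transcript.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

session_language = detect_language("donde es la salida")
print(session_language)  # es
```

Once the detected code changes, the system would simply route subsequent recognition and synthesis through that language's models.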