It is only natural to interact with the console using (unrestricted) speech, which increases the game's level of immersion. During the heat of the moment, especially in noisy situations, it's also common to raise your voice (known as the Lombard effect). All state-of-the-art speech recognizers repeatedly fail in this scenario.
But let's not stop there. Wouldn't it be great if your console allowed interaction beyond speech recognition, in case you'd like to jump into the discussion with your friends on the couch while gaming? Where the speech recognizer merely tried to map sound to text, semantics now comes into play: the field of Natural Language Understanding (NLU), also called speech comprehension or simply speech understanding.
Humans are great at reading the verbal and non-verbal cues that signal speech is addressed to them. Computers are not, hence the use of trigger words like "Computer".
A solution to this problem is not trivial. Fifty years of AI research have produced an endless stream of relevant components (and derivations of them), such as various knowledge representations. Much ground can certainly be won with this classical divide-and-conquer approach, and it remains important for the future. In speech, however, the whole is greater than the sum of its parts: an utterance is not just a sequence of text symbols, it means something. This inspired me to build a framework that goes beyond standard speech recognition: the Real Time Cognition Framework.

The first requirement for the framework is that it allows processing in real time, since speech is very sensitive to the moment it is said. Secondly, it must allow for a multitude of modalities, most importantly vision, since speech often references features of the physical world (like objects) that cannot be deduced from audio alone. Thirdly, it should allow rapid prototyping: plug in a new object recognizer, change parameters, or change the flow of processing and multimodal synchronization. It should also provide facilities to record, manipulate, and visualize data out of the box.

The first generation of the framework has a built-in Speech Recognizer, Gesture Recognizer, Object Recognizer and Image Processor. It also features a Smart Buffer that fuses the multimodal input and a Semantic Analyser that uses machine learning and advanced pattern recognition techniques to further process it. All recognizers and the Semantic Analyser can be retrained with your own custom data. This initial version is built on the Kinect version 1 as the provider of the (3D) video streams, (multi-channel) audio streams and skeleton streams.
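To give a rough idea of how these pieces fit together, here is a small sketch in Python. It is only an illustration of the concept of pluggable recognizers feeding a time-windowed buffer whose fused snapshot is handed to a semantic analyser; the class names, method names and parameters are made up for this example and do not reflect the framework's actual API.

```python
import time
from typing import Any, Callable, List, Optional, Tuple


class SmartBuffer:
    """Keeps recent events from all modalities, ordered by timestamp, so the
    semantic analyser can fuse speech, gestures and objects that occur close
    together in time."""

    def __init__(self, window_seconds: float = 2.0):
        self.window = window_seconds
        self.events: List[Tuple[float, str, Any]] = []  # (timestamp, modality, payload)

    def push(self, modality: str, payload: Any, timestamp: Optional[float] = None) -> None:
        self.events.append((timestamp if timestamp is not None else time.time(), modality, payload))

    def snapshot(self, now: Optional[float] = None) -> List[Tuple[float, str, Any]]:
        now = now if now is not None else time.time()
        # Forget events that fell outside the fusion window, return the rest in time order.
        self.events = [e for e in self.events if now - e[0] <= self.window]
        return sorted(self.events, key=lambda e: e[0])


class Pipeline:
    """Plug-in style pipeline: each recognizer turns a raw frame into events,
    the buffer time-aligns them, and the analyser interprets the fused result."""

    def __init__(self, buffer: SmartBuffer,
                 analyser: Callable[[List[Tuple[float, str, Any]]], Any]):
        self.recognizers: List[Tuple[str, Callable[[Any], Any]]] = []
        self.buffer = buffer
        self.analyser = analyser

    def add_recognizer(self, modality: str, recognize: Callable[[Any], Any]) -> None:
        # Rapid prototyping: swap recognizers in and out without touching the rest.
        self.recognizers.append((modality, recognize))

    def process_frame(self, frame: Any) -> Any:
        for modality, recognize in self.recognizers:
            result = recognize(frame)
            if result is not None:
                self.buffer.push(modality, result)
        return self.analyser(self.buffer.snapshot())


# Example wiring (the recognizers here are stand-ins, not real models):
pipeline = Pipeline(SmartBuffer(window_seconds=2.0),
                    analyser=lambda events: [modality for _, modality, _ in events])
pipeline.add_recognizer("speech", lambda frame: frame.get("utterance"))
pipeline.add_recognizer("gesture", lambda frame: frame.get("pointing_at"))
print(pipeline.process_frame({"utterance": "pick that one up", "pointing_at": "red block"}))
```

The point of the buffer is that it only keeps a short, time-ordered window of events, so a spoken reference like "that one" can be matched against the object or gesture that was recognized at roughly the same moment.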
In the next few months I will gradually show more of this project and eventually, after graduation, release the source code, including the (huge) data set that was used. In the meantime, read my paper on the theoretical foundation of the framework, such as my definitions of meaning and understanding, and enjoy the small teaser of the UI and UX of an early version. I hope it piqued your interest!
Steven Bos