Mark Yahiro, Intel
Timothy Tuttle, Expect Labs
Liesl Capper, IBM
Roberto Pieraccini, Jibo
Intel offers a platform and foundation for using speech, motion, and identity — using data in intelligent ways.
Jibo (a male character) is not humanoid, but has a stereo camera, mic, speaker identification, motion and facial expression detection, a display, and touch points. It just got big crowdfunding. The Japanese trend is toward more humanoid robots, which is creepy — the uncanny valley. Anything can express feelings (the teapot in Beauty and the Beast).
Ongoing conversations? Watson uses expert systems and known data sets to develop ranked diagnoses of medical conditions.
The movie Her and anticipatory agents? Tim suggests we're not as far off as we think. We are going to start seeing intelligence in new devices. Recent breakthroughs from IBM in deep learning have remarkably reduced error rates within a couple of years.
What is the future? Cognitive glue working across other agents. The holy grail is all human interaction. There are already a lot of agents; interacting between and across them is like learning a new language. Filter the right data for the specific usage — depending on the usage, do we filter before or after? Where is the context? What makes sense?
Speech recognition is about 60–65 years old. The big problem is that even once you understand how to do it, it depends on the language and what you are talking about. You can't create a closed loop.