Multi-modal AI Assistants for Data Analysis and Decision-Making

Jeffrey O. Kephart
Manager, Agents and Emergent Phenomena; Manager, Tivoli-Autonomic Computing Joint Program
IBM Thomas J. Watson Research Center
Ricketts 211, Rensselaer Polytechnic Institute
Wed, October 02, 2019, at 4:00 p.m.
Refreshments served at 3:30 p.m.

In recent years, consumer interest in voice-driven AI assistants such as Alexa, Google Home, and Siri has exploded, and such services are now available on a billion devices. As a result, consumers have grown accustomed to interacting with AI assistants somewhat as they do with fellow humans.

Soon, people will bring the same expectation into the workplace -- but there will be some major differences. Rather than uttering one-shot commands like "Play my favorite radio station," business users and scientists will engage in extended interactions with AI assistants designed to help them analyze data and make decisions. Moreover, since such cognitive tasks tend to be strongly visual in nature, this new breed of AI assistant will not just listen and talk. It will be visually aware, using multiple sensory channels along with conversational and spatial context to determine what users are pointing at, looking at, and trying to do.

Bridging the large gulf between "Set a timer" and "Help me figure out why this project is behind schedule" or "Should I purchase this oil field?" surfaces a broad array of exciting research challenges in areas such as contextual sensor fusion, spatial intelligence, and multi-criteria decision-making under uncertainty. In the course of describing multi-modal AI assistants that I have built with my team at IBM Research in domains such as mergers & acquisitions, oil & gas, construction project management, and exoplanet exploration, I will highlight some of these challenges and illustrate how we have addressed them. I will conclude by outlining the significant research challenges that must still be overcome to fully realize the vision of multi-modal AI assistants, and offer thoughts about how IBM and RPI can work together to address them.

Jeffrey O. Kephart

Jeffrey O. Kephart is a distinguished research scientist at IBM Research in Yorktown Heights, NY. Known for his work on computer virus epidemiology and immune systems, electronic commerce agents, autonomic (self-managing) computing systems, and data center energy management, he presently leads research on embodied AI systems and collaborates extensively with the Cognitive and Immersive Systems Laboratory at RPI. Kephart's work has been featured in Scientific American, The New York Times, Wired, Forbes, The Atlantic Monthly, Discover Magazine, and comparable publications. He has co-authored over 60 patents and 200 papers, which together have received over 24,000 citations. In 2013, he was named an IEEE Fellow for his leadership and research in founding autonomic computing as an academic discipline. He graduated from Princeton University with a BS in electrical engineering (engineering physics) and received his PhD in electrical engineering, with a minor in physics, from Stanford University.