Will We Soon Be Operating Computers With Our Thoughts?


The interface battle today is framed as “voice vs. screen,” but the future may belong to thought-driven selection for tomorrow’s computers.

The time is approaching when we may no longer need screens or other interfaces to run applications; we may be able to control computers with our thoughts alone. That’s the word from MIT’s Adam Conner-Simons, who points out that researchers have already figured out ways to control machines through brainwaves. Combined with muscle-sensitive controls, it may be possible to interact with the world with barely the blink of an eye.

The latest project out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is a system that enables “users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger,” Conner-Simons states. This extends to multiple-choice tasks, “opening up new possibilities for how human workers could manage teams of robots.”

See also: AI needs big data, and big data needs AI

A video report in the Futurist observes that while it may take up to 100 years until brainwave-activated interfaces are ubiquitous, “prototypes for non-invasive brain interfaces are already in development.” This is especially the case with medical applications, as “early prototypes have already helped patients recover from strokes, and given amputees the ability to experience touch again, with the help of a sensor-covered prosthesis.”

The medical applications are exciting, but the question, of course, is: Is this useful to business? Let’s review the possibilities:

  • One person can manage teams of robots or systems. “By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task,” says Conner-Simons. “Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.” (A rough sketch of this supervision loop follows this list.)
  • Brain-activated or muscle-sensitive computing will be immensely practical for employees with disabilities.
  • These systems and interfaces could be used to monitor the alertness of drivers or machine operators, sensing drowsiness and shutting systems down.
  • Workers in tight spaces or wearing cumbersome utility suits may be able to access systems and data more easily.
  • Workers with intelligent visors or eyeglass frames would be able to switch functions hands-free.
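
To make the first possibility concrete, here is a minimal sketch in Python of the human-in-the-loop pattern Conner-Simons describes: an EEG channel flags when the person notices a robot error, then EMG gestures scroll through and confirm a correction. Everything here — the class names, the gesture vocabulary, the confidence threshold — is a hypothetical stand-in for illustration, not CSAIL’s actual code, and the signal sources are stubbed with random values so the sketch is self-contained.

```python
import random
from typing import List


class ErrPDetector:
    """Stand-in for an EEG classifier that watches for error-related
    potentials (ErrPs), the signal the brain emits when a person
    notices a mistake. Here it just returns a random score."""

    THRESHOLD = 0.8  # hypothetical decision threshold

    def error_noticed(self) -> bool:
        confidence = random.random()  # a real system classifies EEG windows
        return confidence > self.THRESHOLD


class GestureReader:
    """Stand-in for an EMG interface that maps forearm muscle activity
    to discrete gestures: scroll left, scroll right, or confirm."""

    def next_gesture(self) -> str:
        return random.choice(["left", "right", "confirm"])


def correct_robot(options: List[str], eeg: ErrPDetector,
                  emg: GestureReader) -> str:
    """Supervision loop: if the EEG flags an error mid-task, let the
    human scroll through candidate targets with hand gestures and
    confirm the correct one for the robot to execute."""
    selected = 0
    if not eeg.error_noticed():
        return options[selected]  # no ErrP detected; robot proceeds
    # ErrP detected: pause the robot and hand control to EMG gestures.
    while True:
        gesture = emg.next_gesture()
        if gesture == "left":
            selected = (selected - 1) % len(options)
        elif gesture == "right":
            selected = (selected + 1) % len(options)
        else:  # "confirm"
            return options[selected]


if __name__ == "__main__":
    targets = ["drill hole A", "drill hole B", "drill hole C"]
    choice = correct_robot(targets, ErrPDetector(), GestureReader())
    print(f"Robot will execute: {choice}")
```

The design point the sketch captures is that the brain signal only needs to answer a binary question — “did the person notice an error?” — while the cheaper, more reliable muscle interface handles the multiple-choice selection.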

Once brain- and muscle-activated computing are out of the gate, there will be possibilities for businesses to expand the promise of ubiquitous computing. It may be a few years, or even decades, until this becomes mainstream, but it’s never too early to begin considering the possibilities it offers.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” according to CSAIL Director Daniela Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”


About Joe McKendrick

Joe McKendrick is RTInsights Industry Editor and an industry analyst focusing on artificial intelligence, digital, cloud and Big Data topics. His work also appears in Forbes and Harvard Business Review. Over the last three years, he served as co-chair for the AI Summit in New York, as well as on the organizing committee for IEEE’s International Conferences on Edge Computing. Follow him on Twitter @joemckendrick.
