February 24, 2024

Ddog: MIT controls the robot dog Spot via a brain-computer interface

As part of the Ddog project, a research team from the Massachusetts Institute of Technology (MIT) has controlled Spot, a robot dog from Boston Dynamics, via a brain-computer interface (BCI) using thoughts and eye movements. The system is intended to help people with physical impairments such as amyotrophic lateral sclerosis, cerebral palsy, and spinal cord injuries manage their daily lives more independently.

The Ddog system builds on Brain Switch, a BCI developed at MIT that lets people with physical impairments communicate nonverbally with a caregiver. With Ddog, the researchers have broadened that system's range of application.

The BCI consists of wireless glasses with various sensors built into the frame. They record the wearer's brain activity via electroencephalography (EEG) and eye movements via electrooculography (EOG). The system translates these measurements into controls for the four-legged robot. The advantage of this BCI: it requires no adhesive electrodes on the head and no backpack of extra electronics, which makes it far more practical for everyday use.
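
How such EEG and EOG readings could be turned into discrete robot commands can be sketched roughly as follows. This is a toy illustration only: the thresholds, signal windows, and the decode_intent function are assumptions, not MIT's actual decoding pipeline.

```python
from enum import Enum

class Command(Enum):
    GO_TO_KITCHEN = "go_to_kitchen"
    FETCH_ITEM = "fetch_item"
    COME_BACK = "come_back"
    IDLE = "idle"

def decode_intent(eeg_window, eog_window):
    """Toy decoder: mean EEG amplitude gates engagement,
    the sign of the EOG signal (gaze direction) picks the action."""
    attention = sum(abs(v) for v in eeg_window) / len(eeg_window)
    gaze = sum(eog_window) / len(eog_window)  # signed: left/right gaze
    if attention < 0.5:       # below threshold: user is not engaging
        return Command.IDLE
    if gaze > 0.3:            # gaze right: confirm the offered task
        return Command.FETCH_ITEM
    if gaze < -0.3:           # gaze left: send the robot to the kitchen
        return Command.GO_TO_KITCHEN
    return Command.COME_BACK  # steady gaze: recall the robot

# Example: strong engagement combined with a leftward eye movement
print(decode_intent([0.8, 0.9, 0.7], [-0.5, -0.4, -0.6]))  # Command.GO_TO_KITCHEN
```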

A video shows how Spot is controlled via the brain-computer interface.

In a second video, Ddog project lead Nataliya Kos'myna explains the system in an interview.

The choice of Spot for the Ddog project is no coincidence. The robot dog is highly mobile, can maneuver in tight spaces, and can climb stairs, which makes it well suited for use in apartments. It also has a robotic arm with which it can fetch groceries, medicine, or books, or move a chair. Spot performs these tasks autonomously; simple instructions are enough.

Before Spot can carry out instructions given by thought, it must first map its surroundings in three dimensions. This is done with lidar and cameras, whose 3D data, images, and video are combined into a 3D map. In a second step, the system builds a semantic map on top of this information so that it can, for example, recognize objects.
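
The two-step structure described here, a geometric map first and a semantic layer on top, could look like the following heavily simplified sketch. The data structures are illustrative assumptions, not Ddog's actual map representation.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    # Geometric layer: raw lidar points (x, y, z)
    points: list = field(default_factory=list)
    # Semantic layer: object label -> 3D position
    objects: dict = field(default_factory=dict)

    def add_scan(self, lidar_points):
        """Step 1: accumulate lidar returns into the 3D map."""
        self.points.extend(lidar_points)

    def label_object(self, label, position):
        """Step 2: register a camera-detected object ('kitchen', 'chair')
        at its 3D position, so an instruction like 'go to the kitchen'
        can be resolved to a concrete goal."""
        self.objects[label] = position

world = SemanticMap()
world.add_scan([(0.0, 0.0, 0.0), (2.1, 0.4, 0.0)])
world.label_object("kitchen", (2.1, 0.4, 0.0))
```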

An Apple iPhone handles the communication, asking the user which task should be performed next. The user answers via thoughts and eye movements, which the system translates into concrete instructions for the robot, such as "Go to the kitchen." To call the robot dog back, it is enough to think of Spot.
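
Conceptually, this prompt-and-confirm loop might be structured like the sketch below. All function names are assumed placeholders rather than Ddog's actual API.

```python
def propose_task():
    """Placeholder for the phone's prompt to the user."""
    return "fetch groceries"

def decode_user_response():
    """Placeholder for the EEG/EOG-based yes/no decoding."""
    return "yes"

def send_to_robot(instruction):
    """Placeholder for dispatching an instruction to Spot."""
    print("robot instruction:", instruction)

def interaction_loop(object_positions):
    task = propose_task()                       # phone asks: perform this task?
    if decode_user_response() == "yes":         # user confirms via thoughts/eyes
        goal = object_positions.get("kitchen")  # resolve goal via semantic map
        send_to_robot({"action": task, "goal": goal})

interaction_loop({"kitchen": (2.1, 0.4, 0.0)})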

A second iPhone runs the local navigation map and controls Spot's robotic arm. Its lidar sensor also complements Spot's own lidar data. The two phones communicate with each other to track the progress of a task.
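
A minimal sketch of that progress exchange follows, with a queue standing in for the actual phone-to-phone link; the message format is an assumption.

```python
import queue

link = queue.Queue()  # stands in for the connection between the two iPhones

def command_phone():
    """Phone 1: issues the task and later reads back the result."""
    link.put({"task": "fetch_medicine", "status": "started"})

def navigation_phone():
    """Phone 2: runs navigation, controls the arm, reports progress."""
    msg = link.get()
    # ... navigate, fuse iPhone lidar with Spot's lidar, operate the arm ...
    msg["status"] = "done"
    link.put(msg)

command_phone()
navigation_phone()
print(link.get())  # {'task': 'fetch_medicine', 'status': 'done'}
```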

The Ddog system works both online and offline. The difference: the online version is more accurate because it uses larger, more precise machine learning models.
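
Such a fallback could be as simple as the following sketch; the model names are made up for illustration.

```python
def pick_model(online):
    """Choose a decoding model depending on connectivity (illustrative)."""
    if online:
        return "large-cloud-model"   # more accurate, needs a connection
    return "small-on-device-model"   # less accurate, always available

print(pick_model(online=False))  # small-on-device-model
```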


(OLB)
