The project is still only half completed, and I have put the summary introduction here; hopefully you guys like it and will complete it with us :)

The structure of the robot is highly expandable, and through rolling motion it achieves a moving speed and environmental adaptability that existing legged and wheeled robots do not offer.

The non-invasive BCI multi-functional rolling robot takes advantage of BCI (Brain-Computer Interface) technology, which does not depend on the normal communication system consisting of peripheral nerves and muscle output pathways and can instead detect signals directly from brain activity. It also involves neuroscience, signal detection, signal processing, pattern recognition and other interdisciplinary technologies.

The inferences produced by the BCI are processed by a Raspberry Pi 4B AR wearable device running its own Python front-end and back-end programs. Through the combination of the portable AR device and a Bluetooth module, the user can interact with the robot via the "disk" program, which visualises the data and lets the user give instructions to the computer through EEG in a more orderly way. In order to obtain a cleaner and more accurate EEG signal, the signal processing includes the following three parts: pre-processing, feature selection and extraction, and feature classification. In this paper, the common spatial pattern (CSP) algorithm is used to extract features, and LDA (Linear Discriminant Analysis) is used to train and classify them. Finally, the movement direction of the user's motor imagery is obtained.
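As a rough illustration of this CSP + LDA stage (not the project's exact code), the sketch below uses MNE's CSP implementation and scikit-learn's LDA on pre-cut motor-imagery trials; the file names, trial shapes and two-class labels are assumptions made for the example.

```python
# Minimal sketch of the CSP + LDA stage (assumed data shapes and labels).
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# epochs: (n_trials, n_channels, n_samples) motor-imagery trials, e.g. 8 channels
# labels: 0 = "left" imagery, 1 = "right" imagery (hypothetical files)
epochs = np.load("mi_epochs.npy")
labels = np.load("mi_labels.npy")

csp = CSP(n_components=4, log=True)   # spatial filters -> log-variance features
lda = LinearDiscriminantAnalysis()    # linear classifier on the CSP features
clf = make_pipeline(csp, lda)

# 5-fold cross-validation to estimate how well the imagined direction is decoded
scores = cross_val_score(clf, epochs, labels, cv=5)
print("mean accuracy: %.2f" % scores.mean())
```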

The application of BCI technology and the rolling robot is not limited to the campus; it can also be applied in many fields across society. For example, combining BCI technology with MR gives players deeper immersion and other special experiences, and it could also be applied to production tools to improve efficiency. Precisely because this kind of technology could be widely used in modern society, it deserves more time and energy to be perfected.

User wearing the mixed reality and EEG headset “Ariel Supremacy”

2. OPENBCI

Different kinds of BCI signal-collection tools have been developed, but most are not affordable for teams with limited funding. The OpenBCI open-source Cyton board (8-channel version) was used in this project as a low-cost solution that can easily be promoted within developer communities.

OpenBCI also provides software supporting the interface between the Cyton board and a computer, such as the OpenBCI GUI and OpenBCI Hub, allowing the user to access and roughly process the collected signal, including using the bias electrode, controlling electrode states and viewing the frequency domain. The OpenBCI GUI [2] can also interface with other applications via LSL (Lab Streaming Layer) and other methods, and other BCI research applications such as OpenViBE [3] and NeuroPype [4] are compatible.

Fig. 3: OpenBCI GUI. The signal from each of the user's electrodes is observable.
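For reference, the following sketch pulls an EEG stream published over LSL (for example from the OpenBCI GUI's networking widget) using the pylsl package; the stream type and how the GUI is configured to publish it are assumptions, not fixed by the project.

```python
# Minimal sketch: receive EEG samples streamed over LSL from the OpenBCI GUI.
from pylsl import StreamInlet, resolve_stream

# Look for any stream of type 'EEG' on the local network (type is configurable in the GUI).
streams = resolve_stream('type', 'EEG')
inlet = StreamInlet(streams[0])

while True:
    # sample: list of channel values (8 for the Cyton); timestamp: LSL clock time
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```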

3. METHODS

3.1   EEG

The EEG signal is a low-frequency bioelectric signal of roughly 5-100 μV, which needs to be amplified before it can be displayed and processed. In an EEG signal-processing and pattern-recognition system, correctly recognising the EEG signal requires three parts: pre-processing, feature selection and extraction, and feature classification. Signal pre-processing mainly removes noise and interference, for example using a spatial filter such as the common average reference (CAR) to suppress interference from ocular (EOG) and muscle (EMG) activity.
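As an illustration, a minimal pre-processing sketch combining a common average reference with a band-pass filter is given below, written with numpy/scipy; the 250 Hz sampling rate and the 8-30 Hz band are assumptions typical for motor imagery rather than values fixed by the project.

```python
# Minimal pre-processing sketch: common average reference (CAR) + band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, band=(8.0, 30.0)):
    """eeg: array of shape (n_channels, n_samples) in microvolts."""
    # CAR: subtract the instantaneous mean over all channels from every channel.
    car = eeg - eeg.mean(axis=0, keepdims=True)
    # 4th-order Butterworth band-pass (mu/beta band commonly used for motor imagery).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, car, axis=1)
```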

Feature extraction and selection mainly serve to reduce the dimensionality of the EEG data and to extract features relevant to classification.

At present, EEG data have three main kinds of features [5]: time-domain features, frequency-domain features and spatial-domain features. Different features require different extraction methods: spatial-domain features are generally extracted by spatial filters (common spatial pattern, CSP), while frequency-domain features are generally obtained by the Fourier transform, the wavelet transform or auto-regressive (AR) models. Feature classification mainly uses classification algorithms to classify the extracted features, and is divided into two steps: first, train the model on the training-sample features to obtain the classification parameters; second, use the trained classifier to obtain the class of the test-sample features. Commonly used classifiers include the Fisher discriminant, the Support Vector Machine (SVM), the Neural Network Classifier (NNC) and the Bayesian classifier (BAYC).
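As one concrete instance of the frequency-domain route and the two-step train/test procedure, the sketch below extracts band-power features with Welch's method and trains an SVM; the band edges, sampling rate and hypothetical trial files mirror the earlier CSP sketch and are assumptions, not the project's settings.

```python
# Sketch: frequency-domain features (band power via Welch's method) + SVM classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def band_power_features(epochs, fs=250.0, bands=((8, 12), (13, 30))):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=int(fs), axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(psd[..., mask].mean(axis=-1))   # mean power in the band per channel
    return np.concatenate(feats, axis=-1)

# Same hypothetical trial arrays as in the CSP sketch above.
epochs = np.load("mi_epochs.npy")
labels = np.load("mi_labels.npy")

# Step 1: fit classifier parameters on training features; step 2: predict test-trial classes.
X = band_power_features(epochs)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```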

3.2   Motor imagery and Mixed Reality

Motor imagery refers to imagining physical movements using only brain activity, without any actual physical behaviour; the controller then carries out the subsequent real operations. It is a kind of endogenous, spontaneous EEG: unlike evoked EEG, it requires no external stimulus, and simply performing the imagined motion produces a specific waveform in the EEG. Due to its simplicity, flexibility and non-invasiveness, motor imagery is widely used in BCI systems.

Fig. 4:  Mixed reality and EEG headset “Ariel Supremacy”

With the advancement of neuroscience and information technology, the applications of BCI systems have also greatly expanded. They can not only aid disabled patients but also serve the general population, for example in brain-computer games, mental-state monitoring and assisting work in special environments. For physical impairments caused by diseases such as paralysis and stroke [7], which are common in the elderly, a BCI system based on motor imagery can not only help patients control objects and regain self-care, but can also be used as a form of physical therapy to help them recover to the greatest extent.

The headset integrates a modified version of the Ariel mixed-reality device with the OpenBCI Ultracortex frame, eight ThinkPulse™ EEG electrodes and an OpenBCI Cyton board. The electrodes are placed at the C3, Cz, C4, P3, Pz, P4, O1, O2 and FPz locations, and two ear clips are connected to the SRB channel. Using the mixed-reality display and the MRTK development kit, the user can see the current command state overlaid on reality through the “disk” system interface. A Leap Motion® controller and an Intel® RealSense™ T261 module are installed, enabling hand tracking and 6-DoF tracking and supporting applications across the BCI-Mixed Reality-Reality interface. The development kit is also open source and can easily be modified.
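For downstream processing, a small sketch of declaring the electrode layout listed above in MNE is shown below; the 250 Hz value is the Cyton board's default sampling rate, the placeholder data array is assumed, and FPz is omitted here on the assumption that it serves as ground/bias rather than a recording channel.

```python
# Sketch: declare the electrode montage described above for use in MNE-based processing.
import numpy as np
import mne

ch_names = ["C3", "Cz", "C4", "P3", "Pz", "P4", "O1", "O2"]   # Cyton channels 1-8
sfreq = 250.0                                                 # Cyton default sampling rate

info = mne.create_info(ch_names, sfreq, ch_types="eeg")
data = np.zeros((len(ch_names), int(sfreq)))                  # placeholder: 1 s of samples
raw = mne.io.RawArray(data, info)
raw.set_montage("standard_1020")                              # maps names to 10-20 positions
```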

4. ROBOTICS

The robot was designed specifically to support human users in completing a variety of tasks in complex terrain via BCI. Wheeled robots and legged robots make up the majority of modern robotics research, and both have advantages and disadvantages: wheeled robots are fast on flat terrain but are not as capable in complex terrain as legged robots such as Stanford Doggo [10], while legged robots incur a much higher cost to reach high speed. This project's robot, The Round Head, was developed to address this trade-off by using rotation to accumulate momentum, achieving high mobility while retaining the capability to traverse complex terrain.

Fig. 9: Photo of The Round Head.

The Round Head consists of 6 sets of legs, separated into two sides, with a tube connecting the two plates that contains the electronic components and any payload. The frame of the robot is 3D-printed in PETG, and the edges of the legs are 3D-printed in TPU to increase friction. The control system consists of an Arduino Uno, a gear-motor driver board and a power supply. Bluetooth is used to transmit commands to the robot from the Python script that monitors the state of the user's motor imagery. In total, 24 high-torque gear motors were used in the build.

Electronic components of the robot.
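As an illustration of this command path, the sketch below forwards a decoded motor-imagery command from the Python side to the Arduino over a Bluetooth serial link using pyserial; the port name, baud rate and one-byte command protocol are hypothetical, not the project's actual firmware interface.

```python
# Sketch: forward the decoded motor-imagery command to the robot over a Bluetooth serial link.
import serial

# Hypothetical RFCOMM/serial port exposed by the Bluetooth module; adjust for your system.
robot = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1)

# Assumed single-byte command mapping understood by the Arduino sketch.
COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

def send_command(direction: str) -> None:
    """Write one command byte to the robot."""
    robot.write(COMMANDS[direction])

send_command("forward")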

To increase the robot's mobility, we designed a method of locomotion by rotation. The robot first shapes its legs to form a circle on each side. The pair of legs currently at the bottom of the robot stretches, giving the robot momentum to rotate, and returns to its normal position within the circle once it leaves the ground; the next pair of legs then continues the process. By changing the radius of the circles it forms, the robot can change its direction of travel. The robot can also walk like a classic legged robot, enhancing its adaptability in complex terrain.

Round Head performing rotation movement.
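To make the leg sequencing concrete, here is a conceptual sketch of the rolling gait loop in Python; the leg indexing, timing and actuator interface are assumptions made for illustration and do not reflect the robot's actual firmware.

```python
# Conceptual sketch of the rolling gait: legs hold a circular profile, and the pair at the
# bottom extends to push off, then retracts back into the circle once it leaves the ground.
import time

N_PAIRS = 6          # six leg pairs per side, as described above
CIRCLE_POS = 0.0     # leg retracted into the circular profile (normalised)
PUSH_POS = 1.0       # leg fully extended against the ground (normalised)

def set_leg_pair(index: int, position: float) -> None:
    """Hypothetical actuator call; in the real robot this drives the gear motors."""
    ...

def roll_step(bottom_pair: int, push_time: float = 0.1) -> int:
    set_leg_pair(bottom_pair, PUSH_POS)      # stretch the bottom pair to add momentum
    time.sleep(push_time)
    set_leg_pair(bottom_pair, CIRCLE_POS)    # retract once the pair leaves the ground
    return (bottom_pair + 1) % N_PAIRS       # the next pair becomes the bottom pair

bottom = 0
for _ in range(2 * N_PAIRS):                 # two full rotations
    bottom = roll_step(bottom)
```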

With the BCI implementation, the robot is expected to assist humans when the user is occupied with tasks that make it impractical to control the robot with a hand-held controller or a voice-recognition device, such as fixing a car or other machinery, carrying goods, patrolling, driving a vehicle, construction and other tasks.