**INTRODUCTION**

The **goals** of this project are the following:

- Apply machine learning to a robot car.
- Add a PID controller to this robot.

To facilitate the understanding of this project, we have divided it into the following sections: 1) Hardware, 2) Machine Learning, 3) PID Controller, 4) Printing and Assembling the Chassis, and 5) Conclusion.

**1) HARDWARE**

In the figure below I show the schematic diagram.

**SparkFun RedBoard Artemis**

Features:

- Arduino Uno R3 footprint
- 1 MB flash / 384 KB RAM
- 48 MHz, with 96 MHz turbo mode available
- 24 GPIO, all interrupt capable
- 21 PWM channels
- Built-in BLE radio
- 10 ADC channels with 14-bit precision
- 2 UARTs
- 6 I2C buses
- 4 SPI buses
- PDM interface
- I2S interface
- Qwiic connector

**2) MACHINE LEARNING**

A very good machine learning **bibliographic reference** is the following post (in Spanish), which in my case helped me understand the theory and the calculations behind neural networks: *https://www.aprendemachinelearning.com/crear-una-red-neuronal-en-python-desde-cero/*

In this tutorial we will create a neural network with Python and copy its weights to a forward-propagation network running on the **Artemis RedBoard ATP** board, which will allow the robot car to drive on its own without hitting the walls.

For this exercise the neural network will have 4 outputs, two for each motor, since for each car motor we will connect 2 digital outputs of the board to the L298N driver. Each output will be either 0 or 1 (depolarize or polarize the motor).

We will have four inputs: three correspond to the 3 sensors and the fourth is the BIAS. The inputs take the values 0 or 1, assigned with the following logic:

- The left and right sensors read 1 if the distance is less than 13 cm, and 0 if it is greater.
- The center sensor reads 1 if the distance is less than 16.7 cm, and 0 if it is greater.
- The BIAS is always 1.

Here we see the combinations in this table:
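The encoding above can be sketched as a small helper function. This is an illustrative sketch, not code from the project; the function name and signature are assumptions, but the thresholds are the ones stated above:

```
# Hypothetical helper: map three raw distance readings (in cm) to the
# four binary inputs of the network, per the thresholds described above.
LEFT_RIGHT_THRESHOLD_CM = 13.0
CENTER_THRESHOLD_CM = 16.7

def encode_inputs(left_cm, center_cm, right_cm):
    left = 1 if left_cm < LEFT_RIGHT_THRESHOLD_CM else 0
    center = 1 if center_cm < CENTER_THRESHOLD_CM else 0
    right = 1 if right_cm < LEFT_RIGHT_THRESHOLD_CM else 0
    bias = 1  # the BIAS input is always 1
    return [left, center, right, bias]
```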

And the actions of the motors would be the following:
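As a sketch of how each pair of outputs could be interpreted, the snippet below maps a motor's two L298N input bits to an action. This is an assumption for illustration: the exact pin-to-output assignment and action table come from the wiring, and the function names are hypothetical.

```
# Illustrative mapping from one motor's pair of 0/1 network outputs
# to the motor action on an L298N driver. Assumed convention:
# (1, 0) = forward, (0, 1) = reverse, otherwise the motor stops.
def motor_action(in1, in2):
    if (in1, in2) == (1, 0):
        return "forward"
    if (in1, in2) == (0, 1):
        return "reverse"
    return "stop"  # (0, 0) coasts; (1, 1) brakes on the L298N

def car_action(outputs):
    # outputs = [left_in1, left_in2, right_in1, right_in2]
    left = motor_action(outputs[0], outputs[1])
    right = motor_action(outputs[2], outputs[3])
    return left, right
```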

To create our neural network, we will use this code developed with Python 3.7.3: *Neural_Network.py*

```
import numpy as np

# Activation functions and their derivatives.
# The derivatives take the already-activated value as input,
# which is how they are applied inside fit() below.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivada(x):
    # x is already the sigmoid output
    return x * (1.0 - x)

def tanh(x):
    return np.tanh(x)

def tanh_derivada(x):
    # x is already the tanh output
    return 1.0 - x ** 2

# We create the class
class NeuralNetwork:

    def __init__(self, layers, activation='tanh'):
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_derivada
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_derivada
        # Initialize the weights
        self.weights = []
        self.deltas = []
        # Assign random values to the input layer and the hidden layers
        for i in range(1, len(layers) - 1):
            r = 2 * np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # Assign random values to the output layer
        r = 2 * np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)

    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # Add a column of ones to the X inputs; this adds the bias unit
        # to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            # Forward propagation through every layer
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # Calculate the difference between the target and the output obtained
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # Walk back from the next-to-last layer to the first hidden layer
            # (one layer before the output one)
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_prime(a[l]))
            self.deltas.append(deltas)
            # Reverse so that deltas[i] corresponds to layer i
            deltas.reverse()
            # Backpropagation:
            # 1. Multiply each layer's delta by its input activations
            #    to obtain the weight gradient.
            # 2. Update the weights by a fraction (learning_rate) of the gradient.
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)
```
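Once the network is trained, only its weights need to be copied to the board, because on the Artemis side the network does forward propagation only. The sketch below shows, in Python, what that forward pass amounts to; the weight values here are random placeholders for a 3-sensor, 4-output network, not the trained weights:

```
import numpy as np

def forward(weights, x):
    # Mirror of NeuralNetwork's forward pass: prepend the bias input (1),
    # then apply a matrix product followed by tanh for each layer.
    a = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    for w in weights:
        a = np.tanh(np.dot(a, w))
    return a

# Placeholder weights for a network with 3 inputs, one hidden layer and
# 4 outputs; the shapes follow the (inputs + 1, hidden + 1) and
# (hidden + 1, outputs) convention used in NeuralNetwork above.
rng = np.random.default_rng(0)
weights = [rng.uniform(-1, 1, (4, 5)), rng.uniform(-1, 1, (5, 4))]

out = forward(weights, [0, 0, 0])
# Threshold the outputs into hard 0/1 decisions for the motor pins
pins = [1 if o >= 0.5 else 0 for o in out]
```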
