Making Of “ESP32 Deus Ex”
Today I share a personal project built with an ESP32 and a 320x240 px TFT screen: an avatar made of particles that interacts with the user, and that can show expressions, blink, disintegrate, and reform.
There are many projects that give a robot a face or expressions through some combination of a screen and a processor such as an Arduino, ESP32, or similar. I myself have a library for that.
But, tired of "funny-face" libraries, I wanted to do something different and more impressive. So I took inspiration from the Deus Ex Machina, the final "enemy" from "The Matrix".
This character is the interlocutor who speaks on behalf of the machines. It consists of a face (in theory, a baby's face) made up of many individual machines. It can show emotions, talk, move its eyes, and so on. And it has the cool effect of being able to form from, or fall apart into, the machines that compose it.
The goal is to do something similar, within the capabilities of a microcontroller like the ESP32, and make it work "for real". The face in the film is, of course, an animation painstakingly crafted by artists; it doesn't "really" run anywhere. It's just a video.
It is the typical proof-of-concept project, where figuring out how to achieve the effect we want is harder than programming it. Here you have the final result of the project (which, by the way, looks cooler live than on video).
So let’s go with the “making of” of the project, with the guidelines and steps to do it. I also share a large part of the code, in case anyone is encouraged and wants to replicate the project or do something similar.
The first thing we have to take into account is that the machine has three distinct states:
- Keep face: the face is formed and animating
- Forming: the particles are moving into place to form the face
- Free motion: the particles fly freely across the screen
For that we have an enum and a state variable. We will see later how it affects the behavior of the program.
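A minimal sketch of that state machine (the article doesn't show the original listing, so all identifier names here are my assumption):

```cpp
// Hypothetical names -- the article does not show the original identifiers.
enum FaceState {
  KEEP_FACE,    // face formed and animating
  FORMING,      // particles travelling to their positions in the face
  FREE_MOTION   // particles flying freely across the screen
};

// Global state variable consulted by both the update and render logic.
FaceState faceState = FREE_MOTION;
```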
The basic structure of the program is two functions: one for rendering, which runs on Core 0, and one for updating, which runs on Core 1.
The render function is very simple: for each particle, we draw a small circle at the particle's coordinates. Something like this.
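A sketch of how the two cores can be split using the FreeRTOS API of the ESP32 Arduino core, assuming TFT_eSPI as the display library (the article does not name the library, and every identifier here is an assumption, not the project's actual code):

```cpp
#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();

struct Particle { float x, y; uint16_t color; };
const int NUM_PARTICLES = 2000;
Particle particles[NUM_PARTICLES];

// Render task, pinned to Core 0: one small filled circle per particle.
void renderTask(void*) {
  for (;;) {
    tft.fillScreen(TFT_BLACK);
    for (int i = 0; i < NUM_PARTICLES; i++) {
      tft.fillCircle((int)particles[i].x, (int)particles[i].y,
                     1, particles[i].color);
    }
    vTaskDelay(1);  // yield so the idle task / watchdog can run
  }
}

void setup() {
  tft.init();
  // The Arduino core already runs loop() on Core 1, so the update
  // logic can simply live there; only the render task is pinned.
  xTaskCreatePinnedToCore(renderTask, "render", 4096, nullptr, 1,
                          nullptr, 0);
}

void loop() {
  // particle update / physics here (Core 1)
}
```

Note that drawing straight to the panel like this would flicker; a real implementation would more likely render into an off-screen sprite or framebuffer and push it in one go.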
For its part, the loop only contains the function that updates the device, which basically reads the button presses.
Next, if a button has been pressed, we change the state of the face. I do this purely for demo purposes; in a real robot, these transitions would be triggered whenever needed (when the robot falls over, on power-up, on receiving a signal, etc.).
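For the demo, the transition itself can be as simple as cycling through the three states on each press. A host-testable sketch of that logic (the enum and function names are my assumptions; reading the physical pin is omitted):

```cpp
enum FaceState { KEEP_FACE, FORMING, FREE_MOTION };

// Rising-edge detector: fires once per press, not while held down.
// The actual pin read (e.g. digitalRead) is hardware-specific and omitted.
bool pressedEdge(bool rawDown, bool& lastDown) {
  bool edge = rawDown && !lastDown;
  lastDown = rawDown;
  return edge;
}

// Cycle: formed face -> disintegrate -> reform -> formed face ...
FaceState nextState(FaceState s) {
  switch (s) {
    case KEEP_FACE:   return FREE_MOTION;
    case FREE_MOTION: return FORMING;
    case FORMING:     return KEEP_FACE;
  }
  return s;
}
```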
The fundamental element of the program is the particle, a simple structure with the following definition.
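The article's original listing isn't reproduced here but, based on the description that follows, the particle could look roughly like this (field and type names are assumptions):

```cpp
#include <cstdint>

struct Vec2 { float x, y; };

struct Particle {
  Vec2 origin;        // target position within the formed face
  Vec2 position;      // current simulated position
  Vec2 render;        // position actually drawn on screen
  Vec2 velocity;
  Vec2 acceleration;
  uint16_t color;     // RGB565 color, as used by most TFT libraries
  int16_t life;       // frames left before the particle respawns

  // Basic kinematic integration, one step per frame.
  void Update() {
    velocity.x += acceleration.x;
    velocity.y += acceleration.y;
    position.x += velocity.x;
    position.y += velocity.y;
    life--;
  }
};
```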
As we can see, it has three "positions", represented as 2D points. We will see the role of each of them later, but in summary:
- Origin: the position the particle occupies within the formed face
- Position: the position it currently holds
- Render: the position at which it is actually displayed on screen
Of course, we also have the particle's velocity, acceleration, color, and life. Finally, there is an Update function, which simply advances the particle's kinematics.
In the project we have a vector of 2000 particles. To get the effect we want, we must position these particles so that they form the face.
To do this, I first generated a color map by superimposing 5 frames of the video at 20% opacity each, effectively averaging the frames. Then I applied a median filter and rescaled to 320x240 px. The result looks like this.
On the other hand, we make a similar image in black and white. This is the particle density map: the whiter a region, the more particles it gets; where it is black, there are none.
I could have used the same image for color and density. But having them separate allows me to play with particle density independently of color.
Now we have to generate the 2000 particles according to the density map. To do this, we write a function that picks a random position on the screen and generates a random number from 0 to 255. If the density map's pixel value at that position is greater than the random number, the particle is created there; if not, the process is repeated.
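A sketch of that rejection-sampling spawn function (names are illustrative; the project's real code is not shown here):

```cpp
#include <cstdint>
#include <cstdlib>

struct Point { int x, y; };

// Keep drawing random screen positions until one "wins" against a
// random threshold; brighter density pixels win more often, so the
// resulting particle distribution follows the density map.
Point spawnFromDensity(const uint8_t* density, int w, int h) {
  for (;;) {
    int x = rand() % w;
    int y = rand() % h;
    int threshold = rand() % 256;
    if (density[y * w + x] > threshold) return {x, y};
  }
}
```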
And with this, we manage to draw the original image using particles, whose density is given by the density map (obviously) and whose color comes from the color map (obviously again). Something like this.
Calculating Particle Speed
Part of the effect is that, once the face is formed, the particles must move along the contours of the face. This gives the feeling that the face is made up of particles.
I tried several ways to compute this movement that would not put a heavy load on the processor while still looking good. In the end, the best compromise between effect quality and computation time was to have each particle move toward its neighbor with the highest density.
To avoid having to perform this calculation every frame, this velocity map is computed only once, when the device starts.
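One possible implementation of that one-off precomputation: for every pixel, store a unit step toward the 8-neighbour with the highest density value (a sketch under my own naming; the article doesn't show the original code):

```cpp
#include <cstdint>

struct Step { int8_t dx, dy; };

// For each pixel, find the neighbour (including diagonals) with the
// highest density and store a unit step toward it. Pixels that are
// already a local maximum get {0, 0}.
void buildVelocityMap(const uint8_t* density, Step* vel, int w, int h) {
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      Step best = {0, 0};
      uint8_t bestD = density[y * w + x];   // staying put is the default
      for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
          int nx = x + dx, ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
          uint8_t d = density[ny * w + nx];
          if (d > bestD) { bestD = d; best = {(int8_t)dx, (int8_t)dy}; }
        }
      }
      vel[y * w + x] = best;
    }
  }
}
```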
Now, in each frame while the face is held, each particle picks up its velocity from the precomputed velocity map. (The code isn't particularly clean but, hey, it was a Sunday, and it isn't worth much more effort either.)
This has a downside: after a few frames, the particles will have migrated to the local maxima of the density map, leaving the rest of the face without particles.
This is why each particle has a life, with a certain degree of randomness. The particles move according to the velocity map but, after a few frames, they are eliminated and reappear in a new position on the face.
With so many particles and randomized lifetimes, the disappearing and reappearing is not perceived. Instead, the impression is of a continuous flow of particles along the contours of the face.
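Putting the pieces together, the per-frame step while the face is held could look like this sketch (the respawn reuses the same density-map rejection sampling described earlier in the article; all names and constants here are my assumptions):

```cpp
#include <cstdint>
#include <cstdlib>

struct FaceParticle { float x, y; int life; };
struct Step { int8_t dx, dy; };

// Advance one particle for one frame while the face is held: follow the
// precomputed velocity map, and respawn with a randomized lifetime when
// life runs out, so deaths are staggered and no global "blink" is seen.
void stepHeldParticle(FaceParticle& p, const uint8_t* density,
                      const Step* vel, int w, int h, int baseLife) {
  if (--p.life <= 0) {
    // Respawn by rejection sampling against the density map.
    for (;;) {
      int x = rand() % w, y = rand() % h;
      if (density[y * w + x] > rand() % 256) {
        p.x = (float)x;
        p.y = (float)y;
        break;
      }
    }
    p.life = baseLife / 2 + rand() % baseLife;  // randomized lifetime
  } else {
    const Step& s = vel[(int)p.y * w + (int)p.x];
    p.x += s.dx;
    p.y += s.dy;
  }
}
```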