Zakhar the Robot

Zakhar is a robotics UX project. The main aim is to decrease the anxiety of users interacting with a robot.

A robot that uses life concepts for its program architecture (instincts, emotions, etc.)


Assumed: that humans interact most effectively and seamlessly with other living creatures.

Suggested: to develop the robot's program structure so that the basics of its logic are understandable from the robot's behavior. The behavior should mimic that of an animal (as the human is an animal too).

Suggested: to split the program into three parts:

  • Conscious
  • Unconscious
  • Reflexes

The Conscious represents simple intentions: move forward, sleep, run away, search for something, etc.

The Unconscious is responsible for working out how to realize those intentions: what to do in order to move, how to behave when searching for something, etc.

Reflexes are small algorithms that can seize the Unconscious from the Conscious in certain very explicit situations, e.g., panic and fear if something breaks, or convulsions if the robot is stuck and can't move as the Conscious tells it to.
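The three-part split above can be sketched in a few lines. This is only an illustrative outline, not the project's actual code; all class and action names (`Unconscious`, `Reflex`, `Robot`, `spin_motors`, etc.) are assumptions made for the example.

```python
# Illustrative sketch of the Conscious / Unconscious / Reflexes split.
# All names here are hypothetical, not taken from the Zakhar codebase.

class Unconscious:
    """Translates high-level intentions into concrete actions."""
    def realize(self, intention: str) -> list[str]:
        plans = {
            "move_forward": ["spin_motors", "watch_obstacles"],
            "run_away": ["turn_around", "spin_motors_fast"],
            "sleep": ["stop_motors", "dim_leds"],
        }
        return plans.get(intention, ["idle"])

class Reflex:
    """A small algorithm that can monopolize the Unconscious in an explicit situation."""
    def __init__(self, name, trigger, actions):
        self.name, self.trigger, self.actions = name, trigger, actions

class Robot:
    def __init__(self):
        self.unconscious = Unconscious()
        self.reflexes = [
            Reflex("panic", lambda s: s.get("breakage"), ["scream", "flee"]),
            Reflex("convulsions", lambda s: s.get("stuck"), ["twitch"]),
        ]

    def step(self, intention: str, sensors: dict) -> list[str]:
        # Reflexes take priority over the conscious intention.
        for reflex in self.reflexes:
            if reflex.trigger(sensors):
                return reflex.actions
        return self.unconscious.realize(intention)
```

The key design point is that the Conscious only emits an intention; a triggered reflex can override it without the Conscious ever knowing.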

Main parts:

Read more: Robot with the Conscious: Imitating animal behavior for reducing user’s anxiety

  • Zakhar 2: Migration to a New Frame

Andrei Gramakov, 08/31/2021 at 20:00
  • Dear I2C, I resemble

Andrei Gramakov, 08/22/2021 at 23:04

    I2C is a simple and elegant protocol. But probably because of its simplicity, MCU manufacturers deliver it poorly to developers.

I'm using several platforms in the project, and all of them have completely different driver implementations. OK, Arduino is meant to be simple, so I'm not complaining there. But my dear ESP32 brought me so much frustration with I2C (unlike the STM32 and Raspberry Pi, where everything works perfectly). I spent a couple of weeks solely trying to get it working as I designed, but even after writing my own interrupt handler, I haven't had any success.

To be precise, I lack the START interrupt, which exists on paper but doesn't work on my boards. I presume there are some problems in the hardware (e.g., here my ex-colleague points out that the hardware is not that flexible). It could also be that something is escaping me and the problem can be easily solved...

I want to take advantage of this situation and will try to switch to a bus more suitable for robotic applications. Let's make it CAN.

I will start with the creation of a small device that can verify the connection to each module in the system, and with the development of a minimalistic protocol that fulfills the project's needs.
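A connectivity-check protocol like that can start with a tiny fixed frame. The sketch below is one hypothetical layout for a ping/pong payload that fits inside an 8-byte CAN data field; the field choices (magic byte, module id, sequence number) are my assumption, not the project's actual protocol.

```python
import struct

# Hypothetical ping frame for a per-module bus check. The layout
# (magic byte, module id, 16-bit sequence number) is an assumption.
PING_FMT = "<BBH"   # 4 bytes total, fits easily in an 8-byte CAN payload
PING_MAGIC = 0xA5

def make_ping(module_id: int, seq: int) -> bytes:
    """Pack a ping request addressed to one module on the bus."""
    return struct.pack(PING_FMT, PING_MAGIC, module_id, seq)

def parse_pong(payload: bytes):
    """Return (module_id, seq) if the payload is a valid reply, else None."""
    if len(payload) != struct.calcsize(PING_FMT):
        return None
    magic, module_id, seq = struct.unpack(PING_FMT, payload)
    return (module_id, seq) if magic == PING_MAGIC else None
```

A module that echoes the frame back unchanged lets the checker confirm both that the module is alive and that no replies were dropped (via the sequence number).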

  • Got a 3D Printer

Andrei Gramakov, 08/22/2021 at 21:43

  • Zakhar II: Separately controllable DC motor speed

Andrei Gramakov, 07/10/2021 at 09:34

  • "On the way to AliveOS" or "making of a robot with emotions is harder than I thought"

Andrei Gramakov, 03/22/2021 at 20:52

So, indeed, the more I develop the software for Zakhar, the more complicated it gets. So, first:

    Contributions and collaborations are welcome!

    If you want to participate, write to me and we will find what you can do in the project.

Second, feature branches have gotten far too huge, so I'll use a workflow with a develop branch (Gitflow) to accumulate less-polished features and keep some rhythm in development. Currently, I'm actively (sometimes reluctantly) working on integrating my EmotionCore into Zakhar's ROS system. The latest results are available in the `develop` branch:

Third, to simplify the development of the robot's core software, I would like to separate the really new features from the implementation. So, Zakhar will be a demonstration platform and the mascot of the project's core. The core I'd like to name AliveOS, since implementing alive-like behavior is the main goal of the project. I am not entirely sure what AliveOS will be. It is not an operating system so far, rather a framework, but I like the name :).

Fourth and last. Usually, I write such posts after some accomplishment. That's true today as well: I just merged the feature/emotion_core branch into the develop one (see above). It is not a ready-to-use feature, but the core is working, being affected by sensors, and exchanging data with other nodes. It doesn't affect the behavior yet; for that, I need to make some huge structural changes. See the draft below (changes are in black and white).

    Next steps:

1. Implementing the new architecture changes
2. Separating the core packages into one called AliveOS
3. Moving AliveOS to a separate repository and including it as a submodule (the repo already exists)

    Thank you for reading! Stay tuned and participate!

  • EmotionCore - 1.0.0

Andrei Gramakov, 02/23/2021 at 19:59

    First release!

The aim of this library is to implement an emotion model that can be used by other applications to implement behavior modifications (emotions) based on changing input data.

The Core has sets of **parameters**, **states**, and **descriptors**:

**Parameters** define the **state** of the core. Each state has a unique name (happy, sad, etc.). **Descriptors** specify the effect caused by input data on the parameters. The user can use either the state or the set of parameters as output data. The effects can be:

    - depending on sensor data
    - depending on time (temporary impacts)

It is a cross-platform library used in the Zakhar project. You can use it to implement sophisticated behavior for any of your devices. Contributions and ideas are welcome!
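The parameters/states/descriptors model described above could be sketched roughly as follows. This is not the EmotionCore API, only a minimal illustration of the idea; the class name, method names, and the `adrenaline`/`shadow_detected` values are all assumptions.

```python
# Minimal sketch of a parameters/states/descriptors emotion model.
# Names and values are illustrative, not the actual EmotionCore API.

class EmotionModel:
    def __init__(self, params: dict, states: dict, descriptors: dict):
        self.params = dict(params)        # e.g. {"adrenaline": 0.0}
        self.states = states              # state name -> predicate over params
        self.descriptors = descriptors    # input name -> {param: weight}

    def feed(self, sensor: str, value: float) -> None:
        # A descriptor specifies how input data affects the parameters.
        for param, weight in self.descriptors.get(sensor, {}).items():
            self.params[param] += weight * value

    def state(self) -> str:
        # Output can be either the named state or the raw parameters.
        for name, predicate in self.states.items():
            if predicate(self.params):
                return name
        return "neutral"

model = EmotionModel(
    params={"adrenaline": 0.0},
    states={"scared": lambda p: p["adrenaline"] > 5},
    descriptors={"shadow_detected": {"adrenaline": 3.0}},
)
model.feed("shadow_detected", 1.0)   # adrenaline -> 3.0, state still "neutral"
model.feed("shadow_detected", 1.0)   # adrenaline -> 6.0, state now "scared"
```

Time-dependent (temporary) effects, which the library also mentions, would be a second kind of descriptor that decays the parameters between updates.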

    Home page: r_giskard_EmotionCore: Implementing a model of emotions

  • Updated draft of the ROS-node network with the Emotional Core

Andrei Gramakov, 01/28/2021 at 15:25

Working on the Emotion Core update, it became clear to me that placing the responsibility for emotional analysis on the Ego-like nodes (the Consciousness, Reflexes, and Instincts) is the wrong approach. It leads to a situation where the application developer has to specify several types of behavior themselves, which makes development too complicated.

I want to implement another approach, where the concepts themselves contain information about how they should be modified based on a set of emotion parameters. For example:

    1. An Ego-like node sends the concept `move` with a modifier `left`:

        "concept": "move", 
        "modifier": ["left"] 

2. The concept `move` contains a descriptor with the information: if adrenaline is lower than 5, add the modifier `slower`

3. The Concept-to-command interpreter updates the concept to:

        "concept": "move",
        "modifier": ["left", "slower"]

4. According to the concept descriptors, the Concept-to-command interpreter sends commands to the Moving Platform device.
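The modifier step in the walkthrough above could look something like this. The `slower` rule comes from the example; the function name and the descriptor data shape are my assumptions for illustration only.

```python
# Sketch of a descriptor-driven modifier step: the concept itself carries
# the rule for how emotion parameters modify it. The data shapes here are
# assumptions, not the actual zakharos_core structures.

def apply_descriptors(concept: dict, emotion_params: dict, descriptors: dict) -> dict:
    """Return a copy of the concept with emotion-driven modifiers appended."""
    updated = {"concept": concept["concept"],
               "modifier": list(concept.get("modifier", []))}
    for rule in descriptors.get(concept["concept"], []):
        value = emotion_params.get(rule["param"], 0)
        # rule["below"] selects whether the rule fires under or over the threshold.
        if (value < rule["threshold"]) == rule["below"]:
            updated["modifier"].append(rule["add"])
    return updated

# The `move` concept carries its own rule: adrenaline below 5 adds `slower`.
descriptors = {"move": [{"param": "adrenaline", "threshold": 5,
                         "below": True, "add": "slower"}]}
result = apply_descriptors({"concept": "move", "modifier": ["left"]},
                           {"adrenaline": 2}, descriptors)
# result == {"concept": "move", "modifier": ["left", "slower"]}
```

The Ego-like node only ever emits `{"concept": "move", "modifier": ["left"]}`; the emotional adjustment happens entirely inside the interpreter, which is the whole point of the approach.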

    In addition, I updated the diagram itself by structuring it and adding notes to make the entire system easier to understand. Here is the diagram:

Now I will implement the structure above in the emotion_core branch of the zakharos_core repository. The feature is getting closer to completion. More updates soon.


  • Draft of the updated ROS-node network with the Emotional Core

Andrei Gramakov, 01/12/2021 at 12:38

The complexity of the network grows non-linearly with every new type of node added, so documenting the development is becoming more crucial than ever. So I've decided, first, to redraw the network diagram to make it easier to read and to carry more useful information. Then I spent some time on how the Emotional Core should interact with the other nodes.

Initially, I thought that naming each condition based on a set of emotional parameters was a clever idea. With that approach, we would only send the name of the emotion to the main program, and that emotion would affect the robot's behavior. But it leads to a situation where the robot has a set of discrete states; in other words, I would be developing a pretty sophisticated state machine, and that is far from how animals behave.

Analyzing my own feelings, I also cannot say that the names of emotions define my behavior; I would say it is rather a continuous spectrum of states. So it turns out that the research of Carroll Izard and his colleagues on distinguishing human emotions, which I have read a lot of in recent months, is not applicable to my project, and neither is the part of the Emotional Core responsible for changing emotions depending on a set of emotional parameters. A bit sad about the time spent, but that is the development process.

So now I have a draft of the structure I will use to integrate the Emotional Core into the mind of Zakhar.

  • Update of the ROS-network

Andrei Gramakov, 01/10/2021 at 21:32

    Hi! I merged a huge update for zakharos_core - the main part of the Zakhar project.

The repository is a Robot Operating System network where the main application consists of ego-like nodes: the consciousness (the main application) and instincts (interrupts). Each of them operates with concepts in the same manner as our mind works.

As I already said, my aim is to make a robot that behaves like an animal, and hence has behavior that is more understandable to the user.

After this update, the next step is to integrate my already developed emotional core, which is basically an attempt to recreate the endocrine system of living organisms.

    Returning to the update. Currently the robot uses three ego-like nodes:

• node-consciousness: the Small Researcher: the robot moves in circles.
• instinct: Bird Panic: analyzes light-change patterns to recognize a single fast-moving shadow, then puts the robot into a panic state in which it tries to find a darker (safer) place.
• instinct: Avoid Close Objects: every object closer than 5 cm should be avoided. An object in front leads to moving backward. Objects at the sides activate the algorithm: move back, turn 60 degrees.
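The Avoid Close Objects instinct is simple enough to sketch directly. The 5 cm threshold and the back-then-turn-60-degrees behavior come from the description above; the sensor layout (front/left/right distances in cm) and the command names are assumptions.

```python
# Sketch of the "Avoid Close Objects" instinct. The front/left/right sensor
# layout and the command names are assumptions; the 5 cm threshold and the
# move-back-then-turn-60-degrees behavior come from the project description.

AVOID_THRESHOLD_CM = 5

def avoid_close_objects(distances: dict) -> list[str]:
    """Return override commands if any object is closer than the threshold."""
    if distances.get("front", 999) < AVOID_THRESHOLD_CM:
        # Object in front: just back away.
        return ["move_backward"]
    for side in ("left", "right"):
        if distances.get(side, 999) < AVOID_THRESHOLD_CM:
            # Object at a side: move back, then turn 60 degrees away from it.
            away = "right" if side == "left" else "left"
            return ["move_backward", f"turn_60_{away}"]
    return []  # no override; the consciousness keeps control
```

An empty return means the instinct stays silent and the consciousness node (e.g. the Small Researcher) continues driving the robot.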

The ROS network is shown in the picture below. Also, have a look at the robot in action in the video. Thanks!

  • Hardware structure

Andrei Gramakov, 01/09/2021 at 16:15

    Just added illustrations describing hardware used in the project to the repository:

    Here are the pictures:

    And detailed for each device:

