
[2021] Aina - Humanoid platform ROS robot

Aina is an open-source social robot, built on ROS with AI built in, that is able to speak and move with nearly no human intervention.


Related links

https://github.com/magokeanu/aina_robot

https://hub.docker.com/repository/docker/magokeanu/aina_ros_jetson

------------------------------------------------------------------------------------------------------------------------------------------

Aina is a humanoid robot with social capabilities. Let me explain why this is an interesting project.

Humanoid robots are fascinating because, when well done, they are capable of communicating with people in a meaningful way, and that is the core of my project: effective social interaction. To achieve it, I have been working (for now, in terms of design) under the concept of modularity, which lets me divide the robot into three major subsystems whose characteristics are/will be:

  • Head: this is the part that enables the emotional side of communication, through facial expressions with the eyes and eyebrows (and sometimes the neck). No mouth is added because I think the noise of it moving can be annoying when the robot talks.
  • Arms: Not much to say; the arms are designed in an anthropomorphic way, and due to financial constraints they can't have huge torque, but I can move them with some NEMA 14 steppers and 3D-printed cycloidal gears to gesticulate and grasp very light things, so it is fine.
  • Mobile base: This is the part that enables the robot to map, localize, and move through the world, using visual SLAM with a Kinect V1.

Divide and conquer also applies to the software. Thanks to Docker I can group very different programs into containers (one for each subsystem), which allows me to say "hey, I want to change the way the mobile base perceives the world but keep everything else working". This is not only good for me: if you want to modify or build just one of the parts for your own project, you get not only the 3D parts but also the working software in a container, ready to run in minutes.

To interconnect all the programs, ROS Melodic containers are used, running on a Jetson Nano. This is important because the Jetson lets me run deep learning networks alongside robotics algorithms without the libraries and Python versions conflicting with each other.

I really think this can be a good companion, not only because of its design orientation toward social interaction, but also because its ROS and Docker basis makes it easy to add more functionality, like connecting it to Google Calendar or to a smart home to manage calendars, lights, and so on. The hardware is also easy to modify.

In the logs, I'll explain some things about how I designed it and all the parts involved, but in the meantime, you can see an image of the completed robot.

I think this robot can be a good companion thanks to its facial and conversational abilities. The possibilities may not be endless, but there sure are a lot: it has a webcam in the head and another in the base (the RGB part of the Kinect), so it can follow a person, for example, or use its SLAM process to check the house and report if something odd is happening (neural networks can handle the detection part with just images), and it can also remind you of things (meetings from the calendar, maybe). I aim to create a cheap and easy-to-mod version of the Pepper humanoid robot, so follow the project if you are interested; any feedback is welcome.

Aina_robot_design.part2.rar

F3D and STEP format part 2.

RAR Archive - 50.00 MB - 09/27/2021 at 07:33


Aina_robot_design.part1.rar

F3D and STEP format part 1.

RAR Archive - 50.00 MB - 09/27/2021 at 07:32


Aina_robot_design.part3.rar

F3D and STEP format part 3.

RAR Archive - 9.58 MB - 09/27/2021 at 07:31


Cycloidal_Gear_20_1.rar

F3D and STEP format.

RAR Archive - 666.70 kB - 09/27/2021 at 07:31


  • Subsystem - Arm

    Maximiliano Rojas • 12/01/2021 at 21:25 • 0 comments


    Design

    The design of the arm was made in Autodesk's Fusion 360. The arm is 40 cm long, with the upper arm and forearm occupying 20 cm each; the structure is built from 20 mm diameter PVC tubes and 3D-printed pieces. An important aspect is the torque each motor must exert, which must both hold the structure itself and allow it to manipulate objects. A simple torque analysis shows that the base motor (the one in the shoulder) always exerts the most torque while carrying out movements; for this reason, it was decided to use two NEMA 17 motors with 3D-printed cycloidal reducers, while the elbow simply uses a servomotor. The end effector consists of a clamp with one degree of freedom (open and close). In Figure 1 you can see the complete design.

    Figure 1 - Arms design.

    As the M1 motor carries the highest load, the gearbox design was based on it. To include the weight of this element and add a safety factor, the mass at center C3 was increased to 0.45 kg, while a mass of 0.6 kg was selected for center C4 to include the weight of the structure and the hand; the estimated maximum torque is then the one calculated in Equation 1.
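    The original equation image is not included in this text, so here is a plausible form of Equation 1 for a static torque balance, assuming the lever arms d_C3 = 20 cm and d_C4 = 40 cm (the segment distances from M1; these distances are my assumption):

    τ_max = m_C3 · d_C3 + m_C4 · d_C4 ≈ 0.45 kg · 20 cm + 0.6 kg · 40 cm = 33 kg·cm

    (Torque here follows the hobby kg·cm convention, with gravity folded into the units; the figure in the original image may differ.)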

    Since this load is compared against the holding torque of the motor, it is necessary to choose a reduction ratio that makes the output torque exceed that value; for practical purposes, and to eliminate any possibility of failure, a gearbox with a 20:1 ratio was designed. This yields a maximum of 80 kg·cm at the output of the reducer. The same criterion is maintained for the rest of the motors, which leaves an ideal margin of 0.725 kg-cm to lift objects at a distance of 40 cm from motor M1.
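    For a sense of scale (the holding-torque figure here is a typical datasheet value, not from this project): a NEMA 17 holding around 4 kg·cm through a 20:1 reduction gives roughly 4 kg·cm × 20 = 80 kg·cm at the output, consistent with the value quoted above.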

    Concerning the control part, the FABRIK algorithm was implemented in MATLAB (code on GitHub). To test the ability to generate spatially and temporally coherent movements, that is, with no large angular jumps in the arm's configuration space from one moment to the next, the end effector is made to follow a series of points belonging to a pure or deformed circular curve in 3D space. The parametric curves can be seen in Equations 2, 3, 4, and 5.
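    The full MATLAB implementation lives in the GitHub repo; below is only a minimal Python sketch of the FABRIK core to show the idea. The two 20 cm link lengths match the arm dimensions above, but the circular target path and all names are illustrative assumptions, not the repo's code.

    # A minimal 2D FABRIK sketch (illustrative only; the project's MATLAB code is on GitHub).
    import numpy as np

    def fabrik(joints, target, lengths, tol=1e-4, max_iter=100):
        """Solve joint positions so the end effector reaches the target point."""
        joints = [np.asarray(j, dtype=float) for j in joints]
        target = np.asarray(target, dtype=float)
        base = joints[0].copy()
        for _ in range(max_iter):
            # Backward pass: pin the end effector to the target, walk toward the base.
            joints[-1] = target.copy()
            for i in range(len(joints) - 2, -1, -1):
                d = joints[i] - joints[i + 1]
                joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
            # Forward pass: re-anchor the base, walk back out to the end effector.
            joints[0] = base.copy()
            for i in range(len(joints) - 1):
                d = joints[i + 1] - joints[i]
                joints[i + 1] = joints[i] + d / np.linalg.norm(d) * lengths[i]
            if np.linalg.norm(joints[-1] - target) < tol:
                break
        return joints

    # Two 20 cm links (arm and forearm) tracking points on a small circle,
    # analogous to the parametric test curves of Equations 2-5.
    lengths = [20.0, 20.0]
    joints = [np.zeros(2), np.array([20.0, 0.0]), np.array([40.0, 0.0])]
    for t in np.linspace(0.0, 2.0 * np.pi, 50):
        joints = fabrik(joints, [25.0 + 5.0 * np.cos(t), 5.0 * np.sin(t)], lengths)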

    The results are shown in Figure 2.

    Figure 2 - Matlab plots.

    Regarding the control of the arms, the NEMA motors move by angular step control; that is, an H-bridge sends signals so the motor moves 1.8 degrees per step in a certain direction until it reaches the desired angular position. Proportional control is not...
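    As a rough sketch of the step bookkeeping this implies (the 1.8°/step figure is from the text and the 20:1 ratio from the gearbox discussion above; the function and constants are hypothetical):

    # Convert a desired joint angle into a signed step count for a NEMA stepper
    # driven through an H-bridge: 1.8 degrees per motor step, 20:1 cycloidal reduction.
    DEG_PER_STEP = 1.8
    GEAR_RATIO = 20

    def steps_for(current_deg, target_deg):
        """Motor steps needed to move the joint from current_deg to target_deg."""
        joint_delta = target_deg - current_deg
        motor_delta = joint_delta * GEAR_RATIO    # the reducer multiplies the motor angle
        return round(motor_delta / DEG_PER_STEP)  # the sign encodes the direction

    print(steps_for(0.0, 45.0))  # 500 steps to swing the joint through 45 degrees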

    Read more »

  • Subsystem - Mobile base

    Maximiliano Rojas • 09/27/2021 at 09:45 • 0 comments

    Hi guys, in this log I intend to explain a little bit of how the mobile platform works. As usual, this subsystem will be divided into two main topics, hardware and software, but before that, let's see a video!


    Software

    One of the main concepts in the whole design of this robot is modularity, and for this reason I think ROS is a good choice. With this meta operating system you can easily create programs and interconnect them; it establishes a solid way to build complex robotic systems with a high degree of customization, and its wiki provides a large amount of documentation and ready-to-go packages, which speeds up the prototype development process. Of course, ROS by itself can become confusing and tangled when a project gets big enough, and that's why I decided to use Docker with it. Containers allow me to separate the software into different groups that can still communicate with each other (a nice trick is to configure them to connect to the host network, so no multi-machine ROS parameter assignment is necessary). In the light orange square of Figure 1 you can see the general and simplified structure of the docker-ROS setup:

    Figure 1 - General software configuration.
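    As a concrete illustration of the host-network trick mentioned above, here is a minimal sketch using the Docker SDK for Python; the image name comes from the project's Docker Hub page, but the launch command is a hypothetical placeholder.

    # Start one subsystem container on the host network, so its ROS nodes reach
    # the master directly and no multi-machine ROS configuration is needed.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "magokeanu/aina_ros_jetson",                     # image from the project's Docker Hub
        command="roslaunch aina_nav navigation.launch",  # hypothetical launch file
        network_mode="host",                             # share the host's network stack
        detach=True,
    )
    print(container.short_id)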

    Let's briefly summarise what all the navigation nodes do and how they work together:

    • Rosserial: This is a ROS package provided by the community that allows the computer to communicate easily with some microcontroller development boards, such as the Arduino and the ESP8266. The microcontroller can run a node with a structure very similar to a C++ one, and there are enough message types for prototyping [1].
    • Freenect: This is another ROS package; it provides all the necessary files to connect a PC to a Kinect V1. When launched, it publishes several topics with RGB, point cloud, registered point cloud, and tilt control messages (among others). Basically, with this I can obtain the data needed to create the navigation maps [2].
    • Depth_to_laserscan: As its name says, this package simulates a laser scan sensor (like a lidar) from the depth cloud of, in this case, the Kinect V1. This is a necessary part because the laserscan message type is required to create a 2D grid map [3].
    • Odom_tf: This is just a custom-made node that connects the reference frame of the Kinect V1 with the reference frame of the mobile robot. It is important because the overall system needs to know where the camera is positioned in order to place the point cloud correctly in its representation of the real world [4]; a minimal sketch of such a node appears after this list.
    • Move_base: It provides the implementation of the actions that drive the mobile base through the real world given a goal. It keeps global and local maps and planners to adapt the base's behavior to the grid map and to local obstacles that can suddenly appear. At first it can be a little tricky to make it work with all the other packages (at least it was for me), so in [5] you can read about it in a more comprehensive way.
    • RTAB-Map: This is a ROS wrapper of the Real-Time Appearance-Based Mapping algorithm, basically a visual RGB-D SLAM implementation for ROS. It means the robot can generate a 3D map of the environment and, through a global loop closure detector with real-time constraints, know where it is on the map. More information in [6].
    • RVIZ: This is a 3D visualization tool for ROS and is used to see all the sensor information...
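    Here is the minimal sketch of an Odom_tf-style node referenced in the list above, written with rospy and tf; the frame names and mounting offsets are illustrative assumptions, not the project's actual values.

    #!/usr/bin/env python
    # Broadcast a fixed transform placing the Kinect frame relative to the
    # mobile base frame, so the point cloud lands correctly in the world model.
    import rospy
    import tf

    if __name__ == "__main__":
        rospy.init_node("odom_tf")
        br = tf.TransformBroadcaster()
        rate = rospy.Rate(30)  # republish at 30 Hz
        while not rospy.is_shutdown():
            br.sendTransform(
                (0.10, 0.0, 0.25),  # assumed mounting: 10 cm forward, 25 cm up
                tf.transformations.quaternion_from_euler(0.0, 0.0, 0.0),
                rospy.Time.now(),
                "camera_link",      # child frame: the Kinect
                "base_link",        # parent frame: the mobile base
            )
            rate.sleep()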
    Read more »

  • Introduction - A general overview

    Maximiliano Rojas • 09/27/2021 at 08:24 • 0 comments

    As I said in the description, Aina is an open-source humanoid robot whose aim is to interact not only with its environment, by moving through it and grasping light things, but also with humans, by voice and facial expressions. There are several tasks this robot will be able to accomplish once it's finished, like:

    • Assistance: Interaction to assist, for example, elders, students, and office workers by remembering events, watching for unusual circumstances in the home, among other things.
    • Development: Aina is also a modular platform for testing robotics and deep learning algorithms.
    • Hosting: It can guide people in different situations and places to inform, help, or even entertain someone.

    These tasks are by no means far from reality; a good example is the Pepper robot, used in banks, hotels, and even homes. So let's think about which characteristics the robot must have to do similar things:

    1. Navigation: It must be able to move through the environment.
    2. Facial expressions: It must have a good-looking face with a large enough set of emotions.
    3. Modularity: Easy to modify the software and hardware.
    4. Recognition: Of objects, with deep learning techniques.
    5. Socialization: Be able to talk like a chatbot.
    6. Open source: Based on open-source elements as much as possible.

    Because there is a lot to do, a good strategy is divide and conquer; for this reason, the project is separated into three main subsystems and a minor one:

    • Head: this is the part that enables the emotional side of communication, through facial expressions with the eyes and eyebrows (and sometimes the neck). No mouth is added because I think the noise of it moving can be annoying when the robot talks.
    • Arms: Not much to say; the arms are designed in an anthropomorphic way, and due to financial constraints they can't have high torque, but I can move them with some NEMA 14 steppers and 3D-printed cycloidal gears to gesticulate and grasp very light things, so it's fine.
    • Mobile base: This is the part that enables the robot to map, localize, and move through the world, using visual SLAM with a Kinect V1.
    • Torso: The middle part that interconnects the other subsystems.

    The general characteristics of the robot are:

    • A mobile base with a 40 x 40 cm footprint.
    • Total height of 1.2 meters.
    • A battery bank of 16.4 V @ 17000 mAh.
    • Estimated autonomy of one and a half hours.
    • It has spaces for modularity, which means it's easy to adapt to future necessities.
    • The arms have 3 degrees of freedom each, plus three more in the hand.
    • The software is based on ROS and Docker, so it's fast and easy to develop.
    • Part of its behavior is based on deep learning techniques.
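    As a back-of-the-envelope check (assuming the 16.4 figure in the spec above is the pack voltage, which is my reading of it): the bank stores about 16.4 V × 17 Ah ≈ 279 Wh, so an hour and a half of autonomy corresponds to an average draw of roughly 279 Wh / 1.5 h ≈ 185 W across motors, computer, and electronics.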

    Several parts of the robot have holes or are detachable and even interchangeable. The holes are useful to attach electronic components to the robot while keeping the possibility of removing them (or adding others) whenever necessary; meanwhile, the detachable parts can be modified to fulfill future requirements, like adding a sensor or a screen, or just making room for more internal functionality. A good example is the mobile base: as you can see in the next images, there are several parts that others can modify (the light blue ones), and if you want to move or exchange an electronic part, you just unscrew it.


    Of course, the same principle applies to the rest of the robot.


    Regarding the software, as I said before, it is ROS and Docker based; the simplified scheme of connections between the different elements is presented in the next image.

    As you can see, several containers run on a Jetson Nano; let me clarify their purposes. The Docker container on the right is responsible for the navigation task: it's able to do the whole SLAM process with a Kinect V1 and drive the non-holonomic mobile base to the goal point specified by the user; the yellow squares represent the ROS packages implemented to accomplish the task. The top-left Docker container has all the necessary programs to control the head,...

    Read more »
