• Real-time Pose Animation: Google TensorFlow

    05/12/2020 at 00:48

    Pose Animator

    Pose Animator takes a 2D vector illustration and animates its containing curves in real-time based on the recognition result from PoseNet and FaceMesh. It borrows the idea of skeleton-based animation from computer graphics and applies it to vector characters.

    This is not an officially supported Google product.



    In skeletal animation a character is represented in two parts:

    1. a surface used to draw the character, and
    2. a hierarchical set of interconnected bones used to animate the surface.

    In Pose Animator, the surface is defined by the 2D vector paths in the input SVG files. For the bone structure, Pose Animator provides a predefined rig (bone hierarchy) representation, designed based on the keypoints from PoseNet and FaceMesh. This bone structure’s initial pose is specified in the input SVG file, along with the character illustration, while the real time bone positions are updated by the recognition result from ML models.
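As a rough sketch of that update step (illustrative only, not the project's actual code): each bone's rest pose comes from the joint positions in the SVG, its current pose comes from the detector's keypoints, and surface points attached to the bone follow the resulting rotation and translation:

```python
import math

def bone_transform(rest_a, rest_b, cur_a, cur_b):
    """Return a function mapping a point from a bone's rest pose to its
    current pose (translate + rotate, as in skeletal animation)."""
    rest_angle = math.atan2(rest_b[1] - rest_a[1], rest_b[0] - rest_a[0])
    cur_angle = math.atan2(cur_b[1] - cur_a[1], cur_b[0] - cur_a[0])
    d = cur_angle - rest_angle
    cos_d, sin_d = math.cos(d), math.sin(d)

    def apply(p):
        # Express the point relative to the bone's root joint, rotate by
        # the bone's change in angle, then move to the current root position.
        x, y = p[0] - rest_a[0], p[1] - rest_a[1]
        return (cur_a[0] + x * cos_d - y * sin_d,
                cur_a[1] + x * sin_d + y * cos_d)
    return apply

# A bone at rest pointing along +x; the detector now sees it pointing along +y.
move = bone_transform((0, 0), (1, 0), (0, 0), (0, 1))
print(move((1, 0)))  # the bone's tip follows the 90-degree rotation
```

In the real system each surface point is influenced by several bones with blending weights; the single-bone case above is the building block.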


    // TODO: Add blog post link. For more details on its technical design please check out this blog post.

    Demo 1: Camera feed

    The camera demo animates a 2D avatar in real-time from a webcam video stream.

    Demo 2: Static image

    The static image demo shows the avatar positioned from a single image.

    Build And Run

    Install dependencies and prepare the build directory:

    yarn

    To watch files for changes and launch a dev server:

    yarn watch

    Platform support

    Demos are supported on Desktop Chrome and iOS Safari.

    It should also run on Chrome on Android, and potentially on other Android mobile browsers, though this has not been tested yet.

    Animate your own design


    1. Download the sample skeleton SVG here.
    2. Create a new file in your vector graphics editor of choice. Copy the group named ‘skeleton’ from the above file into your working file. Note:
      • Do not add, remove or rename the joints (circles) in this group. Pose Animator relies on these named paths to read the skeleton’s initial position. Missing joints will cause errors.
      • However, you can move the joints around to embed them in your illustration (see step 4).
    3. Create a new group and name it ‘illustration’, next to the ‘skeleton’ group. This is the group where you can put all the paths for your illustration.
      • Flatten all subgroups so that ‘illustration’ only contains path elements.
      • Composite paths are not supported at the moment.
      • The working file structure should look like this:
          [Layer 1]
          |---- skeleton
          |---- illustration
                |---- path 1
                |---- path 2
                |---- path 3
    4. Embed the sample skeleton in ‘skeleton’ group into your illustration by moving the joints around.
    5. Export the file as an SVG file.
    6. Open Pose Animator camera demo. Once everything loads, drop your SVG file into the browser tab. You should be able to see it come to life :D
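The structural requirements in steps 2 and 3 can be checked before exporting. Below is a hypothetical helper (not part of Pose Animator) that uses Python's standard XML parser to verify the two required groups, assuming your editor stores group names in the `id` attribute:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def check_svg(svg_text):
    """Sanity-check a Pose Animator input SVG: it must contain 'skeleton'
    and 'illustration' groups, and 'illustration' may hold only paths."""
    root = ET.fromstring(svg_text)
    # Collect every <g> element, keyed by its id attribute.
    groups = {g.get("id"): g for g in root.iter(SVG_NS + "g")}
    problems = []
    if "skeleton" not in groups:
        problems.append("missing 'skeleton' group")
    if "illustration" not in groups:
        problems.append("missing 'illustration' group")
    else:
        # Subgroups must be flattened: only <path> children are allowed.
        for child in groups["illustration"]:
            if child.tag != SVG_NS + "path":
                problems.append("non-path element in 'illustration': " + child.tag)
    return problems

svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <g id="skeleton"><circle id="leftShoulder" r="2"/></g>
  <g id="illustration"><path d="M0 0 L10 10"/></g>
</svg>"""
print(check_svg(svg))  # [] -> the file passes both checks
```

Note that some editors store group names in `inkscape:label` or similar attributes rather than `id`, so adjust the lookup for your tool.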

  • Can computers think?

    04/10/2020 at 09:58

    Since the age of A. M. Turing we have come a long way with AI. We have built chatbots that can hold a conversation by asking questions back, that learn during a chat, and that can ask the same questions again and again.

    What a computer does is merely a superficial imitation of human intelligence, which means it cannot do something we ourselves don't know: if we did not know the logarithm operation, for example, the computer could not invent an algorithm for it.
    A computer does not understand; it repeats what has been saved in its memory. Yet even with these constraints we have come far: Google made "OK Google", Apple made "Siri", Microsoft made "Cortana". But I am not fully comfortable with any of them, so I have come up with an idea.

    What if a computer started recording real conversations and learned from them how conversation works: what people say when they are angry, what wishes they make, and so on? First we would need to tell the computer that a particular thing was said in anger, or that another was a good thing. From such a database we could perhaps build an assistant that fakes human feelings while learning from you at the same time.
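As a toy sketch of that labeling idea (all names below are made up for illustration), such a database could start as nothing more than utterances stored under human-provided emotion labels:

```python
from collections import defaultdict

class ConversationMemory:
    """A toy store of observed utterances, grouped by emotion label."""

    def __init__(self):
        self.by_emotion = defaultdict(list)

    def record(self, utterance, emotion):
        # Save an observed utterance under its (human-provided) label.
        self.by_emotion[emotion].append(utterance)

    def examples(self, emotion):
        # Return everything seen so far for a given emotional state.
        return self.by_emotion[emotion]

memory = ConversationMemory()
memory.record("Why is this so slow!", "angry")
memory.record("I wish this just worked.", "wishful")
print(memory.examples("angry"))  # ['Why is this so slow!']
```

A real assistant would of course need far more than lookup tables, but a labeled corpus like this is the starting point for learning how people actually talk.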

    I am looking forward to using this drone project to start the process of saving all of this conversation and reaction data.

    I have prepared a repository on GitHub for all kinds of imitation examples:

     GitHub imitation source codes

    # Here is a very simple example of imitating human intelligence:
    def check_emotion(rating):
        """Map a 0-10 product rating to a canned emotional response."""
        if rating in range(0, 5):        # 0-4: unhappy
            print("You are angry with the service")
        elif rating in range(5, 8):      # 5-7: neutral
            print("You're OK with our service")
        else:                            # 8-10: happy
            print("You're very much satisfied")

    r = int(input("Rate the product from 0 to 10 : "))
    check_emotion(r)