AI at the Edge Hack Chat with NVIDIA

Machine learning unleashed

Wednesday, May 1, 2019 12:00 pm PDT

John Welsh from NVIDIA will host the Hack Chat on Wednesday, May 1, 2019 at noon PDT.

Time zones got you down? Here's a handy time converter!

Machine learning was once the business of big iron like IBM's Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He'll be dropping by the Hack Chat to discuss NVIDIA's Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss:

  • NVIDIA’s complete Jetson embedded AI product line-ups;
  • What the "out-of-box" experience is like with edge embedded machines;
  • Good applications for getting started with AI projects, like the JetBot robotics kit; and
  • How AI at the edge is likely to grow going forward.

  • Hack Chat Transcript, Part 2

    Lutetium 05/01/2019 at 20:06

    John12:51 PM
    @deshipu Absolutely! AI is a very broad term, it can mean lots of things and the implementation doesn't have to be "deep learning". Deep learning is particularly interesting because of its simplicity and performance :)

    John12:52 PM
    @Inderpreet Singh Do you mean digging into a lower level, as in "what's inside the neural network"?

    Inderpreet Singh12:52 PM
    People ask about Arduinos and not microcontrollers, hence the terms AI or deep learning are more broad.

    deshipu12:52 PM
    neurons, mostly...

    Inderpreet Singh12:53 PM
    Could focus on say vision based stuff. It is difficult for people to wade through ALL the information out there

    Josh Lloyd12:53 PM
    some connections, a bunch of weights heh...

    In other news I arrive 1 hour late to the hackchat again. Really looking forward to daylight savings.

    Tegwyn☠Twmffat12:54 PM
    I assumed this hackchat was for vision based

    John12:54 PM
    @Dan Maloney That's probably the best way to learn. A lot of the concepts you can learn with very simple networks (the simplest is basically just a matrix multiplication!). When it comes to making projects, the neural networks designed to be good at image processing are particularly useful, which is why we provide higher level tutorials (we think it can kickstart making projects)
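John's aside that the simplest network is basically just a matrix multiplication can be sketched in a few lines of NumPy. This is an illustrative toy, not NVIDIA code:

```python
import numpy as np

# A single-layer "network": y = softmax(W x + b)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 output classes, 4 input features
b = np.zeros(3)

def forward(x):
    logits = W @ x + b               # the core operation is just a matrix multiply
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax turns scores into probabilities

x = rng.normal(size=4)
probs = forward(x)
print(probs)  # three class probabilities summing to 1
```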

    @Josh Lloyd Are you on the reminder email list? You get a reminder 30 minutes before the chat.

    Inderpreet Singh12:54 PM
    Types of networks and then types of tools like TF vs PyTorch. It can be very overwhelming. I have taught ANNs to engineering students, but the current stuff on the net is just too much UNLESS you focus on a particular problem to be solved

    John12:55 PM
    @Inderpreet Singh Right, it can be a lot to take in. There are many components. I think it's important to try to learn the fundamentals (like a single layer ANN) as well as how to really apply them to practical problems using existing architectures

    Josh Lloyd12:56 PM
    @Dan Maloney The issue is that I'm asleep. Timezones :)

    John12:56 PM
    In the JetBot project we actually focus on a component of deep learning that can sometimes go under the radar, which is actually collecting a dataset
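The data-collection step John mentions amounts to saving camera frames into per-label folders. Here is a hedged sketch with synthetic frames standing in for a real camera; the labels and layout are illustrative, not the exact JetBot tooling:

```python
import tempfile
from pathlib import Path
import numpy as np

# One folder per label, one file per captured frame.
root = Path(tempfile.mkdtemp())
rng = np.random.default_rng(2)

def capture_frame():
    # Placeholder for a real camera grab: a random 224x224 RGB image.
    return rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

for label in ("blocked", "free"):          # hypothetical class labels
    (root / label).mkdir()
    for i in range(3):
        np.save(root / label / f"{i:04d}.npy", capture_frame())

counts = {d.name: len(list(d.iterdir())) for d in root.iterdir()}
print(counts)
```

A dataset collected this way can be fed straight into any image-classification training pipeline that reads class labels from folder names.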

    Tegwyn☠Twmffat12:56 PM
    I spent a lot of time meandering all over the net until I found this:

    Tegwyn☠Twmffat12:57 PM
    Following the link tutorial. Best intro ever.

    Inderpreet Singh12:57 PM
    @Tegwyn☠Twmffat agreed.

    Inderpreet Singh12:57 PM
    It's a good place to start IF you have a Jetson board.

    Josh Lloyd12:57 PM
    @John I really enjoy the idea of running models on low power hardware, because once the model is trained it really is magnitudes less expensive to run inference. Is there a suggested means of training at this point? Is NVIDIA offering cloud based training? Should it just be done on one's own means, on their own computer perhaps, for now?

    Josh Lloyd12:58 PM
    @John Is it likely that something such as Training as a Service might be offered in the future? At reasonable cost, that would be competitive vs. me just running it on my own GTX?

    Tegwyn☠Twmffat12:59 PM
    @Josh Lloyd use Nvidia container on AWS

    John12:59 PM
    @Josh Lloyd I think it depends what stage you are at. As a gamer, I train on my desktop with GPU to allow me to easily iterate, experiment, etc. Once you've got a lot of data and a complex pipeline you might consider a cloud pipeline or something else.

    John1:00 PM
    @Josh Lloyd You can even train some smaller datasets on the Jetson Nano itself when you're just getting started :)

    Inderpreet Singh1:00 PM
    @John I was hoping you'd say that
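John's point that small datasets can be trained directly on-device is easier to appreciate with a framework-free sketch of what a training loop actually does. This is an illustrative logistic-regression example in NumPy on synthetic data, not the Jetson workflow:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-class dataset: points above/below the line x0 + x1 = 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5

for _ in range(300):                      # gradient-descent iterations
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid prediction
    grad_w = X.T @ (p - y) / len(y)       # gradient of cross-entropy loss
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

acc = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

Real edge workloads swap the matrix for a deep network and NumPy for a GPU-accelerated framework, but the loop's shape — predict, compute gradient, update — is the same.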

    John1:00 PM
    As @Tegwyn☠Twmffat mentioned, we provide containers that you can launch on a cloud provider that come with deep learning software (like TensorFlow) pre-installed

    We're getting to the top of the hour, which is the official end of the chat, but if @John wants to stick around and answer questions, that's fine. Of course he may...


  • Hack Chat Transcript, Part 1

    Lutetium 05/01/2019 at 20:05

    Christoph11:59 AM
    never seen so many people join the room right before a chat session

    Tegwyn☠Twmffat11:59 AM
    Like bidding on ebay

    Hey everyone, welcome to the Hack Chat. Today we have John Welsh from NVIDIA here to talk about all the exciting stuff that's going on with AI at the Edge.

    Welcome John! Can you tell us a little about yourself and how you came to be working in AI?

    lossowski joined  the room.12:01 PM

    John12:01 PM
    Hey everyone! Of course. As for my job with NVIDIA - I'm an engineer on the Jetson product team focusing on how to apply deep learning with NVIDIA Jetson

    rocketmanrc joined  the room.12:02 PM

    John12:02 PM
    I got into AI during my Masters back in Maryland when working on my thesis. I tried a few computer vision techniques, but wanted to give it a shot given all of the material coming out :)

    John12:02 PM
    Ultimately I was trying to make a robot follow me around campus

    Tegwyn☠Twmffat12:02 PM
    like a body guard?

    John12:03 PM
    More like a pet I think

    John12:03 PM
    I'm hoping to hear more about all the project ideas everyone has

    John12:03 PM
    I think it's an exciting time with modern AI coming to such a small form factor

    How close did you get to succeeding?

    Tom Kelley joined  the room.12:04 PM

    John12:04 PM
    The robot followed me around the lab on campus. It was a pretty fun demo, but nothing we deployed anywhere yet

    Tegwyn☠Twmffat12:05 PM
    how many selfies did you have to take ...

    Tegwyn☠Twmffat12:05 PM
    to get it to discriminate against your colleagues?

    John12:05 PM
    Hah, well. Not too many actually.

    I'll chip in with my idea: driveway security camera that can differentiate between wildlife and humans/vehicles. Reduced false alarms would be the goal.

    John12:06 PM
    We used an existing dataset for person re-identification to learn important features for distinguishing people. So the neural network actually learned how to recognize people reasonably from a single camera shot
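Re-identification systems like the one John describes typically compare learned embedding vectors with a distance metric. A hedged NumPy sketch of the matching step, with tiny hypothetical 4-D embeddings (real re-ID networks output hundreds of dimensions):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical gallery of known-person embeddings.
gallery = {
    "person_A": np.array([0.9, 0.1, 0.0, 0.4]),
    "person_B": np.array([0.1, 0.8, 0.5, 0.0]),
}
query = np.array([0.85, 0.15, 0.05, 0.35])   # embedding of a new camera shot

# Recognition = nearest neighbour in embedding space.
best = max(gallery, key=lambda name: cosine_sim(query, gallery[name]))
print(best)
```

The neural network's job is to produce embeddings where the same person lands close together regardless of pose or camera angle; the matching itself stays this simple.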

    John12:07 PM
    @Dan Maloney This sounds very cool. Is the goal ultimately to send pictures or alerts when one of these is detected?

    Max-Felix Müller12:08 PM
    When the robot follows you, it only sees your back. I imagine it would be very difficult to differentiate people that way, given that even normal people struggle with that?

    FrazzledBadger12:09 PM
    I'm looking to build a handheld wireless monitor for use in Broadcast, and wanted to use the Nano for encoding/decoding and streaming the video. Do you know the latency off hand?

    I'm thinking more of a tiered response. Keep track of wildlife intrusions (like a game camera) but send alerts for people. Send a high alert if you see a vehicle that's not known to the system, maybe via character recognition of license plates?
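The tiered response described above can be expressed as a simple policy mapping detector output to an alert level. The labels, thresholds, and plate list here are all hypothetical:

```python
# Hypothetical tiered-response policy for a driveway camera.
# Assumes an object detector supplies (label, confidence), plus an
# optional plate string from a license-plate recognizer.
KNOWN_PLATES = {"ABC123"}

def alert_level(label, confidence, plate=None):
    if confidence < 0.5:
        return "ignore"        # drop low-confidence detections (false-alarm control)
    if label == "animal":
        return "log"           # game-camera style: record, don't notify
    if label == "person":
        return "alert"
    if label == "vehicle":
        return "alert" if plate in KNOWN_PLATES else "high-alert"
    return "ignore"

print(alert_level("animal", 0.9))                   # logged quietly
print(alert_level("vehicle", 0.8, plate="XYZ999"))  # unknown plate escalates
```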

    alangixxer joined  the room.12:09 PM

    John12:10 PM
    @Max-Felix Müller Absolutely. Face recognition wouldn't work in that context. Person re-identification is actually using the entire body (all orientations), so it learns features from your general appearance (clothes are helpful). We planned to combine this with face recognition for short term / long term recognition

    jamesonbeebe12:10 PM
    As far as CNNs developed and trained with TensorFlow, MATLAB, R, etc., how is the portability of the network supported by the NVIDIA hardware?

    alangixxer12:11 PM
    @Dan Maloney I used openalpr for license plate recognition on the Nano. It works.

    @alangixxer - Sweet, good to know. Thanks!

    John12:12 PM
    @Dan Maloney This may help for general object detection with good performance on Nano. You can fine tune existing object detectors if you put together your own dataset

    Chitoku12:12 PM
    @FrazzledBadger, that can be a cool project.

    The latency for the encoder/decoder depends on the resolution and other settings, but I guess...




Tegwyn☠Twmffat wrote 04/30/2019 at 17:44 point

Hello John. I've got some questions about using the NVIDIA DetectNet neural network. I've got an NVIDIA container up and running on AWS and it's working really well, but I need some insights into how DetectNet treats the background in an image, and other images labelled with classes specifically being NOT targeted.

For example, I want to detect carrot seedlings, but actively tell the system not to detect buttercups, which grow as a weed among the carrots. I'd call carrots 'class1' and buttercups 'class2' and have images labelled as such 'in the mix'. I'd set up DetectNet with 'class1, dontcare'. I want the network to actively 'not detect' the buttercups. Is this the correct way to do it?

Also, I have bare soil as the background, so should I have specific images of bare soil to get the network to bias against soil?

Does DetectNet still work OK with images that contain a lot of closeup detail but when in deployment the images are far away and the camera can't see the details? Is it sensible to create image duplicates with some degree of blurring, or just get images of the subject from far away with more background?

How much augmentation does DetectNet do? Does it automatically create a set of images rotated 90 or 180 degrees or should I do this myself? Does it flip images vertically or horizontally? Does it cater for different camera exposures / lighting conditions?

So many questions! … Thanks!
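Whatever augmentation DetectNet performs internally, the rotations, flips, and exposure changes asked about above are easy to generate offline yourself. A hedged NumPy sketch, not DetectNet's actual pipeline:

```python
import numpy as np

def augment(img):
    """Yield simple geometric and photometric variants of an image array."""
    yield np.rot90(img, 1)          # 90-degree rotation
    yield np.rot90(img, 2)          # 180-degree rotation
    yield np.fliplr(img)            # horizontal flip
    yield np.flipud(img)            # vertical flip
    # Crude exposure change: scale brightness, clip back to valid range.
    yield np.clip(img.astype(np.float32) * 1.3, 0, 255).astype(np.uint8)

img = np.zeros((64, 48, 3), dtype=np.uint8)   # stand-in for a real photo
variants = list(augment(img))
print(len(variants))  # 5 augmented copies per source image
```

Note that for a detection dataset the bounding-box labels must be transformed along with the pixels (a 90-degree rotation swaps and remaps box coordinates), so offline augmentation tools usually handle images and annotations together.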


Lutetium wrote 04/30/2019 at 18:17 point

Forwarded to John. Looking forward to his answers.


