
Inference

A project log for Tyhac covid-19 audio diagnostic stack

Tyhac is a multi-mode device made possible by deep learning and AWS. The stack stores and classifies covid-19 cough audio samples in real time.

mick • 10/24/2021 at 10:27

The inference endpoint deployed from the notebook earlier runs 24x7 on an ML EC2 instance. I'd prefer to do the inference with Lambda; as the earlier S3 pre-sign work showed, Lambda integrates easily with AWS IoT and makes for a tidy little design.
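As a rough sketch of how that Lambda fits together, the function can be triggered by the S3 upload of an audio sample and hand the object off to the model. The event parsing below follows the standard S3 event shape; `run_inference` is a hypothetical placeholder for the actual model call, not code from the repo:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    # Standard S3 put-event shape: pull the bucket and object key
    # of the uploaded audio sample out of the first record.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])

    result = run_inference(bucket, key)  # placeholder for the model call
    return {'statusCode': 200, 'body': json.dumps(result)}

def run_inference(bucket, key):
    # Placeholder: in the real stack this would download the sample
    # and run the loaded model's prediction against it.
    return {'bucket': bucket, 'key': key, 'label': 'pending'}
```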

Lambda Inference

The Lambda inference program follows the same inference container I've supplied in the source code repo, which keeps the code consistent and easy to follow. The inference is simple enough:

import logging
from pathlib import Path

from fastai.learner import load_learner

logger = logging.getLogger(__name__)

def model_fn(model_dir):
    """
    Loads the trained model into memory.
    """
    logger.info('model_fn')
    path = Path(model_dir)
    learn = load_learner(path / 'export.pkl')
    logger.info('TYHAC: Model has been loaded')
    return learn

As you can see above, it just loads the pickle file. This file is the trained model discussed earlier in the logs.

Once the model is loaded it follows this simple flow:

  1. Input object
    1. Load the object (audio sample)
  2. Predict
    1. Run the prediction against the object
  3. Output
    1. Return the prediction results
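The predict and output steps above can be sketched in Python. `StubLearner` here is a stand-in for the loaded fastai learner (which returns a label, an index, and class probabilities from `predict`), and `predict_fn`/`output_fn` are illustrative names, not functions from the repo:

```python
import json

def predict_fn(audio_sample, model):
    # Step 2: run the prediction against the object (audio sample).
    label, _, probs = model.predict(audio_sample)
    return {'label': str(label), 'confidence': float(max(probs))}

def output_fn(prediction):
    # Step 3: return the prediction results as JSON.
    return json.dumps(prediction)

class StubLearner:
    """Stand-in for the fastai learner loaded by model_fn."""
    def predict(self, sample):
        # A fastai learner returns (decoded label, label index, probabilities).
        return 'negative', 0, [0.9, 0.1]

result = output_fn(predict_fn(b'fake-audio-bytes', StubLearner()))
```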

This works well enough and only takes a second or two. Perfect for this use case, and it proves that the ML EC2 instance isn't needed here.
