Deploying TensorFlow Models to AWS, Azure, and the GCP

Deploying TensorFlow Models to AWS, Azure, and the GCP is a free online course provided by Pluralsight, comprising roughly 2-3 hours of material. The course is taught in English by Janani Ravi, and an e-certificate from Pluralsight is available upon completion.

Overview
  • This course will show you how to take your TensorFlow model and deploy it locally or to the cloud platform of your choice: Azure, AWS, or the GCP.

    Deploying and hosting your trained TensorFlow model locally or on your cloud platform of choice (Azure, AWS, or the GCP) can be challenging. In this course, Deploying TensorFlow Models to AWS, Azure, and the GCP, you will learn how to take your model to production on the platform of your choice. The course starts by showing how to save the parameters of a trained model using the SavedModel format, a universal interface for TensorFlow models. You will then learn how to scale the locally hosted model by packaging all of its dependencies in a Docker container. Next, you will be introduced to Amazon SageMaker, AWS's fully managed machine learning service. Finally, you will deploy your model on the Google Cloud Platform using the Cloud ML Engine. By the end of the course, you will be familiar with how a production-ready TensorFlow model is set up, and with how to build and train your models end to end on your local machine and on the three major cloud platforms. Software required: TensorFlow, Python.
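
    A minimal sketch of the first step, exporting a trained model in the SavedModel format, is shown below. The placeholder model, directory names, and the Docker command in the comments are illustrative assumptions, not material from the course.

    ```python
    # Minimal sketch: export a trained Keras model as a SavedModel.
    # Assumes TensorFlow 2.x; the model and paths are placeholders.
    import tensorflow as tf

    # A small placeholder model standing in for your trained model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # TensorFlow Serving expects a numeric version subdirectory (here "1")
    # beneath the model's base path.
    export_path = "models/my_model/1"
    tf.saved_model.save(model, export_path)

    # The exported directory can then be served locally with the official
    # tensorflow/serving Docker image, for example:
    #   docker run -p 8501:8501 \
    #     -v "$PWD/models/my_model:/models/my_model" \
    #     -e MODEL_NAME=my_model tensorflow/serving
    ```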

    Topics:

    • Course Overview
    • Using TensorFlow Serving
    • Containerizing TensorFlow Models Using Docker on Microsoft Azure
    • Deploying TensorFlow Models on Amazon AWS
    • Deploying TensorFlow Models on the Google Cloud Platform
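
    Tying back to the "Using TensorFlow Serving" module, a minimal client sketch for querying a served model over its REST API follows. The host, port, model name, and input values are assumptions carried over from the export sketch above, not details from the course.

    ```python
    # Minimal client sketch: query a TensorFlow Serving REST endpoint.
    # Assumes the tensorflow/serving container from the export example is
    # running locally with MODEL_NAME=my_model and the default REST port 8501.
    import json

    import requests

    # TensorFlow Serving's predict endpoint follows the pattern
    # /v1/models/<model_name>:predict
    url = "http://localhost:8501/v1/models/my_model:predict"
    payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}  # one 4-feature example

    response = requests.post(url, data=json.dumps(payload))
    response.raise_for_status()
    print(response.json()["predictions"])
    ```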