
Take Your ML Models From Prototype To Production With Vertex AI

You’re working on a new machine learning problem, and the first environment you use is a notebook. Your data is stored on your local machine, and you try out different model architectures and configurations, executing the cells of your notebook manually each time. This workflow is great for experimentation, but you quickly hit a wall when it comes time to take your experiments to production scale. Suddenly, your concerns are about more than just getting the highest accuracy score.

Sound familiar?

Developing production applications or training large models requires additional tooling to help you scale beyond just code in a notebook, and using a cloud service provider can help. But that process can feel a bit daunting.

To make things a little easier for you, we’ve created the Prototype to Production video series, which covers all the foundational concepts you’ll need in order to build, train, scale, and deploy machine learning models on Google Cloud using Vertex AI.

Let’s jump in and see what it takes to get from prototype to production!

Getting started with Notebooks for machine learning

Episode 1 of this series shows you how to create a managed notebook using Vertex AI Workbench. With your environment set up, you can explore data, test different hardware configurations, train models, and interact with other Google Cloud services.
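If you’re curious what those first steps look like in code, here’s a minimal sketch of the opening cells of a Workbench notebook using the Vertex AI Python SDK and the Cloud Storage client. The project ID, region, and bucket name are placeholders, not values from the video.

from google.cloud import aiplatform, storage

PROJECT_ID = "your-project-id"    # placeholder
REGION = "us-central1"            # placeholder
BUCKET = "your-staging-bucket"    # placeholder

# Point the Vertex AI SDK at your project and a staging bucket.
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=f"gs://{BUCKET}")

# Browse training data that lives in Cloud Storage instead of on your local machine.
storage_client = storage.Client(project=PROJECT_ID)
for blob in storage_client.list_blobs(BUCKET, prefix="datasets/"):
    print(blob.name)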

Training custom models on Vertex AI

You might be wondering, why do I need a training service when I can just run model training directly in my notebook? Well, for models that take a long time to train, a notebook isn’t always the most convenient option. And if you’re building an application with ML, it’s unlikely that you’ll only need to train your model once. Over time, you’ll want to retrain your model to make sure it stays fresh and keeps producing valuable results.

Manually executing the cells of your notebook might be the right option when you’re getting started with a new ML problem. But when you want to automate experimentation at scale, or retrain models for a production application, a managed ML training option will make things much easier.

Episode 3 shows you how to package up your training code with Docker and run a custom container training job on Vertex AI. Don’t worry if you’re new to Docker! This video and the accompanying codelab will cover all the commands you’ll need.

Codelab: Training custom models with Vertex AI
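To give you a feel for what launching that job looks like, here’s a minimal sketch using the Vertex AI Python SDK. The container URIs, display name, and machine type are placeholder assumptions, not the codelab’s exact values.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1",
                staging_bucket="gs://your-staging-bucket")  # placeholders

# The training image is assumed to be the container you built with
# `docker build` and pushed to Artifact Registry with `docker push`.
job = aiplatform.CustomContainerTrainingJob(
    display_name="my-training-job",
    container_uri="us-central1-docker.pkg.dev/your-project-id/ml/train:latest",
    # Example prebuilt serving image; swap in one that matches your framework.
    model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest",
)

# Run training on managed hardware; if the job writes model artifacts,
# this returns a Vertex AI Model resource you can deploy later.
model = job.run(replica_count=1, machine_type="n1-standard-8")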

How to get predictions from an ML model

Machine learning is not just about training. What’s the point of all this work if we don’t actually use the model to do something?

Just like with training, you could execute predictions directly from a notebook by calling model.predict. But when you want to get predictions for lots of data, or get low-latency predictions on the fly, you’re going to need something more than a notebook. When you’re ready to use your model to solve a real-world problem, you don’t want to be manually executing notebook cells to get a prediction.


In Episode 4, you’ll learn how to use the Vertex AI prediction service for batch and online predictions.

Codelab: Getting predictions from custom trained models
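As a rough sketch of both options with the Vertex AI Python SDK (the model resource name, bucket paths, and example instance below are placeholders; the prediction payload depends on your model’s input signature):

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")  # placeholders

# Look up a model in the Vertex AI Model Registry, or reuse the one
# returned by job.run() in the training step above.
model = aiplatform.Model("projects/your-project-id/locations/us-central1/models/MODEL_ID")

# Online predictions: deploy the model to an endpoint for low-latency requests.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]))  # example feature vector

# Batch predictions: score a whole file in Cloud Storage, no endpoint required.
batch_job = model.batch_predict(
    job_display_name="my-batch-job",
    gcs_source="gs://your-bucket/batch_input.jsonl",
    gcs_destination_prefix="gs://your-bucket/batch_output/",
    machine_type="n1-standard-4",
)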

Tuning and scaling your ML models

By this point, you’ve seen how to go from notebook code to a deployed model in the cloud. But in reality, an ML workflow is rarely that linear. A huge part of the machine learning process is experimentation and tuning. You’ll probably need to try out different hyperparameters, different architectures, or even different hardware configurations before you figure out what works best for your use case.

Episode 5 covers the Vertex AI features that can help you tune and scale your ML models. Specifically, you’ll learn about hyperparameter tuning, distributed training, and experiment tracking.

Codelab: Hyperparameter tuning on Vertex AI
Codelab: Distributed training on Vertex AI
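To make the hyperparameter tuning piece concrete, here’s a minimal sketch with the Vertex AI Python SDK. The container image, the --learning_rate and --momentum flags, and the "accuracy" metric are assumptions about how the training code is written, not values taken from the codelab.

from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="your-project-id", location="us-central1",
                staging_bucket="gs://your-staging-bucket")  # placeholders

# The training container is assumed to accept --learning_rate and --momentum
# flags and report an "accuracy" metric via the cloudml-hypertune library.
worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-8"},
    "replica_count": 1,  # raise this (and add accelerators) for distributed training
    "container_spec": {
        "image_uri": "us-central1-docker.pkg.dev/your-project-id/ml/train:latest",
    },
}]

custom_job = aiplatform.CustomJob(
    display_name="hp-tuning-base-job",
    worker_pool_specs=worker_pool_specs,
)

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="my-hp-tuning-job",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
        "momentum": hpt.DoubleParameterSpec(min=0.0, max=0.9, scale="linear"),
    },
    max_trial_count=12,
    parallel_trial_count=3,
)

tuning_job.run()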

We hope this series inspires you to create ML applications with Vertex AI! Be sure to leave a comment on the videos if you’d like to see any of the concepts in more detail, or learn how to use the Vertex AI MLOps tools.

If you’d like to try all the code for yourself, check out the codelabs linked above.
By Nikita Namjoshi, Developer Advocate
Source: Google Cloud

