DevOps for Machine Learning

The Agile Stacks Machine Learning Stack provides agility and automation for machine learning.  The Machine Learning Stack incorporates open, standard software for machine learning: Kubeflow, TensorFlow, Keras, PyTorch, Argo, and others.  Agile Stacks offers an automation hub that connects all tools in the machine learning pipeline, tightly integrated with infrastructure services for scheduling, monitoring, logging, data management, storage, and security.  It helps data scientists seamlessly leverage distributed training or simulation, which significantly reduces the effort required to train models and achieve the desired results.

Based on the most popular, best-of-breed tools, the Machine Learning Stack gives you a significant head start and saves precious time in your machine learning development effort.  Since the platform handles the complexity of machine learning infrastructure, pipelines, and ETL, data scientists have more time to focus on modeling tasks.

DevOps automation for ML accelerates the process by which an idea goes from development to production.  It helps to achieve several key objectives:

  • Fastest time to train, with as much data and as much accuracy as possible
  • Fastest time to inference, with the ability to retrain rapidly
  • Safe and reliable deployments, so model behavior can be observed in the real world

Deploying machine learning systems to production typically requires the ability to run many models, and multiple versions of each model, at the same time.  Your code, data preparation workflows, and models can be easily versioned in Git, and data sets can be versioned through cloud storage (AWS S3, Minio, Ceph).  Version control of all code, models, and workflows lets you run multiple versions of models concurrently to optimize results, and provides the ability to roll back to previous versions when needed.  Instead of ad-hoc scripts, you can now use Git push/pull commands to move consistent packages of ML models, data, and code into Dev, Test, and Production environments.
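
As a minimal sketch of this pattern (the bucket name, object key, and manifest file below are hypothetical), a data set revision can be pinned in versioned S3 storage and recorded in a manifest that is committed to Git alongside the model code:

```python
import json
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Enable object versioning so every upload gets an immutable VersionId
# (bucket name is a placeholder).
s3.put_bucket_versioning(
    Bucket="my-training-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload a new revision of the training set and capture its version.
with open("train.csv", "rb") as f:
    resp = s3.put_object(Bucket="my-training-data",
                         Key="datasets/train.csv", Body=f)

# Pin the exact data set version in a manifest that lives in Git, so
# Git push/pull moves code and data references together.
manifest = {"dataset": "s3://my-training-data/datasets/train.csv",
            "version_id": resp["VersionId"]}
with open("dataset-manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

With the manifest checked into Git, a checkout of any commit identifies the exact data revision used for that experiment.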

Using the Agile Stacks Machine Learning Stack, you can generate DevOps automation scripts, implement GitOps, spend less time on deployments, and spend more time working on your models and experiments.

The Agile Stacks Control Plane provides a central access point for data scientists and software developers looking to work together on machine learning projects. It streamlines the process of creating machine learning pipelines and data processing pipelines, and of integrating AI/machine learning with existing applications and business processes.  Using the Machine Learning Stack, you can implement an entire AI pipeline to build, train, and deploy machine learning solutions that are fully automated, scalable, and portable.

A typical AI pipeline includes a number of steps (a code sketch of these steps follows the list):

  1. Data preparation / ETL
  2. Model training and testing
  3. Model evaluation and validation
  4. Deployment and versioning
  5. Production and monitoring
  6. Continuous training / reinforcement learning
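
For illustration only, the first several steps above might be wired together with the Kubeflow Pipelines SDK; the container images, registry, and arguments below are placeholders, not part of any Agile Stacks product:

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="ai-pipeline",
              description="Data prep, training, validation, and deployment")
def ai_pipeline(data_path: str = "s3://my-bucket/raw/"):
    # 1. Data preparation / ETL (image and arguments are hypothetical)
    prep = dsl.ContainerOp(name="data-prep",
                           image="registry.example.com/etl:latest",
                           arguments=["--input", data_path])
    # 2-3. Model training, then evaluation and validation
    train = dsl.ContainerOp(name="train",
                            image="registry.example.com/train:latest").after(prep)
    validate = dsl.ContainerOp(name="validate",
                               image="registry.example.com/validate:latest").after(train)
    # 4. Deployment and versioning (e.g., push the model to a serving system)
    dsl.ContainerOp(name="deploy",
                    image="registry.example.com/deploy:latest").after(validate)

if __name__ == "__main__":
    # Compile to an Argo workflow spec that Kubeflow Pipelines can run.
    kfp.compiler.Compiler().compile(ai_pipeline, "ai_pipeline.yaml")
```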

At the core of the Machine Learning Stack is the open source Kubeflow platform, enhanced and automated with Agile Stacks' own security, monitoring, CI/CD, workflow, and configuration management capabilities.  Kubeflow is a Google-led open source project designed to alleviate some of the more tedious tasks associated with machine learning. It helps manage the deployment of machine learning apps through the full cycle of development, testing, and production, while allowing for resource scaling as demand increases.

Machine Learning Template

With Agile Stacks, you can compose multiple best-of-breed frameworks and tools to build a stack template, essentially defining your own reference architecture for machine learning.  Stack services are available via simple catalog selection and provide plug-and-play support for monitoring, logging, analytics, and testing tools.  A stack template can also be extended with additional services by importing custom automation scripts.

  • ML Platform: The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable, and scalable. Available implementations: Kubeflow, Kubernetes.

  • ML Frameworks: Supported machine learning and deep learning frameworks, toolkits, and libraries. Available implementations: TensorFlow, Keras, Caffe, PyTorch.

  • Storage Volume Management: Manage storage for data sets (structured, unstructured, audio, and video), automatically deploying the required storage implementations and providing data backup and restore operations. Available implementations: Local FS, AWS EFS, AWS EBS, Ceph (block and object), Minio, NFS, HDFS.

  • Image Management: A private Docker registry secures and manages the distribution of container images. A container registry controls how container repositories and their images are created, stored, and accessed. Available implementations: Amazon ECR, Harbor Registry.

  • Workflow Engine: Specify, schedule, and coordinate the running of containerized workflows and jobs on Kubernetes, optimized for scale and performance. Available implementations: Argo.

  • Model Training: Collaborative and interactive model training. Available implementations: JupyterHub, TensorBoard, Argo workflow templates.

  • Model Serving: Export and deploy trained models on Kubernetes. Expose ML models via REST and gRPC for easy integration into business apps that need predictions. Available implementations: Seldon, tf-serving.

  • Model Validation: Estimate model skill while tuning the model's hyperparameters. Compare desired outputs with model predictions. Available implementations: Argo workflow templates.

  • Data Storage Services: Distributed data storage and database systems for structured and unstructured data. Available implementations: Minio, S3, MongoDB, Cassandra, HDFS.

  • Data Preparation and Processing: Workflow application templates create data processing pipelines that automatically build container images, ingest data, run transformation code, and schedule workflows based on data events or messages. Available implementations: Argo, NATS, workflow application templates.

  • Infrastructure Monitoring: Collect, visualize, and alert on all performance metric data using pre-configured monitoring tools. Gain full visibility into your training and inference jobs. Available implementations: Prometheus, Grafana.

  • Model Monitoring: Continuously monitor model accuracy over time, and retrain or modify the model as needed. Available implementations: Prometheus, Grafana, Istio.

  • Load Balancing & Ingress: Expose cluster services and REST APIs to the Internet. Ingress acts as a "smart router" or entry point into your cluster. A service mesh brings reliability, security, and manageability to microservices communications. Available implementations: ELB, Traefik, Ambassador.

  • Security: Generate and manage SSL certificates, securely manage passwords and secrets, and implement SSO and RBAC across all clusters in a hybrid cloud environment. Available implementations: Okta, HashiCorp Vault, AWS Certificate Manager.

  • Log Management: Aggregate logs to track all errors and exceptions in your model creation pipeline. Available implementations: Elastic stack (Elasticsearch, Fluentd, Kibana).
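
To illustrate the Model Serving entry above, a model deployed behind Seldon's REST prediction endpoint can be queried with a short script; the host name, path, and feature values below are placeholders for a real deployment:

```python
import requests

# Placeholder endpoint for a Seldon deployment exposed via ingress.
URL = "http://models.example.com/seldon/my-model/api/v0.1/predictions"

# Seldon's prediction protocol wraps features in a "data" payload.
payload = {"data": {"names": ["f0", "f1", "f2"],
                    "ndarray": [[0.1, 0.2, 0.3]]}}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # predictions come back in the same "data" envelope
```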

Kubeflow Pipelines

The Kubeflow Pipelines service provides a workbench to compose, deploy, and manage machine learning workflows, and packages ML code so it can be reused by other users across an organization.  A pipeline orchestrates many components: a learner for generating models based on training data, modules for model validation, and infrastructure for serving models in production.  Data scientists can also test several ML techniques to see which one works best for their application.  https://github.com/kubeflow/pipelines/wiki
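
As a hedged sketch of how ML code becomes reusable (the function body and base image are illustrative, not a prescribed recipe), the Kubeflow Pipelines SDK can wrap a plain Python function as a containerized pipeline component:

```python
from kfp.components import func_to_container_op

def normalize(input_csv: str, output_csv: str):
    """Toy transformation step; real ETL logic would go here."""
    # Import inside the function so it is self-contained in its container.
    import pandas as pd
    df = pd.read_csv(input_csv)
    df = (df - df.mean()) / df.std()
    df.to_csv(output_csv, index=False)

# Wrap the function as a container op that any pipeline in the
# organization can reuse (base image is a placeholder).
normalize_op = func_to_container_op(normalize, base_image="python:3.7")
```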

Machine Learning Workflow Automation

Machine learning models are only as good as their training data, so data preparation pipelines play an important role in preparing data sets for model training and validation.  Workflow automation templates let you model multi-step workflows as a sequence of tasks, where each step in the workflow is a container (see the sketch below).  Data scientists can define ETL tasks and other compute-intensive data processing jobs that auto-scale across multiple Kubernetes containers.  A highly automated approach to data preparation prevents data errors, increases the velocity of iterating on new experiments, reduces technical debt, and improves model quality.
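
As an illustrative sketch of one such containerized step (the bucket, keys, and column names are hypothetical), a data preparation task might read raw records from object storage, transform them, and write a cleaned set back for the next step in the workflow:

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

# Fetch raw input produced by an upstream step (names are placeholders).
raw = s3.get_object(Bucket="ml-data", Key="raw/events.csv")
df = pd.read_csv(io.BytesIO(raw["Body"].read()))

# Transformation logic: drop incomplete rows and derive a feature
# (the "duration_sec" column is assumed for this example).
df = df.dropna()
df["duration_min"] = df["duration_sec"] / 60.0

# Write the prepared data set where the training step expects it.
out = io.BytesIO()
df.to_csv(out, index=False)
s3.put_object(Bucket="ml-data", Key="prepared/events.csv", Body=out.getvalue())
```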

Machine Learning Workflow Templates are based on classic DevOps tools: Git, Docker, and Argo.  To avoid unnecessary duplication of code, multiple workflows can be created from a single template.  Containers make it much easier to package and deploy any data preparation task or algorithm.  With distributed training, data scientists can significantly reduce the time needed to train a deep learning model.  When data scientists are enabled with DevOps automation, the operations team no longer needs to provide configuration management and provisioning support for common requests such as cluster scale-up and scale-down, and the whole organization can become more agile.
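
As one hedged example of distributed training (the model and data are toy placeholders), TensorFlow can replicate a Keras model across all GPUs on a node with a distribution strategy:

```python
import numpy as np
import tensorflow as tf

# Replicate the model across all local GPUs; gradients stay in sync.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data stands in for a real training set.
x, y = np.random.rand(1024, 10), np.random.rand(1024, 1)
model.fit(x, y, batch_size=128, epochs=2)
```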

Get in touch with our team to discuss your machine learning automation requirements and deployment approach.  Agile Stacks generates automation scripts that can be easily extended and customized to implement even the most complex use cases.

Book a Demonstration

Agile Stacks is a registered trademark of Agile Stacks, Inc. All product names and registered trademarks are property of their respective owners.