Model Deployment at scale on Kubernetes
Yatai (屋台, food cart) lets you deploy, operate, and scale Machine Learning services on Kubernetes.
It supports deploying any ML models via BentoML, the unified model serving framework.
Why Yatai?
🍱 Made for BentoML, deploy at scale
Scale BentoML to its full potential on a distributed system, optimized for cost saving and performance.
Manage deployment lifecycle to deploy, update, or roll back via API or Web UI.
Centralized registry providing the foundation for CI/CD via artifact management APIs, labeling, and WebHooks for custom integration.
🚀 Cloud native & DevOps friendly
Kubernetes-native workflow via BentoDeployment CRD (Custom Resource Definition), which can easily fit into an existing GitOps workflow.
Native integration with Grafana stack for observability.
Support for traffic control with Istio.
Compatible with all major cloud platforms (AWS, Azure, and GCP).
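As an illustration of the Kubernetes-native workflow, a BentoDeployment resource can be declared in plain YAML and managed like any other Kubernetes object. The manifest below is a minimal sketch: the `apiVersion`, field names, and values shown are assumptions for illustration and may differ across Yatai versions, so consult the CRD reference for your installation.

```yaml
# Hypothetical BentoDeployment manifest; field names are
# illustrative and may vary by Yatai version.
apiVersion: serving.yatai.ai/v2alpha1
kind: BentoDeployment
metadata:
  name: iris-classifier
  namespace: yatai
spec:
  # Bento tag to deploy, as registered in the Yatai bento registry
  bento: iris_classifier:latest
  autoscaling:
    minReplicas: 1
    maxReplicas: 4
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
```

Because the deployment is just a custom resource, applying it with `kubectl apply -f` or committing it to a Git repository watched by a GitOps controller lets Yatai reconcile the running service declaratively.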
Learn Yatai
Staying Informed
The BentoML Blog and @bentomlai on Twitter are the official sources for updates from the BentoML team. Anything important, including major releases and announcements, will be posted there. We also frequently share tutorials, case studies, and community updates there.
To receive release notifications, star and watch the Yatai project on GitHub. For release notes and a detailed changelog, see the Releases page.
Getting Involved
Yatai has a thriving open source community where hundreds of ML practitioners contribute to the project, help other users, and discuss all things MLOps. Join us on Slack today!