Below you will find pages that utilize the taxonomy term “Docker”
Using Kubernetes Jobs for one-off ingestion of CSVs
Running Postgres on Kubernetes locally
While setting up Kubernetes locally might seem like overkill for one-off data ingestion tasks, it provides several advantages:
- Creates a consistent development environment that mirrors production
- Allows testing of Kubernetes configurations before cloud deployment
- Enables development of microservices in isolation
- Provides a foundation for scaling your ML pipeline
For this tutorial, we’ll use Docker for Mac with its built-in Kubernetes support (v1.9.8). This setup offers a straightforward development experience with modern Kubernetes tooling while maintaining compatibility with cloud deployments. Though alternatives like Docker Swarm and Docker Compose have been around for a few years, Kubernetes provides a platform for building and managing data pipelines that helps ease the transition from local development to production deployments.
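To make the one-off ingestion idea concrete, here is a minimal sketch of a Kubernetes Job manifest (not taken from the post itself). The image name, ingest script, host path, and the `postgres` Service it points at are all placeholder assumptions you would swap for your own:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: csv-ingest
spec:
  backoffLimit: 2            # retry a couple of times, then give up
  template:
    spec:
      restartPolicy: Never   # one-off work: don't restart the pod in place
      containers:
        - name: ingest
          image: my-registry/csv-ingest:latest   # placeholder image
          command: ["python", "ingest.py", "--file", "/data/input.csv"]
          env:
            - name: POSTGRES_HOST
              value: postgres    # assumes a Postgres Service named "postgres" in the same namespace
          volumeMounts:
            - name: csv-data
              mountPath: /data
      volumes:
        - name: csv-data
          hostPath:
            path: /Users/me/data   # Docker for Mac shares paths under /Users with the VM
```

Applying it with `kubectl apply -f csv-ingest-job.yaml` runs the container to completion once, and `kubectl logs job/csv-ingest` shows the ingestion output.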
TensorFlow 0.7.0 Dockerfile with Python 3
Edit: everything has since been updated to TensorFlow 0.7.0, which I built on top of my base CUDA Dockerfile for use with either TensorFlow or Theano (depending on my goals; Keras gives great flexibility in choosing between them for training versus compiling).
TensorFlow
In 2015 Google released a new deep learning framework/tensor library that is similar in many ways to Theano. I enjoy using it a lot more than Theano, largely because of Theano’s long compile times when used with Keras, and because of TensorBoard. This post won’t go into detail about using Theano, TensorFlow, or Keras; instead it covers how I built a Docker image for a slightly older NVIDIA card (which, for my purposes, can use multiple GPUs in isolation, so a model exiting on one card doesn’t affect the other).
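As a rough sketch of what such an image looks like (not the exact Dockerfile from the post), the pattern is: start from a CUDA base image, add Python 3, and install the GPU-enabled TensorFlow wheel. The base image tag, the wheel URL passed at build time, and the Keras install below are assumptions that would need to match your actual CUDA/cuDNN and Python versions:

```dockerfile
# Illustrative sketch only; the base tag and wheel URL are placeholders.
FROM nvidia/cuda:7.5-cudnn4-devel

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip python3-dev && \
    rm -rf /var/lib/apt/lists/*

# Pass the GPU-enabled TensorFlow 0.7.0 wheel for Python 3 at build time,
# e.g. docker build --build-arg TF_WHEEL_URL=<wheel url> .
ARG TF_WHEEL_URL
RUN pip3 install --upgrade pip && pip3 install "$TF_WHEEL_URL"

# Keras on top, so the same image can drive either backend;
# in practice, pin a Keras release contemporary with TensorFlow 0.7.
RUN pip3 install keras

WORKDIR /workspace
CMD ["python3"]
```

At run time, GPU isolation comes down to exposing only the devices a given container should see (for example, passing specific /dev/nvidia* devices to `docker run --device`), so a model dying on one card leaves the other untouched.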