Post with Docker definitions and usage
Google Engines
Compute Engine
GCP's IaaS (Infrastructure as a Service) offering.
It lets you run VMs in the cloud. It gives you persistent storage and networking, and lets you share compute resources by virtualizing the hardware. Each VM has its own instance of an operating system, and you can build and run apps on it. A disadvantage is that the smallest deployable unit is a whole VM together with its application, which makes scaling harder as the application grows.
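For example, creating a VM takes a single gcloud command; a minimal sketch (the instance name, zone, machine type, and image here are illustrative):

```sh
# Create a Debian VM with a small machine type in a chosen zone.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud
```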
App Engine
GCP's PaaS (Platform as a Service) offering.
Instead of a blank VM, you get access to a family of services that apps need. All you do is write your code as self-contained workloads that use these services and include any dependent libraries. As demand increases, your app scales seamlessly. This scales rapidly, but you give up control of the underlying server architecture.
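As a sketch of how little you manage yourself: an App Engine standard app just needs a small descriptor next to your code, then one deploy command (the runtime shown is illustrative):

```sh
# app.yaml declares the runtime; App Engine provisions and scales the servers.
cat > app.yaml <<'EOF'
runtime: python312
EOF

# Deploy the code in the current directory.
gcloud app deploy
```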
Google Kubernetes Engine (GKE)
GKE is Kubernetes as a managed service in the cloud. It is like IaaS in that it saves you infrastructure chores. It's also like PaaS in that it was built with the needs of developers in mind.
The idea of a container is to give you the independent scalability of workloads like in PaaS, and an abstraction layer over the OS and hardware like in IaaS. All you need is an OS that supports containers and a container runtime. You're virtualizing the OS rather than the hardware. The environment scales like PaaS but gives nearly the same flexibility as IaaS. The container abstraction makes the code very portable. Containers start much faster than VMs and use fewer resources, because each container does not carry its own instance of an OS.
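A container image is typically defined with a Dockerfile; a minimal sketch (the base image and app file names are illustrative) that packages an app and its dependencies on top of a slim base image rather than a full OS:

```dockerfile
# Build on a minimal base image instead of a whole OS installation.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "main.py"]             # process to run when the container starts
```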
GKE workloads run in clusters built from Compute Engine VMs. Because of this, Kubernetes Engine gets to take advantage of Compute Engine's and Google VPC's capabilities.
GKE clusters can be customized; they support different machine types, numbers of nodes, and network settings.
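For instance, a cluster can be created with a chosen node count and machine type in one command (the cluster name, zone, and values are illustrative):

```sh
# Create a three-node GKE cluster with a custom machine type.
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-4
```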
Google keeps GKE refreshed with successive versions of Kubernetes. The GKE team periodically performs automatic upgrades of your cluster master to newer stable versions of Kubernetes, and you can enable automatic node upgrades.
GCP also offers its own tool for building containers, Cloud Build, as an alternative to Docker. This is optional; customers may choose not to use it.
(I'm leaving parts of the Containers, Kubernetes and Kubernetes Engine video to check again later.)
Kubernetes & GKE
Kubernetes is an open-source container orchestrator that lets you better manage and scale your applications. It offers an API that lets authorized users control your containers.
- cluster: a set of nodes (a group of machines that represent computing instances) on which you can deploy containers in pods. It controls the system as a whole.
- pod: a group of one or more containers that are deployed together. They're started, stopped, and replicated as a group.
- container: the simplest workload that Kubernetes can deploy.
In Google Cloud, nodes are virtual machines running in Compute Engine, which makes it easy to run containerized apps.
Whenever Kubernetes deploys a container or a set of related containers, it does so inside an abstraction called a pod. A pod is the smallest deployable unit in Kubernetes. Think of it like a running process on your cluster. It's common to have only one container per pod, but if you have multiple containers with a hard dependency on each other, you can pack them into a single pod. They will share networking and can share disk storage. Each pod gets a unique IP address and a set of ports. Containers inside a pod communicate with each other over localhost, which means they don't need to care which node they're deployed on.
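A minimal pod manifest might look like this (the pod, container names, and images are illustrative); the two containers declared here share the pod's network namespace and can reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: web
      image: nginx:1.27        # main application container
      ports:
        - containerPort: 80
    - name: helper
      image: busybox:1.36      # sidecar with a hard dependency on the web container
      command: ["sh", "-c", "sleep infinity"]
```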
The kubectl run command starts a deployment with a container running inside a pod. A deployment represents a group of replicas of the same pod. It keeps your pods running even if a node they run on fails. You can use a deployment to contain part of your app or your entire app.
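Note that in newer kubectl versions, deployments are created with kubectl create deployment rather than kubectl run; a minimal sketch (the deployment name and image are illustrative):

```sh
# Start a deployment running the nginx image; Kubernetes keeps its pods alive.
kubectl create deployment nginx --image=nginx:1.27

kubectl get deployments   # inspect the deployment
kubectl get pods          # see the replica pods it manages
```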
By default, pods in a deployment are only accessible inside your cluster. To make them publicly available, you can connect a load balancer to the deployment, which creates a service with a fixed IP for your pods. A service is the fundamental way Kubernetes represents load balancing. In GKE, this load balancer is created as a network load balancer. Any client that hits its IP will be routed to a pod behind the service.
What is a service? A service groups a set of pods together and provides a stable endpoint for them, for example a public IP managed by a network load balancer. As deployments create and destroy pods, each pod gets its own IP address, but those addresses don't remain stable over time. Services solve this.
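Continuing the sketch above (the deployment name is illustrative), exposing the deployment creates such a service:

```sh
# Expose the deployment through a service backed by a network load balancer.
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# The service's external IP stays fixed as pods come and go.
kubectl get services
```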
If you need more power, you can scale the deployment with a single command, as shown below. This deploys more backend servers, but they all remain reachable through the one fixed IP address of the service. You could also use autoscaling based on parameters such as CPU usage.
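Both operations are one-liners (the deployment name and thresholds are illustrative):

```sh
# Scale manually to three replicas behind the same service IP...
kubectl scale deployment nginx --replicas=3

# ...or let Kubernetes autoscale between 2 and 10 replicas based on CPU usage.
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80
```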
The real strength of Kubernetes comes when you work declaratively: you provide a configuration file describing the desired state of your system, and Kubernetes takes care of making the cluster match it and keeping your app scaled.
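As a sketch, such a configuration file declares the desired state in YAML (the names and image are illustrative), and the cluster continuously works to match it:

```yaml
# nginx-deployment.yaml — desired state: three replicas of nginx, always running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
```

Applying it with `kubectl apply -f nginx-deployment.yaml` tells Kubernetes to converge the cluster toward that state, recreating pods as needed.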
(I leave the last part of Introduction to Kubernetes and GKE video to check again later)
Hybrid and Multi-Cloud Computing (Anthos)
Anthos is a hybrid and multi-cloud solution powered by the latest innovations in distributed systems and service management software from Google. The Anthos framework rests on Kubernetes and GKE On-Prem.
It allows you to keep parts of your system's infrastructure on-premises while moving other parts to the cloud. You can move only specific workloads to the cloud at your own pace, because a full-scale migration is not required for it to work.
- Kubernetes and GKE On-Prem create the foundation
- On-premises and Cloud environments stay in sync
Enterprise applications may use hundreds of microservices to handle computing workloads, and keeping track of and monitoring all these services can quickly become a challenge. Anthos includes a service mesh that takes the guesswork out of managing and securing your microservices. These service mesh layers communicate across the hybrid network using Cloud Interconnect to sync and pass their data.