Use Docker to containerize the code environment
Use Google Cloud Platform to expose it to the world wide web
Use Kubernetes to cluster it so it can run on servers for use across the world.
You can basically make as many clusters as you want.
Some of the gotchas about Kubernetes:
If your website gets a lot of traffic and you have a lot of applications up and running, you can burn through the free credit quickly, and the credit card you used to sign up for your account could then be charged.
Creating an account on Google Cloud Platform
Getting a GCP SDK up and running locally
Creating a deployment specification file
Creating a Kubernetes cluster to deploy our Docker images
Load balancing: If you have thousands of people hitting your project at the same time, a load balancer can spread that traffic across multiple servers or pods.
Install and initialize the Cloud SDK locally so the gcloud command line works against your GCP account.
gcloud
If you can run gcloud and get any output (even an error message), you've installed the SDK properly and your command line interface is up and ready to go.
gcloud --help
will show you all the things you can do with the gcloud command
gcloud components install kubectl
Telling GCloud what project to interface with:
gcloud config set project ${PROJECT_ID}
GUI:
console.cloud.google.com
Add new project
Create environment variable
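The environment variables interpolated into the later commands can be set up front. A minimal sketch, with hypothetical values (substitute your own project ID and image name):

```shell
# Hypothetical values -- substitute your own project ID and image name
export PROJECT_ID="my-sample-project"
export DOCKER_IMAGE_NAME="hello-node"
# The later build/push commands interpolate these into the registry path:
echo "gcr.io/${PROJECT_ID}/${DOCKER_IMAGE_NAME}"
```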
Dockerfile:
FROM node:8.7
# Set the working directory before copying files into it
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
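A .dockerignore file is not in the original notes, but one is commonly added next to the Dockerfile so that node_modules and local cruft don't get pulled into the image by COPY . . (these entries are typical, not from the source):

```
node_modules
npm-debug.log
.git
```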
server.js:
/*******************************
 * Express Setup
 *******************************/
const express = require('express');
const axios = require('axios'); // used by the /version endpoint below
const app = express();
/*******************************
 * Server!
 *******************************/
const PORT = 8080;
const HOST = '0.0.0.0';
const server = app.listen(PORT, HOST, () => {
  console.log('server online');
});
/*******************************
 * Endpoints
 *******************************/
app.get('/', (req, res) => {
  res.send('Hello World\n');
});
app.get('/version', async (req, res) => {
  res.send('1.0.0\n');
  // all this stuff takes a really long time: chained remote calls
  await axios.get('http://myspecialserver/requestdistributor');
  await axios.get('http://database');
});
docker build -t gcr.io/${PROJECT_ID}/${DOCKER_IMAGE_NAME}:v1 .
(Note the trailing `.` build context, and the :v1 tag, which matches the image reference in deployment.yml below.)
gcr - Google Container Registry
docker run -d -p 8080:8080 gcr.io/${PROJECT_ID}/${DOCKER_IMAGE_NAME}:v1
Ryan used the GUI to create the cluster instead of the command line (the command-line instructions did not work for him). The CLI equivalent would be something like: gcloud container clusters create ${CLUSTER_NAME} --num-nodes=2 --zone=${ZONE}
gcloud container clusters get-credentials ${CLUSTER_NAME} --zone="${ZONE}"
(The zone is a GCP compute zone like us-central1-a, not a time zone.)
If you're working on multiple projects with your GCP, you need to tell your command line interface what project you're working on:
gcloud config set project ${PROJECT_NAME}
gcloud docker -- push gcr.io/${PROJECT_ID}/${DOCKER_IMAGE_NAME}:v1
yml - YAML Ain't Markup Language
Key/value pairs
Creating file to give Google Cloud Platform the instructions needed
deployment.yml
apiVersion: apps/v1 # apps/v1beta1 has been removed from current Kubernetes; apps/v1 requires the selector below
kind: Deployment
metadata:
  name: ${PROJECT_ID} # You'll need to manually type this out. The environment variable won't work
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${PROJECT_ID}
  template:
    metadata:
      labels: # labels to select/identify the deployment
        app: ${PROJECT_ID}
    spec: # pod spec
      containers:
        - name: ${PROJECT_ID}
          image: gcr.io/${PROJECT_ID}/${DOCKER_IMAGE_NAME}:v1 # image we pushed
          ports:
            - containerPort: 8080
kubectl create -f ${NAME_OF_YML_FILE}.yml --save-config
Get deployments:
kubectl get deployment
Pods:
kubectl get pods
Will return pods - how many depends on the number of replicas in yml file
Recap of What We've Done:
We can take a Docker container, put it on a server, and leverage the cluster to spread it across the world wide web. This will expose a public IP for the server.
That cluster will be a physical server somewhere in the world (or multiple servers in the world).
Now we're going to expose our cluster to the world and it'll give us a port to access.
The --type="LoadBalancer" flag is there so that as one piece of traffic hits the public IP it is sent to one pod, and the next visitor is sent to another pod, balancing the load. With 5,000 users and two replicas, roughly 2,500 would be split between each pod.
kubectl expose deployment ${NAME_OF_DEPLOYMENT} --type="LoadBalancer"
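kubectl expose generates a Service object for you behind the scenes; a hand-written equivalent manifest (a sketch, assuming the deployment name and port from the notes above) would look roughly like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # hypothetical; use your deployment name
spec:
  type: LoadBalancer     # ask GCP for a public IP and balance across pods
  selector:
    app: my-app          # must match the pod labels in deployment.yml
  ports:
    - port: 80           # public-facing port
      targetPort: 8080   # containerPort from the deployment
```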
kubectl get services
Public IP address is now hosting web application
We now have a public runtime environment for our Node application. We also have a local environment that we're using Docker to build and launch.
As you continue to play around with these things, know there are a lot of things to continue to learn, but following these steps you could potentially do the same thing with your back-end and eventually once you walk through the same type of process with your front-end project you could see how it could expose a React application. And then you can get those two to talk to each other across the world wide web, knowing that you're going to have to learn a lot about environment variables and production versus development.
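One concrete way to picture the environment-variable piece: the same container can behave differently per environment by reading variables at startup, falling back to development defaults when nothing is set. A minimal sketch (the variable names are common conventions, not from the notes):

```shell
# Default to development values when the variables aren't set;
# production (e.g. the Kubernetes pod spec) would override them
NODE_ENV="${NODE_ENV:-development}"
PORT="${PORT:-8080}"
echo "env=${NODE_ENV} port=${PORT}"
```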
Docker is so robust, you can do anything with Docker.
For prototyping, mLab and MongoDB Atlas
mLab
Free tiered account
Create new project
Mongo documents are cheap (in the sense that they're so small in size)
mongodb://<dbuser>:<dbpassword>@ds239128.mlab.com:39128/cs5-show-n-tell
Firebase is a back-end as a service
Do a lot less server-side coding with it
Really cool real-time database environment
You could accomplish similar deployment features as Docker with a virtual machine like Vagrant
# Delete the Kubernetes load balancer service
kubectl delete service/${NAME_OF_DEPLOYMENT}
# Delete the Kubernetes deployment itself
kubectl delete deployment/${NAME_OF_DEPLOYMENT}
# Delete your GCP cluster
gcloud container clusters delete ${NAME_OF_CLUSTER} --zone="${ZONE}"
If you make changes to your yml file, you can run: kubectl apply -f ${NAME_OF_YML_FILE}.yml
to force the apply