deploying and using oauth2_proxy to Google Kubernetes Engine

Ever had a cool web application (Prometheus, Kubernetes Dashboard) and wanted/needed some sort of authentication mechanism for it? Enter oauth2_proxy. This post covers using oauth2_proxy with Kubernetes and integrating it with an NGINX ingress controller and kube-cert-manager, allowing a user to slap authentication onto any web application. NOTE: I am no OAuth expert, but I play one on TV. I may have some details mixed up… NOTE 2: OAuth image credit to Chris Messina.
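As a rough sketch of the end result (the hostnames, service names, and ports here are hypothetical), the NGINX ingress controller hands authentication off to oauth2_proxy through its external-auth annotations, so protecting an application is largely a matter of annotating its Ingress:

```yaml
# Illustrative only: hosts, service names, and ports are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Unauthenticated requests are checked against oauth2_proxy, and
    # redirected to its sign-in page (Google OAuth) when rejected.
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
    - host: prometheus.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus
                port:
                  number: 9090
```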
deploying and using kube-cert-manager with an NGINX Ingress Controller on Kubernetes

As Kubernetes has been used more and more over the past few years, aspects of it have gotten progressively easier: deploying a web application, creating a load-balancer ingress, creating an ingress controller, and so on. The manual processes have slowly disappeared. One piece of infrastructure that can be tedious to manage is Kubernetes TLS secrets. This post walks through automating Kubernetes TLS secrets for NGINX Ingress Controller HTTPS endpoints in Kubernetes, using LetsEncrypt and kube-cert-manager.
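As a rough sketch of what the automation produces (the host, secret name, and backend service below are placeholders, and the exact kube-cert-manager annotations should come from its documentation), the Ingress simply references a TLS secret that kube-cert-manager keeps populated with a LetsEncrypt certificate:

```yaml
# Illustrative only: host, secret name, and service are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  tls:
    - hosts:
        - myapp.example.com
      # kube-cert-manager is expected to create and renew this secret with a
      # LetsEncrypt certificate; the NGINX ingress controller serves HTTPS from it.
      secretName: myapp-example-com-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```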

deploying kubernetes 1.7.3 using Terraform and Ansible

In a previous post, I walked through an infrastructure deployment of a Kubernetes stack to AWS. I have come back to it a few times, attempting to clean up the documentation, clean up the process, and improve the accessibility of the overall project. This time, I wanted to modernize it to match the current major release of Kubernetes. But there were other reasons, too: modernizing it would let me explore some interesting topics like NGINX ingress controllers, Prometheus, OAuth integration, Istio, Helm, and Kubernetes Operators.

Kubernetes kubectl commands

A collection of kubectl commands that are handy.

~/.kube/config

Here's the Kubernetes config. Note that I believe minikube creates this on creation:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/USERNAME/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/USERNAME/.minikube/apiserver.crt
    client-key: /Users/USERNAME/.minikube/apiserver.key
```

commands

Get pods in wide view (shows what nodes they're running on): kubectl get pods --all-namespaces -o wide

Get Kubernetes node labels: kubectl get nodes --show-labels

Remove all pods from a node: kubectl drain ip-10-1-1-2.

another Terraform Ansible Kubernetes

Note - I have updated this for Kubernetes 1.7.x. This deploys Kubernetes, complete with an OpenVPN access point, a CFSSL x509 certificate generation service, a Weave CNI daemonset, and kube-dns, the Kubernetes internal DNS resolver. It is a two-part process: first, Terraform builds the AWS infrastructure, including VPC settings, IAM roles, security groups, instances, etc. Once the infrastructure is deployed, Ansible is then used to configure the system accordingly.
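As a rough sketch of the second half (the hosts groups and role names here are hypothetical, not the repository's actual layout), the Ansible run against the Terraform-provisioned instances might look like:

```yaml
# Illustrative playbook only: group names and roles are placeholders, not the
# actual roles shipped with the project.
- name: Configure Kubernetes nodes provisioned by Terraform
  hosts: kubernetes_nodes
  become: true
  roles:
    - docker          # container runtime
    - cfssl-certs     # pull x509 certificates from the CFSSL service
    - kubernetes      # kubelet, kube-* binaries, and systemd units

- name: Configure the OpenVPN access point
  hosts: openvpn
  become: true
  roles:
    - openvpn
```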