Learn about your options for deploying to a private cluster from a GitHub Action without exposing it to the Internet.
Introduction
When it comes to building and deploying code, who actually wants to maintain a CI/CD server for their projects? That’s where services like GitHub Actions come into their own – within a short period of time you can be building containers and publishing images to your private registry.
But what about when it comes to deploying to your servers? What if they are running on-premises or within a private VPC that you simply cannot expose to the Internet?
There are three options that come to mind:
1) Establish a full VPN between GitHub Actions and your private network
2) Use a GitHub self-hosted runner for the deployment steps
3) Establish a temporary tunnel for deployment purposes only.
If we want to move away from managing infrastructure, then building a full VPN solution with a product like OpenVPN or Wireguard is going to create management overhead for us. We also need to be certain that we are not going to make our whole private network accessible from GitHub’s network.
Self-hosted runners can make for a useful alternative. They work by scheduling one or more Actions jobs to run on servers that you enroll with the GitHub Actions control plane. You'll need to either install the runner agent on an existing server or provision a new one to act as a proxy. The risk is that you are enabling almost unbounded access to your private network.
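For comparison, a minimal job that targets a self-hosted runner only needs the runs-on field changed; the step contents below are placeholders rather than part of this walkthrough:

# Excerpt from a hypothetical .github/workflows/deploy.yml: the job is
# scheduled onto a runner enrolled inside the private network, so its
# steps can reach internal hosts directly.
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Deploy
        run: ./deploy.sh # placeholder for your own deployment script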
The third option is more fine-grained and easier to automate. It involves forwarding one or more local ports from within your private network or Kubernetes cluster to the public GitHub Actions runner. The only thing the runner will be able to do is authenticate and send requests to what you've chosen to expose to it.
Conceptual architecture
On the left hand side we have a private VPC running on AWS. This could also be an on-premises Kubernetes cluster for instance. It has no incoming traffic enabled, other than through a load balancer for port 8123 into our inlets server. The inlets server only exposes a control plane to inlets clients. It has authentication and TLS encryption enabled.
On the right hand side, GitHub Actions needs a URL to deploy to OpenFaaS. It cannot access the OpenFaaS gateway running inside our local, private network, so we establish an inlets tunnel and forward the gateway service from the private network to localhost. It'll only be available for the GitHub Action at this point.
The inlets client binds the remote OpenFaaS Gateway to:
http://127.0.0.1:8080
within the GitHub Actions runner, but does not expose it anywhere on the Internet.
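Later steps in the job can then treat that local address as if it were the gateway itself. As a rough sketch, assuming faas-cli is installed on the runner and the gateway password is available as a secret named OPENFAAS_PASSWORD (both assumptions, not part of the original setup):

# Log in to the forwarded gateway and deploy; 127.0.0.1:8080 only exists
# inside this runner, courtesy of the inlets client.
echo $OPENFAAS_PASSWORD | faas-cli login --gateway http://127.0.0.1:8080 --password-stdin
faas-cli deploy --gateway http://127.0.0.1:8080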
Example command for the server:
export SERVER_IP=$(curl -sfSL https://checkip.amazonaws.com)
export SERVER_TOKEN=$(head -c 16 /dev/urandom | shasum | cut -d ' ' -f 1)
inlets-pro tcp server \
  --auto-tls-san $SERVER_IP \
  --token $SERVER_TOKEN \
  --client-forwarding
You can deploy the inlets server through a Kubernetes YAML manifest and place it alongside your OpenFaaS containers.
Create a LoadBalancer service, and wait until you have its public IP address:
cat <<EOF > inlets-forwarding-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: inlets-forwarding-server
  namespace: openfaas
spec:
  selector:
    app: inlets-forwarding-server
  ports:
    - name: https
      protocol: TCP
      port: 8123
      targetPort: 8123
      nodePort: 32007
  type: LoadBalancer
---
EOF
kubectl apply -f inlets-forwarding-svc.yaml
Next, create a Deployment for the inlets server:
# Populate with the IP of the LoadBalancer
# You can get the IP of the LoadBalancer by running:
# kubectl get svc -n openfaas inlets-forwarding-server
export SERVER_IP=$(kubectl get svc -n openfaas inlets-forwarding-server -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
export SERVER_TOKEN=$(head -c 16 /dev/urandom | shasum | cut -d ' ' -f 1)
# Capture the token for later use.
echo $SERVER_TOKEN > server-token.txt
cat <<EOF > inlets-forwarding-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-forwarding-server
  namespace: openfaas
  labels:
    app: inlets-forwarding-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-forwarding-server
  template:
    metadata:
      labels:
        app: inlets-forwarding-server
    spec:
      containers:
        - name: inlets-forwarding-server
          image: ghcr.io/inlets/inlets-pro:0.9.1
          imagePullPolicy: IfNotPresent
          command: ["inlets-pro"]
          args:
            - "tcp"
            - "server"
            - "--auto-tls-san=$SERVER_IP"
            - "--token=$SERVER_TOKEN"
            - "--client-forwarding"
          ports:
            - containerPort: 8123
---
EOF
kubectl apply -f inlets-forwarding-deploy.yaml
Note that only port 8123 needs to be exposed. That port serves the inlets control plane, which requires the token and is encrypted with TLS, so nobody will be able to access any other services within your private network.
Check the pod's logs with: kubectl logs deployment.apps/inlets-forwarding-server.
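The workflow will also need the token captured earlier in server-token.txt. If you use the GitHub CLI, one way to make it available to the Action is to store it as a repository secret (the secret name SERVER_TOKEN here is just an example):

# Requires the gh CLI, authenticated against the repository.
gh secret set SERVER_TOKEN < server-token.txt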
In the GitHub Action, you’ll run an inlets client at the beginning of the job or just before you need it.
This is the syntax for running an inlets client with forwarding enabled:
# Populate from previous step
export SERVER_IP=""    # IP of the LoadBalancer
export SERVER_TOKEN="" # contents of server-token.txt
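As a sketch of the rest of that step: the --url and --token flags below are the standard way to connect an inlets-pro tcp client to its server, while the --local flag and its local_port:remote_host:remote_port format are assumptions for the client-forwarding feature, so confirm them against inlets-pro tcp client --help before relying on them:

# Sketch only: connect to the control plane on port 8123 and forward the
# remote OpenFaaS gateway to 127.0.0.1:8080 on the runner. Run it in the
# background so later steps in the job can use the tunnel.
inlets-pro tcp client \
  --url wss://$SERVER_IP:8123 \
  --token $SERVER_TOKEN \
  --local 8080:gateway.openfaas:8080 &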