Issue Solved
I'm trying to use the ngrok ingress controller to expose my Node.js deployment.
I followed the guide above to integrate the ngrok ingress controller with my deployment and expose it. However, I ran into an issue where the liveness and readiness probes fail, and after a while the pod status shows CrashLoopBackOff. This happens after I run the helm install command from the ngrok guide. I'm not sure whether it's a firewall issue or something else.
The following shows the error output, along with my manifest.yaml file:
$ kubectl get all -n pi-deploy
NAME                                                                   READY   STATUS    RESTARTS      AGE
pod/website-deploy-0                                                   1/1     Running   0             46m
pod/ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z    0/1     Running   1 (59s ago)   2m1s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/website-deploy   ClusterIP   10.43.1.91   <none>        80/TCP    46m

NAME                                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ngrok-ingress-controller-kubernetes-ingress-controller-manager   0/1   1     0     2m1s

NAME                                                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/ngrok-ingress-controller-kubernetes-ingress-controller-manager-7c6f4cf67d   1         1         0       2m1s

NAME                              READY   AGE
statefulset.apps/website-deploy   1/1     46m
$ kubectl get all -n pi-deploy
NAME                                                                   READY   STATUS    RESTARTS      AGE
pod/website-deploy-0                                                   1/1     Running   0             47m
pod/ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z    0/1     Running   3 (15s ago)   3m17s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/website-deploy   ClusterIP   10.43.1.91   <none>        80/TCP    47m

NAME                                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ngrok-ingress-controller-kubernetes-ingress-controller-manager   0/1   1     0     3m17s

NAME                                                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/ngrok-ingress-controller-kubernetes-ingress-controller-manager-7c6f4cf67d   1         1         0       3m18s

NAME                              READY   AGE
statefulset.apps/website-deploy   1/1     47m
$ kubectl describe pod/ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z -n pi-deploy
Name:             ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z
Namespace:        pi-deploy
Priority:         0
Service Account:  ngrok-ingress-controller-kubernetes-ingress-controller
Node:             knode1/192.168.86.31
Start Time:       Sun, 23 Jul 2023 23:17:47 +0800
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ngrok-ingress-controller
                  app.kubernetes.io/name=kubernetes-ingress-controller
                  pod-template-hash=7c6f4cf67d
Annotations:      prometheus.io/path: /metrics
                  prometheus.io/port: 8080
                  prometheus.io/scrape: true
Status:           Running
IP:               10.42.1.91
IPs:
  IP:           10.42.1.91
Controlled By:  ReplicaSet/ngrok-ingress-controller-kubernetes-ingress-controller-manager-7c6f4cf67d
Containers:
  ngrok-ingress-controller:
    Container ID:  containerd://e1affe57cc4756ed15e417e712c0814fdf29ef7139903162613fba1736aa1d77
    Image:         docker.io/ngrok/kubernetes-ingress-controller:0.8.0
    Image ID:      docker.io/ngrok/kubernetes-ingress-controller@sha256:ca189da6c28d02a6480ad8dbb615df7609fa08445aebf026ff1e6a46ec267540
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --controller-name=k8s.ngrok.com/ingress-controller
      --zap-log-level=info
      --zap-stacktrace-level=error
      --zap-encoder=json
      --health-probe-bind-address=:8081
      --metrics-bind-address=:8080
      --election-id=ngrok-ingress-controller-kubernetes-ingress-controller-leader
    State:          Running
      Started:      Sun, 23 Jul 2023 23:20:49 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sun, 23 Jul 2023 23:19:49 +0800
      Finished:     Sun, 23 Jul 2023 23:20:49 +0800
    Ready:          False
    Restart Count:  3
    Liveness:       http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:      http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NGROK_API_KEY:    Optional: false
      NGROK_AUTHTOKEN:  Optional: false
      POD_NAMESPACE:    pi-deploy (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgng5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-jgng5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m23s                  default-scheduler  Successfully assigned pi-deploy/ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z to knode1
  Normal   Killing    2m22s                  kubelet            Container ngrok-ingress-controller failed liveness probe, will be restarted
  Warning  Unhealthy  2m21s                  kubelet            Readiness probe failed: Get "http://10.42.1.91:8081/readyz": read tcp 10.42.1.1:59128->10.42.1.91:8081: read: connection reset by peer
  Normal   Pulled     2m21s (x2 over 3m22s)  kubelet            Container image "docker.io/ngrok/kubernetes-ingress-controller:0.8.0" already present on machine
  Normal   Created    2m21s (x2 over 3m22s)  kubelet            Created container ngrok-ingress-controller
  Normal   Started    2m21s (x2 over 3m21s)  kubelet            Started container ngrok-ingress-controller
  Warning  Unhealthy  102s (x5 over 3m2s)    kubelet            Liveness probe failed: Get "http://10.42.1.91:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  92s (x12 over 3m12s)   kubelet            Readiness probe failed: Get "http://10.42.1.91:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
$ kubectl logs pod/ngrok-ingress-controller-kubernetes-ingress-controller-man4vf2z -n pi-deploy
{"level":"info","ts":"2023-07-23T15:21:49Z","logger":"setup","msg":"starting manager","version":"0.8.0","commit":"2d38bc7e6cbc1e413f6d75a7e15e781242c7674f"}
{"level":"info","ts":"2023-07-23T15:21:50Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2023-07-23T15:21:50Z","logger":"setup","msg":"found matching ingress","ingress-name":"website-deploy-ingress","ingress-namespace":"pi-deploy"}
This is my manifest.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: website-deploy
  namespace: pi-deploy
spec:
  # type: LoadBalancer
  selector:
    app: website-deploy
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: website-deploy
  namespace: pi-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website-deploy
  template:
    metadata:
      labels:
        app: website-deploy
    spec:
      restartPolicy: Always
      containers:
        - name: backend
          image: cyrof/pi_website_docker:master
          ports:
            - name: http
              containerPort: 80
          envFrom:
            - secretRef:
                name: pi-env
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /health-check
              port: 8080
            initialDelaySeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health-check
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health-check
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
      dnsPolicy: None
      dnsConfig:
        nameservers: ["8.8.8.8"]
        searches:
          - default.svc.cluster.local
          - svc.cluster.local
          - cluster.local
        options:
          - name: ndots
            value: "2"
---
# ngrok Ingress Controller Configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-deploy-ingress
  namespace: pi-deploy
spec:
  ingressClassName: ngrok
  rules:
    - host: 9f3122c8f71c-10425669031728034697.ngrok-free.app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website-deploy
                port:
                  number: 80
I tried increasing the timeouts for the liveness and readiness probes, but to no avail. I'm really in a pinch here on how to go about it. This is the helm install command I ran:
helm install ngrok-ingress-controller ngrok/kubernetes-ingress-controller \
  --namespace ngrok-ingress-controller \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN \
  --set tunner.timeout=60s
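For reference, the credentials flags can equivalently be kept in a values file and passed with `helm install ... -f values.yaml`. A minimal sketch using only the keys that appear in the command above (key names are taken from the command, not verified against the chart's own values.yaml):

```yaml
# values.yaml -- mirrors the --set credentials flags from the helm command above
credentials:
  apiKey: "YOUR_NGROK_API_KEY"       # same value as $NGROK_API_KEY
  authtoken: "YOUR_NGROK_AUTHTOKEN"  # same value as $NGROK_AUTHTOKEN
```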
Update
I think it's a DNS issue. When I remove this part of the manifest, my website-deploy pod gives the same error as the ingress controller. Is there a way to make this configuration global instead of specifying it on each pod?
dnsPolicy: None
dnsConfig:
  nameservers: ["8.8.8.8"]
  searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
  options:
    - name: ndots
      value: "2"
Solved
I managed to fix the issue by configuring CoreDNS to use 8.8.8.8 as the upstream nameserver instead of the default, which fixed my error. Thanks everyone for the help.
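For anyone hitting the same problem, the change amounts to editing the `coredns` ConfigMap in `kube-system` (e.g. `kubectl -n kube-system edit configmap coredns`) and pointing the `forward` plugin at 8.8.8.8 instead of the default upstream (usually the node's /etc/resolv.conf). The sketch below is illustrative only; the exact set of plugins in the Corefile varies by Kubernetes distribution, so adjust the existing `forward` line rather than replacing the whole file:

```yaml
# kube-system/coredns ConfigMap -- illustrative sketch, not a drop-in file
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        # forward non-cluster queries to Google DNS instead of the
        # default upstream (typically the node's /etc/resolv.conf)
        forward . 8.8.8.8
        cache 30
        loop
        reload
    }
```

After updating the ConfigMap, restarting the CoreDNS pods (`kubectl -n kube-system rollout restart deployment coredns`) makes the change take effect cluster-wide, so the per-pod `dnsPolicy`/`dnsConfig` block is no longer needed.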