Configuring plugin dependencies while building an application is a common approach in programming today. For example, Java-based applications download plugins at build time and use them at runtime. A few plugins, such as database drivers and the AWS SDK, reach external endpoints to establish connections.
When an application fails to execute these plugins due to connection errors, the first thing to check in the workflow is DNS:
Application (reaches out to an endpoint) --> DNS configuration (/etc/resolv.conf file) --> app tries to reach the nameserver from the DNS config --> nameserver responds with the IP address of the domain name
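To make the second step of this flow concrete, below is a minimal sketch of how a resolver library reads /etc/resolv.conf to discover the nameserver, search domains, and options. The sample content and the parse_resolv_conf helper are hypothetical illustrations; the file format itself is standard.

```python
# Hypothetical sketch: parsing /etc/resolv.conf the way a resolver
# library does. The sample content mirrors what a pod typically sees.
sample = """\
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
"""

def parse_resolv_conf(text):
    config = {"nameservers": [], "search": [], "options": {}}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue  # skip blank lines and comments
        if parts[0] == "nameserver":
            config["nameservers"].append(parts[1])
        elif parts[0] == "search":
            config["search"] = parts[1:]
        elif parts[0] == "options":
            for opt in parts[1:]:
                key, _, val = opt.partition(":")
                config["options"][key] = int(val) if val else True
    return config

print(parse_resolv_conf(sample))
```

The application sends its DNS queries to the first "nameserver" entry, which is why that line is the starting point for any DNS troubleshooting.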
When an application is deployed on Kubernetes, the nameserver IP address is the ClusterIP of the CoreDNS Service (named kube-dns) configured as a Service object on the cluster. You can validate this using the following commands.
- Get coredns Service IP
kubectl get service kube-dns -n kube-system

Output:
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   104d
- Validate /etc/resolv.conf file on a container
kubectl create deploy test-pod1 --image nginx -- sh -c "cat /etc/resolv.conf" && kubectl logs deploy/test-pod1

Output:
deployment.apps/test-pod1 created
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local us-west-2.compute.internal
options ndots:5
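The search and options ndots:5 lines above control how short names are expanded before being sent to the nameserver: a name with fewer than five dots is first tried with each search domain appended, and only then as-is. The candidate_queries helper below is a hypothetical sketch of that glibc-style expansion, not the actual resolver code; it explains why CoreDNS logs often show cluster-suffixed queries for external names.

```python
def candidate_queries(name, search_domains, ndots=5):
    """Hypothetical sketch of glibc-style search-list expansion."""
    if name.endswith("."):
        return [name]  # absolute name: sent as-is, search list skipped
    expanded = [f"{name}.{domain}." for domain in search_domains]
    as_is = name + "."
    if name.count(".") < ndots:
        # Fewer dots than ndots: search domains first, bare name last.
        return expanded + [as_is]
    return [as_is] + expanded

search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "us-west-2.compute.internal"]
for q in candidate_queries("logs.us-west-2.amazonaws.com", search):
    print(q)
```

An external name like logs.us-west-2.amazonaws.com has only three dots, so with ndots:5 it is first looked up under every cluster search domain (each returning NXDOMAIN) before the bare name succeeds.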
Enable logging configuration for CoreDNS
CoreDNS stores its configuration in a ConfigMap named "coredns", under a key named "Corefile". The default configuration that ships with Amazon EKS enables a few plugins in the Corefile, such as errors and health:
- The errors plugin logs errors returned by CoreDNS. Example: an error reaching an upstream DNS server.
- The health plugin lets CoreDNS expose a health endpoint at /health on port 8080.
- Below is the default ConfigMap for CoreDNS from an Amazon EKS cluster.

$ kubectl get configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
To enable logging for all DNS queries, we will add the log plugin to the list of existing plugins above, i.e., directly under the line .:53 {
$ kubectl edit configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
log
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
After the above configuration change, the CoreDNS pods will automatically pick up the new Corefile; the reload plugin in the configuration above enables this behavior.
Validate CoreDNS pod logs for DNS queries
$ kubectl logs deploy/coredns -n kube-system

[INFO] 192.168.7.31:51714 - 13221 "AAAA IN logs.us-west-2.amazonaws.com. udp 46 false 512" NOERROR qr,aa,rd,ra 142 0.000058115s
[INFO] 192.168.77.98:42067 - 58243 "AAAA IN logs.us-west-2.amazonaws.com.amazon-cloudwatch.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000190081s
[INFO] 192.168.77.98:53695 - 1299 "A IN logs.us-west-2.amazonaws.com.amazon-cloudwatch.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000206089s
If your application pods and Kubernetes cluster are not configured for IPv6, you can ignore the log lines with "AAAA" (which represents IPv6 record queries).
We can now trace DNS failures for an application pod by correlating the timestamps in the CoreDNS pod logs above with the time of the failure.
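When there are many log lines to sift through, parsing them programmatically can help. The sketch below matches the default log-plugin line format shown above (client, query ID, type, class, name, response code, flags, size, duration) with a regular expression; the parse_log_line helper is a hypothetical illustration that assumes that default format.

```python
import re

# Fields follow the CoreDNS log plugin's default line layout:
# [LEVEL] client:port - id "type class name proto size do bufsize"
#         rcode flags rsize duration
LOG_RE = re.compile(
    r'\[(?P<level>\w+)\] (?P<client>[\d.]+):(?P<port>\d+) - (?P<qid>\d+) '
    r'"(?P<qtype>\S+) (?P<qclass>\S+) (?P<qname>\S+) .*?" '
    r'(?P<rcode>\S+) (?P<flags>\S+) (?P<rsize>\d+) (?P<duration>\S+)'
)

def parse_log_line(line):
    """Return the parsed fields of a CoreDNS query log line, or None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

line = ('[INFO] 192.168.77.98:42067 - 58243 "AAAA IN '
        'logs.us-west-2.amazonaws.com.amazon-cloudwatch.svc.cluster.local. '
        'udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000190081s')
rec = parse_log_line(line)
print(rec["qtype"], rec["qname"], rec["rcode"])
```

Filtering the parsed records on rcode (e.g., SERVFAIL or NXDOMAIN for names that should resolve) quickly narrows down the failing queries.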
I hope this article is helpful. Please add your comments and viewpoints so I can improve this and future articles.