| author    | 2017-04-19 17:42:45 -0400 |
|-----------|---------------------------|
| committer | 2017-04-19 22:42:45 +0100 |
| commit    | 8fc7ec776dfecf29ff34481e2e826a02968419a1 (patch) |
| tree      | 8658e91e01a7d68a3c108969e8bb029560a15fb2 |
| parent    | 1c53d4130e83aecdbd06d40ed3daaf90e7e26a03 (diff) |
Update the various Kubernetes middleware README files. (#630)
| -rw-r--r-- | middleware/kubernetes/DEV-README.md | 144 |
| -rw-r--r-- | middleware/kubernetes/README.md     |   7 |
| -rw-r--r-- | middleware/kubernetes/SkyDNS.md     |  44 |

3 files changed, 21 insertions(+), 174 deletions(-)
diff --git a/middleware/kubernetes/DEV-README.md b/middleware/kubernetes/DEV-README.md
index be937e27b..4f652b578 100644
--- a/middleware/kubernetes/DEV-README.md
+++ b/middleware/kubernetes/DEV-README.md
@@ -2,140 +2,32 @@
 
 ## Launch Kubernetes
 
-Kubernetes is launched using the commands in the `.travis/kubernetes/00_run_k8s.sh` script.
+To run the tests, you'll need a private, live Kubernetes cluster. If you don't have one,
+you can try out [minikube](https://github.com/kubernetes/minikube), which is
+also available via Homebrew for OS X users.
 
-## Configure kubectl and Test
+## Configure Test Data
 
-The kubernetes control client can be downloaded from the generic URL:
-`http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}`
+The test data is all in [this manifest](https://github.com/coredns/coredns/blob/master/.travis/kubernetes/dns-test.yaml)
+and you can load it with `kubectl apply -f`. It will create a couple of namespaces and some services.
+For the tests to pass, you should not create anything else in the cluster.
 
-For example, the kubectl client for Linux can be downloaded using the command:
-`curl -sSL "http://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl"`
+## Proxy the API Server
 
-The `contrib/kubernetes/testscripts/10_setup_kubectl.sh` script can be stored in the same directory as
-kubectl to setup kubectl to communicate with kubernetes running on the localhost.
+Assuming your Kubernetes API server isn't running on http://localhost:8080, you will need to proxy from that
+port to your cluster. You can do this with `kubectl proxy --port 8080`.
-## Launch a kubernetes service and expose the service
+## Run CoreDNS Kubernetes Tests
 
-The following commands will create a kubernetes namespace "demo",
-launch an nginx service in the namespace, and expose the service on port 80:
+Now you can run the tests locally, for example:
 
 ~~~
-$ ./kubectl create namespace demo
-$ ./kubectl get namespace
-
-$ ./kubectl run mynginx --namespace=demo --image=nginx
-$ ./kubectl get deployment --namespace=demo
-
-$ ./kubectl expose deployment mynginx --namespace=demo --port=80
-$ ./kubectl get service --namespace=demo
-~~~
-
-The script `.travis/kubernetes/20_setup_k8s_services.sh` creates a couple of sample namespaces
-with services running in those namespaces. The automated kubernetes integration tests in
-`test/kubernetes_test.go` depend on these services and namespaces to exist in kubernetes.
-
-
-## Launch CoreDNS
-
-Build CoreDNS and launch using this configuration file:
-
-~~~ txt
-# Serve on port 53
-.:53 {
-    kubernetes coredns.local {
-        resyncperiod 5m
-        endpoint http://localhost:8080
-        namespaces demo
-        # Only expose the records for kubernetes objects
-        # that matches this label selector.
-        # See http://kubernetes.io/docs/user-guide/labels/
-        # Example selector below only exposes objects tagged as
-        # "application=nginx" in the staging or qa environments.
-        #labels environment in (staging, qa),application=nginx
-    }
-    #cache 180 coredns.local # optionally enable caching
-}
-~~~
-
-Put it in `~/k8sCorefile` for instance. This configuration file sets up CoreDNS to use the zone
-`coredns.local` for the kubernetes services.
-
-The command to launch CoreDNS is:
-
-~~~
-$ ./coredns -conf ~/k8sCorefile
-~~~
-
-In a separate terminal a DNS query can be issued using dig:
-
-~~~
-$ dig @localhost mynginx.demo.coredns.local
-
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47614
-;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
-
-;; OPT PSEUDOSECTION:
-; EDNS: version: 0, flags:; udp: 4096
-;; QUESTION SECTION:
-;mynginx.demo.coredns.local. IN A
-
-;; ANSWER SECTION:
-mynginx.demo.coredns.local. 0 IN A 10.0.0.10
-
-;; Query time: 2 msec
-;; SERVER: ::1#53(::1)
-;; WHEN: Thu Jun 02 11:07:18 PDT 2016
-;; MSG SIZE rcvd: 71
+$ cd $GOPATH/src/github.com/coredns/coredns/test
+$ go test -v -tags k8s
 ~~~
 
 # Implementation Notes/Ideas
 
-## Internal IP or External IP?
-* Should the Corefile configuration allow control over whether the internal IP or external IP is exposed?
-* If the Corefile configuration allows control over internal IP or external IP, then the config should allow users to control the precedence.
-
-For example a service "myservice" running in namespace "mynamespace" with internal IP "10.0.0.100" and external IP "1.2.3.4".
-
-This example could be published as:
-
-| Corefile directive           | Result              |
-|------------------------------|---------------------|
-| iporder = internal           | 10.0.0.100          |
-| iporder = external           | 1.2.3.4             |
-| iporder = external, internal | 10.0.0.100, 1.2.3.4 |
-| iporder = internal, external | 1.2.3.4, 10.0.0.100 |
-| _no directive_               | 10.0.0.100, 1.2.3.4 |
-
-
-## TODO
-* SkyDNS compatibility/equivalency:
-    * Kubernetes packaging and execution
-        * Automate packaging to allow executing in Kubernetes. That is, add Docker
-          container build as target in Makefile. Also include anything else needed
-          to simplify launch as the k8s DNS service.
-          Note: Dockerfile already exists in coredns repo to build the docker image.
-          This work item should identify how to pass configuration and run as a SkyDNS
-          replacement.
-    * Identify any kubernetes changes necessary to use coredns as k8s DNS server. That is,
-      how do we consume the "--cluster-dns=" and "--cluster-domain=" arguments.
-    * Work out how to pass CoreDNS configuration via kubectl command line and yaml
-      service definition file.
-    * Ensure that resolver in each kubernetes container is configured to use
-      coredns instance.
-    * Update kubernetes middleware documentation to describe running CoreDNS as a
-      SkyDNS replacement. (Include descriptions of different ways to pass CoreFile
-      to coredns command.)
-    * Remove dependency on healthz for health checking in
-      `kubernetes-rc.yaml` file.
-    * Functional work
-        * Calculate SRV priority based on number of instances running.
-          (See SkyDNS README.md)
-    * Performance
-        * Improve lookup to reduce size of query result obtained from k8s API.
-          (namespace-based?, other ideas?)
-        * reduce cache size by caching data into custom structs, instead of caching whole API objects
-        * add (and use) indexes on the caches that support indexing
 * Additional features:
     * Implement IP selection and ordering (internal/external). Related to
       wildcards and SkyDNS use of CNAMES.
@@ -144,16 +36,8 @@ This example could be published as:
     * Do we need to generate synthetic zone records for namespaces?
     * Do we need to generate synthetic zone records for the skydns synthetic zones?
 * Test cases
-    * Implement test cases for SkyDNS equivalent functionality.
-    * Add test cases for lables based filtering
     * Test with CoreDNS caching. CoreDNS caching for DNS response is working
       using the `cache` directive. Tested working using 20s cache timeout and A-record queries.
       Automate testing with cache in place.
     * Automate CoreDNS performance tests. Initially for zone files, and for
       pre-loaded k8s API cache. With and without CoreDNS response caching.
-    * Try to get rid of kubernetes launch scripts by moving operations into
-      .travis.yml file.
-    * Find root cause of timing condition that results in no data returned to
-      test client when running k8s integration tests. Current work-around is a
-      nasty hack of waiting 5 seconds after setting up test server before performing
-      client calls. (See hack in test/kubernetes_test.go)
diff --git a/middleware/kubernetes/README.md b/middleware/kubernetes/README.md
index 9232e4756..323dc9197 100644
--- a/middleware/kubernetes/README.md
+++ b/middleware/kubernetes/README.md
@@ -108,6 +108,13 @@ kubernetes coredns.local {
 
     # cidrs 10.0.0.0/24 10.0.10.0/25
 
+    # fallthrough
+    #
+    # If a query for a record in the cluster zone results in NXDOMAIN,
+    # normally that is what the response will be. However, if you specify
+    # this option, the query will instead be passed on down the middleware
+    # chain, which can include another middleware to handle the query.
+    fallthrough
 }
 ```
diff --git a/middleware/kubernetes/SkyDNS.md b/middleware/kubernetes/SkyDNS.md
deleted file mode 100644
index 02fbc8115..000000000
--- a/middleware/kubernetes/SkyDNS.md
+++ /dev/null
@@ -1,44 +0,0 @@
-## DNS Schema
-
-Notes about the SkyDNS record naming scheme. (Copied from SkyDNS project README for reference while
-hacking on the k8s middleware.)
-
-### Services
-
-#### A Records
-
-"Normal" (not headless) Services are assigned a DNS A record for a name of the form `my-svc.my-namespace.svc.cluster.local.`
-This resolves to the cluster IP of the Service.
-
-"Headless" (without a cluster IP) Services are also assigned a DNS A record for a name of the form `my-svc.my-namespace.svc.cluster.local.`
-Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service.
-Clients are expected to consume the set or else use standard round-robin selection from the set.
-
-
-### Pods
-
-#### A Records
-
-When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local.`
-
-For example, a pod with ip `1.2.3.4` in the namespace default with a dns name of `cluster.local` would have
-an entry: `1-2-3-4.default.pod.cluster.local.`
-
-#### A Records and hostname Based on Pod Annotations - A Beta Feature in Kubernetes v1.2
-
-Currently when a pod is created, its hostname is the Pod's `metadata.name` value.
-With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be.
-If the annotation is specified, the annotation value takes precedence over the Pod's name, to be the hostname of the pod.
-For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name".
-
-v1.2 introduces a beta feature where the user can specify a Pod annotation, `pod.beta.kubernetes.io/subdomain`, to specify what the Pod's subdomain should be.
-If the annotation is specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>".
-For example, given a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", the pod will set its own FQDN as "foo.bar.my-namespace.svc.cluster.local".
-
-If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server will also return an A record for the Pod's fully qualified hostname.
-Given a Pod with the hostname annotation set to "foo" and the subdomain annotation set to "bar", and a headless Service named "bar" in the same namespace, the pod will see its own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS will serve an A record at that name, pointing to the Pod's IP.
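The pod record naming rule described above (pod IP with dots replaced by dashes, under `<namespace>.pod.<zone>`) is mechanical enough to sketch. `podARecord` is a hypothetical helper name, not a function in the middleware:

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord builds the SkyDNS-style pod A-record owner name described
// above: the pod IP with dots replaced by dashes, placed under
// <namespace>.pod.<zone>.
func podARecord(podIP, namespace, zone string) string {
	return fmt.Sprintf("%s.%s.pod.%s.",
		strings.Replace(podIP, ".", "-", -1), namespace, zone)
}

func main() {
	fmt.Println(podARecord("1.2.3.4", "default", "cluster.local"))
	// 1-2-3-4.default.pod.cluster.local.
}
```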
-
-With v1.2, the Endpoints object also has a new annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'.
-If the Endpoints are for a headless service, then A records will be created with the format <hostname>.<service name>.<pod namespace>.svc.<cluster domain>
-For the example json, if endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", then an A record will be created with the name "my-webserver.bar.my-namespace.svc.cluster.local" and the A record lookup would return "10.245.1.6".
-This endpoints annotation generally does not need to be specified by end-users, but can be used by the internal service controller to deliver the aforementioned feature.
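Consuming that `hostnames-map` annotation could be sketched as follows, assuming the annotation value is strict JSON (the example above writes the key unquoted for brevity). `hostRecord` and `endpointNames` are invented names for illustration, not middleware API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hostRecord mirrors the value side of the hostnames-map annotation.
type hostRecord struct {
	HostName string
}

// endpointNames parses the endpoints.beta.kubernetes.io/hostnames-map
// annotation and returns a map of A-record owner name -> endpoint IP,
// using the <hostname>.<service>.<namespace>.svc.<zone> format above.
func endpointNames(annotation, service, namespace, zone string) (map[string]string, error) {
	var m map[string]hostRecord
	if err := json.Unmarshal([]byte(annotation), &m); err != nil {
		return nil, err
	}
	out := make(map[string]string)
	for ip, rec := range m {
		name := fmt.Sprintf("%s.%s.%s.svc.%s.", rec.HostName, service, namespace, zone)
		out[name] = ip
	}
	return out, nil
}

func main() {
	names, err := endpointNames(`{"10.245.1.6":{"HostName":"my-webserver"}}`,
		"bar", "my-namespace", "cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(names["my-webserver.bar.my-namespace.svc.cluster.local."])
	// 10.245.1.6
}
```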