beam-commits mailing list archives

From lgaj...@apache.org
Subject [beam] branch master updated: [BEAM-6822] Add Kafka Kubernetes cluster config files and setup script (#8140)
Date Wed, 03 Apr 2019 09:24:14 GMT
This is an automated email from the ASF dual-hosted git repository.

lgajowy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/master by this push:
     new 0532e2a  [BEAM-6822] Add Kafka Kubernetes cluster config files and setup script (#8140)
0532e2a is described below

commit 0532e2a0e9544a63eca2cb38740f04344c1cea03
Author: Michal Walenia <32354134+mwalenia@users.noreply.github.com>
AuthorDate: Wed Apr 3 11:24:04 2019 +0200

    [BEAM-6822] Add Kafka Kubernetes cluster config files and setup script (#8140)
---
 .../kubernetes/kafka-cluster/00-namespace.yml      |  19 ++
 .../01-configure/gke-storageclass-broker-pd.yml    |  24 ++
 .../gke-storageclass-zookeeper-ssd.yml             |  24 ++
 .../02-rbac-namespace-default/node-reader.yml      |  45 ++++
 .../02-rbac-namespace-default/pod-labler.yml       |  48 ++++
 .../03-zookeeper/10zookeeper-config.yml            |  56 +++++
 .../kafka-cluster/03-zookeeper/20pzoo-service.yml  |  30 +++
 .../kafka-cluster/03-zookeeper/30service.yml       |  26 ++
 .../kafka-cluster/03-zookeeper/50pzoo.yml          | 101 ++++++++
 .../04-outside-services/outside-0.yml              |  30 +++
 .../04-outside-services/outside-1.yml              |  30 +++
 .../04-outside-services/outside-2.yml              |  30 +++
 .../kafka-cluster/05-kafka/10broker-config.yml     | 275 +++++++++++++++++++++
 .../kubernetes/kafka-cluster/05-kafka/20dns.yml    |  27 ++
 .../kafka-cluster/05-kafka/30bootstrap-service.yml |  25 ++
 .../kubernetes/kafka-cluster/05-kafka/50kafka.yml  | 120 +++++++++
 .../kafka-cluster/05-kafka/configmap-config.yaml   |  38 +++
 .../kafka-cluster/05-kafka/job-config.yaml         |  40 +++
 .test-infra/kubernetes/kafka-cluster/README.md     |  30 +++
 .../kubernetes/kafka-cluster/setup-cluster.sh      |  18 ++
 20 files changed, 1036 insertions(+)
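The directory prefixes (00-namespace through 05-kafka) encode the order in which the manifests must be applied: namespace first, then storage classes, RBAC, ZooKeeper, the external services, and finally the brokers. The contents of setup-cluster.sh are not shown in this excerpt; the sketch below is a hypothetical illustration of that ordering, with the `apply_in_order` name and the `echo` (instead of actually invoking kubectl) being this sketch's own choices:

```shell
#!/bin/bash
# Hypothetical sketch: apply the kafka-cluster manifests in their
# numbered order. Echoes the commands instead of running kubectl,
# since the real setup-cluster.sh contents are not shown here.
apply_in_order() {
  local dir
  for dir in 00-namespace 01-configure 02-rbac-namespace-default \
             03-zookeeper 04-outside-services 05-kafka; do
    echo "kubectl apply -R -f .test-infra/kubernetes/kafka-cluster/${dir}/"
  done
}
apply_in_order
```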

diff --git a/.test-infra/kubernetes/kafka-cluster/00-namespace.yml b/.test-infra/kubernetes/kafka-cluster/00-namespace.yml
new file mode 100644
index 0000000..5f9c317
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/00-namespace.yml
@@ -0,0 +1,19 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: kafka
diff --git a/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-broker-pd.yml b/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-broker-pd.yml
new file mode 100644
index 0000000..2d12582
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-broker-pd.yml
@@ -0,0 +1,24 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: kafka-broker
+provisioner: kubernetes.io/gce-pd
+reclaimPolicy: Retain
+allowVolumeExpansion: true
+parameters:
+  type: pd-standard
diff --git a/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-zookeeper-ssd.yml b/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-zookeeper-ssd.yml
new file mode 100644
index 0000000..380fc95
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/01-configure/gke-storageclass-zookeeper-ssd.yml
@@ -0,0 +1,24 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: kafka-zookeeper
+provisioner: kubernetes.io/gce-pd
+reclaimPolicy: Retain
+allowVolumeExpansion: true
+parameters:
+  type: pd-ssd
diff --git a/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/node-reader.yml b/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/node-reader.yml
new file mode 100644
index 0000000..1fed01f
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/node-reader.yml
@@ -0,0 +1,45 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: node-reader
+  labels:
+    origin: github.com_Yolean_kubernetes-kafka
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - nodes
+  - services
+  verbs:
+  - get
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: kafka-node-reader
+  labels:
+    origin: github.com_Yolean_kubernetes-kafka
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: node-reader
+subjects:
+- kind: ServiceAccount
+  name: default
+  namespace: kafka
diff --git a/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/pod-labler.yml b/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/pod-labler.yml
new file mode 100644
index 0000000..28f8fae
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/02-rbac-namespace-default/pod-labler.yml
@@ -0,0 +1,48 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: pod-labler
+  namespace: kafka
+  labels:
+    origin: github.com_Yolean_kubernetes-kafka
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - pods
+  verbs:
+  - get
+  - update
+  - patch
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: kafka-pod-labler
+  namespace: kafka
+  labels:
+    origin: github.com_Yolean_kubernetes-kafka
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: pod-labler
+subjects:
+- kind: ServiceAccount
+  name: default
+  namespace: kafka
diff --git a/.test-infra/kubernetes/kafka-cluster/03-zookeeper/10zookeeper-config.yml b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/10zookeeper-config.yml
new file mode 100644
index 0000000..db75e1d
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/10zookeeper-config.yml
@@ -0,0 +1,56 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: ConfigMap
+metadata:
+  name: zookeeper-config
+  namespace: kafka
+apiVersion: v1
+data:
+  init.sh: |-
+    #!/bin/bash
+    set -e
+    set -x
+
+    [ -d /var/lib/zookeeper/data ] || mkdir /var/lib/zookeeper/data
+    [ -z "$ID_OFFSET" ] && ID_OFFSET=1
+    export ZOOKEEPER_SERVER_ID=$((${HOSTNAME##*-} + $ID_OFFSET))
+    echo "${ZOOKEEPER_SERVER_ID:-1}" | tee /var/lib/zookeeper/data/myid
+    cp -Lur /etc/kafka-configmap/* /etc/kafka/
+    sed -i "s/server\.$ZOOKEEPER_SERVER_ID\=[a-z0-9.-]*/server.$ZOOKEEPER_SERVER_ID=0.0.0.0/" /etc/kafka/zookeeper.properties
+
+  zookeeper.properties: |-
+    tickTime=2000
+    dataDir=/var/lib/zookeeper/data
+    dataLogDir=/var/lib/zookeeper/log
+    clientPort=2181
+    maxClientCnxns=1
+    initLimit=5
+    syncLimit=2
+    server.1=pzoo-0.pzoo:2888:3888:participant
+    server.2=pzoo-1.pzoo:2888:3888:participant
+    server.3=pzoo-2.pzoo:2888:3888:participant
+    server.4=zoo-0.zoo:2888:3888:participant
+    server.5=zoo-1.zoo:2888:3888:participant
+
+  log4j.properties: |-
+    log4j.rootLogger=INFO, stdout
+    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    # Suppress connection log messages, three lines per livenessProbe execution
+    log4j.logger.org.apache.zookeeper.server.NIOServerCnxnFactory=WARN
+    log4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN
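The init.sh in the ConfigMap above derives each ZooKeeper server's myid from the StatefulSet pod ordinal: the numeric suffix of the pod hostname plus ID_OFFSET (default 1). That is how pzoo-0 becomes server.1 and, with an offset of 4, how a zoo-1 pod would become server.5 in the ensemble list above. A standalone sketch of just that arithmetic (the `derive_server_id` helper name is this sketch's own, not part of the commit):

```shell
#!/bin/bash
# Standalone check of the myid derivation used in init.sh:
# strip everything up to the last '-' in the hostname, add ID_OFFSET.
derive_server_id() {
  local hostname="$1" id_offset="${2:-1}"
  echo $(( ${hostname##*-} + id_offset ))
}
derive_server_id pzoo-0    # prints 1
derive_server_id pzoo-2    # prints 3
derive_server_id zoo-1 4   # prints 5
```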
diff --git a/.test-infra/kubernetes/kafka-cluster/03-zookeeper/20pzoo-service.yml b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/20pzoo-service.yml
new file mode 100644
index 0000000..00cc81e
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/20pzoo-service.yml
@@ -0,0 +1,30 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: pzoo
+  namespace: kafka
+spec:
+  ports:
+  - port: 2888
+    name: peer
+  - port: 3888
+    name: leader-election
+  clusterIP: None
+  selector:
+    app: zookeeper
+    storage: persistent
diff --git a/.test-infra/kubernetes/kafka-cluster/03-zookeeper/30service.yml b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/30service.yml
new file mode 100644
index 0000000..08e7350
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/30service.yml
@@ -0,0 +1,26 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: zookeeper
+  namespace: kafka
+spec:
+  ports:
+  - port: 2181
+    name: client
+  selector:
+    app: zookeeper
diff --git a/.test-infra/kubernetes/kafka-cluster/03-zookeeper/50pzoo.yml b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/50pzoo.yml
new file mode 100644
index 0000000..cea4eb1
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/03-zookeeper/50pzoo.yml
@@ -0,0 +1,101 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: pzoo
+  namespace: kafka
+spec:
+  selector:
+    matchLabels:
+      app: zookeeper
+      storage: persistent
+  serviceName: "pzoo"
+  replicas: 3
+  updateStrategy:
+    type: RollingUpdate
+  podManagementPolicy: Parallel
+  template:
+    metadata:
+      labels:
+        app: zookeeper
+        storage: persistent
+      annotations:
+    spec:
+      terminationGracePeriodSeconds: 10
+      initContainers:
+      - name: init-config
+        image: solsson/kafka-initutils@sha256:2cdb90ea514194d541c7b869ac15d2d530ca64889f56e270161fe4e5c3d076ea
+        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
+        volumeMounts:
+        - name: configmap
+          mountPath: /etc/kafka-configmap
+        - name: config
+          mountPath: /etc/kafka
+        - name: data
+          mountPath: /var/lib/zookeeper
+      containers:
+      - name: zookeeper
+        image: solsson/kafka:2.1.1@sha256:8bc8242c649c395ab79d76cc83b1052e63b4efea7f83547bf11eb3ef5ea6f8e1
+        env:
+        - name: KAFKA_LOG4J_OPTS
+          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
+        command:
+        - ./bin/zookeeper-server-start.sh
+        - /etc/kafka/zookeeper.properties
+        lifecycle:
+          preStop:
+            exec:
+              command: ["sh", "-ce", "kill -s TERM 1; while $(kill -0 1 2>/dev/null); do sleep 1; done"]
+        ports:
+        - containerPort: 2181
+          name: client
+        - containerPort: 2888
+          name: peer
+        - containerPort: 3888
+          name: leader-election
+        resources:
+          requests:
+            cpu: 10m
+            memory: 100Mi
+          limits:
+            memory: 120Mi
+        readinessProbe:
+          exec:
+            command:
+            - /bin/sh
+            - -c
+            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
+        volumeMounts:
+        - name: config
+          mountPath: /etc/kafka
+        - name: data
+          mountPath: /var/lib/zookeeper
+      volumes:
+      - name: configmap
+        configMap:
+          name: zookeeper-config
+      - name: config
+        emptyDir: {}
+  volumeClaimTemplates:
+  - metadata:
+      name: data
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: kafka-zookeeper
+      resources:
+        requests:
+          storage: 1Gi
diff --git a/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-0.yml b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-0.yml
new file mode 100644
index 0000000..bbadf76
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-0.yml
@@ -0,0 +1,30 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: Service
+apiVersion: v1
+metadata:
+  name: outside-0
+  namespace: kafka
+spec:
+  selector:
+    app: kafka
+    kafka-broker-id: "0"
+  ports:
+  - protocol: TCP
+    targetPort: 9094
+    port: 32400
+    nodePort: 32400
+  type: LoadBalancer
diff --git a/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-1.yml b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-1.yml
new file mode 100644
index 0000000..ea5fc9d
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-1.yml
@@ -0,0 +1,30 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: Service
+apiVersion: v1
+metadata:
+  name: outside-1
+  namespace: kafka
+spec:
+  selector:
+    app: kafka
+    kafka-broker-id: "1"
+  ports:
+  - protocol: TCP
+    targetPort: 9094
+    port: 32401
+    nodePort: 32401
+  type: LoadBalancer
diff --git a/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-2.yml b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-2.yml
new file mode 100644
index 0000000..d7f1eac
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/04-outside-services/outside-2.yml
@@ -0,0 +1,30 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kind: Service
+apiVersion: v1
+metadata:
+  name: outside-2
+  namespace: kafka
+spec:
+  selector:
+    app: kafka
+    kafka-broker-id: "2"
+  ports:
+  - protocol: TCP
+    targetPort: 9094
+    port: 32402
+    nodePort: 32402
+  type: LoadBalancer
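The three outside-* services follow one convention: broker N's OUTSIDE listener on container port 9094 is exposed externally on port 3240N (32400, 32401, 32402). The broker init script later in this commit builds advertised.listeners from the same rule (`OUTSIDE_PORT=3240${KAFKA_BROKER_ID}`). A minimal sketch of that mapping, with `outside_port` being a name invented here for illustration:

```shell
#!/bin/bash
# Broker id -> external port, per the outside-* service convention:
# the port is the literal prefix 3240 with the broker id appended.
outside_port() {
  echo "3240$1"
}
outside_port 0   # prints 32400
outside_port 2   # prints 32402
```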
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/10broker-config.yml b/.test-infra/kubernetes/kafka-cluster/05-kafka/10broker-config.yml
new file mode 100644
index 0000000..27bc4e7
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/10broker-config.yml
@@ -0,0 +1,275 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+kind: ConfigMap
+metadata:
+  name: broker-config
+  namespace: kafka
+apiVersion: v1
+data:
+  init.sh: |-
+    #!/bin/bash
+    set -e
+    set -x
+    cp /etc/kafka-configmap/log4j.properties /etc/kafka/
+
+    KAFKA_BROKER_ID=${HOSTNAME##*-}
+    SEDS=("s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/")
+    LABELS="kafka-broker-id=$KAFKA_BROKER_ID"
+    ANNOTATIONS=""
+
+    hash kubectl 2>/dev/null || {
+      SEDS+=("s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/")
+    } && {
+      ZONE=$(kubectl get node "$NODE_NAME" -o=go-template='{{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}')
+      if [ "x$ZONE" == "x<no value>" ]; then
+        SEDS+=("s/#init#broker.rack=#init#/#init#broker.rack=# zone label not found for node $NODE_NAME/")
+      else
+        SEDS+=("s/#init#broker.rack=#init#/broker.rack=$ZONE/")
+        LABELS="$LABELS kafka-broker-rack=$ZONE"
+      fi
+      OUTSIDE_HOST=""
+      while [ -z $OUTSIDE_HOST ]; do
+        echo "Waiting for end point..."
+        OUTSIDE_HOST=$(kubectl get svc outside-${KAFKA_BROKER_ID} --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}")
+        [ -z "$OUTSIDE_HOST" ] && sleep 10
+      done
+      # OUTSIDE_HOST=$(kubectl get node "$NODE_NAME" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
+      OUTSIDE_PORT=3240${KAFKA_BROKER_ID}
+      SEDS+=("s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://${OUTSIDE_HOST}:${OUTSIDE_PORT}|")
+      ANNOTATIONS="$ANNOTATIONS kafka-listener-outside-host=$OUTSIDE_HOST kafka-listener-outside-port=$OUTSIDE_PORT"
+
+      if [ ! -z "$LABELS" ]; then
+        kubectl -n $POD_NAMESPACE label pod $POD_NAME $LABELS || echo "Failed to label $POD_NAMESPACE.$POD_NAME - RBAC issue?"
+      fi
+      if [ ! -z "$ANNOTATIONS" ]; then
+        kubectl -n $POD_NAMESPACE annotate pod $POD_NAME $ANNOTATIONS || echo "Failed to annotate $POD_NAMESPACE.$POD_NAME - RBAC issue?"
+      fi
+    }
+    printf '%s\n' "${SEDS[@]}" | sed -f - /etc/kafka-configmap/server.properties > /etc/kafka/server.properties.tmp
+    [ $? -eq 0 ] && mv /etc/kafka/server.properties.tmp /etc/kafka/server.properties
+
+  server.properties: |-
+    ############################# Log Basics #############################
+
+    # A comma separated list of directories under which to store log files
+    # Overrides log.dir
+    log.dirs=/var/lib/kafka/data/topics
+
+    # The default number of log partitions per topic. More partitions allow greater
+    # parallelism for consumption, but this will also result in more files across
+    # the brokers.
+    num.partitions=12
+
+    default.replication.factor=3
+
+    min.insync.replicas=2
+
+    auto.create.topics.enable=false
+
+    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
+    # This value is recommended to be increased for installations with data dirs located in RAID array.
+    #num.recovery.threads.per.data.dir=1
+
+    ############################# Server Basics #############################
+
+    # The id of the broker. This must be set to a unique integer for each broker.
+    #init#broker.id=#init#
+
+    #init#broker.rack=#init#
+
+    ############################# Socket Server Settings #############################
+
+    # The address the socket server listens on. It will get the value returned from 
+    # java.net.InetAddress.getCanonicalHostName() if not configured.
+    #   FORMAT:
+    #     listeners = listener_name://host_name:port
+    #   EXAMPLE:
+    #     listeners = PLAINTEXT://your.host.name:9092
+    #listeners=PLAINTEXT://:9092
+    listeners=OUTSIDE://:9094,PLAINTEXT://:9092
+
+    # Hostname and port the broker will advertise to producers and consumers. If not set, 
+    # it uses the value for "listeners" if configured.  Otherwise, it will use the value
+    # returned from java.net.InetAddress.getCanonicalHostName().
+    #advertised.listeners=PLAINTEXT://your.host.name:9092
+    #init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092
+
+    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
+    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
+    listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
+    inter.broker.listener.name=PLAINTEXT
+
+    # The number of threads that the server uses for receiving requests from the network and sending responses to the network
+    #num.network.threads=3
+
+    # The number of threads that the server uses for processing requests, which may include disk I/O
+    #num.io.threads=8
+
+    # The send buffer (SO_SNDBUF) used by the socket server
+    #socket.send.buffer.bytes=102400
+
+    # The receive buffer (SO_RCVBUF) used by the socket server
+    #socket.receive.buffer.bytes=102400
+
+    # The maximum size of a request that the socket server will accept (protection against OOM)
+    #socket.request.max.bytes=104857600
+
+    ############################# Internal Topic Settings  #############################
+    # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
+    # For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
+    #offsets.topic.replication.factor=1
+    #transaction.state.log.replication.factor=1
+    #transaction.state.log.min.isr=1
+
+    ############################# Log Flush Policy #############################
+
+    # Messages are immediately written to the filesystem but by default we only fsync() to sync
+    # the OS cache lazily. The following configurations control the flush of data to disk.
+    # There are a few important trade-offs here:
+    #    1. Durability: Unflushed data may be lost if you are not using replication.
+    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
+    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
+    # The settings below allow one to configure the flush policy to flush data after a period of time or
+    # every N messages (or both). This can be done globally and overridden on a per-topic basis.
+
+    # The number of messages to accept before forcing a flush of data to disk
+    #log.flush.interval.messages=10000
+
+    # The maximum amount of time a message can sit in a log before we force a flush
+    #log.flush.interval.ms=1000
+
+    ############################# Log Retention Policy #############################
+
+    # The following configurations control the disposal of log segments. The policy can
+    # be set to delete segments after a period of time, or after a given size has accumulated.
+    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
+    # from the end of the log.
+
+    # https://cwiki.apache.org/confluence/display/KAFKA/KIP-186%3A+Increase+offsets+retention+default+to+7+days
+    offsets.retention.minutes=10080
+
+    # The minimum age of a log file to be eligible for deletion due to age
+    log.retention.hours=-1
+
+    # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
+    # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
+    #log.retention.bytes=1073741824
+
+    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
+    #log.segment.bytes=1073741824
+
+    # The interval at which log segments are checked to see if they can be deleted according
+    # to the retention policies
+    #log.retention.check.interval.ms=300000
+
+    ############################# Zookeeper #############################
+
+    # Zookeeper connection string (see zookeeper docs for details).
+    # This is a comma-separated list of host:port pairs, each corresponding to a zk
+    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
+    # You can also append an optional chroot string to the urls to specify the
+    # root directory for all kafka znodes.
+    zookeeper.connect=zookeeper:2181
+
+    # Timeout in ms for connecting to zookeeper
+    #zookeeper.connection.timeout.ms=6000
+
+
+    ############################# Group Coordinator Settings #############################
+
+    # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
+    # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
+    # The default value for this is 3 seconds.
+    # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
+    # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
+    #group.initial.rebalance.delay.ms=0
+
+  log4j.properties: |-
+    # Unspecified loggers and loggers with additivity=true output to server.log and stdout
+    # Note that INFO only applies to unspecified loggers; otherwise the child logger's level is used
+    log4j.rootLogger=INFO, stdout
+
+    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
+    log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
+    log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
+    log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
+    log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
+    log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
+    log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
+    log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
+    log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
+    log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+    # Change the two lines below to adjust ZK client logging
+    log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
+    log4j.logger.org.apache.zookeeper=INFO
+
+    # Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
+    log4j.logger.kafka=INFO
+    log4j.logger.org.apache.kafka=INFO
+
+    # Change to DEBUG or TRACE to enable request logging
+    log4j.logger.kafka.request.logger=WARN, requestAppender
+    log4j.additivity.kafka.request.logger=false
+
+    # Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
+    # related to the handling of requests
+    #log4j.logger.kafka.network.Processor=TRACE, requestAppender
+    #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
+    #log4j.additivity.kafka.server.KafkaApis=false
+    log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
+    log4j.additivity.kafka.network.RequestChannel$=false
+
+    log4j.logger.kafka.controller=TRACE, controllerAppender
+    log4j.additivity.kafka.controller=false
+
+    log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
+    log4j.additivity.kafka.log.LogCleaner=false
+
+    log4j.logger.state.change.logger=TRACE, stateChangeAppender
+    log4j.additivity.state.change.logger=false
+
+    # Change to DEBUG to enable audit log for the authorizer
+    log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
+    log4j.additivity.kafka.authorizer.logger=false
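The retention values in the broker config above can be cross-checked numerically: `offsets.retention.minutes=10080` is exactly 7 days (the KIP-186 default linked above), and `log.retention.hours=-1` disables time-based segment deletion. A small illustrative sketch (helper names here are my own, not part of the config):

```python
# Sketch: relate the retention settings in the broker config above to
# human-readable durations. Constants are copied from the config;
# function names are illustrative only.

def minutes_to_days(minutes: int) -> float:
    """Convert a retention value expressed in minutes to days."""
    return minutes / (60 * 24)

# offsets.retention.minutes=10080 from the config above (KIP-186 default)
OFFSETS_RETENTION_MINUTES = 10080

# log.retention.hours=-1 from the config above
LOG_RETENTION_HOURS = -1

def retention_is_unbounded(hours: int) -> bool:
    """Kafka treats a negative time-based retention as 'keep forever'."""
    return hours < 0
```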
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/20dns.yml b/.test-infra/kubernetes/kafka-cluster/05-kafka/20dns.yml
new file mode 100644
index 0000000..2e14e76
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/20dns.yml
@@ -0,0 +1,27 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+# A headless service to create DNS records
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: broker
+  namespace: kafka
+spec:
+  ports:
+  - port: 9092
+  clusterIP: None
+  selector:
+    app: kafka
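Because the `broker` Service above is headless (`clusterIP: None`), each pod of the `kafka` StatefulSet in this commit (3 replicas, `serviceName: "broker"`) gets a stable DNS record of the standard Kubernetes form `<pod>.<service>.<namespace>.svc.cluster.local`. A sketch of the names clients would resolve:

```python
# Sketch: stable per-pod DNS names produced by a headless Service for a
# StatefulSet. Values match this commit: StatefulSet 'kafka', Service
# 'broker', namespace 'kafka', 3 replicas.

def broker_dns_names(statefulset: str, service: str,
                     namespace: str, replicas: int) -> list[str]:
    """Build the <pod>.<service>.<namespace>.svc.cluster.local names
    that a headless Service exposes for each StatefulSet pod."""
    suffix = f"{service}.{namespace}.svc.cluster.local"
    return [f"{statefulset}-{i}.{suffix}" for i in range(replicas)]

names = broker_dns_names("kafka", "broker", "kafka", 3)
```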
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/30bootstrap-service.yml b/.test-infra/kubernetes/kafka-cluster/05-kafka/30bootstrap-service.yml
new file mode 100644
index 0000000..5428795
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/30bootstrap-service.yml
@@ -0,0 +1,25 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: bootstrap
+  namespace: kafka
+spec:
+  ports:
+  - port: 9092
+  selector:
+    app: kafka
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/50kafka.yml b/.test-infra/kubernetes/kafka-cluster/05-kafka/50kafka.yml
new file mode 100644
index 0000000..9e19a74
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/50kafka.yml
@@ -0,0 +1,120 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: kafka
+  namespace: kafka
+spec:
+  selector:
+    matchLabels:
+      app: kafka
+  serviceName: "broker"
+  replicas: 3
+  updateStrategy:
+    type: RollingUpdate
+  podManagementPolicy: Parallel
+  template:
+    metadata:
+      labels:
+        app: kafka
+      annotations:
+    spec:
+      terminationGracePeriodSeconds: 30
+      initContainers:
+      - name: init-config
+        image: solsson/kafka-initutils@sha256:2cdb90ea514194d541c7b869ac15d2d530ca64889f56e270161fe4e5c3d076ea
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        - name: POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        - name: POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
+        volumeMounts:
+        - name: configmap
+          mountPath: /etc/kafka-configmap
+        - name: config
+          mountPath: /etc/kafka
+        - name: extensions
+          mountPath: /opt/kafka/libs/extensions
+      containers:
+      - name: broker
+        image: solsson/kafka:2.1.1@sha256:8bc8242c649c395ab79d76cc83b1052e63b4efea7f83547bf11eb3ef5ea6f8e1
+        env:
+        - name: CLASSPATH
+          value: /opt/kafka/libs/extensions/*
+        - name: KAFKA_LOG4J_OPTS
+          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
+        - name: JMX_PORT
+          value: "5555"
+        ports:
+        - name: inside
+          containerPort: 9092
+        - name: outside
+          containerPort: 9094
+        - name: jmx
+          containerPort: 5555
+        command:
+        - ./bin/kafka-server-start.sh
+        - /etc/kafka/server.properties
+        lifecycle:
+          preStop:
+            exec:
+             command: ["sh", "-ce", "kill -s TERM 1; while $(kill -0 1 2>/dev/null); do sleep 1; done"]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+          limits:
+            # This limit was intentionally set low as a reminder that
+            # the entire Yolean/kubernetes-kafka is meant to be tweaked
+            # before you run production workloads
+            memory: 600Mi
+        readinessProbe:
+          tcpSocket:
+            port: 9092
+          timeoutSeconds: 1
+        volumeMounts:
+        - name: config
+          mountPath: /etc/kafka
+        - name: data
+          mountPath: /var/lib/kafka/data
+        - name: extensions
+          mountPath: /opt/kafka/libs/extensions
+      volumes:
+      - name: configmap
+        configMap:
+          name: broker-config
+      - name: config
+        emptyDir: {}
+      - name: extensions
+        emptyDir: {}
+  volumeClaimTemplates:
+  - metadata:
+      name: data
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: kafka-broker
+      resources:
+        requests:
+          storage: 10Gi
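The `readinessProbe` in the StatefulSet above is a plain TCP connect against port 9092 with a 1-second timeout. The equivalent check in code looks roughly like this (a sketch of the probe semantics, not how the kubelet is implemented):

```python
# Sketch: a tcpSocket readiness probe is just "can I open a TCP
# connection within the timeout?". Mirrors port 9092 / timeoutSeconds: 1
# from the StatefulSet above.
import socket

def tcp_ready(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```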
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/configmap-config.yaml b/.test-infra/kubernetes/kafka-cluster/05-kafka/configmap-config.yaml
new file mode 100644
index 0000000..cd52225
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/configmap-config.yaml
@@ -0,0 +1,38 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  namespace: kafka
+  name: kafka-config
+data:
+  runtimeConfig.sh: |
+    #!/bin/bash
+    set -e
+    cd /usr/bin
+    until kafka-configs --zookeeper zookeeper:2181 --entity-type topics --describe || (( count++ >= 6 ))
+    do
+      echo "Waiting for Zookeeper..."
+      sleep 20
+    done
+    until nc -z kafka 9092 || (( retries++ >= 6 ))
+    do
+      echo "Waiting for Kafka..."
+      sleep 20
+    done
+    echo "Applying runtime configuration using confluentinc/cp-kafka:5.0.1"
+    kafka-topics --zookeeper zookeeper:2181 --create --if-not-exists --force --topic apache-beam-load-test --partitions 3 --replication-factor 2
+    kafka-configs --zookeeper zookeeper:2181 --entity-type topics --entity-name apache-beam-load-test --describe
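Each `until ... || (( count++ >= 6 ))` loop in `runtimeConfig.sh` above bounds the wait at 7 attempts with a 20-second sleep between them. The same bounded-retry pattern in sketch form (function and argument names are illustrative):

```python
# Sketch: bounded retry loop mirroring the shell pattern
#   until cmd || (( count++ >= 6 )); do sleep 20; done
# The post-increment test allows 7 attempts in total.

def wait_until(check, max_attempts: int = 7, sleep_fn=None) -> bool:
    """Call `check` up to max_attempts times; return True on the first
    success, False if all attempts fail. `sleep_fn` stands in for the
    20-second sleep between attempts."""
    for _ in range(max_attempts):
        if check():
            return True
        if sleep_fn is not None:
            sleep_fn(20)
    return False
```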
diff --git a/.test-infra/kubernetes/kafka-cluster/05-kafka/job-config.yaml b/.test-infra/kubernetes/kafka-cluster/05-kafka/job-config.yaml
new file mode 100644
index 0000000..1aee3df
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/05-kafka/job-config.yaml
@@ -0,0 +1,40 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+---
+# Source: kafka/templates/job-config.yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: "kafka-config-eff079ec"
+  namespace: kafka
+spec:
+  template:
+    metadata:
+      labels:
+        app: kafka
+    spec:
+      restartPolicy: OnFailure
+      volumes:
+        - name: config-volume
+          configMap:
+            name: kafka-config
+            defaultMode: 0744
+      containers:
+        - name: kafka-config
+          image: "confluentinc/cp-kafka:5.0.1"
+          command: ["/usr/local/script/runtimeConfig.sh"]
+          volumeMounts:
+            - name: config-volume
+              mountPath: "/usr/local/script"
diff --git a/.test-infra/kubernetes/kafka-cluster/README.md b/.test-infra/kubernetes/kafka-cluster/README.md
new file mode 100644
index 0000000..124d300
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/README.md
@@ -0,0 +1,30 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+## Kafka test cluster
+
+The Kubernetes config files in this directory create a Kafka cluster
+consisting of 3 Kafka replicas and 3 Zookeeper replicas, exposed
+through 3 LoadBalancer services. To deploy the cluster, run
+
+    sh setup-cluster.sh
+
+Before executing the script, ensure that your account has cluster-admin
+privileges, as setting up the RBAC cluster roles requires them.
+
+The scripts are based on [Yolean kubernetes-kafka](https://github.com/Yolean/kubernetes-kafka).
diff --git a/.test-infra/kubernetes/kafka-cluster/setup-cluster.sh b/.test-infra/kubernetes/kafka-cluster/setup-cluster.sh
new file mode 100755
index 0000000..65c021a
--- /dev/null
+++ b/.test-infra/kubernetes/kafka-cluster/setup-cluster.sh
@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+kubectl apply -R -f .
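A note on the single `kubectl apply -R -f .` line above: as I understand it, `kubectl` visits files in lexical path order, which is why the directories in this commit carry numeric prefixes (`00-namespace`, `01-configure`, ...) — the namespace and storage classes sort before the resources that depend on them. A sketch of that ordering (paths taken from this commit):

```python
# Sketch: lexical sort order of the numbered directories, which (on the
# assumption that kubectl apply -R processes files lexically) yields the
# intended dependency order. Paths are from this commit.
paths = [
    "05-kafka/50kafka.yml",
    "00-namespace.yml",
    "03-zookeeper/10zookeeper-config.yml",
    "01-configure/gke-storageclass-broker-pd.yml",
]
apply_order = sorted(paths)
```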

