Refactoring #91

Open · wants to merge 18 commits into base: master
Changes from 1 commit
5 changes: 4 additions & 1 deletion 02_deployHono/modules/hono/values.yaml
@@ -21,7 +21,10 @@ adapters:
enabled: false
kafkaMessagingSpec:
commonClientConfig:
bootstrap.servers: kafka-0.kafka-headless.default.svc.cluster.local:9094
bootstrap.servers: kafka-0.kafka-headless.default.svc.cluster.local:9092
security.protocol: SASL_PLAINTEXT
sasl.mechanism: SCRAM-SHA-512
sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"hono\" password=\"hono-secret\";"
Comment on lines +27 to +30
@msrn (Member), Dec 29, 2021:
Configure Hono to authenticate with custom kafka as common client

messagingNetworkTypes:
- kafka
kafkaMessagingClusterExample:
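The values above wire Hono's Kafka common client to a SCRAM-authenticated broker over plaintext. As a quick sanity check, here is a minimal Python sketch (stdlib only; the helper name `build_common_client_config` is hypothetical) that assembles the same client properties and verifies the JAAS string is well-formed:

```python
# Sketch: assemble the Kafka common-client properties from the diff above
# and sanity-check the sasl.jaas.config string.

def build_common_client_config(user: str, password: str, bootstrap: str) -> dict:
    # A JAAS config value names the login module, lists its options,
    # and must be terminated by a semicolon.
    jaas = (
        'org.apache.kafka.common.security.scram.ScramLoginModule required '
        f'username="{user}" password="{password}";'
    )
    return {
        "bootstrap.servers": bootstrap,
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanism": "SCRAM-SHA-512",
        "sasl.jaas.config": jaas,
    }

cfg = build_common_client_config(
    "hono", "hono-secret",
    "kafka-0.kafka-headless.default.svc.cluster.local:9092",
)
assert cfg["sasl.jaas.config"].endswith(";")
assert "ScramLoginModule" in cfg["sasl.jaas.config"]
```

The same four properties appear verbatim in the values.yaml hunk; a client that omits the trailing semicolon in the JAAS string fails at login-module parsing time.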
180 changes: 87 additions & 93 deletions 02_deployHono/modules/kafka/values.yaml
@@ -33,10 +33,7 @@ service:
port: 9094
auth:
clientProtocol: sasl
interBrokerProtocol: plaintext
jaas:
clientUser: hono
clientPassword: hono-secret
Member comment:
These options were deprecated in the Kafka chart.

interBrokerProtocol: sasl
sasl:
jaas:
clientUsers:
@@ -45,93 +42,90 @@
- "hono-secret"
zookeeperUser: zookeeperUser
zookeeperPassword: zookeeperPassword
tls:
type: jks
existingSecret: "{{ .Release.Name }}-kafka-jks"
password: honotrust
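The `existingSecret` value is a Helm template string that is rendered at install time, so the chart looks up a release-scoped secret rather than a fixed name. A tiny Python sketch (hypothetical helper, plain string substitution rather than full Helm templating) of how it resolves:

```python
# Sketch: how the templated existingSecret name resolves at install time.
# Real Helm evaluates Go templates; simple substitution suffices here.

def render_existing_secret(release_name: str) -> str:
    template = "{{ .Release.Name }}-kafka-jks"
    return template.replace("{{ .Release.Name }}", release_name)

# With a release named "hono", the chart expects a secret "hono-kafka-jks"
# holding the JKS truststore protected by the password above.
assert render_existing_secret("hono") == "hono-kafka-jks"
```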

extraDeploy:
- |
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
spec:
replicas: 1
selector:
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: connector
template:
metadata:
labels: {{- include "common.labels.standard" . | nindent 8 }}
app.kubernetes.io/component: connector
spec:
containers:
- name: connect
image: nindemic/mongokafkaconnect:1.0
imagePullPolicy: IfNotPresent
ports:
- name: connector
containerPort: 8083
volumeMounts:
- name: configuration
mountPath: /bitnami/kafka/config
volumes:
- name: configuration
configMap:
name: {{ include "kafka.fullname" . }}-connect
- |
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
data:
connect-standalone.properties: |-
bootstrap.servers = {{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/opt/bitnami/kafka/plugins
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="hono" \
password="hono-secret";
consumer.sasl.mechanism=PLAIN
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="hono" \
password="hono-secret";
mongodb.properties: |-
connection.uri=mongodb://user:[email protected]:28018/mongo-telemetry
name=mongo-sink
topics=.
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
database=telemetry
collection=sink
- |
apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
spec:
ports:
- protocol: TCP
port: 8083
targetPort: connector
selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
app.kubernetes.io/component: connector
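The `connect-standalone.properties` entries above split `sasl.jaas.config` across several lines with trailing backslashes, Java-properties style. A minimal Python sketch (hypothetical parser, stdlib only) of how such continuations join back into a single value:

```python
# Sketch: minimal Java-properties-style reader that joins lines ending in
# a backslash, as used for the multi-line sasl.jaas.config entries above.

def parse_properties(text: str) -> dict:
    props, pending = {}, ""
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line.endswith("\\"):
            pending += line[:-1]          # drop the backslash, keep reading
            continue
        line = pending + line
        pending = ""
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = (
    "sasl.mechanism=PLAIN\n"
    "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\\n"
    '    username="hono" \\\n'
    '    password="hono-secret";\n'
)
props = parse_properties(sample)
assert props["sasl.jaas.config"].endswith('password="hono-secret";')
```

Note the parser splits on the first `=` only, since the JAAS value itself contains `=`-style `key="value"` pairs.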

[Deleted in this commit: a previously commented-out copy of the extraDeploy block above (the same Deployment, ConfigMap, and Service manifests, each line prefixed with `#`).]
10 changes: 5 additions & 5 deletions tests/honoscript/receiver.sh
@@ -13,11 +13,11 @@ java -jar hono-cli-1.9.0-exec.jar \
--spring.profiles.active=receiver,local,kafka \
--tenant.id=$MY_TENANT \
--hono.kafka.commonClientConfig.bootstrap.servers=${DOMAIN_NAME}:${KAFKA_PORT} \
--hono.kafka.commonClientConfig.security.protocol=SASL_SSL \
--hono.kafka.commonClientConfig.security.protocol=SASL_PLAINTEXT \
--hono.kafka.commonClientConfig.sasl.jaas.config="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"hono\" password=\"hono-secret\";" \
--hono.kafka.commonClientConfig.sasl.mechanism=SCRAM-SHA-512 \
--hono.kafka.commonClientConfig.ssl.truststore.location=$KAFKA_TRUSTSTORE_PATH \
--hono.kafka.commonClientConfig.ssl.truststore.password=honotrust \
--hono.kafka.commonClientConfig.ssl.endpoint.identification.algorithm=""
--hono.kafka.commonClientConfig.sasl.mechanism=SCRAM-SHA-512
#--hono.kafka.commonClientConfig.ssl.truststore.location=$KAFKA_TRUSTSTORE_PATH \
#--hono.kafka.commonClientConfig.ssl.truststore.password=honotrust \
#--hono.kafka.commonClientConfig.ssl.endpoint.identification.algorithm=""
#--hono.client.username=consumer@HONO \
#--hono.client.password=verysecret \
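The script now uses `SASL_PLAINTEXT` instead of `SASL_SSL`, and the truststore flags are commented out accordingly: truststore settings only apply when the security protocol uses TLS. A small Python sketch (hypothetical helper) of that consistency rule:

```python
# Sketch: a Kafka client needs truststore options only when its security
# protocol is TLS-based (SSL or SASL_SSL), not for *_PLAINTEXT.

def needs_truststore(config: dict) -> bool:
    protocol = config.get("security.protocol", "PLAINTEXT")
    return protocol in ("SSL", "SASL_SSL")

old = {"security.protocol": "SASL_SSL",
       "ssl.truststore.location": "/path/to/truststore.jks"}
new = {"security.protocol": "SASL_PLAINTEXT"}

assert needs_truststore(old)       # SASL_SSL: truststore flags required
assert not needs_truststore(new)   # SASL_PLAINTEXT: truststore flags dropped
```

This mirrors the diff: the `ssl.truststore.*` and endpoint-identification flags only made sense with the old `SASL_SSL` setting, so they are disabled alongside it.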