
feat: Hive listener integration #605


Open · wants to merge 31 commits into main

Conversation

@Maleware Maleware (Member) commented May 27, 2025

Description

Adds listener support.

Definition of Done Checklist

  • Not all of these items are applicable to all PRs; the author should update this template to leave in only the boxes that are relevant
  • Please make sure all these things are done and tick the boxes

Author

  • Changes are OpenShift compatible
  • CRD changes approved
  • CRD documentation for all fields, following the style guide.
  • Helm chart can be installed and the deployed operator works
  • Integration tests passed (for non-trivial changes)
  • Changes need to be "offline" compatible

Reviewer

  • Code contains useful comments
  • Code contains useful logging statements
  • (Integration-)Test cases added
  • Documentation added or updated. Follows the style guide.
  • Changelog updated
  • Cargo.toml only contains references to git tags (not specific commits or branches)

Acceptance

  • Feature Tracker has been updated
  • Proper release label has been added
  • Roadmap has been updated

@Maleware Maleware (Member, Author) commented:

=== NAME  kuttl
    harness.go:403: run tests finished
    harness.go:510: cleaning up
    harness.go:567: removing temp folder: ""
--- PASS: kuttl (2381.48s)
    --- PASS: kuttl/harness (0.00s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-3.1.3_openshift-false_s3-use-tls-false (109.29s)
        --- PASS: kuttl/harness/resources_hive-4.0.1_openshift-false (25.30s)
        --- PASS: kuttl/harness/kerberos-hdfs_postgres-12.5.6_hive-4.0.1_hdfs-latest-3.4.1_zookeeper-latest-3.9.3_krb5-1.21.1_openshift-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (174.61s)
        --- PASS: kuttl/harness/kerberos-hdfs_postgres-12.5.6_hive-4.0.0_hdfs-latest-3.4.1_zookeeper-latest-3.9.3_krb5-1.21.1_openshift-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (173.43s)
        --- PASS: kuttl/harness/kerberos-hdfs_postgres-12.5.6_hive-3.1.3_hdfs-latest-3.4.1_zookeeper-latest-3.9.3_krb5-1.21.1_openshift-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (191.31s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-4.0.1_krb5-1.21.1_openshift-false_s3-use-tls-true_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (109.84s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-4.0.1_krb5-1.21.1_openshift-false_s3-use-tls-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (107.09s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-4.0.0_krb5-1.21.1_openshift-false_s3-use-tls-true_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (132.82s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-4.0.0_krb5-1.21.1_openshift-false_s3-use-tls-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (110.03s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-3.1.3_krb5-1.21.1_openshift-false_s3-use-tls-true_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (119.68s)
        --- PASS: kuttl/harness/kerberos-s3_postgres-12.5.6_hive-3.1.3_krb5-1.21.1_openshift-false_s3-use-tls-false_kerberos-realm-PROD.MYCORP_kerberos-backend-mit (118.54s)
        --- PASS: kuttl/harness/upgrade_postgres-12.5.6_hive-old-3.1.3_hive-new-4.0.1_openshift-false (70.49s)
        --- PASS: kuttl/harness/cluster-operation_hive-latest-4.0.1_openshift-false (61.16s)
        --- PASS: kuttl/harness/logging_postgres-12.5.6_hive-4.0.0_openshift-false (79.92s)
        --- PASS: kuttl/harness/resources_hive-4.0.0_openshift-false (26.53s)
        --- PASS: kuttl/harness/resources_hive-3.1.3_openshift-false (26.19s)
        --- PASS: kuttl/harness/external-access_hive-latest-4.0.1_openshift-false (36.33s)
        --- PASS: kuttl/harness/orphaned-resources_hive-latest-4.0.1_openshift-false (37.07s)
        --- PASS: kuttl/harness/logging_postgres-12.5.6_hive-4.0.1_openshift-false (78.39s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-4.0.1_openshift-false_s3-use-tls-false (102.36s)
        --- PASS: kuttl/harness/logging_postgres-12.5.6_hive-3.1.3_openshift-false (87.95s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-4.0.1_openshift-false_s3-use-tls-true (96.98s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-4.0.0_openshift-false_s3-use-tls-false (102.11s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-4.0.0_openshift-false_s3-use-tls-true (100.83s)
        --- PASS: kuttl/harness/smoke_postgres-12.5.6_hive-3.1.3_openshift-false_s3-use-tls-true (103.19s)
PASS

@Maleware Maleware marked this pull request as ready for review May 28, 2025 13:22
@Maleware Maleware (Member, Author) commented Jun 4, 2025

I might have run into stackabletech/hdfs-operator#686 during development.

I noticed that I can get an empty string in my discovery ConfigMap from time to time. The behaviour appears to be flaky, but more often than not the empty string shows up:

Expected

apiVersion: v1
data:
  HIVE: thrift://hive-postgres-s3-metastore-default.default.svc.cluster.local:9083
kind: ConfigMap
metadata:

Flaky faulty one

apiVersion: v1
data:
  HIVE: ""
kind: ConfigMap
metadata:
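
A minimal, self-contained sketch of how such an empty value can arise (hypothetical simplified types, not the operator's actual code): if the discovery string is built from a Listener status whose ingress addresses have not been populated yet, joining over the empty list yields exactly this empty string.

struct ListenerIngress {
    address: String,
    port: u16,
}

struct ListenerStatus {
    // None until the listener-operator has reconciled the Listener.
    ingress_addresses: Option<Vec<ListenerIngress>>,
}

fn thrift_connection_string(status: &ListenerStatus) -> String {
    status
        .ingress_addresses
        .as_deref()
        .unwrap_or_default()
        .iter()
        .map(|ingress| format!("thrift://{}:{}", ingress.address, ingress.port))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    // Listener not reconciled yet: no addresses in the status...
    let pending = ListenerStatus { ingress_addresses: None };
    // ...so the discovery ConfigMap would be written with HIVE: ""
    assert_eq!(thrift_connection_string(&pending), "");
}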

@lfrancke lfrancke moved this to Development: In Progress in Stackable Engineering Jun 4, 2025
@Maleware Maleware changed the title from "WIP: Listener integration" to "feat: Hive listener integration" Jun 5, 2025
@Maleware Maleware (Member, Author) commented Jun 5, 2025

🟢

=== NAME  kuttl
    harness.go:403: run tests finished
    harness.go:510: cleaning up
    harness.go:567: removing temp folder: ""
--- PASS: kuttl (38.21s)
    --- PASS: kuttl/harness (0.00s)
        --- PASS: kuttl/harness/external-access_hive-latest-4.0.1_openshift-false (38.17s)
PASS

@Maleware Maleware moved this from Development: In Progress to Development: Waiting for Review in Stackable Engineering Jun 5, 2025
@Maleware Maleware requested a review from adwk67 June 5, 2025 13:56
@maltesander maltesander self-requested a review June 10, 2025 11:22
@maltesander maltesander moved this from Development: Waiting for Review to Development: In Review in Stackable Engineering Jun 10, 2025
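// Diff context quoted by the review thread below; presumably the discovery
// ConfigMap builder's parameters, with listener_refs keyed by rolegroup name (assumption).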
chroot: Option<&str>,
listener_refs: BTreeMap<&String, Listener>,

Member:

We should explicitly mention in the release notes / breaking changes that the discovery key changed. Or do we keep the HIVE key for backwards compatibility?
Trino won't work anymore, etc.

Member Author:

There are pros and cons to it:

Each rolegroup has its own service and thus its own entry in the ConfigMap, for example:

apiVersion: v1
data:
  cluster-internal: thrift://hive-postgres-s3-metastore-cluster-internal.default.svc.cluster.local:9083
  default: thrift://172.18.0.2:32658
  external-unstable: thrift://172.18.0.2:30607
kind: ConfigMap
metadata:
  creationTimestamp: "2025-06-04T13:39:09Z"
  labels:
    app.kubernetes.io/component: metastore

I like it better this way, as you can decide which Hive rolegroup you want to talk to.

However, I understand the argument for staying backwards compatible. I could offer to make the keys upper case. Keeping compatibility would then mean renaming the rolegroup from default to hive to get the same picture as before?

I just don't like

apiVersion: v1
data:
  HIVE: |-
    thrift://hive-postgres-s3-metastore-cluster-internal.default.svc.cluster.local:9083
    thrift://172.18.0.2:32658
    thrift://172.18.0.2:30607
kind: ConfigMap
metadata:
  creationTimestamp: "2025-06-04T13:39:09Z"
  labels:
    app.kubernetes.io/component: metastore

as the endpoints can't be distinguished anymore if every rolegroup uses a NodePort.
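
For illustration, a minimal sketch (hypothetical helper, not this PR's actual code) of the per-rolegroup layout described above, with one ConfigMap entry per rolegroup so the endpoints stay distinguishable:

use std::collections::BTreeMap;

// Map each rolegroup name to its thrift endpoint; the BTreeMap keeps the
// resulting ConfigMap keys sorted and deterministic.
fn discovery_data(endpoints: &BTreeMap<String, (String, u16)>) -> BTreeMap<String, String> {
    endpoints
        .iter()
        .map(|(rolegroup, (host, port))| (rolegroup.clone(), format!("thrift://{host}:{port}")))
        .collect()
}

fn main() {
    let mut endpoints = BTreeMap::new();
    endpoints.insert(
        "cluster-internal".to_owned(),
        ("hive-postgres-s3-metastore-cluster-internal.default.svc.cluster.local".to_owned(), 9083),
    );
    endpoints.insert("default".to_owned(), ("172.18.0.2".to_owned(), 32658));
    endpoints.insert("external-unstable".to_owned(), ("172.18.0.2".to_owned(), 30607));

    // Prints one line per rolegroup, matching the ConfigMap data shown above.
    for (key, value) in discovery_data(&endpoints) {
        println!("{key}: {value}");
    }
}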

Member:

The problem is that "HIVE" is hardcoded/expected by the Trino operator; using different rolegroups will cause problems (or at least not completely utilize the cluster) as it's currently implemented. Will bring that up on Slack.
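
One possible compromise, sketched here with a hypothetical helper that this PR does not implement, would be to keep emitting per-rolegroup keys and additionally mirror one designated rolegroup under the legacy HIVE key so existing consumers keep working:

use std::collections::BTreeMap;

// Duplicate one rolegroup's endpoint under the legacy HIVE key (assumption:
// consumers like the Trino operator only read the HIVE entry).
fn with_legacy_key(mut data: BTreeMap<String, String>, legacy_rolegroup: &str) -> BTreeMap<String, String> {
    if let Some(endpoint) = data.get(legacy_rolegroup).cloned() {
        data.insert("HIVE".to_owned(), endpoint);
    }
    data
}

fn main() {
    let mut data = BTreeMap::new();
    data.insert(
        "default".to_owned(),
        "thrift://hive-postgres-s3-metastore-default.default.svc.cluster.local:9083".to_owned(),
    );
    let data = with_legacy_key(data, "default");
    // HIVE now mirrors the default rolegroup's endpoint.
    assert_eq!(data.get("HIVE"), data.get("default"));
}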

@maltesander maltesander linked an issue Jun 11, 2025 that may be closed by this pull request
Labels: none yet
Projects: Development: In Review

Successfully merging this pull request may close these issues:

Integrate Hive Operator with Listener Operator

2 participants