
Debezium - GraphQL Example

This demo shows how to build a GraphQL Subscription on top of Debezium Change Events.

The domain consists of Order objects that have, among other fields, a quantity field. These objects are stored in a MySQL database. Debezium captures the changes in the database and publishes new orders to a Kafka topic. Using the GraphQL API you can receive the new orders in real time. The API also allows you to filter events; for example, you might only be interested in orders with a large quantity (e.g. for fraud detection) or in orders for a specific product.
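
As a rough orientation, the API implied by the example queries below corresponds to a GraphQL schema along these lines. This is a sketch reconstructed from those queries; the exact type names, field types, and any additional fields in the actual aggregator schema may differ:

type Order {
  id: Int
  customerId: Int
  productId: Int
  quantity: Int
}

type Query {
  latestOrder: Order
}

type Subscription {
  onNewOrder(withMinQuantity: Int, withProductId: Int): Order
}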

There are two applications:

  1. The event-source, which persists random orders in a MySQL database (simulating 'real' business)

  2. The aggregator consumes the messages from the Kafka topics and publishes new orders via a GraphQL API. The aggregator is a web app deployed to Thorntail.
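
A new order arrives on the Kafka topic as a Debezium change event. The sketch below shows the general shape of such an event's payload for an insert ("op": "c"); the order field names are taken from the GraphQL queries further down, the concrete values are purely illustrative, and the full envelope also carries a "source" block with connector metadata that is omitted here:

{
  "before": null,
  "after": {
    "id": 10042,
    "customerId": 1001,
    "productId": 103,
    "quantity": 2
  },
  "op": "c",
  "ts_ms": 1526914632000
}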

Preparations

Build data generator application and aggregator application:

mvn clean install -f event-source/pom.xml
mvn clean install -f aggregator/pom.xml

Start Kafka, Kafka Connect, MySQL, event source and aggregator:

export DEBEZIUM_VERSION=2.1
docker-compose up --build
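
The startup messages mentioned in this walkthrough appear in the Compose output. If you started the stack detached (with -d) or want to watch from another terminal, you can follow the logs with the standard Compose command:

docker-compose logs -f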

Once you see the message "Waiting for source connector to be deployed" in the logs, deploy the Debezium MySQL connector:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8083/connectors/ -d @mysql-source.json
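
To verify that the connector has been registered, you can list all connectors via the standard Kafka Connect REST API; the response should contain the name of the connector defined in mysql-source.json:

curl -H "Accept:application/json" http://localhost:8083/connectors/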

Consume messages using GraphiQL

Once you see the message "Thorntail is Ready" in the logs, open the following URL in your browser: http://localhost:8079/graphiql.

It opens GraphiQL, a GraphQL API Browser.

While writing your GraphQL queries in the editor, you can get code assist using Ctrl+Space. Click on the Docs tab on the right side to get the API description.

[Screenshot: GraphiQL API Explorer]

Example GraphQL Queries

Return the latest order that has been placed:

query { latestOrder { id quantity } }
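
The response has the usual GraphQL shape; the values shown here are just illustrative:

{
  "data": {
    "latestOrder": {
      "id": 10042,
      "quantity": 4
    }
  }
}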

Subscribe to all new orders, return the fields id, productId, customerId and quantity from the Order:

subscription {
  onNewOrder {
    id
    productId
    customerId
    quantity
  }
}

Subscribe to new orders that have a quantity of at least 3:

subscription {
  onNewOrder(withMinQuantity: 3) {
    id
    customerId
    productId
    quantity
  }
}

Subscribe to new orders that have a productId of 103:

subscription {
  onNewOrder(withProductId: 103) {
    id
    customerId
    quantity
  }
}

Subscribe to new orders that have a quantity of at least 2 and a productId of 103:

subscription {
  onNewOrder(withMinQuantity: 2, withProductId: 103) {
    id
    customerId
    quantity
  }
}

Note: The GraphiQL UI might not show all data. If responses from the server arrive too quickly, GraphiQL "skips" some of them.

Consume messages using a command-line tool

Included in this example is a simple Java application (in the ws-client folder) that runs a GraphQL subscription and displays the incoming data on the console. Unlike GraphiQL, this tool shows all messages received from the GraphQL server.

Build the application:

mvn clean package -f ws-client/pom.xml

Run the application (default configuration):

java -jar ws-client/target/ws-client-jar-with-dependencies.jar

The application expects the cluster to be running as described above, in particular that the GraphQL WebSocket endpoint is available at ws://localhost:8079/graphql. It runs a sample subscription and displays the received responses on the console until you quit the application using Ctrl+C.

If you want to run the application against another host and/or with another GraphQL subscription query, you can pass them as command-line arguments:

java -jar ws-client/target/ws-client-jar-with-dependencies.jar URI GRAPHQL_SUBSCRIPTION_QUERY

For example:

java -jar ws-client/target/ws-client-jar-with-dependencies.jar ws://localhost:8079/graphql "subscription { onNewOrder { id productId } }"

(Please surround the query with double quotes.)

Shut down the cluster

docker-compose down

Locally testing the aggregator

  1. Add - ADVERTISED_HOST_NAME=<YOUR HOST IP> to the environment section of the "kafka" service in docker-compose.yaml.

  2. Run all services except the aggregator service:

docker-compose up --build connect event-source

(or run all services as described above and then docker-compose stop aggregator)

  3. Run the aggregator from your IDE by running the class org.wildfly.swarm.Swarm from the aggregator project. Set the environment variables KAFKA_SERVICE_HOST to <YOUR HOST IP> and KAFKA_SERVICE_PORT to 9092, e.g. as shown below.
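
If you prefer to launch the aggregator from a terminal rather than from an IDE run configuration, the variables could be set like this (the host IP is a placeholder for your machine's address):

export KAFKA_SERVICE_HOST=<YOUR HOST IP>
export KAFKA_SERVICE_PORT=9092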