diff --git a/docs/docs.json b/docs/docs.json index 05d35367..f4a5bf8b 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -518,7 +518,8 @@ "router/event-driven-federated-subscriptions-edfs/nats/stream-and-consumer-configuration" ] }, - "router/event-driven-federated-subscriptions-edfs/kafka" + "router/event-driven-federated-subscriptions-edfs/kafka", + "router/event-driven-federated-subscriptions-edfs/redis" ] }, "router/compliance-and-data-management", diff --git a/docs/federation/event-driven-federated-subscriptions/composition-rules.mdx b/docs/federation/event-driven-federated-subscriptions/composition-rules.mdx index 2075a3b9..8dfadbfe 100644 --- a/docs/federation/event-driven-federated-subscriptions/composition-rules.mdx +++ b/docs/federation/event-driven-federated-subscriptions/composition-rules.mdx @@ -1,6 +1,6 @@ --- title: "Composition Rules" -description: "The Event-Driven Graph (or EDG) is an \"abstract\" subgraph, so it must not define any resolvers. EDGs are also subject to special compositional rules." +description: "The Event-Driven Graph (or EDG) is an \"abstract\" subgraph, so it must not define any resolvers. EDGs are also subject to special compositional rules."
icon: "network-wired" --- @@ -13,8 +13,10 @@ EDG Root fields must define their respective event directive and a valid respons | Query | @edfs__natsRequest | A non-nullable entity object | | Mutation | @edfs__natsPublish | `edfs__PublishResult!` | | | @edfs__kafkaPublish | `edfs__PublishResult!` | +| | @edfs__redisPublish | `edfs__PublishResult!` | | Subscription | @edfs__natsSubscribe | A non-nullable entity object | | | @edfs__kafkaSubscribe | A non-nullable entity object | +| | @edfs__redisSubscribe | A non-nullable entity object | Note that the `edfs__NatsStreamConfiguration` input object must _always_ be defined to satisfy the `@edfs__natsSubscribe` directive: diff --git a/docs/federation/event-driven-federated-subscriptions/redis.mdx b/docs/federation/event-driven-federated-subscriptions/redis.mdx new file mode 100644 index 00000000..bc73a444 --- /dev/null +++ b/docs/federation/event-driven-federated-subscriptions/redis.mdx @@ -0,0 +1,48 @@ +--- +title: "Redis" +icon: "sitemap" +--- + +## Definitions + + + The `providerId` argument, including the default value "default", _must_ correspond to an equivalent property in the _events.providers.redis_ entry of the router config.yaml. + + +### @edfs__redisPublish + +```js +directive @edfs__redisPublish( + channel: String!, + providerId: String! = "default" +) on FIELD_DEFINITION + +type edfs__PublishResult { + success: Boolean! +} +``` + +| Argument name | Type | Value | +| ------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------- | +| channel | String\! | The event channel. | +| providerId | String\! | The provider ID, which identifies the connection in the router config.yaml.
If unsupplied, the provider ID "default" will be used. | + +### @edfs__redisSubscribe + +```js +directive @edfs__redisSubscribe( + channels: [String!]!, + providerId: String! = "default" +) on FIELD_DEFINITION +``` + +| Argument name | Type | Value | +| ------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| channels | [String\!]\! | The event channels. It is possible to subscribe to multiple channels.
Also, subscriptions use `PSUBSCRIBE`, so pattern matching can be used, as described in the Redis documentation: https://redis.io/docs/latest/commands/psubscribe/. | +| providerId | String\! | The provider ID, which identifies the connection in the router config.yaml.
If unsupplied, the provider ID "default" will be used. | + + + + + + \ No newline at end of file diff --git a/docs/image(1).png b/docs/image(1).png new file mode 100644 index 00000000..39132c8b Binary files /dev/null and b/docs/image(1).png differ diff --git a/docs/image-4.png b/docs/image-4.png new file mode 100644 index 00000000..9e4c8bc1 Binary files /dev/null and b/docs/image-4.png differ diff --git a/docs/image.png b/docs/image.png new file mode 100644 index 00000000..3693ce50 Binary files /dev/null and b/docs/image.png differ diff --git a/docs/images/image.png b/docs/images/image.png new file mode 100644 index 00000000..43ead4c7 Binary files /dev/null and b/docs/images/image.png differ diff --git a/docs/router/configuration.mdx b/docs/router/configuration.mdx index fc52ea0c..004bfc39 100644 --- a/docs/router/configuration.mdx +++ b/docs/router/configuration.mdx @@ -105,8 +105,7 @@ The following sections describe each configuration in detail with all available Intervals, timeouts, and delays are specified in Go [duration](https://pkg.go.dev/maze.io/x/duration#ParseDuration) syntax e.g 1s, 5m or 1h. -Sizes can be specified in 2MB, 1mib. - + Sizes can be specified in 2MB, 1mib. | Environment Variable | YAML | Required | Description | Default Value | @@ -187,9 +186,9 @@ watch_config: Hot reloading has some limitations: - - Changes to the `watch_config` section are not hot-reloaded. - - Changes to flags and environment variables are not possible. - - `watch_config` based reloads are based on the filesystem's modification time, edits that somehow circumvent this mechanism will not trigger a reload. + - Changes to the `watch_config` section are not hot-reloaded. + - Changes to flags and environment variables are not possible. + - `watch_config` based reloads are based on the filesystem's modification time, edits that somehow circumvent this mechanism will not trigger a reload. 
## Access Logs @@ -292,17 +291,17 @@ graph: The Model Context Protocol (MCP) server allows AI models to discover and interact with your GraphQL API in a secure way. -| Environment Variable | YAML | Required | Description | Default Value | -| ------------------------------- | ------------------------------- | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------- | -| MCP_ENABLED | mcp.enabled | | Enable or disable the MCP server | false | -| MCP_SERVER_LISTEN_ADDR | mcp.server.listen_addr | | The address and port where the MCP server will listen for requests | localhost:5025 | -| MCP_SERVER_BASE_URL | mcp.server.base_url | | The base URL of the MCP server. This is the URL advertised to the LLM clients when SSE is used as primary transport. | - | -| MCP_ROUTER_URL | mcp.router_url | | Custom URL to use for the router GraphQL endpoint in MCP. Use this when your router is behind a proxy. Purely metadata for AI model. | - | -| MCP_STORAGE_PROVIDER_ID | mcp.storage.provider_id | | The ID of a storage provider to use for loading GraphQL operations. Only file_system providers are supported. | - | -| MCP_GRAPH_NAME | mcp.graph_name | | The name of the graph to be used by the MCP server | mygraph | -| MCP_EXCLUDE_MUTATIONS | mcp.exclude_mutations | | Whether to exclude mutation operations from being exposed | false | -| MCP_ENABLE_ARBITRARY_OPERATIONS | mcp.enable_arbitrary_operations | | Whether to allow arbitrary GraphQL operations to be executed. Security risk: Should only be enabled in secure, internal environments. | false | -| MCP_EXPOSE_SCHEMA | mcp.expose_schema | | Whether to expose the full GraphQL schema. Security risk: Should only be enabled in secure, internal environments. 
| false | +| Environment Variable | YAML | Required | Description | Default Value | +| ------------------------------- | ------------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------- | +| MCP_ENABLED | mcp.enabled | | Enable or disable the MCP server | false | +| MCP_SERVER_LISTEN_ADDR | mcp.server.listen_addr | | The address and port where the MCP server will listen for requests | localhost:5025 | +| MCP_SERVER_BASE_URL | mcp.server.base_url | | The base URL of the MCP server. This is the URL advertised to the LLM clients when SSE is used as primary transport. | - | +| MCP_ROUTER_URL | mcp.router_url | | Custom URL to use for the router GraphQL endpoint in MCP. Use this when your router is behind a proxy. Purely metadata for AI model. | - | +| MCP_STORAGE_PROVIDER_ID | mcp.storage.provider_id | | The ID of a storage provider to use for loading GraphQL operations. Only file_system providers are supported. | - | +| MCP_GRAPH_NAME | mcp.graph_name | | The name of the graph to be used by the MCP server | mygraph | +| MCP_EXCLUDE_MUTATIONS | mcp.exclude_mutations | | Whether to exclude mutation operations from being exposed | false | +| MCP_ENABLE_ARBITRARY_OPERATIONS | mcp.enable_arbitrary_operations | | Whether to allow arbitrary GraphQL operations to be executed. Security risk: Should only be enabled in secure, internal environments. | false | +| MCP_EXPOSE_SCHEMA | mcp.expose_schema | | Whether to expose the full GraphQL schema. Security risk: Should only be enabled in secure, internal environments. | false | Example YAML config: @@ -818,10 +817,10 @@ modules: The configuration for the router gRPC plugins. To learn more about the gRPC plugins, see the [gRPC plugins documentation](/router/plugins). 
-| Environment Variable | YAML | Required | Description | Default Value | -| -------------------- | ------- | ---------------------- | ----------------------------------------------------------------------------------- | ------------- | -| PLUGINS_ENABLED | enabled | | Enable the router plugins. If the value is true, the router plugins are enabled. | false | -| PLUGINS_PATH | path | | The path to the plugins directory. The plugins directory is used to load the plugins.| plugins | +| Environment Variable | YAML | Required | Description | Default Value | +| -------------------- | ------- | ---------------------- | ------------------------------------------------------------------------------------- | ------------- | +| PLUGINS_ENABLED | enabled | | Enable the router plugins. If the value is true, the router plugins are enabled. | false | +| PLUGINS_PATH | path | | The path to the plugins directory. The plugins directory is used to load the plugins. | plugins | ### Example YAML config: @@ -832,6 +831,7 @@ plugins: enabled: true path: "plugins" ``` + ## Headers Configure Header propagation rules for all Subgraphs or individual Subgraphs by name. @@ -1540,7 +1540,7 @@ cdn: The Events section lets you define Event Sources for [Event-Driven Federated Subscriptions (EDFS)](/router/event-driven-federated-subscriptions-edfs). -We support NATS and Kafka as event bus provider. +We support NATS, Kafka, and Redis as event bus providers.
@@ -1567,15 +1567,19 @@ events: username: "username" brokers: - "localhost:9092" + redis: + - id: my-redis + urls: + - "redis://localhost:6379" ``` ### Provider -| Environment Variable | YAML | Required | Description | Default Value | -| -------------------- | --------- | --------------------------------------------- | ------------------- | ------------- | -| | providers | | one of: nats, kafka | | +| Environment Variable | YAML | Required | Description | Default Value | +| -------------------- | --------- | --------------------------------------------- | -------------------------- | ------------- | +| | providers | | one of: nats, kafka, redis | | ### NATS Provider @@ -1602,7 +1606,12 @@ events: | | tls | | TLS configuration for the Kafka provider. If enabled, it uses SystemCertPool for RootCAs by default. | | | | tls.enabled | | Enable the TLS. | | -### Nats Provider +### Redis Provider + +| Environment Variable | YAML | Required | Description | Default Value | +| :------------------- | :--- | :------- | :--------------------------------------------------------------------------------------- | :------------ | +| | id | | The ID of the provider. This has to match the ID specified in the subgraph schema. | [] | +| | urls | | A list of Redis instance URLs, e.g. `redis://localhost:6379/2` | | ## Router Engine Configuration @@ -1632,12 +1641,13 @@ Configure the GraphQL Execution Engine of the Router. | ENGINE_ENABLE_VALIDATION_CACHE | enable_validation_cache | | Enable the validation cache. The validation cache is used to cache results of validating GraphQL Operations. | true | | ENGINE_VALIDATION_CACHE_SIZE | validation_cache_size | | The size of the validation cache. | 1024 | | ENGINE_DISABLE_EXPOSING_VARIABLES_CONTENT_ON_VALIDATION_ERROR | disable_exposing_variables_content_on_validation_error | | Disables exposing the variables content in the error response. This is useful to avoid leaking sensitive information in the error response.
| false | -| ENGINE_ENABLE_SUBGRAPH_FETCH_OPERATION_NAME | enable_subgraph_fetch_operation_name | | Enable appending the operation name to subgraph fetches. This will ensure that the operation name will be included in the corresponding subgraph requests using the following format: $operationName\_\_$subgraphName\_\_$sequenceID. | true | +| ENGINE_ENABLE_SUBGRAPH_FETCH_OPERATION_NAME | enable_subgraph_fetch_operation_name | | Enable appending the operation name to subgraph fetches. This will ensure that the operation name will be included in the corresponding subgraph requests using the following format: $operationName\_\_$subgraphName\_\_$sequenceID. | true | | ENGINE_SUBSCRIPTION_FETCH_TIMEOUT | subscription_fetch_timeout | | The maximum time a subscription fetch can take before it is considered timed out. | 30s | ### Example YAML config: + ```yaml config.yaml version: "1" @@ -1665,7 +1675,8 @@ engine: disable_exposing_variables_content_on_validation_error: false enable_subgraph_fetch_operation_name: true subscription_fetch_timeout: 30s -```` +``` + ### Debug Configuration @@ -1712,7 +1723,7 @@ engine: enable_normalization_cache_response_header: false always_include_query_plan: false always_skip_loader: false -```` +``` @@ -1972,7 +1983,7 @@ This configuration is used to enable full compatibility with Apollo Federation, ### Apollo Compatibility Value Completion -Invalid \_\_typename values will be returned in extensions.valueCompletion instead of errors. +Invalid \_\_typename values will be returned in extensions.valueCompletion instead of errors.
### Apollo Compatibility Truncate Floats @@ -2104,4 +2115,4 @@ cache_warmup: path: "./cache-warmer/operations" ``` - + \ No newline at end of file diff --git a/docs/router/event-driven-federated-subscriptions-edfs.mdx b/docs/router/event-driven-federated-subscriptions-edfs.mdx index 25dfd2b7..9e12b63e 100644 --- a/docs/router/event-driven-federated-subscriptions-edfs.mdx +++ b/docs/router/event-driven-federated-subscriptions-edfs.mdx @@ -1,6 +1,6 @@ --- title: "Event-Driven Federated Subscriptions (EDFS)" -description: "EDFS combines the power of GraphQL Federation and Event-Driven Architecture (Kafka, NATS) to update a user GraphQL Subscription after an event occurs in your system." +description: "EDFS combines the power of GraphQL Federation and Event-Driven Architecture (Kafka, NATS, Redis) to update a user GraphQL Subscription after an event occurs in your system." icon: "circle-info" sidebarTitle: "Overview" --- @@ -40,14 +40,17 @@ Furthermore, classic Subscriptions with Federation are quite expensive when it c Enter Event-Driven Federated Subscriptions, a simple way to scale Federated Subscriptions in a resource-efficient manner. -EDFS supports two event providers: +EDFS supports three event providers: - + + + + @@ -86,6 +89,17 @@ directive @edfs__kafkaSubscribe( providerId: String! = "default" ) on FIELD_DEFINITION +# Redis integration +directive @edfs__redisPublish( + channel: String!, + providerId: String! = "default" +) on FIELD_DEFINITION + +directive @edfs__redisSubscribe( + channels: [String!]!, + providerId: String! = "default" +) on FIELD_DEFINITION + # OpenFederation directive to filter subscription events directive @openfed__subscriptionFilter( condition: openfed__SubscriptionFilterCondition! @@ -96,9 +110,9 @@ Let's explain each directive in detail: The `@edfs__natsRequest` directive is a specific NATS directive to extend a Graph through an Event Source. It makes a request to a NATS subject and waits synchronously of the response. 
Under the hood it uses [Request/Reply ](https://docs.nats.io/nats-concepts/core-nats/reqreply)semantics from NATS. -The `@edfs__natsPublish` and `@edfs__kafkaPublish` directive allows you to publish an event through a Mutation. +The `@edfs__natsPublish`, `@edfs__kafkaPublish` and `@edfs__redisPublish` directives allow you to publish an event through a Mutation. -Using the `@edfs__natsSubscribe` and `@edfs__kafkaSubscribe` directives, you can create a Subscription to the corresponding message bus. By default, both provider implementations are stateless, meaning every client receives the same events in a broadcast fashion. This behavior can be adjusted. NATS allows you to create a [consumer group](https://docs.nats.io/nats-concepts/jetstream/consumers), resulting in multiple independent streams of the subject, where each client can consume events at their own pace. +Using the `@edfs__natsSubscribe`, `@edfs__kafkaSubscribe` and `@edfs__redisSubscribe` directives, you can create a Subscription to the corresponding message bus. By default, all provider implementations are stateless, meaning every client receives the same events in a broadcast fashion. This behavior can be adjusted. NATS allows you to create a [consumer group](https://docs.nats.io/nats-concepts/jetstream/consumers), resulting in multiple independent streams of the subject, where each client can consume events at their own pace. The `@openfed__subscriptionFilter` directive allows you to filter subscription messages based on specified conditions. For more information see [Subscription Filter](/router/event-driven-federated-subscriptions-edfs#subscription-filter). @@ -106,7 +120,7 @@ An Event-Driven Subgraph does not need to be implemented. It is simply a Subgrap ## Prerequisites -To use EDFS, you need to have an Event Source running and connected to the Router. Currently, we support NATS and Kafka. For simplicity, NATS is used to explain the examples.
+To use EDFS, you need to have an Event Source running and connected to the Router. Currently, we support NATS, Kafka, and Redis. For simplicity, NATS is used to explain the examples. To get started, run a NATS instance and add the following configuration to your `config.yaml` Router Configuration: @@ -188,7 +202,7 @@ type Employee @key(fields: "id", resolvable: false) { #### The "subjects" Argument -The subjects/topics argument of all events Directives allows you to use templating Syntax to use an argument to render the topic. +The subjects/topics/channels argument of all event directives allows you to use templating syntax to render the topic from an argument. Given the following Schema: @@ -233,7 +247,7 @@ Once the initial result is coming back from the "Event Subgraph", the Router is ### Publish -The `@edfs_natsPublish` and `@edfs_kafkaPublish` directive sends a JSON representation of all arguments, including arguments being used to render the topic, to the rendered topic. Fields using the `eventsPublish` directive MUST return the type `PublishEventResult` with one single field `success` of type `Boolean!`, indicating whether publishing the event was successful or not. +The `@edfs__natsPublish`, `@edfs__kafkaPublish` and `@edfs__redisPublish` directives send a JSON representation of all arguments, including arguments being used to render the topic, to the rendered topic. Fields using the `eventsPublish` directive MUST return the type `PublishEventResult` with one single field `success` of type `Boolean!`, indicating whether publishing the event was successful or not.
Given that we send the following Mutation: diff --git a/docs/router/event-driven-federated-subscriptions-edfs/kafka.mdx b/docs/router/event-driven-federated-subscriptions-edfs/kafka.mdx index c763d171..990f7e7e 100644 --- a/docs/router/event-driven-federated-subscriptions-edfs/kafka.mdx +++ b/docs/router/event-driven-federated-subscriptions-edfs/kafka.mdx @@ -5,16 +5,16 @@ descripton: "Kafka event provider support for EDFS" --- - + ![](/images/router/event-driven-federated-subscriptions-edfs/image-3.png) ## Minimum requirements -|Package|Minimum version| -|---|---| -|controlplane|0.88.3| -|router|0.88.0| -|wgc|0.55.0| +| Package | Minimum version | +| ------------ | --------------- | +| controlplane | 0.88.3 | +| router | 0.88.0 | +| wgc | 0.55.0 | ## Full schema example @@ -97,20 +97,22 @@ wgc subgraph publish edg --namespace default --schema eedg.graphqls Based on the example above, you will need a compatible router configuration. - ```yaml config.yaml - events: - providers: - kafka: - - id: my-kafka # Needs to match with the providerID in the directive - tls: - enabled: true - authentication: - sasl_plain: - password: "password" - username: "username" - brokers: - - "localhost:9092" - ``` + +```yaml config.yaml +events: + providers: + kafka: + - id: my-kafka # Needs to match with the providerID in the directive + tls: + enabled: true + authentication: + sasl_plain: + password: "password" + username: "username" + brokers: + - "localhost:9092" +``` + ## Example Query @@ -132,5 +134,5 @@ subscription { ## System diagram - - + ![](/images/router/event-driven-federated-subscriptions-edfs/image-4.png) + \ No newline at end of file diff --git a/docs/router/event-driven-federated-subscriptions-edfs/redis.mdx b/docs/router/event-driven-federated-subscriptions-edfs/redis.mdx new file mode 100644 index 00000000..aa413cb9 --- /dev/null +++ b/docs/router/event-driven-federated-subscriptions-edfs/redis.mdx @@ -0,0 +1,134 @@ +--- +title: "Redis" +icon: "sitemap" +description:
"Redis event provider support for EDFS" +--- + + + + ![](/docs/image\(1\).png) + + +## Minimum requirements + +| Package | Minimum version | +| ------------ | --------------- | +| controlplane | - | +| router | - | +| wgc | - | + +## Full schema example + +Here is a comprehensive example of how to use Redis with EDFS. This guide covers publish, subscribe, and the filter directive. All examples can be modified to suit your specific needs. The schema directives and `edfs__*` types belong to the EDFS schema contract and must not be modified. + +```js +# EDFS + +directive @edfs__redisPublish(channel: String!, providerId: String! = "default") on FIELD_DEFINITION +directive @edfs__redisSubscribe(channels: [String!]!, providerId: String! = "default") on FIELD_DEFINITION + +# OpenFederation + +directive @openfed__subscriptionFilter(condition: openfed__SubscriptionFilterCondition!) on FIELD_DEFINITION + +scalar openfed__SubscriptionFilterValue + +input openfed__SubscriptionFieldCondition { + fieldPath: String! + values: [openfed__SubscriptionFilterValue]! +} + +input openfed__SubscriptionFilterCondition { + AND: [openfed__SubscriptionFilterCondition!] + IN: openfed__SubscriptionFieldCondition + NOT: openfed__SubscriptionFilterCondition + OR: [openfed__SubscriptionFilterCondition!] +} + +# Custom + +input UpdateEmployeeInput { + name: String + email: String +} + +type Mutation { + updateEmployeeMyRedis(employeeID: Int!, update: UpdateEmployeeInput!): edfs__PublishResult! @edfs__redisPublish(channel: "employeeUpdated", providerId: "my-redis") +} + +type Subscription { + filteredEmployeeUpdatedMyRedis(employeeID: ID!): Employee! + @edfs__redisSubscribe(channels: ["employeeUpdated", "employeeUpdatedTwo"], providerId: "my-redis") + @openfed__subscriptionFilter(condition: { IN: { fieldPath: "id", values: [1, 3, 4, 7, 11] } }) + filteredEmployeeUpdatedMyRedisWithListFieldArguments(firstIds: [ID!]!, secondIds: [ID!]!): Employee!
+ @edfs__redisSubscribe(channels: ["employeeUpdated", "employeeUpdatedTwo"], providerId: "my-redis") + filteredEmployeeUpdatedMyRedisWithNestedListFieldArgument(input: RedisInput!): Employee! + @edfs__redisSubscribe(channels: ["employeeUpdated", "employeeUpdatedTwo"], providerId: "my-redis") + @openfed__subscriptionFilter(condition: { + OR: [ + { IN: { fieldPath: "id", values: ["{{ args.input.ids }}"] } }, + { IN: { fieldPath: "id", values: [1] } }, + ], + }) +} + +input RedisInput { + ids: [Int!]! +} + +# Subgraph schema + +type Employee @key(fields: "id", resolvable: false) { + id: Int! @external +} + +type edfs__PublishResult { + success: Boolean! +} +``` + +You can create the above Event-Driven Graph (EDG—an abstract subgraph) with the following [wgc](/cli/intro) command: + +```js +wgc subgraph publish edg --namespace default --schema eedg.graphqls +``` + +## Router configuration + +Based on the example above, you will need a compatible router configuration. + + + +```yaml config.yaml +events: + providers: + redis: + - id: my-redis # Needs to match with the providerID in the directive + urls: + - "redis://localhost:6379/" +``` + + + +## Example Query + +In the example query below, one or more subgraphs have been implemented alongside the Event-Driven Graph to resolve any other fields defined on `Employee`, e.g., `tag` and `details.surname`. + +```js +subscription { + filteredEmployeeUpdatedMyRedis(employeeID: 1) { + id # resolved by the Event-Driven Graph (through the event) + tag # resolved by another subgraph + details { # resolved by another subgraph + surname + } + } +} +``` + +## System diagram + + + ![](/docs/image.png) + \ No newline at end of file
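Reviewer note on the templating syntax mentioned in the EDFS overview changes above (the subjects/topics/channels argument accepting placeholders such as `{{ args.employeeID }}`): the following is a minimal, hypothetical sketch of how that substitution behaves, for illustration only. The `render_channel` helper is not part of the router; the router performs this rendering internally.

```python
import re

def render_channel(template: str, args: dict) -> str:
    # Replace each "{{ args.<name> }}" placeholder with the matching
    # field-argument value; non-placeholder text passes through unchanged.
    return re.sub(
        r"\{\{\s*args\.(\w+)\s*\}\}",
        lambda m: str(args[m.group(1)]),
        template,
    )

# A channel template like those used with @edfs__redisSubscribe:
print(render_channel("employeeUpdated.{{ args.employeeID }}", {"employeeID": 3}))
# employeeUpdated.3
```

Templates without placeholders, like the plain `employeeUpdated` channel in the Redis examples, are left as-is by this substitution.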