From 77e22fc7a1aff935af76a43b3090b0355e258b5e Mon Sep 17 00:00:00 2001 From: Dhanalakshmi N Date: Wed, 27 Oct 2021 18:54:14 +0530 Subject: [PATCH] Update readme file for the Keycloak setup and keycloak user details --- README.md | 274 ++++++++++-------- .../domain/KeycloakSynchronizationSaga.java | 16 + 2 files changed, 174 insertions(+), 116 deletions(-) diff --git a/README.md b/README.md index 56f24a1a..c208b089 100644 --- a/README.md +++ b/README.md @@ -1,55 +1,56 @@ # Welcome! + [![Build status](https://badge.buildkite.com/b44f55806fca0e349ecc8d470fe0fdcea8f49c1375f33e86d5.svg?branch=main)](https://buildkite.com/everest-engineering/lhotse) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=everest-engineering_lhotse&metric=alert_status)](https://sonarcloud.io/dashboard?id=everest-engineering_lhotse) This is Lhotse, a starter kit for writing event sourced web applications following domain driven design principles. It is based on [Spring Boot](https://spring.io/projects/spring-boot), [Axon](https://axoniq.io/) and [Hazelcast](https://hazelcast.com/). 
-Whether you're starting a new project or refactoring an existing one, you should consider this project if you're seeking: - - * horizontal scalability via self forming clusters with distributed command processing - * crypto-shredding support for your event log to address privacy regulations such as the GDPR - * Axon's awesome event sourcing and [CQRS](https://martinfowler.com/bliki/CQRS.html) features - * deduplicating filestore abstractions for a variety of backing stores such as S3 buckets and Mongo GridFS - * role based authorisation - * Spring Boot features such as OAuth ready authorisation and Prometheus integration +Whether you're starting a new project or refactoring an existing one, you should consider this project if you're seeking: + +- horizontal scalability via self forming clusters with distributed command processing +- crypto-shredding support for your event log to address privacy regulations such as the GDPR +- Axon's awesome event sourcing and [CQRS](https://martinfowler.com/bliki/CQRS.html) features +- deduplicating filestore abstractions for a variety of backing stores such as S3 buckets and Mongo GridFS +- role based authorisation +- Spring Boot features such as OAuth ready authorisation and Prometheus integration -... without the time and effort involved in starting a new project from scratch. - -The only end user functionality provided out of the box is basic support for creating organisations and users. The -sample code demonstrates end-to-end command handling and event processing flows from API endpoints down to projections. +... without the time and effort involved in starting a new project from scratch. +The only end user functionality provided out of the box is basic support for creating organisations and users. The +sample code demonstrates end-to-end command handling and event processing flows from API endpoints down to projections. 
# Table of Contents
+
- [Tooling](#tooling)
-  - [IntelliJ configuration](#intellij-configuration)
-  - [Building](#building)
-  - [Semantic versioning](#semantic-versioning)
-  - [Jupyter notebook](#jupyter-notebook)
-  - [Swagger documentation](#swagger-documentation)
-  - [Checkstyle and PMD configuration](#checkstyle-and-pmd-configuration)
+  - [IntelliJ configuration](#intellij-configuration)
+  - [Building](#building)
+  - [Semantic versioning](#semantic-versioning)
+  - [Jupyter notebook](#jupyter-notebook)
+  - [Swagger documentation](#swagger-documentation)
+  - [Checkstyle and PMD configuration](#checkstyle-and-pmd-configuration)
- [Features](#features)
-  - [Axon: DDD and event sourcing](#axon-ddd-and-event-sourcing)
-  - [Distributed command handling](#distributed-command-handling)
-  - [Command validation](#command-validation)
-  - [Event processing](#event-processing)
-  - [Event replays](#event-replays)
-  - [Crypto shredding](#crypto-shredding)
-  - [Security support](#security-support)
-  - [User and Authentication Context](#user-and-authentication-context)
-  - [Endpoint access control](#endpoint-access-control)
-  - [File support](#file-support)
-  - [Configuring Mongo GridFS](#configuring-mongo-gridfs)
-  - [Configuring AWS S3](#configuring-aws-s3)
-  - [Media support](#media-support)
-  - [ETag HTTP headers](#etag-http-headers)
+  - [Axon: DDD and event sourcing](#axon-ddd-and-event-sourcing)
+  - [Distributed command handling](#distributed-command-handling)
+  - [Command validation](#command-validation)
+  - [Event processing](#event-processing)
+  - [Event replays](#event-replays)
+  - [Crypto shredding](#crypto-shredding)
+  - [Security and access control](#security-and-access-control)
+  - [User and Authentication Token](#user-and-authentication-token)
+  - [Endpoint access control](#endpoint-access-control)
+  - [File support](#file-support)
+  - [Configuring Mongo GridFS](#configuring-mongo-gridfs)
+  - [Configuring AWS S3](#configuring-aws-s3)
+  - [Media 
support](#media-support)
+  - [ETag HTTP headers](#etag-http-headers)
- [Project Info](#project-info)
-  - [Maintainers](#maintainers)
-  - [Contributing](#contributing)
-  - [License](#license)
-
+  - [Maintainers](#maintainers)
+  - [Contributing](#contributing)
+  - [License](#license)

# Tooling
+
This project uses [Java 11](https://openjdk.java.net/projects/jdk/11/).

The container system is [Docker](https://www.docker.com/).
@@ -59,21 +60,49 @@ The container system is [Docker](https://www.docker.com/).
The build system is [Gradle](https://gradle.org/) and it supports all of the above.

## IntelliJ configuration
+
The [Lombok plugin](https://plugins.jetbrains.com/plugin/6317-lombok/) is required for IntelliJ, else the code
generated by the Lombok annotations will not be visible (and the project will be littered with red squiggle line
errors).

## Building
+
To build the entire application, including running unit and functional tests:

-`./gradlew build`
+`./gradlew setupTheKeycloakTestServer && ./gradlew build`

(Note that functional tests share the same port number for embedded database as for the containerised database, if
-tests fail try running `./gradlew composeDown` first).
+tests fail try running `./gradlew composeDown` first. Free up the Keycloak test server's port as well).
+
+Create a `.env` file in the project root directory and set up the Keycloak environment variables:
+
+```
+KEYCLOAK_SERVER_PORT=8180
+KEYCLOAK_USER=admin@everest.engineering
+KEYCLOAK_PASSWORD=******
+```
+
+(Note: the above values must match the Keycloak server details specified in the `application.properties` file and in the `build.gradle` file of the launcher module.
+The Keycloak test server also uses the same credentials for authentication.)

-Start up containers for Postgres and MongoDB:
+Start up containers for Postgres, MongoDB and Keycloak:

`docker-compose up`

+This project uses Keycloak for authentication and authorization. 
`docker-compose up` will run the Keycloak container on the `KEYCLOAK_SERVER_PORT` specified in the `.env` file.
+
+#### Client secret
+
+We need to get the client secret from the Keycloak server and configure it as the `keycloak.credentials.secret` property in `application.properties`.
+
+Open http://localhost:8180/auth and log in using `KEYCLOAK_USER` and `KEYCLOAK_PASSWORD`.
+
+Select the `default` realm and go to the `Clients` tab.
+
+Select `default-client` under the Clients tab.
+
+Go to the `Credentials` tab and regenerate the client secret if required.
+Copy the `secret` value and add it to `application.properties` as mentioned above.
+
To run the application server using Gradle:

`./gradlew bootRun`
@@ -99,12 +128,14 @@ To run the dependencies license plugin, which will generate a report at `build/r
`./gradlew generateLicenseReport`

## Semantic versioning
+
[Semantic versioning](https://semver.org/) is automatically applied using git tags. Simply create a tag of, say,
`1.2.0` and Gradle will build a JAR package (or a docker container, if you wish) tagged with version `1.2.0-0` where
`-0` is the number of commits since the `1.2.0` tag was applied. Each subsequent commit will increment the version
(`1.2.0-1`, `1.2.0-2`, ... `1.2.0-N`) until the next release is tagged.

## Jupyter notebook
+
An initial [Jupyter notebook](https://jupyter.org/) can be found in 'doc/notebook'. It acts as an interactive
reference for the API endpoints and should be in your development workflow. Note that this requires
[IRuby](https://github.com/sciruby/iruby).
@@ -114,77 +145,81 @@ Jupyter notebook can be run as a Docker container:
`docker-compose -f doc/notebook/docker-compose.yml up`.

## Swagger documentation
+
Swagger API documentation is automatically generated by
[Springfox](https://springfox.github.io/springfox/docs/current/). 
API documentation is accessible when running the application locally by visiting -[Swagger UI](http://localhost:8080/swagger-ui/index.html). Default credentials for logging in as an administrator can +[Swagger UI](http://localhost:8080/swagger-ui/index.html). Default credentials for logging in as an administrator can be found in `application.properties` along with the client ID and client secret. Functional tests generate a Swagger JSON API definition at `./launcher/build/web-app-api.json` ## Checkstyle and PMD configuration + [PMD](https://pmd.github.io/) and [Checkstyle](https://checkstyle.org/) quality checks are automatically applied to all sub-projects. - # Features ## Axon: DDD and event sourcing -Previously known as Axon Framework, [Axon](https://axoniq.io/) is a framework for implementing -[domain driven design](https://dddcommunity.org/learning-ddd/what_is_ddd/) using -[event sourced](https://martinfowler.com/eaaDev/EventSourcing.html) -[aggregates](https://www.martinfowler.com/bliki/DDD_Aggregate.html) and + +Previously known as Axon Framework, [Axon](https://axoniq.io/) is a framework for implementing +[domain driven design](https://dddcommunity.org/learning-ddd/what_is_ddd/) using +[event sourced](https://martinfowler.com/eaaDev/EventSourcing.html) +[aggregates](https://www.martinfowler.com/bliki/DDD_Aggregate.html) and [CQRS](https://www.martinfowler.com/bliki/CQRS.html). -DDD is, at its core, about __linguistics__. Establishing a ubiquitous language helps identify sources of overlap or -tension in conceptual understanding that __may be__ indicative of a separation of concern in a system. Rather than +DDD is, at its core, about **linguistics**. Establishing a ubiquitous language helps identify sources of overlap or +tension in conceptual understanding that **may be** indicative of a separation of concern in a system. 
Rather than attempting to model a domain in intricate detail inside a common model, DDD places great emphasis on identifying these boundaries in order to define [bounded contexts](https://www.martinfowler.com/bliki/BoundedContext.html). These reduce -complexity of the system by avoiding [anemic domain models](https://www.martinfowler.com/bliki/AnemicDomainModel.html) -due to a slow migration of complex domain logic from within the domain model to helper classes on the periphery as the +complexity of the system by avoiding [anemic domain models](https://www.martinfowler.com/bliki/AnemicDomainModel.html) +due to a slow migration of complex domain logic from within the domain model to helper classes on the periphery as the system evolves. -Event sourcing captures the activities of a business in an event log, an append-only history of every important -__business action__ that has ever been taken by users or by the system itself. Events are mapped to an arbitrary number +Event sourcing captures the activities of a business in an event log, an append-only history of every important +**business action** that has ever been taken by users or by the system itself. Events are mapped to an arbitrary number of projections for use by the query side of the system. Being able to replay events offers several significant benefits: -* Projections can be optimised for reading by denormalising data -* Events can be __upcasted__. That is, events are marked with a revision that allows them to be transformed to an updated +- Projections can be optimised for reading by denormalising data +- Events can be **upcasted**. That is, events are marked with a revision that allows them to be transformed to an updated version of the same event. This protects developers from creating significant errors in users' data due to, for example, accidentally transposing two fields within a command or event; -* Projections can be updated with new information that was either captured by or derived from events. 
New business - requirements can be met and projections generated such that historical user actions can be incorporated as new +- Projections can be updated with new information that was either captured by or derived from events. New business + requirements can be met and projections generated such that historical user actions can be incorporated as new features are rolled out; Axon provides the event sourcing framework. User actions of interest to a business (typically anything that modifies data) are dispatched to command handlers that, after validation, emit events. - + ## Distributed command handling + Part of Axon's appeal is the ability to horizontally scale an application using Axon Server for command and event -dispatching, and event log persistence. While well suited to applications that require massive scale, deploying Axon -Server introduces additional maintenance and configuration overhead. +dispatching, and event log persistence. While well suited to applications that require massive scale, deploying Axon +Server introduces additional maintenance and configuration overhead. -We have taken a different approach that provides a good starting point for small to medium applications while still +We have taken a different approach that provides a good starting point for small to medium applications while still allowing migration to Axon Server in the future. Horizontal scalability is achieved via a [command distribution extension](https://github.com/everest-engineering/axon-command-distribution-extension) that wraps the standard Axon command gateway with a [Hazelcast](https://hazelcast.com/) based distributed command gateway. -An arbitrary number of application instances started together, either on the same network or within a Kubernetes cluster, -will be automatically discovered to form a cluster. 
+An arbitrary number of application instances started together, either on the same network or within a Kubernetes cluster, +will be automatically discovered to form a cluster. -Commands dispatched through the Hazelcast command gateway are deterministically routed to a single application instance +Commands dispatched through the Hazelcast command gateway are deterministically routed to a single application instance which, based on the aggregate identifier, has ownership of an aggregate for as long as that instance remains a member of the cluster. This clear aggregate ownership is vital to avoid a split brain scenario as aggregates are cached in memory -and are responsible for command validation. A split brain situation could arise if multiple unsynchronised copies were -to be distributed among cluster members. +and are responsible for command validation. A split brain situation could arise if multiple unsynchronised copies were +to be distributed among cluster members. Hazelcast will automatically reassign aggregate ownership if an application instance leaves the cluster due to a restart, network disconnection or other failure. -Events emitted by an aggregate are passed to the application instance's local event bus. Events are persisted to the +Events emitted by an aggregate are passed to the application instance's local event bus. Events are persisted to the event log by the instance handling the command. Subscribing event processing (the default in our configuration) guarantees that the same instance will be performing the event handling. ## Command validation + Commands represent user actions that may be rejected by the system. Events, however, represent historical events and cannot be rejected (though, they can be upcasted or ignored depending on circumstances). It is therefore vital that robust command validation is performed to protect the integrity of the system. @@ -195,7 +230,7 @@ overhead. 
There is a philosophical argument for defining aggregates such that all information required to validate commands is held by an aggregate in memory. In practice, however, more natural aggregates can be formed by allowing some validation -to be based on __projections__. We also know from experience that some validation will be shared among multiple aggregates. +to be based on **projections**. We also know from experience that some validation will be shared among multiple aggregates. The amount of testing required to verify all possible command failure situations tends to grow non-linearly as the number of checks that are performed inside an aggregate grows. @@ -207,6 +242,7 @@ detect validators at application start up and register them with a command inter command handler method being called. This significantly reduces testing effort and human error. ## Event processing + Axon provides two types of event processors, [subscribing and tracking](https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors). @@ -230,6 +266,7 @@ make the system easier to reason about by defaulting to subscribing event proces processing group unless explicitly assigned elsewhere. ## Event replays + Event replaying takes the system back to a previous point in time in order to apply a different interpretation of what it means to process an event. @@ -239,82 +276,79 @@ system has scaled up. More advanced approaches are made possible by assigning ev groups and running a mixture of subscribing and tracking event processors. Advanced configuration opens up the possibility of: - * Replaying events into a new projection database while continuing to project to an existing one, making the replay - transparent to end users. The system can then be switched over to use the new projection while optionally continuing - to maintain the old one. 
- * Tracking event processors can be used to generate projections for new features that are not yet released to users - until the projections are ready for use. - * Processing groups allow replays to be limited to bounded contexts that are naturally isolated. +- Replaying events into a new projection database while continuing to project to an existing one, making the replay + transparent to end users. The system can then be switched over to use the new projection while optionally continuing + to maintain the old one. +- Tracking event processors can be used to generate projections for new features that are not yet released to users + until the projections are ready for use. +- Processing groups allow replays to be limited to bounded contexts that are naturally isolated. The starter kit comes with programmatic support for triggering replays. To perform a replay: - * disconnect the application from load balancers - * trigger a replay via a [Spring Boot Actuator](https://spring.io/guides/gs/actuator-service/) call to `/actuator/replay` - * monitor the state of replay via a Spring actuator endpoint - * reconnect the application to load balancers +- disconnect the application from load balancers +- trigger a replay via a [Spring Boot Actuator](https://spring.io/guides/gs/actuator-service/) call to `/actuator/replay` +- monitor the state of replay via a Spring actuator endpoint +- reconnect the application to load balancers Behind the scenes, replays are being executed by: - * shutting down [tracking event processors](https://axoniq.io/blog-overview/tracking-event-processors) (TEP) processing; - * clearing the tracking tokens in the Axon database; - * placing a marker event into the event log that will end replays; and - * resstarting TEP processing. 
+
+- shutting down [tracking event processors](https://axoniq.io/blog-overview/tracking-event-processors) (TEP) processing;
+- clearing the tracking tokens in the Axon database;
+- placing a marker event into the event log that will end replays; and
+- restarting TEP processing.

## Crypto shredding
-Crypto shredding is a technique for disabling access to sensitive information by discarding encryption keys. You might use
-this on the behest of a user or when retention is no longer justified in order to comply with the European Union's General
+
+Crypto shredding is a technique for disabling access to sensitive information by discarding encryption keys. You might use
+this at the behest of a user or when retention is no longer justified in order to comply with the European Union's General
Data Protection Regulation (GDPR) without compromising the append-only nature of your event log.

Documentation in the [crypto shredding](https://github.com/everest-engineering/axon-crypto-shredding-extension)
repository explains how it works, its limitations and an important caveat.

-## Security support
-The [security](https://github.com/everest-engineering/lhotse-security) module builds on
-[Spring Security OAuth](https://projects.spring.io/spring-security-oauth/docs/oauth2.html). Out of the box, it sets up
-both an __authorization server__ and a __resource server__ (the main application) that facilitate an authentication and
-authorisation workflow based on [OAuth2](https://oauth.net/2/). Stateless sessions using
-[JSON Web Tokens](https://jwt.io/) (JWT) makes it easy to extract microservices.
+## Security and access control
+
+[Keycloak](https://www.keycloak.org/) is an open-source identity and access management solution aimed at modern applications and services. It makes it easy to secure applications and services with little to no code. 
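+The Keycloak container started by `docker-compose up` (see Building, above) reads the `.env` values described earlier. As an illustration only (the authoritative definition lives in this project's own `docker-compose.yml`), the service entry looks roughly like:

```yaml
# Illustrative sketch only -- see docker-compose.yml for the real definition.
keycloak:
  image: jboss/keycloak
  ports:
    - "${KEYCLOAK_SERVER_PORT}:8080"   # host port comes from .env
  environment:
    KEYCLOAK_USER: ${KEYCLOAK_USER}
    KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
```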
-JWT tokens are issued by the authorization server which client applications include as part of the `Authorization`
-header for every API request. The main application -- the resource server in OAuth parlance -- uses a shared
-secret to validate each request and enforces role based authorisation.
+We are using Keycloak to manage users' authentication, authorization, and session management.

-Our initial set up has both authorisation and resource servers running together in a single application. A single
-hard-coded client, `web-app-ui`, is configured in the authorisation server to support the
-[password grant](https://oauth.net/2/grant-types/password/) approach to exchanging credentials. Front end applications
-need to specify this identify to perform authentication & authorisation on behalf of end users.
+Keycloak has three main concepts:

-If necessary, the authorisation server can be extracted into its own service to serve multiple resource servers. Third
-party OAuth2 providers can also be integrated with the resource server.
+- _Realm_: A realm secures and manages security metadata for a set of users, applications, and registered auth clients. By default, Keycloak provides a `master` realm; best practice is to reserve it for admin users, so we have created a new realm called `default` from which all the app users are managed.
+- _Client_: Clients are entities that can request authentication of a user within a realm. By default, Keycloak provides a few clients, but using a separate client is the best practice, so we have created `default-client` for the `default` realm. Using it, we can access the Keycloak APIs from outside its admin console.
+- _Role_: Roles identify a type or category of user. Keycloak often assigns access and permissions to specific roles rather than individual users for fine-grained access control. Currently, we use the `ADMIN` role for admin users. 
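+The realm roles and custom attributes described above surface as claims in the JWT access tokens that Keycloak issues. Below is a minimal sketch of pulling the claims out of a token's payload segment; the token here is hand-crafted for illustration, and real tokens must always be signature-verified before being trusted:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenClaimsSketch {

    // Recover the (unverified!) claims JSON from a JWT's middle segment.
    // A JWT is three base64url segments: header.payload.signature.
    static String decodePayload(String token) {
        String payload = token.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hand-crafted illustration of the kind of claims Keycloak issues;
        // the realm role and the organizationId custom claim are examples only.
        String claims = "{\"preferred_username\":\"admin@everest.engineering\","
            + "\"realm_access\":{\"roles\":[\"ADMIN\"]},"
            + "\"organizationId\":\"1234\"}";
        String token = "eyJhbGciOiJSUzI1NiJ9."
            + Base64.getUrlEncoder().withoutPadding().encodeToString(claims.getBytes(StandardCharsets.UTF_8))
            + ".signature";

        System.out.println(decodePayload(token));
    }
}
```

+Decoding like this is fine for inspecting what the server put in the token, but authorisation decisions must only ever be made after signature verification.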
-### User and Authentication Context
-An User object in the business domain often requires more attributes than the User object from spring security module.
-For an example, the starter kit's User object has an extra `organizationId` property. To bridge this difference for
-authentication context, a `AuthenticationContextProvider` bean is configured to get a domain User object from the
-authentication context. This domain User object can be injected into any controller handler methods so that any user
-properties can be easily checked for security purpose.
+_Note: Our app has some internal roles for managing users' access levels, but those roles have no relation to Keycloak roles._
+
+### User and Authentication Token
+
+A user object in the business domain often requires more attributes than a user object from the Keycloak authentication token.
+For example, the starter kit's user object has an extra `organizationId` property. To bridge this difference for
+authentication context, we add this information to the Keycloak authentication token as additional claims, which can be accessed in the same way as the token's other claims.

### Endpoint access control
+
Controller end-points are secured based on user roles, properties and entity permissions. Annotations are used to
configure the access control for each handler method. To reduce repetition and improve readability, a few
meta-annotations are created for common security configuration, e.g. `AdminOrAdminOfTargetOrganization`. There are
situations when user roles and properties are not sufficient to determine the access control. This is where the entity
-permission check comes in. An entity in this case is a representation of domain object in the application layer. It
+permission check comes in. An entity in this case is a representation of a domain object in the application layer. It
corresponds to at least one persistable object. 
For example, one `Organization` entity corresponds to one
-`PersistableOrganization`. To put it simply in the event sourcing context, it can be just considered as the projection.
+`PersistableOrganization`. To put it simply, in the event sourcing context it can be considered as just the projection.

-The entity permission check is specified within the security annotation and takes the form of `hasPermission(#entityId,
-'EntityClassName', 'permissionType')`. This expression is evaluated by `EntityPermissionEvaluator`, which in turn
+The entity permission check is specified within the security annotation and takes the form of `hasPermission(#entityId, 'EntityClassName', 'permissionType')`. This expression is evaluated by `EntityPermissionEvaluator`, which in turn
delegates to corresponding permission check methods of an entity, where customized permission requirements can be
-implemented. This workflow is made possible by: a) having all entity classes implementing the `Identifiable` interface
-and b) having a `ReadService` for each `Identifable` entity. The `Identifiable` interface provides default *reject all*
+implemented. This workflow is made possible by: a) having all entity classes implementing the `Identifiable` interface
+and b) having a `ReadService` for each `Identifiable` entity. The `Identifiable` interface provides default _reject all_
permission checks which can be overridden by implementing entities. The `ReadService` provides a way to load an entity
by its simple class name. To help manage the increasing number of `ReadService` beans, the starter kit provides a
`ReadServiceProvider` bean which collects all `ReadService` beans during start of the application context.

When adding new controllers and security configurations, it is important to refer to existing patterns and ensure
-consistency. This also applies to tests where fixtures are provided to support the necessary *automagic* behaviours.
+consistency. 
This also applies to tests where fixtures are provided to support the necessary _automagic_ behaviours.

## File support
+
The [storage](https://github.com/everest-engineering/lhotse-storage) module implements two file stores: one is referred
to as _permanent_, the other as the _ephemeral_ store. The permanent file store is for storing critical files, such as
user uploads, that cannot be recovered. The ephemeral store is for non-critical files that can be regenerated by the system
@@ -328,13 +362,17 @@ File stores need backing service such as a blob store or filesystem. This starte
[Mongo GridFS](https://docs.mongodb.com/manual/core/gridfs/) and AWS S3 at time of writing.

### Configuring Mongo GridFS
+
Set the application property:
+
```
application.filestore.backend=mongoGridFs
```

### Configuring AWS S3
+
Set the following application properties:
+
```
application.filestore.backend=awsS3
application.filestore.awsS3.buckets.permanent=sample-bucket-permanent
@@ -346,31 +384,35 @@ and [DefaultAwsRegionProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/lates
for fetching AWS credentials and the AWS region.

## Media support
+
The [media](https://github.com/everest-engineering/lhotse-media) module adds additional support for managing image
and video updates. It generates thumbnail images on the fly, caching them in the ephemeral file store for subsequent
requests. Thumbnail sizes are limited to prevent the system from being overwhelmed by malicious requests.

## ETag HTTP headers
-[ETag HTTP headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) are enabled by default for all API
+
+[ETag HTTP headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) are enabled by default for all API
endpoints via a shallow filter. The filter is unaware of any changes made to underlying data so while clients may use
the ETag to avoid unnecessary transfers and client side processing, there is no benefit to application server
performance. 
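
Conceptually, a shallow ETag filter buffers the fully rendered response body, hashes it into an ETag, and answers `304 Not Modified` when the client's `If-None-Match` header carries the same value. A simplified sketch of that comparison (illustrative only, not the actual filter implementation):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ShallowEtagSketch {

    // Derive a quoted ETag from the response body by hashing it.
    // The exact digest and quoting format here is illustrative.
    static String etagFor(byte[] body) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(body);
            return "\"" + new BigInteger(1, digest).toString(16) + "\"";
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Decide whether the server can answer 304 instead of resending the body.
    static boolean notModified(String ifNoneMatch, byte[] body) {
        return etagFor(body).equals(ifNoneMatch);
    }

    public static void main(String[] args) {
        byte[] body = "{\"users\":[]}".getBytes(StandardCharsets.UTF_8);
        String etag = etagFor(body);
        System.out.println(notModified(etag, body));        // client cache still valid
        System.out.println(notModified("\"stale\"", body)); // full response needed
    }
}
```

Note that the body is still fully produced before the comparison happens, which is exactly why the filter saves bandwidth and client-side processing but not server-side work.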
# Project Info ## Maintainers + [@sluehr](https://github.com/sluehr), [@ywangd](https://github.com/ywangd) ## Contributing + We appreciate your help! - -[Open an issue](https://github.com/everest-engineering/lhotse/issues/new/choose) or submit a pull request for an enhancement. -You may want to view the [project board](https://github.com/orgs/everest-engineering/projects/1) or browse through the + +[Open an issue](https://github.com/everest-engineering/lhotse/issues/new/choose) or submit a pull request for an enhancement. +You may want to view the [project board](https://github.com/orgs/everest-engineering/projects/1) or browse through the [current open issues](https://github.com/everest-engineering/lhotse/issues). ## License + [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![License: EverestEngineering](https://img.shields.io/badge/Copyright%20%C2%A9-EVERESTENGINEERING-blue)](https://everest.engineering) ->Talk to us `hi@everest.engineering`. - +> Talk to us `hi@everest.engineering`. 
diff --git a/src/users/src/main/java/engineering/everest/lhotse/users/domain/KeycloakSynchronizationSaga.java b/src/users/src/main/java/engineering/everest/lhotse/users/domain/KeycloakSynchronizationSaga.java
index 6496031a..373d7ef5 100644
--- a/src/users/src/main/java/engineering/everest/lhotse/users/domain/KeycloakSynchronizationSaga.java
+++ b/src/users/src/main/java/engineering/everest/lhotse/users/domain/KeycloakSynchronizationSaga.java
@@ -6,6 +6,7 @@
 import engineering.everest.lhotse.axon.common.domain.UserAttribute;
 import engineering.everest.lhotse.axon.common.services.KeycloakSynchronizationService;
 import engineering.everest.lhotse.users.domain.events.UserDeletedAndForgottenEvent;
+import engineering.everest.lhotse.users.domain.events.UserDetailsUpdatedByAdminEvent;
 import engineering.everest.lhotse.users.domain.events.UserRolesUpdatedByAdminEvent;
 import engineering.everest.lhotse.users.services.UsersReadService;
 import org.axonframework.modelling.saga.EndSaga;
@@ -52,6 +53,21 @@ public void on(UserRolesUpdatedByAdminEvent event) throws Exception {
             Map.of("attributes", new UserAttribute(user.getOrganizationId(), event.getRoles(), user.getDisplayName())));
     }
 
+    @StartSaga
+    @EndSaga
+    @SagaEventHandler(associationProperty = USER_ID_PROPERTY)
+    public void on(UserDetailsUpdatedByAdminEvent event) throws Exception {
+        var user = usersReadService.getById(event.getUserId());
+
+        waitForTheProjectionUpdate(() -> user.getEmail().equals(event.getEmailChange())
+            || user.getDisplayName().equals(event.getDisplayNameChange()),
+            "user email and displayName projection update");
+
+        keycloakSynchronizationService.updateUserAttributes(event.getUserId(),
+            Map.of("attributes", new UserAttribute(user.getOrganizationId(), user.getRoles(), event.getDisplayNameChange()),
+                "email", event.getEmailChange()));
+    }
+
 @StartSaga
 @EndSaga
 @SagaEventHandler(associationProperty = DELETED_USER_ID_PROPERTY)