
[pull] master from spinnaker:master #10

Open
wants to merge 58 commits into master from spinnaker:master

Conversation


@pull pull bot commented Mar 14, 2024

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

spinnakerbot and others added 3 commits March 12, 2024 16:24
Co-authored-by: root <root@fff03430e3e7>
Co-authored-by: root <root@b363a8ce22ba>
Co-authored-by: root <root@e87e1aef4e07>

armory-io bot commented Mar 14, 2024

PR title: [pull] master from spinnaker:master does not meet Armory Engineering best practices around conventional commits.

Reason: The category was missing. EX: category: description or category(scope): description. Common Categories: build, ci, chore, docs, feat, fix, perf, refactor, revert, style, test, ops

Because we enforce conventional commits and squash merges of PRs, the PR Title becomes the commit title.

<type>[optional scope]: <description> <- The PR title becomes this part.

[optional body] <- You can add this when you merge your PR.

[optional footer] <- You can add this when you merge your PR.

For more information, see the Conventional Commits specification.

Some common examples are:

docs: correct spelling of CHANGELOG
chore!: drop Node 6 from testing matrix
chore(ops)!: drop Node 6 from testing matrix
feat(armory.io): add docs.armory.io
fix(parser): grammar and spelling
style(loadingPage): made it super pretty!
refactor(helpMessages): changed order to make more sense
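
For reference, a minimal sketch of how such a title check could be implemented (the regex and class name are illustrative and not Armory's actual tooling; the category list simply mirrors the bot's message):

```java
import java.util.regex.Pattern;

// Illustrative only: a rough check for "<type>[optional scope][!]: <description>".
public class ConventionalCommitTitleCheck {
  private static final Pattern TITLE = Pattern.compile(
      "^(build|ci|chore|docs|feat|fix|perf|refactor|revert|style|test|ops)"
          + "(\\([^)]+\\))?!?: .+");

  public static boolean isValid(String prTitle) {
    return TITLE.matcher(prTitle).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("[pull] master from spinnaker:master"));            // false
    System.out.println(isValid("chore(sync): pull master from spinnaker:master")); // true
  }
}
```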

@pull pull bot added the ⤵️ pull label Mar 14, 2024
spinnakerbot and others added 25 commits March 21, 2024 16:45
Co-authored-by: root <root@b8ac3e53ca36>
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](softprops/action-gh-release@v1...v2)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: root <root@f3c95bdfe066>
Co-authored-by: root <root@e299e24f53a9>
Co-authored-by: root <root@9bcdea888557>
…n to skip invalid objects (#1450)

* fix(sql): teach SqlStorageService.loadObjects to skip invalid objects

and return the valid ones.  This allows the cache to populate and gives the health check a
chance to succeed.

* fix(sql): teach SqlStorageService.loadObjectsNewerThan to skip invalid objects

and return the valid ones.  This allows the cache to populate and gives the health check a
chance to succeed.
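
For illustration, a minimal sketch of the skip-invalid-objects idea (the row and deserialization types are simplified; the real SqlStorageService reads rows from SQL and is more involved):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class LenientLoader {
  private final ObjectMapper mapper = new ObjectMapper();

  // Deserialize each raw row individually; skip rows that fail instead of
  // failing the whole load, so the cache can still populate.
  <T> List<T> loadAll(List<byte[]> rows, Class<T> type) {
    List<T> results = new ArrayList<>();
    for (byte[] row : rows) {
      try {
        results.add(mapper.readValue(row, type));
      } catch (IOException e) {
        System.err.println("Skipping invalid " + type.getSimpleName() + " object: " + e.getMessage());
      }
    }
    return results;
  }
}
```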
…t docker image (#1451)

* fix(core): don't include CommonStorageServiceDAOConfig when redis is enabled

The Redis*DAO family of beans handles this functionality when redis is enabled, so CommonStorageServiceDAOConfig is disabled.

This fixes this error on startup:

    ***************************
    APPLICATION FAILED TO START
    ***************************

    Description:

    Parameter 0 of method pipelineTemplateDAO in com.netflix.spinnaker.front50.config.CommonStorageServiceDAOConfig required a bean of type 'com.netflix.spinnaker.front50.model.StorageService' that could not be found.

    The injection point has the following annotations:
        - @org.springframework.beans.factory.annotation.Autowired(required=false)

    Action:

    Consider defining a bean of type 'com.netflix.spinnaker.front50.model.StorageService' in your configuration.
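
A minimal sketch of how a configuration class can be kept out of the context when redis is enabled (the property name `spinnaker.redis.enabled` is an assumption for illustration; the actual condition in front50 may differ):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

// Sketch only: register these DAO beans only when redis is NOT enabled,
// so the Redis*DAO beans can take over when it is.
@Configuration
@ConditionalOnProperty(
    value = "spinnaker.redis.enabled", // assumed property name
    havingValue = "false",
    matchIfMissing = true)
class CommonStorageDaoConfigSketch {
  // ... pipelineTemplateDAO(...) and friends would be declared here
}
```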

* chore(build): give local gradle builds more memory

Match what GitHub Actions uses, to prevent errors such as:

> Task :front50-s3:compileJava
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: /Users/dbyron/src/spinnaker/salesforce/front50/front50-s3/src/main/java/com/netflix/spinnaker/front50/model/S3StorageService.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Expiring Daemon because JVM heap space is exhausted

* feat(docker): add HEALTHCHECK

to facilitate testing container startup

* feat(build): add front50-integration module to exercise the just-built docker image

* feat(integration): run integration test in pr builds

multi-arch with --load doesn't work, so add a separate step using the local platform to
make an image available for testing.

see docker/buildx#59

* feat(integration): run integration test in branch builds
Co-authored-by: root <root@ecdf0ca0fcc6>
…oller.save (#1452)

* refactor(web/test): construct Pipeline objects in the first place

instead of making Maps and casting

* fix(web): restore check for regenerateCronTriggerIds in PipelineController.save

https://github.com/spinnaker/front50/pull/1035/files#diff-9b514be177faf5444c86a88ab6bb9e6a0add032bfa67862bc8e33b17c4bb9cc9L159
removed it, but orca's SavePipelineTask still sets it.

* refactor(web): adjust pipeline triggers directly

The comment in https://github.com/spinnaker/front50/pull/987/files#r515405296 is no longer
true after #1035.  Pipeline.getTriggers no longer
makes a copy, so there's no need to get/modify/set.

---------

Co-authored-by: Jason <[email protected]>
Co-authored-by: root <root@28ff04fd44ba>
…#1457)

and use an additional filter to clean up the code.

With
[pipeline.getTriggers()](https://github.com/spinnaker/front50/pull/1035/files#diff-0b2bc300fd3965c64ba4184955384aa8609cace04e1c015b31ef4a83552e53d1R36)
implemented as an accessor, ensureCronTriggersHaveIdentifier was already mutating the
pipeline.  Let's make that more obvious by making it return void and adding some javadoc.
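
For illustration, a minimal sketch of what a void, clearly mutating version can look like (the trigger shape is simplified to a map and the real method operates on a Pipeline, so names and types here are assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;

class CronTriggerSketch {
  // Sketch only: mutate the pipeline's triggers in place, assigning an id to
  // any cron trigger that doesn't have one yet. Returning void makes the
  // mutation explicit.
  static void ensureCronTriggersHaveIdentifier(List<Map<String, Object>> triggers) {
    triggers.stream()
        .filter(t -> "cron".equalsIgnoreCase((String) t.get("type")))
        .filter(t -> {
          Object id = t.get("id");
          return id == null || ((String) id).isEmpty();
        })
        .forEach(t -> t.put("id", UUID.randomUUID().toString()));
  }
}
```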
Co-authored-by: root <root@8b70676aa6c3>
Co-authored-by: root <root@cfe88d4b31d9>
Co-authored-by: root <root@8b2740453d3d>
* feat(migrations): Support for migrations defined in plugins

The current implementation of the MigrationRunner is initialized before any migrations in plugins are defined. Even if they are defined using `@ExposeToApp`, the beans are never picked up by the MigrationRunner bean.
This fix looks up Migration beans directly from the applicationContext on every run and delays the first run by 10 seconds to give plugins time to initialize.
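
A minimal sketch of that approach, assuming a Spring ApplicationContext and a Migration interface with a run() method (the nested interface and the scheduling values are illustrative, not front50's actual code):

```java
import java.util.Collection;
import org.springframework.context.ApplicationContext;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
class MigrationRunnerSketch {
  interface Migration { // illustrative stand-in for front50's Migration type
    void run();
  }

  private final ApplicationContext applicationContext;

  MigrationRunnerSketch(ApplicationContext applicationContext) {
    this.applicationContext = applicationContext;
  }

  // Look the beans up on every run instead of capturing them at construction
  // time, so plugin-provided Migrations registered later are still found.
  // initialDelay gives plugins time to initialize before the first run.
  @Scheduled(initialDelay = 10_000L, fixedDelay = 3_600_000L)
  void run() {
    Collection<Migration> migrations =
        applicationContext.getBeansOfType(Migration.class).values();
    migrations.forEach(Migration::run);
  }
}
```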

* Fix import order

* Add tests for MigrationRunner

---------

Co-authored-by: Jason <[email protected]>
Co-authored-by: root <root@759306dd2744>
Before:
```
$ ./gradlew front50-sql-mysql:dI --dependency mysql-connector-java --configuration runtimeClasspath

> Task :front50-sql-mysql:dependencyInsight
mysql:mysql-connector-java:8.0.33
  Variant runtime:
    | Attribute Name                 | Provided     | Requested    |
    |--------------------------------|--------------|--------------|
    | org.gradle.status              | release      |              |
    | org.gradle.category            | library      | library      |
    | org.gradle.libraryelements     | jar          | jar          |
    | org.gradle.usage               | java-runtime | java-runtime |
    | org.gradle.dependency.bundling |              | external     |
    | org.gradle.jvm.environment     |              | standard-jvm |
    | org.gradle.jvm.version         |              | 11           |
   Selection reasons:
      - By constraint
      - Forced

mysql:mysql-connector-java:8.0.33
\--- io.spinnaker.kork:kork-bom:7.227.0
     \--- runtimeClasspath

mysql:mysql-connector-java:8.0.12 -> 8.0.33
\--- runtimeClasspath
```

After:
```
$ ./gradlew front50-sql-mysql:dI --dependency mysql-connector-java --configuration runtimeClasspath

> Task :front50-sql-mysql:dependencyInsight
mysql:mysql-connector-java:8.0.33
  Variant runtime:
    | Attribute Name                 | Provided     | Requested    |
    |--------------------------------|--------------|--------------|
    | org.gradle.status              | release      |              |
    | org.gradle.category            | library      | library      |
    | org.gradle.libraryelements     | jar          | jar          |
    | org.gradle.usage               | java-runtime | java-runtime |
    | org.gradle.dependency.bundling |              | external     |
    | org.gradle.jvm.environment     |              | standard-jvm |
    | org.gradle.jvm.version         |              | 11           |
   Selection reasons:
      - By constraint
      - Forced

mysql:mysql-connector-java:8.0.33
\--- io.spinnaker.kork:kork-bom:7.227.0
     \--- runtimeClasspath

mysql:mysql-connector-java -> 8.0.33
\--- runtimeClasspath
```
* fix(web): Retrieve dependent pipelines correctly

* fix(web): Add tests

* fix(web): Add test proving the broken behaviour of earlier
dependentPipelines API
Co-authored-by: root <root@2fd9414555e9>
Co-authored-by: root <root@4f16d982b5e9>
…fix (#1466)

* fix(migrator): GCS to SQL migrator APPLICATION_PERMISSION fix/refactor

* fix(migrator): GCS to SQL migrator APPLICATION_PERMISSION fix/refactor
* chore(build): enable cross compilation plugin for Java 17

* chore(build): fix usage of size property on lists
* chore(dependencies): Autobump korkVersion

* refactor(mysql): update mysql connector coordinate during upgrade to spring boot 2.7.x

From Spring Boot 2.7.8 onwards, the MySQL connector coordinate `mysql:mysql-connector-java` has been removed and only the `com.mysql:mysql-connector-j` coordinate exists.

https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.7-Release-Notes#mysql-jdbc-driver

So, update the MySQL connector coordinate to `com.mysql:mysql-connector-j` as part of the Spring Boot upgrade to 2.7.18.

https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-dependencies/2.7.18/spring-boot-dependencies-2.7.18.pom

---------

Co-authored-by: root <root@51dce6428a99>
Co-authored-by: j-sandy <[email protected]>
Co-authored-by: root <root@66f7060f0a52>
dependabot bot and others added 30 commits July 1, 2024 15:11
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](docker/build-push-action@v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: root <root@de6989adb87f>
Co-authored-by: root <root@35d9e249176d>
Co-authored-by: root <root@6fca7a9c2849>
Co-authored-by: root <root@ab7e4966783c>
Co-authored-by: root <root@7e5a124a1944>
Co-authored-by: root <root@31d6695ca59c>
Co-authored-by: root <root@c9835c7522f6>
Co-authored-by: root <root@792b5f58cd6a>
* refactor(web/test): configure the ObjectMapper in PipelineControllerTck

to match the one that Front50CoreConfiguration provides.  This paves the way to test
additional PipelineController functionality.

* feat(web/new config): add PipelineControllerConfig to hold the configurations to be used for save/update controller mappings

* Add new configuration class PipelineControllerConfig
* Update Front50WebConfig to use PipelineControllerConfig
* Update PipelineController to use PipelineControllerConfig
* Update PipelineControllerSpec to use PipelineControllerConfig
* Update PipelineControllerTck to use PipelineControllerConfig
* add test to check duplicate pipelines when refreshCacheOnDuplicatesCheck flag is enabled and disabled

* feat(sql): make the bulk save operation atomic

* refactor the SqlStorageService.storeObjects() method to make the bulk save an atomic operation (see the sketch below)
* without this change, in case of a db exception, some chunks of pipelines get saved while others fail, leading to inconsistency
* the last catch block is removed since partial storage of the supplied pipelines can no longer occur
* add a test for bulk pipeline creation that exercises the atomic behaviour of SqlStorageService.storeObjects()
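
A minimal sketch of the atomic-bulk-save idea using a jOOQ-style transaction (the table and column names are placeholders, not front50's actual schema):

```java
import java.util.List;
import org.jooq.DSLContext;
import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

class AtomicBulkSaveSketch {
  // Sketch only: run every chunk inside one enclosing transaction so that a
  // failure in any chunk rolls back all of them; either every pipeline is
  // stored or none are.
  static void storeObjects(DSLContext dsl, List<List<String>> chunks) {
    dsl.transaction(ctx -> {
      DSLContext tx = ctx.dsl();
      for (List<String> chunk : chunks) {
        for (String body : chunk) {
          tx.insertInto(table("pipelines"), field("body")) // placeholder schema
              .values(body)
              .execute();
        }
      }
    });
  }
}
```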

* refactor(web): refactor validatePipeline() so that it can be reused for batchUpdate().

checkForDuplicatePipeline() is removed from validatePipeline() and the cron trigger validations are moved into validatePipeline() so that reusable code stays in one place.

remove unused overloaded checkForDuplicatePipeline() method

Fix an NPE in a PipelineControllerSpec test ("should create pipelines in a thread safe way") caused by a newly added log message in PipelineController.save()

* feat(batchUpdate): update /pipelines/batchUpdate POST handler method to address deserialization issues and add some useful log statements

* feat(web): add a write permission check and validation to PipelineController.batchUpdate

* Check if the user has WRITE permission on the pipeline; if not, the pipeline is added to the invalid pipelines list (see the sketch after this list)
* This change is a first step towards controlling access at the pipeline level in a batch update. batchUpdate is still allowed only for admins, but in the next few commits the access level will be brought in line with that of an individual pipeline save.
* Check if duplicate pipeline exists in the same app
* Validate pipeline id
* Adjust test classes for PipelineController changes
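
A minimal sketch of the per-pipeline write-permission check, using a hypothetical hasWritePermission predicate (front50's actual check goes through Fiat; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

class BatchUpdatePermissionSketch {
  // Sketch only: partition the submitted pipelines into those the caller may
  // write and those they may not; the latter are reported back as invalid.
  static List<Map<String, Object>> filterWritable(
      List<Map<String, Object>> submitted,
      String user,
      BiPredicate<String, String> hasWritePermission, // (user, application) -> allowed? hypothetical
      List<Map<String, Object>> invalidPipelines) {
    List<Map<String, Object>> writable = new ArrayList<>();
    for (Map<String, Object> pipeline : submitted) {
      String application = (String) pipeline.get("application");
      if (hasWritePermission.test(user, application)) {
        writable.add(pipeline);
      } else {
        invalidPipelines.add(pipeline);
      }
    }
    return writable;
  }
}
```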

* feat(web): make batchUpdate return a map response with succeeded and failed pipelines and their counts

* The response will be in the following format:
[
"successful_pipelines_count" : <int>,
"successful_pipelines"        : <List<String>>,
"failed_pipelines_count"      : <int>,
"failed_pipelines"            : <List<Map<String, Object>>>
]

* feat(web): add a staleCheck to batchUpdate: if a submitted pipeline in the batch already exists and the lastModified timestamps don't match, the pipeline is considered stale and is added to the invalid pipelines list. This behaviour matches that of the individual save and update operations.
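
A minimal sketch of that staleness check (field access is simplified to a map lookup; names are illustrative):

```java
import java.util.Map;
import java.util.Objects;

class StaleCheckSketch {
  // Sketch only: a submitted pipeline is stale when a stored copy exists and
  // the two lastModified timestamps differ.
  static boolean isStale(Map<String, Object> submitted, Map<String, Object> existing) {
    return existing != null
        && !Objects.equals(submitted.get("lastModified"), existing.get("lastModified"));
  }
}
```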

* add test to validate the code around staleCheck for batchUpdate

* feat(web): fine tune permissions on batchUpdate

* adjust permissions to batchUpdate (before: isAdmin, now: verifies application write permission).

* enforce runAsUser permissions while deserializing pipelines

* This puts batchUpdate on a par with individual save w.r.t. access restrictions

* adjust test classes according to the changes to the PipelineController

* refactor(web): simplify code for setting trigger ids in PipelineController.validatePipeline

* test(batchUpdate): add test cases for testing batchUpdate changes

Fixed test exceptions by making the following changes:
- added @EqualsAndHashCode to Pipeline
- added `pipelineDAO.all(true)` in SqlPipelineControllerTck.createPipelineDAO() to initialize the cache with an empty set. Otherwise, the tests fail due to an NPE.

* fix(web): minor fixes/improvements

---------

Co-authored-by: David Byron <[email protected]>
Co-authored-by: Jason <[email protected]>
Since halyard uses a different versioning scheme, it doesn't make sense for halyard to
consume non-master (e.g. release-1.34.x) versions of front50.
* chore(dependencies): Autobump korkVersion

* refactor(dependency): replace groovy coordinates during upgrade of groovy 4.x

Replace the Groovy coordinates `org.codehaus.groovy` with `org.apache.groovy`, as used by Groovy 4.x and later versions.

---------

Co-authored-by: root <root@b3e368603f7c>
Co-authored-by: j-sandy <[email protected]>
Co-authored-by: root <root@84926203b695>
Co-authored-by: root <root@15a60bc3bd71>
Co-authored-by: root <root@0e8a9747aac9>
Co-authored-by: root <root@76e4fcc2b807>
* chore(dependencies): Autobump korkVersion

* chore(dependencies): remove force dependencies for google-api

* fix(gcs): add dummy metadata to updateLastModified blob with newer client

* test(gcs): update tests to match the new Google Cloud client

---------

Co-authored-by: root <root@b2d607896795>
Co-authored-by: Edgar Garcia <[email protected]>
Co-authored-by: root <root@c407cd7c4eac>
…s/{application} endpoint (#1504)

* feat(pipelineController): add a pipelineNameFilter query param to the /pipelines/{application} endpoint

This adds a pipelineNameFilter query parameter to the /pipelines/{application} endpoint. If pipelineNameFilter is present, the endpoint will return a list of pipelines whose pipeline name contains the pipelineNameFilter.

This change enables an optimization in the front end: pipelines are filtered on the backend instead of always querying every pipeline in an application and filtering the list in the front end.
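
A minimal sketch of the filtering behaviour described here (controller wiring omitted; the case-insensitive match was added in a follow-up commit in this series, and the names below are illustrative):

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

class PipelineNameFilterSketch {
  // Sketch only: when a filter is supplied, keep only pipelines whose name
  // contains it, case-insensitively; otherwise return everything.
  static List<String> filterByName(List<String> pipelineNames, String pipelineNameFilter) {
    if (pipelineNameFilter == null || pipelineNameFilter.isEmpty()) {
      return pipelineNames;
    }
    String needle = pipelineNameFilter.toLowerCase(Locale.ROOT);
    return pipelineNames.stream()
        .filter(name -> name != null && name.toLowerCase(Locale.ROOT).contains(needle))
        .collect(Collectors.toList());
  }
}
```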

* feat(pipeline): check for null pipeline name and log error

* fix(test/pipeline): updating tests and fixing getPipelinesByApplication implementation.

* fix(pipeline): make the comparison case insensitive.

* refactor(tests): convert groovy tests to java

* fix(tests): address minor issues

---------

Co-authored-by: Richard Timpson <[email protected]>
Co-authored-by: root <root@f15c9eb9818b>
* chore(java): Full Java 17 support only

* chore(java): Full Java 17 support only
Co-authored-by: root <root@7a9eafb286db>
* chore(java): Full Java 17 support only

* chore(java): Full Java 17 support only

* chore(upgrades): Update OS to latest supported releases
Co-authored-by: root <root@0e2c439806f3>
and, once it's tagged, move the release-1.36.x branch there.

We created the release-1.36.x branch too early, so the [2.36.0 build](https://github.com/spinnaker/front50/actions/runs/11925822078/job/33238616660) failed:

Run BRANCHES=$(git branch -r --contains refs/tags/v2.36.0)
BRANCHES is '  origin/master
  origin/release-1.36.x'
NUM_BRANCHES is '2'
exactly one branch required to release front50, but there are 2 (  origin/master
  origin/release-1.36.x)
* fix(openapi): Rewrite Swagger to OpenAPI annotations

* chore(deps): bump latest kork version

---------

Co-authored-by: Edgar Garcia <[email protected]>
Co-authored-by: root <root@c80011301680>
* chore(dependencies): Autobump fiatVersion

* refactor(retrofit2): refactor the code to align with the retrofit2 upgrade of fiat-api

* refactor(retrofit2): use retrofit-mock library instead of mocking Call.

---------

Co-authored-by: root <root@4f180b5207b5>
Co-authored-by: kirangodishala <[email protected]>