
Cache any value & redis driver for cache #1217

Draft · wants to merge 6 commits into base: master

Conversation

@georgeboot (Contributor) commented Jan 24, 2025

Fixes #1208

WIP

src/config.rs Outdated
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(tag = "kind")]
pub enum CacheConfig {
/// Redis queue
Review comment (Contributor): need to align comments here

@jondot (Contributor) commented Jan 26, 2025

This looks good so far!

@kaplanelad (Contributor)

I believe we should consider using the Redis implementation provided by Opendal instead of writing custom logic for the cache function.

Since we're already using Opendal in Loco, this would only require adding a feature flag. Additionally, Opendal supports multiple cache engines, making it easier to integrate other types of engines in the future if needed.

WDYT?

@kaplanelad (Contributor) commented Jan 26, 2025

Maybe @Xuanwo can help you with that. Also take a look at this issue: #982

@georgeboot (Contributor, Author) commented Jan 27, 2025

@kaplanelad I've looked at it, and while Opendal supports Redis, it only exposes an API to store and retrieve items. A cache also needs expiry, flush, and other operations, so we can't use it.

The other way around, however, we might consider some reuse. For example, #1181 could benefit from atomic increments in Redis, something that is currently not in our cache API. Another use case could be distributed locks via Redis.

Should we consider making Redis its own component (just like databases are), and reuse that Redis connection in our cache implementation? And perhaps in a future session driver as well. Laravel (Rails for PHP) does something similar.
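To make the requirement above concrete, here is a minimal, std-only sketch of the driver surface a cache needs beyond plain get/set: expiry, removal, and a full flush. All names are illustrative, not Loco's actual API; a real Redis driver would map these onto GET/SET/EXPIRE/DEL/FLUSHDB.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical driver trait: the operations discussed above.
/// Names are illustrative, not Loco's actual API.
trait CacheDriver {
    fn insert(&mut self, key: &str, value: String, ttl: Option<Duration>);
    fn get(&mut self, key: &str) -> Option<String>;
    fn remove(&mut self, key: &str);
    fn clear(&mut self);
}

/// In-memory reference implementation using std only.
struct InMemoryCache {
    entries: HashMap<String, (String, Option<Instant>)>,
}

impl InMemoryCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }
}

impl CacheDriver for InMemoryCache {
    fn insert(&mut self, key: &str, value: String, ttl: Option<Duration>) {
        let expires_at = ttl.map(|d| Instant::now() + d);
        self.entries.insert(key.to_string(), (value, expires_at));
    }

    fn get(&mut self, key: &str) -> Option<String> {
        let expired = match self.entries.get(key) {
            Some((_, Some(deadline))) => Instant::now() >= *deadline,
            Some((_, None)) => false,
            None => return None,
        };
        if expired {
            // Entry has passed its TTL: evict lazily on read.
            self.entries.remove(key);
            return None;
        }
        self.entries.get(key).map(|(value, _)| value.clone())
    }

    fn remove(&mut self, key: &str) {
        self.entries.remove(key);
    }

    fn clear(&mut self) {
        self.entries.clear();
    }
}

fn main() {
    let mut cache = InMemoryCache::new();
    cache.insert("greeting", "hello".to_string(), None);
    cache.insert("ephemeral", "gone".to_string(), Some(Duration::from_secs(0)));
    println!("{:?}", cache.get("greeting")); // Some("hello")
    println!("{:?}", cache.get("ephemeral")); // None (already expired)
    cache.clear();
    println!("{:?}", cache.get("greeting")); // None
}
```

A Redis-backed implementation of the same trait would hold a pooled connection instead of the HashMap, which is exactly the piece Opendal's store/retrieve-only API does not cover.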

@kaplanelad (Contributor)

From what I can see, Opendal implements expiry (see here).

Have you reviewed their implementation?

Rolling your own implementation doesn't conflict with the idea of leveraging Opendal's; we can still integrate and reuse their work where applicable.

@georgeboot (Contributor, Author)

@kaplanelad it exists, yes, but Opendal does not expose it. I don't know why these methods on the Redis core are public, as they are all undocumented and you can't create your own Redis core from scratch. So it doesn't sound like the best idea to use it.

@kaplanelad (Contributor)

I’m not sure I fully understand what you mean by “not expose it.”

If there’s a library we’re already using in other parts of the codebase, why avoid using it? Is there a specific technical issue preventing us from doing so?

The Redis implementation in Opendal addresses use cases that are not covered in your implementation.

@georgeboot (Contributor, Author) commented Jan 28, 2025

The Redis core you refer to basically wraps a Redis connection. If I were to use it, the cache adapter would wrap the Redis core, which wraps the Redis connection.

Rails and Laravel make Redis its own component, and reuse one or more Redis connections for queues, caches, sessions, distributed locks, distributed rate limiters, etc. Most of these depend on specific Redis functionality.

Sure, the Redis core from Opendal supports some Redis functionality, but in the near future, when rate limiters or similar features are implemented, we will find ourselves writing our own Redis store so that we have access to the raw Redis connection, to do things like atomic counters (rate limiting) or atomic sets (distributed locks).

In the end, Opendal also uses bb8 and the redis crate, and our implementation is just a thin layer over them, which will give us lots of possibilities in the future.

Edit: OK, I see the Redis connection is public, so we could at least get it out if we needed to.
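To illustrate why raw connection access matters for the rate-limiter use case mentioned above, here is a std-only, fixed-window rate-limiter sketch. It is an in-process stand-in for what a shared Redis connection provides across processes via atomic INCR (plus EXPIRE to bound the window); the struct, names, and window logic are hypothetical, not anything in Loco.

```rust
use std::collections::HashMap;

/// Fixed-window rate limiter sketch (hypothetical, in-process).
/// With Redis, the counter would live server-side so all app
/// instances share it, and the increment would be atomic (INCR).
struct FixedWindowLimiter {
    limit: u64,
    counts: HashMap<(String, u64), u64>, // (client key, window id) -> hits
}

impl FixedWindowLimiter {
    fn new(limit: u64) -> Self {
        Self { limit, counts: HashMap::new() }
    }

    /// Returns true if the request is allowed in this window.
    /// The Redis equivalent is `INCR key` followed by
    /// `EXPIRE key window_secs` on the first hit of the window.
    fn allow(&mut self, key: &str, window: u64) -> bool {
        let hits = self.counts.entry((key.to_string(), window)).or_insert(0);
        *hits += 1;
        *hits <= self.limit
    }
}

fn main() {
    let mut limiter = FixedWindowLimiter::new(3);
    let window = 0; // e.g. unix_time / 60 for a real 60-second window
    for i in 1..=4 {
        println!("request {}: allowed = {}", i, limiter.allow("client-a", window));
    }
    // requests 1-3 are allowed, request 4 is denied
}
```

The key point: this only works across multiple app instances if the increment is atomic and shared, which is what the raw Redis connection (rather than a store/retrieve abstraction) makes possible.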

@kaplanelad (Contributor)

@georgeboot I think it would be good to split this PR into two parts.

The first PR can focus on making the cache accept any value, which is what you already did with DeserializeOwned. In the second PR, we can continue our discussion and implement the Redis integration.

What do you think?
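To illustrate the "accept any value" shape discussed above, here is a dependency-free sketch of a cache whose insert/get are generic over the value type. The PR achieves this with serde's Serialize/DeserializeOwned bounds; std's Display/FromStr stand in here so the example stays self-contained, and the struct and method names are illustrative, not Loco's actual API.

```rust
use std::collections::HashMap;
use std::fmt::Display;
use std::str::FromStr;

/// Sketch of a cache with typed, generic get/insert.
/// With serde, the bounds would be Serialize / DeserializeOwned
/// and the conversion would go through e.g. serde_json.
struct TypedCache {
    entries: HashMap<String, String>,
}

impl TypedCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn insert<T: Display>(&mut self, key: &str, value: &T) {
        // With serde: serialize the value to a string/bytes here.
        self.entries.insert(key.to_string(), value.to_string());
    }

    fn get<T: FromStr>(&self, key: &str) -> Option<T> {
        // With serde: deserialize back into the caller's type.
        self.entries.get(key).and_then(|raw| raw.parse().ok())
    }
}

fn main() {
    let mut cache = TypedCache::new();
    cache.insert("count", &42u32);
    cache.insert("pi", &3.14f64);
    let count: Option<u32> = cache.get("count");
    let pi: Option<f64> = cache.get("pi");
    println!("{:?} {:?}", count, pi); // Some(42) Some(3.14)
}
```

The caller picks the value type at each call site; the storage layer only ever sees the serialized form, which is what lets the same API back onto an in-memory map or Redis.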

Successfully merging this pull request may close these issues.

Cache values should be anything that can be (de)serialized
3 participants