A distributed cache implementation providing reliable resource management and data synchronization across the nodes of a distributed system.
- Automatic Synchronization: Background watcher task keeps local cache in sync with remote data store
- Concurrency Control: Two-level concurrency control mechanism for safe access
- Event-based Updates: Real-time updates through watch API
- Safe Reconnection: Automatic recovery from connection failures with state consistency
All cached keys live under a user-defined prefix:

```text
<prefix>/foo
<prefix>/..
<prefix>/..
```
- `<prefix>`: User-defined string to identify a cache instance
- `Cache`: The main entry point for cache operations; provides safe access to cached data
- `CacheData`: Internal data structure holding the cached values
- `EventWatcher`: Background task that watches for changes in the remote data store and keeps the local cache synchronized with it
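The `CacheData` entry above can be pictured with a minimal sketch. The field names `last_seq` and `data` are taken from the usage example below, but the concrete types and the `apply` helper are assumptions for illustration, not the crate's real definitions.

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified shape of the cached state: the sequence
// number of the last applied event plus the key-value data.
#[derive(Debug, Default)]
struct CacheData {
    last_seq: u64,
    data: BTreeMap<String, String>,
}

impl CacheData {
    // Apply one watch event: upsert on Some, delete on None, and
    // record the event's sequence number.
    fn apply(&mut self, seq: u64, key: &str, value: Option<String>) {
        match value {
            Some(v) => {
                self.data.insert(key.to_string(), v);
            }
            None => {
                self.data.remove(key);
            }
        }
        self.last_seq = seq;
    }
}

fn main() {
    let mut c = CacheData::default();
    c.apply(1, "a", Some("1".into()));
    c.apply(2, "b", Some("2".into()));
    c.apply(3, "a", None); // deletion still advances last_seq
    assert_eq!(c.last_seq, 3);
    assert_eq!(c.data.len(), 1);
    println!("{c:?}");
}
```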
```rust
let client = RemoteClient::try_create(/*..*/);

let cache = Cache::new(
    client,
    "your/cache/key/space",
    "your-app-name-for-logging",
).await;

// Access cached data
cache.try_access(|c: &CacheData| {
    println!("last-seq:{}", c.last_seq);
    println!("all data: {:?}", c.data);
}).await?;

// Get a specific value
let value = cache.try_get("key").await?;

// List all entries under a prefix
let entries = cache.try_list_dir("prefix").await?;
```
The cache employs a two-level concurrency control mechanism:

- Internal Lock (Mutex): Protects concurrent access between user operations and the background cache updater. This lock is held briefly during each operation.
- External Lock (Method Design): Public methods require `&mut self` even for read-only operations. This prevents concurrent access to the cache instance from multiple call sites. External synchronization should be implemented by the caller if needed.
This design intentionally separates concerns:

- The internal lock handles short-term, fine-grained synchronization with the updater
- The external lock requirement (`&mut self`) enables longer-duration access patterns without blocking the background updater unnecessarily

Note that despite requiring `&mut self`, all operations are logically read-only with respect to the cache's public API.
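A minimal, non-async sketch of the two locks working together. The `Cache` shape, the `u64` standing in for `CacheData`, and the `try_access` signature are all assumptions for illustration; the real cache is async.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct Cache {
    // Level 1: internal lock, shared with the background updater.
    inner: Arc<Mutex<u64>>,
}

impl Cache {
    // Level 2: `&mut self` even for a read. Two call sites cannot use
    // the same Cache handle at once unless the caller adds its own
    // external synchronization.
    fn try_access<R>(&mut self, f: impl FnOnce(&u64) -> R) -> R {
        let guard = self.inner.lock().unwrap(); // held only briefly
        f(&guard)
    }
}

fn main() {
    let inner = Arc::new(Mutex::new(0u64));
    let mut cache = Cache { inner: Arc::clone(&inner) };

    // The background updater takes the internal lock for short writes...
    let updater = thread::spawn(move || {
        *inner.lock().unwrap() += 1;
    });
    updater.join().unwrap();

    // ...while user reads go through `&mut self` plus the same lock.
    let seen = cache.try_access(|v| *v);
    assert_eq!(seen, 1);
    println!("seen={seen}");
}
```

Because `try_access` only holds the mutex for the duration of the closure, a long-lived `&mut Cache` borrow at one call site never blocks the updater.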
When a `Cache` is created, it goes through the following steps:
- Creates a new instance with specified prefix and context
- Spawns a background task to watch for key-value changes
- Establishes a watch stream to the remote data store
- Fetches and processes initial data
- Waits for the cache to be fully initialized before returning
- Maintains continuous synchronization
The initialization is complete only when the cache has received a full copy of the data from the remote data store, ensuring users see a consistent view of the data.
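The "wait until fully initialized" step can be sketched with a plain thread and channel standing in for the async watcher task and watch stream; every name here is illustrative.

```rust
use std::collections::BTreeMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical constructor: spawn the watcher, wait for it to report
// that the full initial snapshot has been loaded, then return.
fn new_cache() -> Arc<Mutex<BTreeMap<String, String>>> {
    let data = Arc::new(Mutex::new(BTreeMap::new()));
    let (ready_tx, ready_rx) = mpsc::channel();
    let shared = Arc::clone(&data);

    thread::spawn(move || {
        // 1) Fetch the initial snapshot from the "remote" store.
        shared.lock().unwrap().insert("k".to_string(), "v".to_string());
        // 2) Signal that the cache now holds a full copy of the data.
        let _ = ready_tx.send(());
        // 3) Keep applying watch events for the cache's lifetime (omitted).
    });

    // The constructor blocks here, so callers always start from a
    // consistent view of the data.
    ready_rx.recv().expect("watcher task died before initializing");
    data
}

fn main() {
    let cache = new_cache();
    assert_eq!(cache.lock().unwrap().get("k").map(String::as_str), Some("v"));
    println!("initialized with {} entries", cache.lock().unwrap().len());
}
```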
The cache implements robust error handling:
- Connection failures are automatically retried in the background
- Background watcher task automatically recovers from errors
- Users are shielded from transient errors through the abstraction
- The cache ensures data consistency by tracking sequence numbers
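The recovery behaviour can be sketched as a reconnect loop keyed on the last applied sequence number. `watch_until_done`, its bounded retry count, and the `connect` callback are hypothetical simplifications of the background watcher.

```rust
// On a connection error, reconnect and resume watching from the last
// applied sequence number, so no events are lost or applied twice.
fn watch_until_done(
    mut last_seq: u64,
    mut connect: impl FnMut(u64) -> Result<Vec<u64>, ()>,
    max_retries: usize,
) -> Option<u64> {
    for _ in 0..=max_retries {
        match connect(last_seq) {
            Ok(event_seqs) => {
                // Apply each event and record its sequence number so a
                // later reconnect resumes from the right place.
                for seq in event_seqs {
                    last_seq = seq;
                }
                return Some(last_seq);
            }
            // Transient failure: retry; callers never observe it.
            Err(()) => continue,
        }
    }
    None
}

fn main() {
    // Fail once, then deliver events 4..=6 after resuming from seq 3.
    let mut calls = 0;
    let result = watch_until_done(3, |from| {
        calls += 1;
        if calls == 1 { Err(()) } else { Ok(vec![from + 1, from + 2, from + 3]) }
    }, 5);
    assert_eq!(result, Some(6));
    println!("resumed and reached seq {result:?}");
}
```

The real watcher retries indefinitely rather than a bounded number of times; the bound here only keeps the sketch terminating.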
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.