diff --git a/content/develop/ai/langcache.md b/content/develop/ai/langcache.md
deleted file mode 100644
index 5c6e88927c..0000000000
--- a/content/develop/ai/langcache.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-Title: Redis LangCache
-alwaysopen: false
-categories:
-- docs
-- develop
-- ai
-description: Redis LangCache provides semantic caching-as-a-service to reduce LLM costs and improve response times for AI applications.
-linkTitle: LangCache
-weight: 30
----
-
-Redis LangCache is a fully-managed semantic caching service that reduces large language model (LLM) costs and improves response times for AI applications.
-
-## How LangCache works
-
-LangCache uses semantic caching to store and reuse previous LLM responses for similar queries. Instead of calling the LLM for every request, LangCache:
-
-- **Checks for similar cached responses** when a new query arrives
-- **Returns cached results instantly** if a semantically similar response exists
-- **Stores new responses** for future reuse when no cache match is found
-
-## Key benefits
-
-### Cost reduction
-LangCache significantly reduces LLM costs by eliminating redundant API calls. Since up to 90% of LLM requests are repetitive, caching frequently-requested responses provides substantial cost savings.
-
-### Improved performance
-Cached responses are retrieved from memory, providing response times up to 15 times faster than LLM API calls. This improvement is particularly beneficial for retrieval-augmented generation (RAG) applications.
-
-### Simple deployment
-LangCache is available as a managed service through a REST API. The service includes:
-
-- Automated embedding generation
-- Configurable cache controls
-- Simple billing structure
-- No database management required
-
-### Advanced cache management
-The service provides comprehensive cache management features:
-
-- Data access and privacy controls
-- Configurable eviction protocols
-- Usage monitoring and analytics
-- Cache hit rate tracking
-
-## Use cases
-
-### AI assistants and chatbots
-Optimize conversational AI applications by caching common responses and reducing latency for frequently asked questions.
-
-### RAG applications
-Improve retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time.
-
-### AI agents
-Enhance multi-step reasoning chains and agent workflows by caching intermediate results and common reasoning patterns.
-
-### AI gateways
-Integrate LangCache into centralized AI gateway services to manage and control LLM costs across multiple applications.
-
-## Getting started
-
-LangCache is currently available through a private preview program. The service is accessible via REST API and supports any programming language.
-
-### Prerequisites
-
-To use LangCache, you need:
-
-- An AI application that makes LLM API calls
-- A use case involving repetitive or similar queries
-- Willingness to provide feedback during the preview phase
-
-### Access
-
-LangCache is offered as a fully-managed cloud service. During the private preview:
-
-- Participation is free
-- Usage limits may apply
-- Dedicated support is provided
-- Regular feedback sessions are conducted
-
-## Data security and privacy
-
-LangCache stores your data on your Redis servers. Redis does not access your data or use it to train AI models. The service maintains enterprise-grade security and privacy standards.
-
-## Support
-
-Private preview participants receive:
-
-- Dedicated onboarding resources
-- Documentation and tutorials
-- Email and chat support
-- Regular check-ins with the product team
-- Exclusive roadmap updates
-
-For more information about joining the private preview, visit the [Redis LangCache website](https://redis.io/langcache/).
diff --git a/content/develop/ai/langcache/_index.md b/content/develop/ai/langcache/_index.md
new file mode 100644
index 0000000000..db56ec3c9c
--- /dev/null
+++ b/content/develop/ai/langcache/_index.md
@@ -0,0 +1,111 @@
+---
+Title: Redis LangCache
+alwaysopen: false
+categories:
+- docs
+- develop
+- ai
+description: Store LLM responses for AI apps in a semantic cache.
+linkTitle: LangCache
+hideListLinks: true
+weight: 30
+---
+
+Redis LangCache is a fully-managed semantic caching service that reduces large language model (LLM) costs and improves response times for AI applications.
+
+## LangCache overview
+
+LangCache uses semantic caching to store and reuse previous LLM responses for repeated queries. Instead of calling the LLM for every request, LangCache checks if a similar response has already been generated and is stored in the cache. If a match is found, LangCache returns the cached response instantly, saving time and resources.
+
+Imagine you’re using an LLM to build an agent to answer questions about your company's products. Your users may ask questions like the following:
+
+- "What are the features of Product A?"
+- "Can you list the main features of Product A?"
+- "Tell me about Product A’s features."
+
+These prompts may have slight variations, but they essentially ask the same question. LangCache can help you avoid calling the LLM for each of these prompts by caching the response to the first prompt and returning it for any similar prompts.
+
+Using LangCache as a semantic caching service has the following benefits:
+
+- **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently requested responses.
+- **Faster AI app responses**: Get faster AI responses by retrieving previously stored requests from memory.
+- **Simpler deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Advanced cache management**: Manage data access, privacy, and eviction protocols, and monitor usage and cache hit rates.
+
+LangCache works well for the following use cases:
+
+- **AI assistants and chatbots**: Optimize conversational AI applications by caching common responses and reducing latency for frequently asked questions.
+- **RAG applications**: Enhance retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time.
+- **AI agents**: Improve multi-step reasoning chains and agent workflows by caching intermediate results and common reasoning patterns.
+- **AI gateways**: Integrate LangCache into centralized AI gateway services to manage and control LLM costs across multiple applications.
+
+### LLM cost reduction with LangCache
+
+{{< embed-md "langcache-cost-reduction.md" >}}
+
+## LangCache architecture
+
+The following diagram shows how you can integrate LangCache into your GenAI app:
+
+{{< image filename="images/rc/langcache-process.png" alt="The LangCache process diagram." >}}
+
+1. A user sends a prompt to your AI app.
+1. Your app sends the prompt to LangCache through the `POST /v1/caches/{cacheId}/entries/search` endpoint.
+1. LangCache calls an embedding model service to generate an embedding for the prompt.
+1. LangCache searches the cache to see if a similar response already exists by matching the embeddings of the new query with the stored embeddings.
+1. If a semantically similar entry is found (also known as a cache hit), LangCache gets the cached response and returns it to your app. Your app can then send the cached response back to the user.
+1. If no match is found (also known as a cache miss), your app receives an empty response from LangCache. Your app then queries your chosen LLM to generate a new response.
+1. Your app sends the prompt and the new response to LangCache through the `POST /v1/caches/{cacheId}/entries` endpoint.
+1. LangCache stores the embedding with the new response in the cache for future use.
+
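+The following sketch shows this flow with `cURL`. It's a minimal example, not a full client: the `$HOST`, `$CACHE_ID`, and `$API_KEY` variables stand in for your service's base URL, cache ID, and API key, and the prompt and response strings are placeholders.
+
+```bash
+# 1. Check the cache for a semantically similar prompt.
+curl -s -X POST "https://$HOST/v1/caches/$CACHE_ID/entries/search" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $API_KEY" \
+  -d '{"prompt": "What are the features of Product A?"}'
+
+# 2. On a cache miss, call your LLM yourself, then store the new response for reuse.
+curl -s -X POST "https://$HOST/v1/caches/$CACHE_ID/entries" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $API_KEY" \
+  -d '{"prompt": "What are the features of Product A?", "response": "Product A offers the following features: ..."}'
+```
+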
+See the [LangCache API reference]({{< relref "/develop/ai/langcache/api-reference" >}}) for more information on how to use the LangCache API.
+
+## Get started
+
+LangCache is currently in preview:
+
+- Public preview on [Redis Cloud]({{< relref "/operate/rc/langcache" >}})
+- Fully-managed [private preview](https://redis.io/langcache/)
+
+{{< multitabs id="langcache-get-started"
+ tab1="Redis Cloud"
+ tab2="Private preview" >}}
+
+{{< embed-md "rc-langcache-get-started.md" >}}
+
+-tab-sep-
+
+### Prerequisites
+
+To use LangCache in private preview, you need:
+
+- An AI application that makes LLM API calls
+- A use case involving repetitive or similar queries
+- Willingness to provide feedback during the preview phase
+
+### Access
+
+LangCache is offered as a fully-managed service. During the private preview:
+
+- Participation is free
+- Usage limits may apply
+- Dedicated support is provided
+- Regular feedback sessions are conducted
+
+### Data security and privacy
+
+LangCache stores your data on your Redis servers. Redis does not access your data or use it to train AI models. The service maintains enterprise-grade security and privacy standards.
+
+### Support
+
+Private preview participants receive:
+
+- Dedicated onboarding resources
+- Documentation and tutorials
+- Email and chat support
+- Regular check-ins with the product team
+- Exclusive roadmap updates
+
+For more information about joining the private preview, visit the [Redis LangCache website](https://redis.io/langcache/).
+
+{{< /multitabs >}}
diff --git a/content/develop/ai/langcache/api-reference.md b/content/develop/ai/langcache/api-reference.md
new file mode 100644
index 0000000000..1140751554
--- /dev/null
+++ b/content/develop/ai/langcache/api-reference.md
@@ -0,0 +1,129 @@
+---
+alwaysopen: false
+categories:
+- docs
+- develop
+- ai
+description: Learn to use the Redis LangCache API for semantic caching.
+hideListLinks: true
+linktitle: API and SDK reference
+title: LangCache API and SDK reference
+weight: 10
+---
+
+You can use the LangCache API from your client app to store and retrieve LLM, RAG, or agent responses.
+
+To access the LangCache API, you need:
+
+- LangCache API base URL
+- LangCache service API key
+- Cache ID
+
+When you call the API, you need to pass the LangCache API key in the `Authorization` header as a Bearer token and the Cache ID as the `cacheId` path parameter.
+
+For example, to check the health of the cache using `cURL`:
+
+```bash
+curl -s -X GET "https://$HOST/v1/caches/$CACHE_ID/health" \
+ -H "accept: application/json" \
+ -H "Authorization: Bearer $API_KEY"
+```
+
+The example expects several variables to be set in the shell:
+
+- **$HOST** - the LangCache API base URL
+- **$CACHE_ID** - your cache ID
+- **$API_KEY** - your LangCache service API key
+
+{{% info %}}
+This example uses `cURL` and Linux shell scripts to demonstrate the API; you can use any standard REST client or library.
+{{% /info %}}
+
+You can also use the [LangCache SDKs](#langcache-sdk) for JavaScript and Python to access the API.
+
+## API examples
+
+### Check cache health
+
+Use `GET /v1/caches/{cacheId}/health` to check the health of the cache.
+
+```sh
+GET https://[host]/v1/caches/{cacheId}/health
+```
+
+### Search LangCache for similar responses
+
+Use `POST /v1/caches/{cacheId}/entries/search` to search the cache for matching responses to a user prompt.
+
+```sh
+POST https://[host]/v1/caches/{cacheId}/entries/search
+{
+ "prompt": "User prompt text"
+}
+```
+
+Place this call in your client app right before you call your LLM's REST API. If LangCache returns a response, you can send that response back to the user instead of calling the LLM.
+
+If LangCache does not return a response, you should call your LLM's REST API to generate a new response. After you get a response from the LLM, you can [store it in LangCache](#store-a-new-response-in-langcache) for future use.
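+
+For example, a search call with `cURL`, using the same `$HOST`, `$CACHE_ID`, and `$API_KEY` shell variables as the health check example above:
+
+```bash
+curl -s -X POST "https://$HOST/v1/caches/$CACHE_ID/entries/search" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $API_KEY" \
+  -d '{"prompt": "User prompt text"}'
+```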
+
+You can also scope the responses returned from LangCache by adding an `attributes` object to the request. LangCache will only return responses that match the attributes you specify.
+
+```sh
+POST https://[host]/v1/caches/{cacheId}/entries/search
+{
+ "prompt": "User prompt text",
+ "attributes": {
+ "customAttributeName": "customAttributeValue"
+ }
+}
+```
+
+### Store a new response in LangCache
+
+Use `POST /v1/caches/{cacheId}/entries` to store a new response in the cache.
+
+```sh
+POST https://[host]/v1/caches/{cacheId}/entries
+{
+ "prompt": "User prompt text",
+ "response": "LLM response text"
+}
+```
+
+Place this call in your client app after you get a response from the LLM. This will store the response in the cache for future use.
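+
+For example, with `cURL` (again assuming the `$HOST`, `$CACHE_ID`, and `$API_KEY` shell variables are set):
+
+```bash
+curl -s -X POST "https://$HOST/v1/caches/$CACHE_ID/entries" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $API_KEY" \
+  -d '{"prompt": "User prompt text", "response": "LLM response text"}'
+```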
+
+You can also store the responses with custom attributes by adding an `attributes` object to the request.
+
+```sh
+POST https://[host]/v1/caches/{cacheId}/entries
+{
+ "prompt": "User prompt text",
+ "response": "LLM response text",
+ "attributes": {
+ "customAttributeName": "customAttributeValue"
+ }
+}
+```
+
+### Delete cached responses
+
+Use `DELETE /v1/caches/{cacheId}/entries/{entryId}` to delete a cached response from the cache.
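+
+For example:
+
+```sh
+DELETE https://[host]/v1/caches/{cacheId}/entries/{entryId}
+```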
+
+You can also use `DELETE /v1/caches/{cacheId}/entries` to delete multiple cached responses at once. If you provide an `attributes` object, LangCache will delete all responses that match the attributes you specify.
+
+```sh
+DELETE https://[host]/v1/caches/{cacheId}/entries
+{
+ "attributes": {
+ "customAttributeName": "customAttributeValue"
+ }
+}
+```
+
+## LangCache SDK
+
+If your app is written in JavaScript or Python, you can also use the LangCache Software Development Kits (SDKs) to access the API.
+
+To learn how to use the LangCache SDKs:
+
+- [LangCache SDK for JavaScript](https://www.npmjs.com/package/@redis-ai/langcache)
+- [LangCache SDK for Python](https://pypi.org/project/langcache/)
diff --git a/content/embeds/langcache-cost-reduction.md b/content/embeds/langcache-cost-reduction.md
new file mode 100644
index 0000000000..5850486ac2
--- /dev/null
+++ b/content/embeds/langcache-cost-reduction.md
@@ -0,0 +1,21 @@
+LangCache reduces your LLM costs by caching responses and avoiding repeated API calls. When a response is served from cache, you don’t pay for output tokens. Input token costs are typically offset by embedding and storage costs.
+
+For every cached response, you'll save the output token cost. To calculate your monthly savings with LangCache, you can use the following formula:
+
+```text
+Est. monthly savings with LangCache =
+ (Monthly output token costs) × (Cache hit rate)
+```
+
+The more requests you serve from LangCache, the more you save, because you’re not paying to regenerate the output.
+
+Here’s an example:
+- Monthly LLM spend: $200
+- Percentage of output tokens in your spend: 60%
+- Cost of output tokens: $200 × 60% = $120
+- Cache hit rate: 50%
+- Estimated savings: $120 × 50% = $60/month
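+
+The same arithmetic as a quick shell sketch (the spend, output-token share, and hit rate are the example values above, not real measurements):
+
+```bash
+monthly_spend=200   # monthly LLM spend in dollars
+output_share=60     # percentage of spend from output tokens
+hit_rate=50         # cache hit rate in percent
+echo "$(( monthly_spend * output_share / 100 * hit_rate / 100 ))"   # prints 60 ($/month saved)
+```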
+
+{{< note >}}
+The formula and numbers above provide a rough estimate of your monthly savings. Actual savings will vary depending on your usage.
+{{< /note >}}
\ No newline at end of file
diff --git a/content/embeds/rc-langcache-get-started.md b/content/embeds/rc-langcache-get-started.md
new file mode 100644
index 0000000000..8d7e337fc1
--- /dev/null
+++ b/content/embeds/rc-langcache-get-started.md
@@ -0,0 +1,7 @@
+To set up LangCache on Redis Cloud:
+
+1. [Create a database]({{< relref "/operate/rc/databases/create-database" >}}) on Redis Cloud.
+2. [Create a LangCache service]({{< relref "/operate/rc/langcache/create-service" >}}) for your database on Redis Cloud.
+3. [Use the LangCache API]({{< relref "/operate/rc/langcache/use-langcache" >}}) from your client app.
+
+After you set up LangCache, you can [view and edit the cache]({{< relref "/operate/rc/langcache/view-edit-cache" >}}) and [monitor the cache's performance]({{< relref "/operate/rc/langcache/monitor-cache" >}}).
\ No newline at end of file
diff --git a/content/operate/rc/changelog/2023/august-2023.md b/content/operate/rc/changelog/2023/august-2023.md
index ba40cb5357..ee339741f7 100644
--- a/content/operate/rc/changelog/2023/august-2023.md
+++ b/content/operate/rc/changelog/2023/august-2023.md
@@ -38,7 +38,7 @@ If you'd like to use triggers and functions with a [Flexible subscription]({{< r
For more information about triggers and functions, see the [triggers and functions documentation]({{< relref "/operate/oss_and_stack/stack-with-enterprise/deprecated-features/triggers-and-functions/" >}}).
{{< note >}}
-Triggers and functions is discontinued as of [May 2024]({{< relref "/operate/rc/changelog/may-2024" >}}).
+Triggers and functions is discontinued as of [May 2024]({{< relref "/operate/rc/changelog/2024/may-2024" >}}).
{{< /note >}}
### Maintenance windows
diff --git a/content/operate/rc/changelog/2024/_index.md b/content/operate/rc/changelog/2024/_index.md
new file mode 100644
index 0000000000..9aa4c96a2d
--- /dev/null
+++ b/content/operate/rc/changelog/2024/_index.md
@@ -0,0 +1,19 @@
+---
+Title: Redis Cloud changelog (2024)
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: All Redis Cloud changelogs from 2024.
+hideListLinks: true
+linktitle: 2024
+highlights: All Redis Cloud changelogs from 2024.
+tags:
+- changelog
+weight: 94
+---
+
+Select a month from the following table to see a more detailed changelog for that month:
+
+{{< table-children columnNames="Month,Highlights" columnSources="LinkTitle,highlights" enableLinks="LinkTitle" >}}
\ No newline at end of file
diff --git a/content/operate/rc/changelog/april-2024.md b/content/operate/rc/changelog/2024/april-2024.md
similarity index 96%
rename from content/operate/rc/changelog/april-2024.md
rename to content/operate/rc/changelog/2024/april-2024.md
index ecd20cbb5d..fffb3daed5 100644
--- a/content/operate/rc/changelog/april-2024.md
+++ b/content/operate/rc/changelog/2024/april-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (April 2024)
+aliases:
+- /operate/rc/changelog/april-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/december-2024.md b/content/operate/rc/changelog/2024/december-2024.md
similarity index 96%
rename from content/operate/rc/changelog/december-2024.md
rename to content/operate/rc/changelog/2024/december-2024.md
index 182bf4f99c..335d0e78e0 100644
--- a/content/operate/rc/changelog/december-2024.md
+++ b/content/operate/rc/changelog/2024/december-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (December 2024)
+aliases:
+- /operate/rc/changelog/december-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/february-2024.md b/content/operate/rc/changelog/2024/february-2024.md
similarity index 97%
rename from content/operate/rc/changelog/february-2024.md
rename to content/operate/rc/changelog/2024/february-2024.md
index 86c9224c59..f2fb6c00db 100644
--- a/content/operate/rc/changelog/february-2024.md
+++ b/content/operate/rc/changelog/2024/february-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (February 2024)
+aliases:
+- /operate/rc/changelog/february-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/january-2024.md b/content/operate/rc/changelog/2024/january-2024.md
similarity index 98%
rename from content/operate/rc/changelog/january-2024.md
rename to content/operate/rc/changelog/2024/january-2024.md
index 394484c4d9..eb6412358b 100644
--- a/content/operate/rc/changelog/january-2024.md
+++ b/content/operate/rc/changelog/2024/january-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (January 2024)
+aliases:
+- /operate/rc/changelog/january-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/july-2024.md b/content/operate/rc/changelog/2024/july-2024.md
similarity index 98%
rename from content/operate/rc/changelog/july-2024.md
rename to content/operate/rc/changelog/2024/july-2024.md
index bed26c9422..d6e49ba6cf 100644
--- a/content/operate/rc/changelog/july-2024.md
+++ b/content/operate/rc/changelog/2024/july-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (July 2024)
+aliases:
+- /operate/rc/changelog/july-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/june-2024.md b/content/operate/rc/changelog/2024/june-2024.md
similarity index 95%
rename from content/operate/rc/changelog/june-2024.md
rename to content/operate/rc/changelog/2024/june-2024.md
index f5e6780be3..9fafc43014 100644
--- a/content/operate/rc/changelog/june-2024.md
+++ b/content/operate/rc/changelog/2024/june-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (June 2024)
+aliases:
+- /operate/rc/changelog/june-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/march-2024.md b/content/operate/rc/changelog/2024/march-2024.md
similarity index 95%
rename from content/operate/rc/changelog/march-2024.md
rename to content/operate/rc/changelog/2024/march-2024.md
index 4c686e276f..00da2be2e7 100644
--- a/content/operate/rc/changelog/march-2024.md
+++ b/content/operate/rc/changelog/2024/march-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (March 2024)
+aliases:
+- /operate/rc/changelog/march-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/may-2024.md b/content/operate/rc/changelog/2024/may-2024.md
similarity index 98%
rename from content/operate/rc/changelog/may-2024.md
rename to content/operate/rc/changelog/2024/may-2024.md
index 32771717c7..611fbec95a 100644
--- a/content/operate/rc/changelog/may-2024.md
+++ b/content/operate/rc/changelog/2024/may-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (May 2024)
+aliases:
+- /operate/rc/changelog/may-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/november-2024.md b/content/operate/rc/changelog/2024/november-2024.md
similarity index 92%
rename from content/operate/rc/changelog/november-2024.md
rename to content/operate/rc/changelog/2024/november-2024.md
index 96408654b8..f5b3820f39 100644
--- a/content/operate/rc/changelog/november-2024.md
+++ b/content/operate/rc/changelog/2024/november-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (November 2024)
+aliases:
+- /operate/rc/changelog/november-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/october-2024.md b/content/operate/rc/changelog/2024/october-2024.md
similarity index 95%
rename from content/operate/rc/changelog/october-2024.md
rename to content/operate/rc/changelog/2024/october-2024.md
index cc5dbfe622..c8eaaa7331 100644
--- a/content/operate/rc/changelog/october-2024.md
+++ b/content/operate/rc/changelog/2024/october-2024.md
@@ -1,5 +1,7 @@
---
Title: Redis Cloud changelog (October 2024)
+aliases:
+- /operate/rc/changelog/october-2024/
alwaysopen: false
categories:
- docs
diff --git a/content/operate/rc/changelog/april-2025.md b/content/operate/rc/changelog/april-2025.md
index ee9e6e2356..4e10de7ee6 100644
--- a/content/operate/rc/changelog/april-2025.md
+++ b/content/operate/rc/changelog/april-2025.md
@@ -11,7 +11,7 @@ highlights: New UI and dark mode, Map multiple Redis Cloud accounts to marketpla
linktitle: April 2025
tags:
- changelog
-weight: 32
+weight: 76
---
## New features
diff --git a/content/operate/rc/changelog/february-2025.md b/content/operate/rc/changelog/february-2025.md
index adf30121c3..f5808a5a97 100644
--- a/content/operate/rc/changelog/february-2025.md
+++ b/content/operate/rc/changelog/february-2025.md
@@ -7,11 +7,11 @@ categories:
- rc
description: New features, enhancements, and other changes added to Redis Cloud during
February 2025.
-highlights: Pico billing unit, Redis hashing policy
+highlights: Pico billing unit
linktitle: February 2025
tags:
- changelog
-weight: 36
+weight: 80
---
## New features
diff --git a/content/operate/rc/changelog/july-2025.md b/content/operate/rc/changelog/july-2025.md
new file mode 100644
index 0000000000..0e81632b85
--- /dev/null
+++ b/content/operate/rc/changelog/july-2025.md
@@ -0,0 +1,27 @@
+---
+Title: Redis Cloud changelog (July 2025)
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: New features, enhancements, and other changes added to Redis Cloud during
+ July 2025.
+highlights: LangCache public preview
+linktitle: July 2025
+weight: 70
+tags:
+- changelog
+---
+
+## New features
+
+### LangCache public preview
+
+[LangCache]({{< relref "/operate/rc/langcache" >}}) is now available in public preview on Redis Cloud.
+
+LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, you can significantly reduce API costs and lower the average latency of your generative AI applications.
+
+For more information about how LangCache works, see the [LangCache overview]({{< relref "/develop/ai/langcache" >}}).
+
+{{< embed-md "rc-langcache-get-started.md" >}}
\ No newline at end of file
diff --git a/content/operate/rc/changelog/june-2025.md b/content/operate/rc/changelog/june-2025.md
index 05178f11f0..59e79ca146 100644
--- a/content/operate/rc/changelog/june-2025.md
+++ b/content/operate/rc/changelog/june-2025.md
@@ -9,7 +9,9 @@ description: New features, enhancements, and other changes added to Redis Cloud
June 2025.
highlights: Block public endpoints, Free database selection, Faster scaling with Redis hashing policy
linktitle: June 2025
-weight: 28
+weight: 72
+tags:
+- changelog
---
## New features
diff --git a/content/operate/rc/changelog/march-2025.md b/content/operate/rc/changelog/march-2025.md
index 26fb032512..2833c2ad57 100644
--- a/content/operate/rc/changelog/march-2025.md
+++ b/content/operate/rc/changelog/march-2025.md
@@ -11,7 +11,7 @@ highlights: Redis Insight on Redis Cloud, Redis Hashing policy
linktitle: March 2025
tags:
- changelog
-weight: 34
+weight: 78
---
## New features
diff --git a/content/operate/rc/changelog/may-2025.md b/content/operate/rc/changelog/may-2025.md
index 8a28d942d3..359651b7f0 100644
--- a/content/operate/rc/changelog/may-2025.md
+++ b/content/operate/rc/changelog/may-2025.md
@@ -11,7 +11,7 @@ highlights: Upgrade database version for a single Pro database, Business address
linktitle: May 2025
tags:
- changelog
-weight: 30
+weight: 74
---
## New features
diff --git a/content/operate/rc/changelog/version-release-notes/_index.md b/content/operate/rc/changelog/version-release-notes/_index.md
index 2e9f2dd0aa..0dfa51e9c9 100644
--- a/content/operate/rc/changelog/version-release-notes/_index.md
+++ b/content/operate/rc/changelog/version-release-notes/_index.md
@@ -8,7 +8,7 @@ categories:
description: Lists release notes and breaking changes for available Redis database versions on Redis Cloud.
hideListLinks: true
linktitle: Redis version release notes
-weight: 95
+weight: 1
---
When new versions of Redis Open Source change existing commands, upgrading your Redis Cloud database to a new version can potentially break some functionality. Before you upgrade, read the provided list of changes that affect Redis Cloud and update any applications that connect to your database to handle these changes.
diff --git a/content/operate/rc/langcache/_index.md b/content/operate/rc/langcache/_index.md
new file mode 100644
index 0000000000..9d121dd31f
--- /dev/null
+++ b/content/operate/rc/langcache/_index.md
@@ -0,0 +1,26 @@
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: Store LLM responses for AI applications in Redis Cloud.
+hideListLinks: true
+linktitle: LangCache
+title: Semantic caching with LangCache on Redis Cloud
+weight: 36
+bannerText: LangCache on Redis Cloud is currently available as a public preview.
+bannerChildren: true
+---
+
+LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, you can significantly reduce API costs and lower the average latency of your generative AI applications.
+
+For more information about how LangCache works, see the [LangCache overview]({{< relref "/develop/ai/langcache" >}}).
+
+## LLM cost reduction with LangCache
+
+{{< embed-md "langcache-cost-reduction.md" >}}
+
+## Get started with LangCache on Redis Cloud
+
+{{< embed-md "rc-langcache-get-started.md" >}}
\ No newline at end of file
diff --git a/content/operate/rc/langcache/create-service.md b/content/operate/rc/langcache/create-service.md
new file mode 100644
index 0000000000..f1b1710c8f
--- /dev/null
+++ b/content/operate/rc/langcache/create-service.md
@@ -0,0 +1,132 @@
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: Create and configure a LangCache service on Redis Cloud.
+hideListLinks: true
+linktitle: Create service
+title: Create a LangCache service
+weight: 5
+---
+
+Redis LangCache provides vector search capabilities and efficient caching for AI-powered applications. This guide walks you through creating and configuring a LangCache service in Redis Cloud.
+
+## Prerequisites
+
+To create a LangCache service, you will need:
+
+- A Redis Cloud database. If you don't have one, see [Create a database]({{< relref "/operate/rc/databases/create-database" >}}).
+
+ {{< note >}}
+LangCache does not support the following databases during public preview:
+- Databases with a [CIDR allow list]({{< relref "/operate/rc/security/cidr-whitelist" >}})
+- [Active-Active]({{< relref "/operate/rc/databases/configuration/active-active-redis" >}}) databases
+- Databases with the [default user]({{< relref "/operate/rc/security/access-control/data-access-control/default-user" >}}) turned off
+ {{< /note >}}
+
+- An [OpenAI API key](https://platform.openai.com/api-keys). LangCache uses OpenAI to generate embeddings for prompts and responses.
+
+## Create a LangCache service
+
+From the [Redis Cloud console](https://cloud.redis.io/), select **LangCache** from the left-hand menu.
+
+When you access the LangCache page for the first time, you will see a page with an introduction to LangCache. Select **Let's create a service** to create your first service.
+
+{{< image filename="images/rc/langcache-create-first-service.png" alt="The LangCache introduction page with the Let's create a service button." >}}
+
+If you have already created a LangCache service, select **New service** to create another one.
+
+{{< image filename="images/rc/langcache-new-service.png" alt="The New service button." >}}
+
+This takes you to the **Create LangCache service** page. The page is divided into the following sections:
+
+1. The [General settings](#general-settings) section defines basic properties of your service.
+1. The [Embedding settings](#embedding-settings) section defines the embedding model used by your service.
+1. The [Attributes settings](#attributes-settings) section allows you to define attributes for your service.
+
+### General settings
+
+The **General settings** section defines basic properties of your service.
+
+{{< image filename="images/rc/langcache-general-settings.png" alt="The General settings section." >}}
+
+| Setting name |Description|
+|:----------------------|:----------|
+| **Service name** | Enter a name for your LangCache service. We recommend you use a name that describes your service's purpose. |
+| **Select database** | Select the Redis Cloud database to use for this service from the list. |
+| **TTL** | The number of seconds to cache entries before they expire. Default: `No expiration` - items in the cache will remain until manually removed. |
+| **User** | The [database access user]({{< relref "/operate/rc/security/access-control/data-access-control/role-based-access-control" >}}) to use for this service. LangCache only supports the [`default` user]({{< relref "/operate/rc/security/access-control/data-access-control/default-user" >}}) during public preview. |
+
+### Embedding settings
+
+The **Embedding settings** section defines the embedding model used by your service.
+
+{{< image filename="images/rc/langcache-embedding-settings.png" alt="The Embedding settings section." >}}
+
+| Setting name |Description|
+|:----------------------|:----------|
+| **Supported Embedding Provider** | The embedding provider to use for your service. LangCache only supports OpenAI during public preview. |
+| **Embedding provider API key** | Enter your embedding provider's API key. |
+| **Model** | Select the embedding model to use for your service. |
+| **Similarity threshold** | Set the minimum similarity score required to consider a cached response a match. Range: `0.0` to `1.0`. Default: `0.9`. A higher value means more precise matches, but setting it too high may exclude relevant matches; a lower value returns more matches, but may include less relevant ones. We recommend starting between `0.8` and `0.9` and then fine-tuning based on your results. |
+
+### Attributes settings
+
+Attributes provide powerful scoping capabilities for your LangCache operations. Think of them as tags or labels that help you organize and manage your cached data with precision.
+
+The **Attributes settings** section allows you to define attributes for your service. It is collapsed by default.
+
+{{< image filename="images/rc/langcache-attribute-settings.png" alt="The Attributes settings section." >}}
+
+LangCache allows you to define up to 5 custom attributes that align with your specific use case. To add a new attribute:
+
+1. Select **Add attribute**.
+
+    {{< image filename="images/rc/langcache-add-attribute.png" alt="The Add attribute button." >}}
+
+1. Give your custom attribute a descriptive name and select the check mark button to save it.
+
+After you save your custom attribute, it will appear in the list of custom attributes. Use the **Delete** button to remove it.
+
+{{< image filename="images/rc/langcache-custom-attributes.png" alt="The custom attributes list." >}}
+
+You can also select **Add attribute** again to add an additional attribute.
+
+### Create service
+
+When you are done setting the details of your LangCache service, select **Create** to create it.
+
+A window containing your LangCache service key will appear. Select **Copy** to copy the key to your clipboard.
+
+{{< image filename="images/rc/langcache-service-key.png" alt="The LangCache service key dialog." >}}
+
+{{< warning >}}
+This is the only time the value of the service key is available. Save it to a secure location before closing the dialog box.
+
+If you lose the service key value, you will need to [replace the service key]({{< relref "/operate/rc/langcache/view-edit-cache#replace-service-api-key" >}}) to be able to use the LangCache API.
+{{< /warning >}}
+
+You'll be taken to your LangCache service's **Configuration** page. You'll also be able to see your LangCache service in the LangCache service list.
+
+{{< image filename="images/rc/langcache-service-list.png" alt="The LangCache services list." >}}
+
+If an error occurs, verify that:
+
+- Your database is active.
+- You have provided a valid OpenAI API key.
+- You have provided valid values for all the required fields.
+
+For help, [contact support](https://redis.io/support/).
+
+## Next steps
+
+After your cache is created, you can [use the LangCache API]({{< relref "/operate/rc/langcache/use-langcache" >}}) from your client app.
+
+You can also [view and edit the cache]({{< relref "/operate/rc/langcache/view-edit-cache" >}}) and [monitor the cache's performance]({{< relref "/operate/rc/langcache/monitor-cache" >}}).
diff --git a/content/operate/rc/langcache/monitor-cache.md b/content/operate/rc/langcache/monitor-cache.md
new file mode 100644
index 0000000000..a250230751
--- /dev/null
+++ b/content/operate/rc/langcache/monitor-cache.md
@@ -0,0 +1,55 @@
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: Monitor the performance of a LangCache service on Redis Cloud.
+hideListLinks: true
+linktitle: Monitor cache
+title: Monitor a LangCache service
+weight: 20
+---
+
+You can monitor a LangCache service's performance from the **Metrics** tab of the service's page.
+
+{{< image filename="images/rc/langcache-metrics.png" alt="The LangCache Metrics tab." >}}
+
+The **Metrics** tab provides a series of graphs showing performance data for your LangCache service.
+
+You can switch between daily and weekly stats using the **Day** and **Week** buttons at the top of the page. Each graph also includes minimum, average, maximum, and latest values.
+
+## LangCache metrics reference
+
+### Cache hit ratio
+
+The percentage of requests that were successfully served from the cache without needing to call the LLM API. A healthy cache will generally show an increasing hit ratio over time as it becomes more populated by cached responses.
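+
+Expressed as a simple formula (assuming hits and searches are counted over the same period):
+
+```
+Cache hit ratio = (cache hits ÷ cache search requests) × 100
+```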
+
+To optimize your cache hit ratio:
+
+- Tune similarity thresholds to capture more semantically related queries.
+- Analyze recurring query patterns to fine-tune your embedding strategies.
+- Test different embedding models to understand their impact on cache hit rates.
+
+A higher cache hit ratio does not always mean better performance. If the cache is too lenient in its similarity matching, it may return irrelevant responses, leading to a higher cache hit rate but poorer overall performance.
+
+### Cache search requests
+
+The number of read attempts against the cache at the specified time. This metric can help you understand the load on your cache and identify periods of high or low activity.
+
+### Cache latency
+
+The average time to process a cache lookup request. This metric can help you identify performance bottlenecks and optimize your cache configuration.
+
+Cache latency is highly dependent on embedding model performance, since the cache must generate embeddings for each request in order to compare them to the cached responses.
+
+High cache latency may indicate one of the following:
+
+- Inefficient embedding generation from the embedding provider
+- Large cache requiring longer comparison times
+- Network latency between the cache and embedding provider
+- Resource constraints
+
+### Cache items
+
+The total number of entries stored in your cache. Each item includes the query string, embedding, response, and other metadata.
\ No newline at end of file
diff --git a/content/operate/rc/langcache/use-langcache.md b/content/operate/rc/langcache/use-langcache.md
new file mode 100644
index 0000000000..80713073d1
--- /dev/null
+++ b/content/operate/rc/langcache/use-langcache.md
@@ -0,0 +1,28 @@
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: Use the LangCache API on Redis Cloud from your client app.
+hideListLinks: true
+linktitle: Use LangCache
+title: Use the LangCache API on Redis Cloud
+weight: 10
+---
+
+You can use the LangCache API from your client app to store and retrieve LLM, RAG, or agent responses.
+
+To access the LangCache API, you need:
+
+- LangCache API base URL
+- LangCache service API key
+- Cache ID
+
+For LangCache on Redis Cloud, the base URL and cache ID are available in the LangCache service's **Configuration** page in the [**Connectivity** section]({{< relref "/operate/rc/langcache/view-edit-cache#connectivity" >}}).
+
+The LangCache API key is only available immediately after you create the LangCache service. If you lost this value, you will need to [replace the service API key]({{< relref "/operate/rc/langcache/view-edit-cache#replace-service-api-key" >}}) to be able to use the LangCache API.
+
+When you call the API, you need to pass the LangCache API key in the `Authorization` header as a Bearer token and the Cache ID as the `cacheId` path parameter.
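+
+For example, to search the cache with `cURL` (where `$HOST`, `$CACHE_ID`, and `$API_KEY` stand in for your API base URL, cache ID, and service API key):
+
+```bash
+curl -s -X POST "https://$HOST/v1/caches/$CACHE_ID/entries/search" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $API_KEY" \
+  -d '{"prompt": "User prompt text"}'
+```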
+
+See the [LangCache API reference]({{< relref "/develop/ai/langcache/api-reference" >}}) for more information on how to use the LangCache API.
diff --git a/content/operate/rc/langcache/view-edit-cache.md b/content/operate/rc/langcache/view-edit-cache.md
new file mode 100644
index 0000000000..87250105d5
--- /dev/null
+++ b/content/operate/rc/langcache/view-edit-cache.md
@@ -0,0 +1,126 @@
+---
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: View and edit a LangCache service on Redis Cloud.
+hideListLinks: true
+linktitle: View and edit cache
+title: View and edit a LangCache service
+weight: 15
+---
+
+After you have [created your first LangCache service]({{< relref "/operate/rc/langcache/create-service" >}}), selecting **LangCache** from the Redis Cloud console menu will take you to the **LangCache Services** page.
+
+This page displays a list of all LangCache services associated with your account.
+
+{{< image filename="images/rc/langcache-service-list.png" alt="The LangCache services list." >}}
+
+Select your LangCache service from the list to view the service's details.
+
+## Configuration tab
+
+The **Configuration** tab lets you view the details of your LangCache service. It contains the following sections:
+
+- The **Connectivity** section provides the connection details for your LangCache service.
+- The **General** section provides the cache settings for your LangCache service.
+- The **Actions** section lets you flush or delete your LangCache service.
+
+### Connectivity
+
+The **Connectivity** section provides the connection details for your LangCache service.
+
+{{< image filename="images/rc/langcache-view-connectivity.png" alt="The Connectivity section." >}}
+
+| Setting name |Description|
+|:----------------------|:----------|
+| **API Key** | The Bearer token for your LangCache API requests. |
+| **Cache ID** | The unique ID of your LangCache service. |
+| **API Base URL** | The base URL for LangCache API requests. |
+
+Select the **Copy** button next to the Cache ID and API Base URL to copy them to the clipboard. If you lost the API key value or need to rotate the key, you can [replace the service API key](#replace-service-api-key) at any time.
+
+See [use the LangCache API]({{< relref "/operate/rc/langcache/use-langcache" >}}) for more information on how to use these values.
+
+#### Replace service API key
+
+The API key is only available immediately after you create the LangCache service. If you lost this value, or need to rotate the key, you can replace the service key at any time.
+
+To replace the service key:
+
+1. Select **Replace key**.
+
+    {{< image filename="images/rc/langcache-replace-key.png" alt="The Replace key button." >}}
+
+1. A confirmation dialog will appear. Select **Confirm** to confirm.
+
+1. The new key will appear in a dialog box. Select **Copy** to copy the key to the clipboard.
+
+    {{< image filename="images/rc/langcache-service-key.png" alt="The LangCache service key dialog." >}}
+
+    {{< warning >}}
+This is the only time the value of the service key is available. Save it to a secure location before closing the dialog box.
+
+If you lose the service key value, you will need to replace the key again.
+    {{< /warning >}}
+
+### General
+
+The **General** section provides configuration details for your LangCache service.
+
+{{< image filename="images/rc/langcache-view-general.png" alt="The General section." >}}
+
+| Setting name |Description|
+|:----------------------|:----------|
+| **Service name** | The name of the LangCache service. |
+| **Database** | The database that stores your cache data. |
+| **Similarity threshold** | The minimum similarity score required to consider a cached response a match. _(Editable)_ |
+| **TTL** | The number of seconds to cache entries before they expire. _(Editable)_ |
+| **Embedding Provider** | The embedding provider to use for your service. |
+
+Some of these settings can be changed after cache creation. To do so, select the **Edit** button.
+
+### Attributes
+
+The **Attributes** section provides the custom attributes defined for your LangCache service.
+
+{{< image filename="images/rc/langcache-view-attributes.png" alt="The Attributes section." >}}
+
+You cannot edit custom attributes after cache creation.
+
+### Actions
+
+The **Actions** section lets you flush or delete your LangCache service.
+
+{{< image filename="images/rc/langcache-view-actions.png" alt="The Actions section." >}}
+
+#### Flush cache
+
+Flushing the cache completely erases all cached data while preserving the service configuration and the search index used by the cache.
+
+To flush the cache:
+
+1. Select **Flush**.
+
+1. A confirmation dialog will appear. Select **Flush** again to confirm.
+
+Flushing the cache is permanent and cannot be undone, and will result in cache misses until the cache is repopulated.
+
+#### Delete service
+
+Deleting your LangCache service permanently deletes all associated cached data, the service configuration, and the LangCache search index. It also immediately terminates all API keys associated with the service. Data stored in other indexes within the same database will remain unaffected.
+
+To delete your LangCache service:
+
+1. Select **Delete**.
+
+1. A confirmation dialog will appear. Select the checkbox to confirm that you want to delete the service.
+
+1. Select **Delete** again to confirm.
+
+Deleting the LangCache service is permanent and cannot be undone.
+
+## Metrics tab
+
+The **Metrics** tab provides a series of graphs showing performance data for your LangCache service. See [Monitor a LangCache service]({{< relref "/operate/rc/langcache/monitor-cache" >}}) for more information.
\ No newline at end of file
diff --git a/static/images/rc/langcache-add-attribute.png b/static/images/rc/langcache-add-attribute.png
new file mode 100644
index 0000000000..bb8b2fd060
Binary files /dev/null and b/static/images/rc/langcache-add-attribute.png differ
diff --git a/static/images/rc/langcache-attribute-settings.png b/static/images/rc/langcache-attribute-settings.png
new file mode 100644
index 0000000000..150038c267
Binary files /dev/null and b/static/images/rc/langcache-attribute-settings.png differ
diff --git a/static/images/rc/langcache-create-first-service.png b/static/images/rc/langcache-create-first-service.png
new file mode 100644
index 0000000000..f0bc395a2e
Binary files /dev/null and b/static/images/rc/langcache-create-first-service.png differ
diff --git a/static/images/rc/langcache-custom-attributes.png b/static/images/rc/langcache-custom-attributes.png
new file mode 100644
index 0000000000..e11dee8542
Binary files /dev/null and b/static/images/rc/langcache-custom-attributes.png differ
diff --git a/static/images/rc/langcache-embedding-settings.png b/static/images/rc/langcache-embedding-settings.png
new file mode 100644
index 0000000000..16e3e6ca2f
Binary files /dev/null and b/static/images/rc/langcache-embedding-settings.png differ
diff --git a/static/images/rc/langcache-general-settings.png b/static/images/rc/langcache-general-settings.png
new file mode 100644
index 0000000000..45ede217a1
Binary files /dev/null and b/static/images/rc/langcache-general-settings.png differ
diff --git a/static/images/rc/langcache-metrics.png b/static/images/rc/langcache-metrics.png
new file mode 100644
index 0000000000..8d662be82f
Binary files /dev/null and b/static/images/rc/langcache-metrics.png differ
diff --git a/static/images/rc/langcache-new-service.png b/static/images/rc/langcache-new-service.png
new file mode 100644
index 0000000000..303b07d215
Binary files /dev/null and b/static/images/rc/langcache-new-service.png differ
diff --git a/static/images/rc/langcache-process.png b/static/images/rc/langcache-process.png
new file mode 100644
index 0000000000..d753bd0a18
Binary files /dev/null and b/static/images/rc/langcache-process.png differ
diff --git a/static/images/rc/langcache-replace-key.png b/static/images/rc/langcache-replace-key.png
new file mode 100644
index 0000000000..910e8abc45
Binary files /dev/null and b/static/images/rc/langcache-replace-key.png differ
diff --git a/static/images/rc/langcache-service-key.png b/static/images/rc/langcache-service-key.png
new file mode 100644
index 0000000000..29e645f24e
Binary files /dev/null and b/static/images/rc/langcache-service-key.png differ
diff --git a/static/images/rc/langcache-service-list.png b/static/images/rc/langcache-service-list.png
new file mode 100644
index 0000000000..84242e00e7
Binary files /dev/null and b/static/images/rc/langcache-service-list.png differ
diff --git a/static/images/rc/langcache-view-actions.png b/static/images/rc/langcache-view-actions.png
new file mode 100644
index 0000000000..f9927b8905
Binary files /dev/null and b/static/images/rc/langcache-view-actions.png differ
diff --git a/static/images/rc/langcache-view-attributes.png b/static/images/rc/langcache-view-attributes.png
new file mode 100644
index 0000000000..8658320897
Binary files /dev/null and b/static/images/rc/langcache-view-attributes.png differ
diff --git a/static/images/rc/langcache-view-connectivity.png b/static/images/rc/langcache-view-connectivity.png
new file mode 100644
index 0000000000..6e78345a95
Binary files /dev/null and b/static/images/rc/langcache-view-connectivity.png differ
diff --git a/static/images/rc/langcache-view-general.png b/static/images/rc/langcache-view-general.png
new file mode 100644
index 0000000000..ce25fe6447
Binary files /dev/null and b/static/images/rc/langcache-view-general.png differ