diff --git a/website/src/pages/ar/about.mdx b/website/src/pages/ar/about.mdx
index 93dbeb51f658..7fda868aab9d 100644
--- a/website/src/pages/ar/about.mdx
+++ b/website/src/pages/ar/about.mdx
@@ -1,67 +1,46 @@
---
-title: حول The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## What is The Graph?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-تدفق البيانات يتبع الخطوات التالية:
-
-1. A dapp adds data to Ethereum through a transaction on a smart contract.
-2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
-
-## الخطوات التالية
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/ar/index.json b/website/src/pages/ar/index.json
index 2443372843a8..c237a3690285 100644
--- a/website/src/pages/ar/index.json
+++ b/website/src/pages/ar/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "متعدد-السلاسل",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "متعدد-السلاسل",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/ar/indexing/new-chain-integration.mdx b/website/src/pages/ar/indexing/new-chain-integration.mdx
index b204d002b25d..ad200a351dd2 100644
--- a/website/src/pages/ar/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ar/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
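
Graph Node requests these receipts in JSON-RPC batches. A minimal sketch of that batch shape (the `receipt_batch` helper and the placeholder hashes are hypothetical, for illustration only):

```python
import json

def receipt_batch(tx_hashes):
    """Build a JSON-RPC 2.0 batch body of eth_getTransactionReceipt calls."""
    return [
        {"jsonrpc": "2.0", "id": i, "method": "eth_getTransactionReceipt", "params": [h]}
        for i, h in enumerate(tx_hashes)
    ]

# Placeholder transaction hashes, for illustration only.
batch = receipt_batch(["0x" + "ab" * 32, "0x" + "cd" * 32])
print(json.dumps(batch, indent=2))
```

A batch is simply a JSON array of individual request objects; the node replies with an array of result objects matched back to requests by `id`.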
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/ar/indexing/overview.mdx b/website/src/pages/ar/indexing/overview.mdx
index 200a3a6a64e5..d4a9e01205e4 100644
--- a/website/src/pages/ar/indexing/overview.mdx
+++ b/website/src/pages/ar/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
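
For example, the indexing status API on port 8030 answers GraphQL queries about Subgraph health; a query along these lines (field names per graph-node's status API, trimmed for illustration) can be sent to its `/graphql` route:

```graphql
{
  indexingStatuses {
    subgraph
    health
    synced
    chains {
      network
      latestBlock {
        number
      }
    }
  }
}
```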
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/ar/indexing/tooling/graph-node.mdx b/website/src/pages/ar/indexing/tooling/graph-node.mdx
index edde8a157fd3..56cea09618e3 100644
--- a/website/src/pages/ar/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/ar/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### IPFS Nodes
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Prometheus metrics server
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Getting started with Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running, Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
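
As one way to implement the last point, a firewall sketch (assuming a Linux host with `ufw` and a hypothetical reverse-proxy address `10.0.0.5`; adapt to your environment):

```sh
# Public traffic reaches only the reverse proxy (HTTPS).
sudo ufw allow 443/tcp

# Query ports are reachable only from the proxy host.
sudo ufw allow from 10.0.0.5 to any port 8000 proto tcp
sudo ufw allow from 10.0.0.5 to any port 8001 proto tcp

# Admin endpoint, status API, metrics, and Postgres stay private.
sudo ufw deny 8020/tcp
sudo ufw deny 8030/tcp
sudo ufw deny 8040/tcp
sudo ufw deny 5432/tcp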
## Advanced Graph Node configuration
@@ -330,7 +335,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> This is new functionality, which will be available in Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ar/resources/claude-mcp.mdx b/website/src/pages/ar/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/ar/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
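
For step 3, one way to add verbose logging to the `token-api` server from the configuration above (a sketch; keep your other entries unchanged):

```json
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"],
      "env": {
        "ACCESS_TOKEN": "ACCESS_TOKEN"
      }
    }
  }
}
```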
diff --git a/website/src/pages/ar/subgraphs/_meta-titles.json b/website/src/pages/ar/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/ar/subgraphs/_meta-titles.json
+++ b/website/src/pages/ar/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
index a721f6bcd8d4..ef43760cfdbf 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types. Added `receipt` field to the Ethereum Event object. |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object. Added `baseFeePerGas` to the Ethereum Block object. |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)). `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit`. |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object. |
+| 0.0.3 | Added `from` field to the Ethereum Call object. `ethereum.call.address` renamed to `ethereum.call.to`. |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
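
For orientation, the `apiVersion` lives in each data source's `mapping` section of the Subgraph manifest. A minimal, hypothetical fragment (the data source name, entity, and file paths are placeholders):

```yaml
dataSources:
  - kind: ethereum
    name: ExampleContract            # hypothetical data source
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9              # mapping API version run by Graph Node
      language: wasm/assemblyscript
      entities:
        - Transfer
      file: ./src/mapping.ts
```
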
### الأنواع المضمنة (Built-in)
diff --git a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
index fa6c44e61fb2..b7d5f7168427 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
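
For context, the `specVersion` from the table sits at the top level of the Subgraph manifest. A minimal, hypothetical sketch that also enables the `indexerHints` pruning feature introduced in 1.0.0:

```yaml
specVersion: 1.2.0    # manifest spec version from the table above
indexerHints:
  prune: auto         # >= 1.0.0: retain only the history needed to serve queries
schema:
  file: ./schema.graphql
```
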
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
index 3b2b1bbc70ae..5c8016b18c91 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 1e0826bfe148..15ac3901d9fb 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
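To stay within that limit, it helps to script your test queries. A minimal sketch, assuming a hypothetical development query URL copied from the Studio dashboard (the URL placeholders and the query itself are illustrative):

```shell
# Hypothetical development query URL from Subgraph Studio (replace the placeholders).
DEV_QUERY_URL="https://api.studio.thegraph.com/query/<user-id>/<subgraph-name>/<version>"

# A minimal GraphQL payload asking for the latest indexed block.
PAYLOAD='{"query":"{ _meta { block { number } } }"}'
echo "$PAYLOAD"

# Sending it counts against the 3,000 queries/day development limit:
#   curl -s -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$DEV_QUERY_URL"
```
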
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 2bc0ec5f514c..e3e3a7e3d455 100644
--- a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/ar/subgraphs/explorer.mdx b/website/src/pages/ar/subgraphs/explorer.mdx
index 57d7712cc383..1f4add453b77 100644
--- a/website/src/pages/ar/subgraphs/explorer.mdx
+++ b/website/src/pages/ar/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) to take full advantage of its core features.
## نظره عامة
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you will also find a link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+  > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs, filter them by indexed network or category (such as DeFi, NFTs, and DAOs), and sort by **most queried, most curated, recently created, or recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexers
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- مكافآت المفهرس Indexer Rewards - هو مجموع مكافآت المفهرس التي حصل عليها المفهرس ومفوضيهم Delegators. تدفع مكافآت المفهرس ب GRT.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
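
As a rough illustration of how a reward cut splits indexing rewards between an Indexer and its delegation pool, here is a simplified sketch (it ignores protocol details such as cooldowns and delegation capacity):

```python
def split_rewards(total_rewards: float, reward_cut_pct: float) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a given reward cut %."""
    indexer_share = total_rewards * reward_cut_pct / 100
    delegator_share = total_rewards - indexer_share
    return indexer_share, delegator_share

# 1,000 GRT of rewards with a 10% effective reward cut:
indexer, delegators = split_rewards(1000.0, 10.0)
print(indexer, delegators)  # 100.0 900.0
```
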
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 3. المفوضون Delegators
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
@@ -102,11 +122,11 @@ In the The Curator table listed below you can see:
- عدد ال GRT الذي تم إيداعه
- عدد الأسهم التي يمتلكها المنسق
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. المفوضون Delegators
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### نظره عامة
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### الفترات Epochs
@@ -161,69 +181,77 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as:
- فترات التوزيع هي تلك الفترات التي يتم فيها تسوية قنوات الحالة للفترات ويمكن للمفهرسين المطالبة بخصم رسوم الاستعلام الخاصة بهم.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## ملف تعريف المستخدم الخاص بك
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### نظرة عامة على الملف الشخصي
+#### نظرة عامة على الملف الشخصي
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### تبويب ال Subgraphs
+#### تبويب ال Subgraphs
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### تبويب الفهرسة
+#### تبويب الفهرسة
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- الحصة المفوضة Delegated Stake - هي الحصة المفوضة من قبل المفوضين والتي يمكنك تخصيصها ولكن لا يمكن شطبها
-- إجمالي رسوم الاستعلام Total Query Fees - هو إجمالي الرسوم التي دفعها المستخدمون مقابل الاستعلامات التي قدمتها بمرور الوقت
-- مكافآت المفهرس Indexer Rewards - هو المبلغ الإجمالي لمكافآت المفهرس التي تلقيتها ك GRT
-- اقتطاع الرسوم Fee Cut -هي النسبة المئوية لخصوم رسوم الاستعلام التي ستحتفظ بها عند التقسيم مع المفوضين
-- اقتطاع المكافآت Rewards Cut -هي النسبة المئوية لمكافآت المفهرس التي ستحتفظ بها عند التقسيم مع المفوضين
-- مملوكة Owned - هي حصتك المودعة ، والتي يمكن شطبها بسبب السلوك الضار أو غير الصحيح
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### تبويب التفويض Delegating Tab
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### تبويب التفويض Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-في النصف الأول من الصفحة ، يمكنك رؤية مخطط التفويض الخاص بك ، بالإضافة إلى مخطط المكافآت فقط. إلى اليسار ، يمكنك رؤية KPIs التي تعكس مقاييس التفويض الحالية.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-مقاييس التفويض التي ستراها هنا في علامة التبويب هذه تشمل ما يلي:
+Top Section:
-- إجمالي مكافآت التفويض
-- إجمالي المكافآت الغير محققة
-- إجمالي المكافآت المحققة
+- View delegation and rewards-only charts
+- Track key metrics:
+  - Total delegation rewards
+  - Unrealized rewards
+  - Realized rewards
-في النصف الثاني من الصفحة ، لديك جدول التفويضات. هنا يمكنك رؤية المفهرسين الذين فوضتهم ، بالإضافة إلى تفاصيلهم (مثل المكافآت المقتطعة rewards cuts، و cooldown ، الخ).
+Bottom Section:
-باستخدام الأزرار الموجودة على الجانب الأيمن من الجدول ، يمكنك إدارة التفويض - تفويض المزيد أو إلغاء التفويض أو سحب التفويض بعد فترة الإذابة.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-باستخدام الأزرار الموجودة على الجانب الأيمن من الجدول ، يمكنك إدارة تفويضاتك أو تفويض المزيد أو إلغاء التفويض أو سحب التفويض بعد فترة الذوبان thawing.
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### تبويب التنسيق Curating
+#### تبويب التنسيق Curating
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.
ضمن علامة التبويب هذه ، ستجد نظرة عامة حول:
@@ -232,22 +260,22 @@ In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus
- Query rewards per Subgraph
- تحديث في تفاصيل التاريخ
-
+
-### إعدادات ملف التعريف الخاص بك
+#### إعدادات ملف التعريف الخاص بك
ضمن ملف تعريف المستخدم الخاص بك ، ستتمكن من إدارة تفاصيل ملفك الشخصي (مثل إعداد اسم ENS). إذا كنت مفهرسا ، فستستطيع الوصول إلى إعدادت أكثر. في ملف تعريف المستخدم الخاص بك ، ستتمكن من إعداد بارامترات التفويض والمشغلين.
- Operators تتخذ إجراءات محدودة في البروتوكول نيابة عن المفهرس ، مثل عمليات فتح وإغلاق المخصصات. Operators هي عناوين Ethereum أخرى ، منفصلة عن محفظة staking الخاصة بهم ، مع بوابة وصول للشبكة التي يمكن للمفهرسين تعيينها بشكل شخصي
- تسمح لك بارامترات التفويض بالتحكم في توزيع GRT بينك وبين المفوضين.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## مصادر إضافية
+### مصادر إضافية
### Video Guide
diff --git a/website/src/pages/ar/subgraphs/fair-use-policy.mdx b/website/src/pages/ar/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..9dd13b9993c1
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## نظره عامة
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and act on Subgraphs that show:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts before any action is taken
+- A 14-day grace period will be provided to upgrade your plan or reduce storage
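
As a sketch, the pruning suggestion maps to the `indexerHints` setting in the Subgraph manifest (the values shown are illustrative):

```yaml
indexerHints:
  prune: 100000       # keep roughly the last 100000 blocks of history
  # or: prune: auto   # retain only the history needed to serve queries
```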
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage is incorrectly flagged or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/ar/subgraphs/guides/near.mdx b/website/src/pages/ar/subgraphs/guides/near.mdx
index 04daec8b6ac7..ff812b4eee58 100644
--- a/website/src/pages/ar/subgraphs/guides/near.mdx
+++ b/website/src/pages/ar/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
index 080de99b5ba1..35cdef554157 100644
--- a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
diff --git a/website/src/pages/ar/subgraphs/mcp/claude.mdx b/website/src/pages/ar/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
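For example, building on the npx configuration above, one possible placement of the flag is:

```json
{
  "mcpServers": {
    "subgraph": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "--header",
        "Authorization:${AUTH_HEADER}",
        "https://subgraphs.mcp.thegraph.com/sse",
        "--verbose",
        "true"
      ],
      "env": {
        "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
      }
    }
  }
}
```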
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get aggregate query counts over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/ar/subgraphs/mcp/cline.mdx b/website/src/pages/ar/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/ar/subgraphs/mcp/cursor.mdx b/website/src/pages/ar/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/ar/subgraphs/querying/best-practices.mdx b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
index f469ff02de9c..612e6792581d 100644
--- a/website/src/pages/ar/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: أفضل الممارسات للاستعلام
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-على عكس REST API ، فإن GraphQL API مبنية على مخطط يحدد الاستعلامات التي يمكن تنفيذها.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- يجب أن يكون أي متغير تم تعيينه لوسيط متطابقًا مع نوعه.
-- في قائمة المتغيرات المعطاة ، يجب أن يكون كل واحد منها فريدًا.
-- يجب استخدام جميع المتغيرات المحددة.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when one is expected, or providing one when none is expected (for example, on `id`), will raise an error. To look up a field's type, refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
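To make these rules concrete, here is an invalid and a valid version of the `GetToken` query from earlier (the `token` fields shown are the ones used in that example):

```graphql
# Invalid: `id` is selected twice under `token` (violates rule 2)
query GetTokenInvalid($id: ID!) {
  token(id: $id) {
    id
    id
    owner
  }
}

# Valid: each field appears exactly once, and the complex type `token`
# has a sub-field selection (rules 2 and 3)
query GetTokenValid($id: ID!) {
  token(id: $id) {
    id
    owner
  }
}
```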
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### إرسال استعلام إلى GraphQL API
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
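As a minimal sketch (the gateway URL shape and the `GetToken` fields here are illustrative placeholders, not values to copy verbatim), a plain-`fetch` query can be built like this:

```typescript
// The endpoint is a placeholder: substitute your own gateway URL and API key.
const endpoint = 'https://gateway.thegraph.com/api/[api-key]/subgraphs/id/[subgraph-id]'

const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// GraphQL over HTTP is an ordinary POST whose JSON body carries { query, variables }
function buildGraphQLRequest(query: string, variables: Record<string, unknown>) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

// Usage (network call shown for illustration only):
// const res = await fetch(endpoint, buildGraphQLRequest(query, { id: '0x…' }))
// const { data, errors } = await res.json()
const request = buildGraphQLRequest(query, { id: '0x123' })
```

Sending the variables alongside the query string (rather than interpolating them) is what lets servers cache and validate the operation, a point expanded on in the best practices below.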
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- نتيجة مكتوبة بالكامل
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### اكتب دائمًا استعلامات ثابتة
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g. linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of a static query string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if: ...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using the `owner` field with the `@include(if: ...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
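Assuming the `tokens`/`transactions` shape from the example above, a bounded version of that query could look like:

```graphql
query listTokens {
  tokens(first: 10) {
    id
    transactions(first: 10) {
      id
    }
  }
}
```

This caps the response at 10 tokens with 10 transactions each, instead of the default of up to 100 entities per collection.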
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### الاستفادة من أجزاء GraphQL
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### ما يجب فعله وما لا يجب فعله في GraphQL Fragment
+## GraphQL Fragment Guidelines
-### Fragment base must be a type
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example (invalid):
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### How to spread a Fragment
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Example:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Define Fragment as an atomic business unit of data
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### GraphQL web-based explorers
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- و اكثر!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### IDE plugins
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
index 14e11ff80306..40817616c2dc 100644
--- a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL API
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
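+A minimal entity definition sketch (the `Token` type and its fields are illustrative):
+
+```graphql
+type Token @entity {
+  id: ID! # required primary identifier
+  name: String!
+  owner: Bytes!
+}
+```
+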
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Examples
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Sorting
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Example
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Example for nested entity sorting
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Pagination
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-Query the first 10 tokens:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Query the first 10 tokens:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
+
+For large collections, it is more performant to filter on an attribute such as `id` than to use `skip`: send the first request with `lastID = ""`, then set `lastID` to the `id` of the last entity returned by the previous request.
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
### Filtering
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Example for range filtering
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Example for block filtering
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Example for nested entity filtering
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-This can be useful if you are looking to fetch only entities whose child-level entities meet the provided conditions.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ This can be useful if you are looking to fetch only entities whose child-level e
}
```
-#### Logical operators
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing a sub-expression separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### All Filters
-
-Full list of parameter suffixes:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### Time-travel queries
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
+
+The result of such a query does not change over time: once a block is final, querying at that past block returns the same result no matter when the query is executed.
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### Example
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### Example
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Fulltext Search Queries
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Fulltext search operators:
+Full-text search fields use the required `text` parameter with the following operators:
-| رمز | عامل التشغيل | الوصف |
-| --- | --- | --- |
-| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة |
-| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة |
-| `<->` | `Follow by` | يحدد المسافة بين كلمتين. |
-| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) |
+| Operator  | Symbol | Description                                                     |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
+| Or        | `\|`   | Returns all entities with a match from any of the provided terms |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Examples
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Combine fulltext operators to make more complex filters. With a pretext search o
}
```
-### Validation
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## المخطط
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entities
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Subgraph Metadata
+The `_Meta_` object provides Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+If a block is provided, the metadata is as of that block; otherwise, the latest indexed block is used. If provided, the block must be after the Subgraph's start block and no later than the most recently indexed block.
+
+Metadata fields:
+
+- `deployment`: Unique ID corresponding to the IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information (respecting any block constraint passed to `_meta`)
+- `hasIndexingErrors`: Boolean indicating whether the Subgraph encountered indexing errors at some past block
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator                  | Description                                                       | Example                                              |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on fields of a nested entity                              | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt` | Less than (`\<`) | `{ where: { price_lt: "100" } }` |
+| `_gte` | Greater than or equal to (>=) | `{ where: { price_gte: "100" } }` |
+| `_lte` | Less than or equal to (`\<=`) | `{ where: { price_lte: "100" } }` |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
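+
+As a sketch, a comparison suffix combined with a logical operator (the `tokens` field and its attributes are illustrative):
+
+```graphql
+{
+  tokens(where: { and: [{ price_gte: "100" }, { name_contains_nocase: "token" }] }) {
+    id
+    name
+  }
+}
+```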
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
index 7b91a147ef47..ef1f3e6781b4 100644
--- a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## نظره عامة
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: Set a spending limit for the billing period
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The "API Keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open your API key's Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - تعديل اسم المفتاح الخاص بك
- - إعادة إنشاء مفاتيح API
- - عرض الاستخدام الحالي لمفتاح API مع الإحصائيات:
- - عدد الاستعلامات
- - كمية GRT التي تم صرفها
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - عرض وإدارة أسماء النطاقات المصرح لها باستخدام مفتاح API الخاص بك
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over the Subgraph version being queried. However, it also means the query code must be updated manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
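The difference in endpoint shape comes down to a single path segment. As an illustrative sketch (not an official SDK helper), the choice can be made explicit in code, using the example endpoints above:

```typescript
type IdKind = "deployment" | "subgraph"

// Build a gateway endpoint for either ID type, mirroring the example endpoints above.
function endpointFor(apiKey: string, kind: IdKind, id: string): string {
  const path = kind === "deployment" ? "deployments/id" : "subgraphs/id"
  return `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/${path}/${id}`
}

// Pinned to one version via the Deployment ID:
const pinned = endpointFor("[api-key]", "deployment", "QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB")
// Always resolving to the latest version via the Subgraph ID:
const latest = endpointFor("[api-key]", "subgraph", "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW")
console.log(pinned, latest)
```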
diff --git a/website/src/pages/ar/subgraphs/quick-start.mdx b/website/src/pages/ar/subgraphs/quick-start.mdx
index 9b7bf860e87d..05a51a6a9a02 100644
--- a/website/src/pages/ar/subgraphs/quick-start.mdx
+++ b/website/src/pages/ar/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/ar/subgraphs/upgrade-indexer.mdx b/website/src/pages/ar/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..dce3a784d917
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate their Subgraphs so that independent Indexers serve them long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Does not support Time Travel Queries (TTQs). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/ar/substreams/_meta-titles.json b/website/src/pages/ar/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/ar/substreams/_meta-titles.json
+++ b/website/src/pages/ar/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/ar/substreams/developing/sinks.mdx b/website/src/pages/ar/substreams/developing/sinks.mdx
index 34d2f8624e7d..7774ae25769e 100644
--- a/website/src/pages/ar/substreams/developing/sinks.mdx
+++ b/website/src/pages/ar/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/ar/substreams/quick-start.mdx b/website/src/pages/ar/substreams/quick-start.mdx
index 7428d67bf4ff..8698cc7102e3 100644
--- a/website/src/pages/ar/substreams/quick-start.mdx
+++ b/website/src/pages/ar/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ar/substreams/sps/faq.mdx b/website/src/pages/ar/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..c19b0a950297
--- /dev/null
+++ b/website/src/pages/ar/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources that specify onchain events and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source that references a Substreams package, which is processed by Graph Node. Substreams have access to more granular onchain data than conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data that is not available as part of the JSON-RPC API.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: in a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration of Substreams and Substreams-powered Subgraphs promises many benefits, including highly performant indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/ar/substreams/sps/introduction.mdx b/website/src/pages/ar/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..e74abf2f0998
--- /dev/null
+++ b/website/src/pages/ar/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or in Substreams. However, consider what aligns with your data needs: Substreams has a parallelized model, while triggers are consumed linearly in Graph Node.
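For the Entity Changes approach, the Subgraph manifest references the Substreams package as its single data source. A minimal sketch of such an entry is shown below; the package file, name, and network are placeholders, and the exact fields may vary by `specVersion`:

```yaml
specVersion: 1.0.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: substreams
    name: MyPackage # placeholder name
    network: mainnet
    source:
      package:
        moduleName: graph_out # Substreams module emitting entity changes
        file: ./my-package.spkg # placeholder package file
    mapping:
      kind: substreams/graph-entities
      apiVersion: 0.0.5
```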
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ar/substreams/sps/triggers.mdx b/website/src/pages/ar/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..1bf1a2cf3f51
--- /dev/null
+++ b/website/src/pages/ar/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
+2. Loop over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### مصادر إضافية
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ar/substreams/sps/tutorial.mdx b/website/src/pages/ar/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..dd85fa999764
--- /dev/null
+++ b/website/src/pages/ar/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest that imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### مصادر إضافية
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/ar/supported-networks.mdx b/website/src/pages/ar/supported-networks.mdx
index ac7050638264..bc974d709753 100644
--- a/website/src/pages/ar/supported-networks.mdx
+++ b/website/src/pages/ar/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ar/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/ar/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/ar/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g., transfers, mints & burns).
diff --git a/website/src/pages/ar/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/ar/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/ar/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/ar/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/ar/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/ar/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/ar/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/ar/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/ar/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ar/token-api/evm/get-pools-evm.mdx b/website/src/pages/ar/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/ar/token-api/evm/get-swaps-evm.mdx b/website/src/pages/ar/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/ar/token-api/evm/get-transfers-evm.mdx b/website/src/pages/ar/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/ar/token-api/faq.mdx b/website/src/pages/ar/token-api/faq.mdx
index 8c1032894ddb..e130a8baf710 100644
--- a/website/src/pages/ar/token-api/faq.mdx
+++ b/website/src/pages/ar/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## عام
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
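As a sketch of a correctly formed request (the token and endpoint path below are placeholders, not confirmed values), the header should look like this:

```shell
# Placeholder token -- generate a real JWT on The Graph Market.
JWT="eyJhbGciOiJFUzI1NiJ9.example.signature"

# The header value must include the "Bearer " prefix before the token.
AUTH_HEADER="Authorization: Bearer $JWT"
echo "$AUTH_HEADER"

# Then pass it to curl (the address and path are illustrative):
# curl -s -H "$AUTH_HEADER" "https://token-api.thegraph.com/balances/evm/0xYourAddress"
```

Omitting the `Bearer ` prefix or pasting the API key in place of the JWT are the most common causes of authentication failures.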
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
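As a sketch (the base URL and endpoint path are assumptions for illustration), targeting BNB Smart Chain instead of the default Ethereum mainnet looks like this:

```shell
# Build a request URL that overrides the default network via network_id.
BASE="https://token-api.thegraph.com"
NETWORK="bsc"
URL="$BASE/transfers/evm?network_id=$NETWORK"
echo "$URL"
```

Without `network_id`, the same request would return Ethereum mainnet data.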
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
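The pagination parameters above can be sketched as follows (the base URL and endpoint path are illustrative assumptions):

```shell
# Request items 51-100 by combining limit with a 1-indexed page number.
BASE="https://token-api.thegraph.com"
LIMIT=50
PAGE=2
URL="$BASE/transfers/evm?limit=$LIMIT&page=$PAGE"
echo "$URL"
# curl -s -H "Authorization: Bearer $JWT" "$URL"
```

Incrementing `PAGE` while keeping `LIMIT` fixed walks through the full result set in stable batches.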
### How do I fetch older transfer history?
diff --git a/website/src/pages/ar/token-api/monitoring/get-health.mdx b/website/src/pages/ar/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/ar/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/ar/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Get health status of the API
diff --git a/website/src/pages/ar/token-api/monitoring/get-networks.mdx b/website/src/pages/ar/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..a91071511b82 100644
--- a/website/src/pages/ar/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/ar/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: الشبكات المدعومة
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Get supported networks of the API
diff --git a/website/src/pages/ar/token-api/monitoring/get-version.mdx b/website/src/pages/ar/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..c4437d4d3246 100644
--- a/website/src/pages/ar/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/ar/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: الإصدار
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Get the version of the API
diff --git a/website/src/pages/ar/token-api/quick-start.mdx b/website/src/pages/ar/token-api/quick-start.mdx
index c5fa07fa9371..c784bb744650 100644
--- a/website/src/pages/ar/token-api/quick-start.mdx
+++ b/website/src/pages/ar/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: بداية سريعة
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key itself. You can generate a new JWT API token for each of your API keys at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/cs/about.mdx b/website/src/pages/cs/about.mdx
index 1f43c663437f..90620be41ac8 100644
--- a/website/src/pages/cs/about.mdx
+++ b/website/src/pages/cs/about.mdx
@@ -1,67 +1,46 @@
---
-title: O Grafu
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## Co je Graf?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-Průběh se řídí těmito kroky:
-
-1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu.
-2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje.
-
-## Další kroky
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
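The "Flexible Historical Queries" path above ultimately means sending a GraphQL document as a JSON POST body to a Subgraph endpoint. A minimal sketch of what such a request body looks like — the `tokens` entity and the bracketed gateway URL segments are placeholders for illustration, since each Subgraph defines its own schema and Subgraph Studio issues the real URL:

```python
import json

# Placeholder endpoint; a real gateway URL comes from Subgraph Studio
GATEWAY_URL = "https://gateway.thegraph.com/api/[api-key]/subgraphs/id/[subgraph-id]"

def build_query_body(first=5):
    # Hypothetical `tokens` entity; each Subgraph defines its own schema
    query = "{ tokens(first: %d) { id symbol } }" % first
    return json.dumps({"query": query})

print(build_query_body())
```

POSTing this body with a `Content-Type: application/json` header to the gateway URL returns the matching entities as JSON.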
diff --git a/website/src/pages/cs/index.json b/website/src/pages/cs/index.json
index 545b2b717b56..af63620e2101 100644
--- a/website/src/pages/cs/index.json
+++ b/website/src/pages/cs/index.json
@@ -2,7 +2,7 @@
"title": "Domov",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Uzel Graf",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Podgrafy",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Průzkumník grafů",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/cs/indexing/new-chain-integration.mdx b/website/src/pages/cs/indexing/new-chain-integration.mdx
index 0d856bfa9374..6b034d765a7a 100644
--- a/website/src/pages/cs/indexing/new-chain-integration.mdx
+++ b/website/src/pages/cs/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
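The list above notes that `eth_getTransactionReceipt` must be supported *inside a JSON-RPC batch request*, i.e. several request objects sent as a single JSON array. A sketch of that batch shape, with hypothetical transaction hashes standing in for real ones:

```python
import json

def rpc_request(method, params, request_id):
    # Standard JSON-RPC 2.0 request object
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# Hypothetical transaction hashes for illustration only
tx_hashes = ["0xaaa...", "0xbbb..."]

# Receipt lookups are sent together as one JSON array, i.e. a batch request
batch = [
    rpc_request("eth_getTransactionReceipt", [h], i)
    for i, h in enumerate(tx_hashes, start=1)
]
print(json.dumps(batch))
```

An RPC node that only accepts single request objects (rejecting the array form) will not satisfy this requirement even if it supports the method itself.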
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/cs/indexing/overview.mdx b/website/src/pages/cs/indexing/overview.mdx
index 8acf4fdf72a9..21485c772522 100644
--- a/website/src/pages/cs/indexing/overview.mdx
+++ b/website/src/pages/cs/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Uzel Graf
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/cs/indexing/tooling/graph-node.mdx b/website/src/pages/cs/indexing/tooling/graph-node.mdx
index 9257902fe247..9cee63a11f0c 100644
--- a/website/src/pages/cs/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/cs/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### IPFS uzly
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Metrický server Prometheus
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Začínáme s Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
Když je Graf Uzel spuštěn, zpřístupňuje následující ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Pokročilá konfigurace uzlu Graf
@@ -330,7 +335,7 @@ Zdá se, že databázové tabulky, které uchovávají entity, se obecně vyskyt
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
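The 1% rule of thumb above can be sketched as a tiny helper — the numbers are illustrative, and the real inputs would come from `graphman stats show` output:

```python
def account_like_candidate(distinct_entities, entity_versions, threshold=0.01):
    """Heuristic from the docs: a table is a candidate for the account-like
    optimization when distinct entities are under ~1% of entity versions."""
    if entity_versions == 0:
        return False
    return distinct_entities / entity_versions < threshold

# e.g. 10,000 accounts updated across 5,000,000 entity versions -> candidate
print(account_like_candidate(10_000, 5_000_000))  # True
```

Remember that the quick estimates are Postgres-internal and can be off by an order of magnitude, so a borderline ratio is worth confirming with the slower full count before enabling the optimization.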
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> Jedná se o novou funkci, která bude k dispozici v uzlu Graf 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/cs/resources/claude-mcp.mdx b/website/src/pages/cs/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/cs/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
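For example, troubleshooting step 3 applied to the `token-api` server entry from the configuration above might look like this — assuming the flag and its value are passed as separate array elements, and with the same `ACCESS_TOKEN` placeholder as before:

```json
"token-api": {
  "command": "npx",
  "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"],
  "env": {
    "ACCESS_TOKEN": "ACCESS_TOKEN"
  }
}
```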
diff --git a/website/src/pages/cs/subgraphs/_meta-titles.json b/website/src/pages/cs/subgraphs/_meta-titles.json
index c2d850dfc35c..815ad1b8f4b4 100644
--- a/website/src/pages/cs/subgraphs/_meta-titles.json
+++ b/website/src/pages/cs/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Osvědčené postupy"
+ "best-practices": "Osvědčené postupy",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
index e8db267667c0..0ae33c1efe69 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
@@ -246,7 +246,7 @@ The CID of the file as a readable string can be accessed via the `dataSource` as
const cid = dataSource.stringParam()
```
-Příklad
+Příklad
```typescript
import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
index 87734452737d..e794c1caa32c 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Verze | Poznámky vydání |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. |
-| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum Přidání pole `receipt` do objektu Ethereum událost |
-| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction Přidáno `baseFeePerGas` do objektu Ethereum bloku |
+| Verze | Poznámky vydání |
+| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. |
+| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum Přidání pole `receipt` do objektu Ethereum událost |
+| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction Přidáno `baseFeePerGas` do objektu Ethereum bloku |
| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce |
+| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce |
### Vestavěné typy
@@ -147,7 +147,7 @@ _Math_
- `x.notEqual(y: BigInt): bool` –lze zapsat jako `x != y`.
- `x.lt(y: BigInt): bool` – lze zapsat jako `x < y`.
- `x.le(y: BigInt): bool` – lze zapsat jako `x <= y`.
-- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`.
+- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`.
- `x.ge(y: BigInt): bool` – lze zapsat jako `x >= y`.
- `x.neg(): BigInt` – lze zapsat jako `-x`.
- `x.divDecimal(y: BigDecimal): BigDecimal` – dělí desetinným číslem, čímž získá desetinný výsledek.
diff --git a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
index a0fcb52875ca..04f1eee28246 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Verze | Poznámky vydání |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
+| Verze | Poznámky vydání |
+| :---: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
index e9848601ebc7..796f1de30b74 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 14be0175123c..01056c092ca2 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 29c75273aa17..6a3f991fa0b6 100644
--- a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/cs/subgraphs/explorer.mdx b/website/src/pages/cs/subgraphs/explorer.mdx
index 2d918567ee9d..ef576d74973a 100644
--- a/website/src/pages/cs/subgraphs/explorer.mdx
+++ b/website/src/pages/cs/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) to take full advantage of its core features.
## Overview
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, category (such as DeFi, NFTs, and DAOs), and sort order (**most queried, most curated, recently created, and recently updated**).
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), and it returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- Subgraph stats (allocations, Curators, etc.)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexery
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Odměny indexátorů - jedná se o celkové odměny indexátorů, které indexátor a jeho delegáti získali za celou dobu. Odměny indexátorů jsou vypláceny prostřednictvím vydání GRT.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total Indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
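As a rough illustration of how these cut percentages divide earnings between an Indexer and its delegation pool, here is a minimal sketch (the helper function and numbers are hypothetical, not protocol code, and ignore details like cooldowns and rebate mechanics):

```python
# Illustrative only: how a reward/fee "cut" splits an amount between
# the Indexer and the Delegators in its delegation pool.

def split(amount: float, cut_pct: float) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a given cut percentage."""
    indexer_share = amount * cut_pct / 100
    return indexer_share, amount - indexer_share

# Suppose an allocation earned 1,000 GRT in indexing rewards and the
# Indexer's effective reward cut is 20%.
indexer_rewards, delegator_rewards = split(1_000, 20)
print(indexer_rewards, delegator_rewards)  # 200.0 800.0
```

The same arithmetic applies to the query fee cut, just over query fee rebates instead of issuance rewards.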
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Kurátoři
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
@@ -102,11 +122,11 @@ In the Curator table listed below, you can see:
- The number of GRT deposited
- The number of shares a Curator owns
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegáti
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or more Indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Overview
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### Epochs
@@ -161,69 +181,77 @@ V části Epochy můžete na základě jednotlivých epoch analyzovat metriky, j
- The distribution epochs are the epochs in which the state channels for the epochs are being settled, and Indexers can claim their query fee rebates.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Váš uživatelský profil
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Přehled profilů
+#### Profile Overview
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Tab Podgrafy
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Tab Indexování
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy. Zobrazí se následující metriky:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegovaná sázka – sázka od delegátů, kterou můžete přidělit vy, ale nelze ji snížit
-- Celkové poplatky za dotazy - celkové poplatky, které uživatelé zaplatili za dotazy, které jste obsloužili v průběhu času
-- Odměny indexátora - celková částka odměn indexátora, kterou jste obdrželi, v GRT
-- Fee Cut - % slevy z poplatku za dotaz, které si ponecháte při rozdělení s delegáty
-- Rozdělení odměn - % odměn indexátorů, které si ponecháte při dělení s delegáty
-- Ve vlastnictví - váš vložený vklad, který může být snížen za škodlivé nebo nesprávné chování
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Tab Delegování
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-V první polovině stránky vidíte graf delegování a také graf odměn. Vlevo vidíte klíčové ukazatele výkonnosti, které odrážejí vaše aktuální metriky delegování.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-Metriky delegáta, které uvidíte na této tab, zahrnují:
+Top Section:
-- Celkové odměny za delegování
-- Nerealizované odměny celkem
-- Celkové realizované odměny
+- View delegation and rewards-only charts
+- Track key metrics:
+ - Total delegation rewards
+ - Unrealized rewards
+ - Realized rewards
-V druhé polovině stránky je tabulka delegací. Zde vidíte indexátory, které jste delegovali, a také jejich podrobnosti (například snížení odměn, zkrácení doby platnosti atd.).
+Bottom Section:
-Pomocí tlačítek na pravé straně tabulky můžete spravovat delegování - delegovat více, zrušit delegování nebo stáhnout delegování po uplynutí doby rozmrazení.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Nezapomeňte, že tento graf lze horizontálně posouvat, takže pokud se posunete úplně doprava, uvidíte také stav svého delegování (delegování, nedelegování, odvolání).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Tab Kurátorství
+#### Curating Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
On this tab, you’ll find an overview of:
@@ -232,22 +260,22 @@ Na této tab najdete přehled:
- Query rewards per Subgraph
- Updated-at date details
-
+
-### Nastavení profilu
+#### Profile Settings
Within your user profile, you’ll be able to manage your personal details (like setting up an ENS name). If you’re an Indexer, you have even more settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators.
- Operators take limited actions in the protocol on the Indexer’s behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set.
- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Další zdroje
+### Additional Resources
### Video Guide
diff --git a/website/src/pages/cs/subgraphs/fair-use-policy.mdx b/website/src/pages/cs/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..3b1a866eb263
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
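One way to reduce storage, sketched as a manifest fragment: pruning is enabled via `indexerHints` in `subgraph.yaml` (assuming a recent `graph-cli`/`graph-node` version that supports this feature; check the pruning guide linked above for the exact options available to you).

```yaml
# Hypothetical subgraph.yaml fragment. `prune: auto` keeps only the
# minimum history needed to serve queries, shrinking storage use.
indexerHints:
  prune: auto
```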
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/cs/subgraphs/guides/near.mdx b/website/src/pages/cs/subgraphs/guides/near.mdx
index 275c2aba0fd4..6b0cddb546cf 100644
--- a/website/src/pages/cs/subgraphs/guides/near.mdx
+++ b/website/src/pages/cs/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
index f5480ab15a48..aef8badbe0e1 100644
--- a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs.**
+- Composed Subgraphs can only use **datasources from the same chain.**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
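To make the source-vs-composed distinction concrete, here is a hedged sketch of what a Subgraph datasource entry in a composed Subgraph's manifest might look like. All names, the deployment ID, and field values are placeholders inferred from the specVersion 1.3.0 release notes, not a verified template; consult the example repository above for the authoritative layout.

```yaml
# Hypothetical manifest fragment for a composed Subgraph.
specVersion: 1.3.0
dataSources:
  - kind: subgraph            # a Subgraph datasource instead of an onchain one
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceDeploymentId' # deployment ID of a published source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - DerivedEntity
      handlers:
        - handler: handleSourceEntity
          entity: SourceEntity        # immutable entity from the source Subgraph
      file: ./src/mapping.ts
```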
diff --git a/website/src/pages/cs/subgraphs/mcp/claude.mdx b/website/src/pages/cs/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
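As a sketch, the path lookup from step 2 can be done in a single command, assuming your terminal is already inside the cloned `subgraph-mcp` directory:

```bash
# Print the absolute path to the compiled binary (run from the repo root)
BINARY_PATH="$(pwd)/target/release/subgraph-mcp"
echo "$BINARY_PATH"
```

Copy the printed path into the `command` field of your configuration.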
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add the "Subgraph Server Instructions" resource by including `graphql://subgraph` in your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
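For example, with logging enabled the npx args array might look like this (assuming `--verbose` takes a separate `true` value, as the hint above suggests):

```json
"args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse", "--verbose", "true"]
```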
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get aggregate query counts over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
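As an illustrative sketch (the helper below is hypothetical, not part of the MCP server), the right tool can usually be picked from the identifier's prefix:

```typescript
// Hypothetical helper: pick the right MCP tool family from an identifier's shape.
// Prefix conventions follow the examples above (0x…, Qm…, otherwise a Subgraph ID).
type IdentifierKind = 'deployment' | 'ipfs' | 'subgraph'

function classifyIdentifier(id: string): IdentifierKind {
  if (id.startsWith('0x')) return 'deployment' // use execute_query_by_deployment_id
  if (id.startsWith('Qm')) return 'ipfs' // manifest hash; the gateway treats it like a deployment ID
  return 'subgraph' // use execute_query_by_subgraph_id
}
```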
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/cs/subgraphs/mcp/cline.mdx b/website/src/pages/cs/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/cs/subgraphs/mcp/cursor.mdx b/website/src/pages/cs/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/cs/subgraphs/querying/best-practices.mdx b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
index 038319488eda..167ccbac2e9c 100644
--- a/website/src/pages/cs/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Osvědčené postupy dotazování
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-Na rozdíl od rozhraní REST API je GraphQL API postaveno na schématu, které definuje, jaké dotazy lze provádět.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Každá proměnná přiřazená argumentu musí odpovídat jeho typu.
-- V daném seznamu proměnných musí být každá z nich jedinečná.
-- Musí být použity všechny definované proměnné.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, for example on `id`) will raise an error. To know a field type, refer to [Graph Explorer](/subgraphs/explorer/).
+4. Each variable assigned to an argument must match its type.
+5. Variables must be uniquely defined, and every defined variable must be used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### Odeslání dotazu na GraphQL API
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Plně zadaný výsledekv
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Osvědčené postupy
-### Vždy pište statické dotazy
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g. linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of a static query:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Conditionally including fields in static queries improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
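A sketch of the opposite behavior with `@skip`, reusing the same query shape as above (excluding `owner` when the condition is true):

```graphql
query GetToken($id: ID!, $hideOwner: Boolean!) {
  token(id: $id) {
    id
    owner @skip(if: $hideOwner)
  }
}
```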
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- Při dotazování na GraphQL vždy myslete na to, abyste dotazovali pouze pole, která budou skutečně použita.
+- When querying GraphQL APIs, always query only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
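For instance, a trimmed sketch of the query above (assuming the same `tokens` and `transactions` fields) would be:

```graphql
query listTokens {
  tokens(first: 10) {
    transactions(first: 10) {
      id
    }
  }
}
```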
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Využití fragmentů GraphQL
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### Co dělat a nedělat s fragmenty GraphQL
+## GraphQL Fragment Guidelines
-### Základem fragmentu musí být typ
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example of an invalid fragment:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### Jak šířit fragment
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Příklad:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Definice fragmentu jako atomické obchodní jednotky dat
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
+> For most use cases, defining one fragment per type (for repeated fields usage or type generation) is enough.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### Weboví průzkumníci GraphQL
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- a další!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### IDE zásuvné
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
index e5dc52ccce1f..6fc03e0856a8 100644
--- a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL API
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as the primary identifier
+- **Usage**: Foundation for all query operations
+
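+A minimal schema sketch illustrating these requirements (the `Token` type and its fields are a hypothetical example):
+
+```graphql
+type Token @entity {
+  id: ID! # required primary identifier
+  name: String!
+  owner: Bytes!
+}
+```
+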
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Příklady
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Třídění
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Příklad
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Příklad vnořeného třídění entit
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Stránkování
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-Dotaz na prvních 10 tokenů:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Dotaz na prvních 10 tokenů:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
### Filtrování
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Příklad filtrování rozsahu
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Příklad pro filtrování bloků
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Příklad vnořeného filtrování entit
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-To může být užitečné, pokud chcete načíst pouze entity, jejichž entity podřízené úrovně splňují zadané podmínky.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ To může být užitečné, pokud chcete načíst pouze entity, jejichž entity
}
```
-#### Logické operátory
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing sub-expressions separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### Všechny filtry
-
-Úplný seznam přípon parametrů:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### Dotazy na cestování čase
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### Příklad
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### Příklad
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Fulltextové Vyhledávání dotazy
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Operátory fulltextového vyhledávání:
+Full-text search fields use the required `text` parameter with the following operators:
-| Symbol | Operátor | Popis |
-| --- | --- | --- |
-| `&` | `And` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy |
-| | | `Or` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů |
-| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. |
-| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) |
+| Operator | Symbol | Description |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
+| Or        | `\|`   | Matches entities containing any of the provided terms           |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Příklady
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Kombinací fulltextových operátorů můžete vytvářet složitější filtry. S operátorem pretextového vyhledávání v kombinaci s operátorem follow by bude tento příklad dotazu odpovídat všem entitá blog se slovy začínajícími na "lou" a následovanými slovem "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Kombinací fulltextových operátorů můžete vytvářet složitější filtry.
}
```
-### Validace
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## Schema
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entities
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
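+
+For example, a hypothetical pair of entities using the `@entity` directive and a derived field might be defined as:
+
+```graphql
+type Account @entity {
+  id: ID!
+  # reverse lookup computed at query time, not stored
+  tokens: [Token!]! @derivedFrom(field: "owner")
+}
+
+type Token @entity {
+  id: ID!
+  owner: Account!
+}
+```
+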
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Metadata podgrafů
+The `_Meta_` object provides Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator | Description | Example |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_` | Filters on fields of a related (nested) entity | `{ where: { owner_: { name: "Alice" } } }` |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt` | Less than (`\<`) | `{ where: { price_lt: "100" } }` |
+| `_gte` | Greater than or equal to (>=) | `{ where: { price_gte: "100" } }` |
+| `_lte` | Less than or equal to (`\<=`) | `{ where: { price_lte: "100" } }` |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
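+
+As an illustrative sketch (the `tokens` collection and its fields are assumed, reusing field names from the table above), suffix operators can be combined with logical operators like so:
+
+```graphql
+{
+  tokens(
+    where: {
+      and: [
+        { name_starts_with: "Crypto" }
+        { or: [{ price_gt: "100" }, { category_in: ["Art"] }] }
+      ]
+    }
+  ) {
+    id
+    name
+  }
+}
+```
+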
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: hash bloku
-- číslo: číslo bloku
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
index f2954c5593c0..a30fb3f56483 100644
--- a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Přehled
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
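+
+For example, a query is typically sent to a gateway endpoint that embeds the key. The following is a sketch only: the endpoint format mirrors the examples elsewhere in these docs, and `<YOUR_API_KEY>` and `<SUBGRAPH_ID>` are placeholders:
+
+```sh
+curl -X POST \
+  -H "Content-Type: application/json" \
+  -d '{"query": "{ tokens(first: 5) { id } }"}' \
+  https://gateway-arbitrum.network.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>
+```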
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - Úprava názvu klíče
- - Regenerace klíčů API
- - Zobrazení aktuálního využití klíče API se statsi:
- - Počet dotazů
- - Výše vynaložených GRT
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - Zobrazení a správa názvů domén oprávněných používat váš klíč API
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 7792cb56d855..7d07d37a8acc 100644
--- a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: ID podgrafu vs. ID nasazení
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## ID nasazení
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Příklad koncového bodu, který používá ID nasazení:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## ID podgrafu
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/cs/subgraphs/quick-start.mdx b/website/src/pages/cs/subgraphs/quick-start.mdx
index 7c52d4745a83..27325c2c7f08 100644
--- a/website/src/pages/cs/subgraphs/quick-start.mdx
+++ b/website/src/pages/cs/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify install:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/cs/subgraphs/upgrade-indexer.mdx b/website/src/pages/cs/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..4994154ea4a1
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before full Chain Integration Process (CIP) indexing rewards are activated.
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation on the Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/cs/substreams/_meta-titles.json b/website/src/pages/cs/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/cs/substreams/_meta-titles.json
+++ b/website/src/pages/cs/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/cs/substreams/developing/sinks.mdx b/website/src/pages/cs/substreams/developing/sinks.mdx
index d89161878fc9..ed04b4b9b053 100644
--- a/website/src/pages/cs/substreams/developing/sinks.mdx
+++ b/website/src/pages/cs/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C       | Community  | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus)  |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/cs/substreams/quick-start.mdx b/website/src/pages/cs/substreams/quick-start.mdx
index 50a16d470cfe..ecf32732ba24 100644
--- a/website/src/pages/cs/substreams/quick-start.mdx
+++ b/website/src/pages/cs/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/cs/substreams/sps/faq.mdx b/website/src/pages/cs/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..25e77dc3c7f1
--- /dev/null
+++ b/website/src/pages/cs/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What is Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources that specify onchain events and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source that references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+Using Substreams has many benefits, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, build on community modules, and further refine public data.
+
+- High-performance indexing: Orders-of-magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, perform transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data not available over JSON-RPC
+
+- All the benefits of Firehose.
+
+## What is Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), Firehose is a blockchain data extraction layer designed from the ground up to process the full history of blockchains at previously unseen speeds. Taking a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation of Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about Firehose.
+
+## What are the benefits of Firehose?
+
+Using Firehose brings many benefits, including:
+
+- Lowest latency & no polling: Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtime: Designed from the ground up for high availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off under any conditions.
+
+- Richest data model: The best data model, including balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
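In manifest terms, that composition is just module imports. A hypothetical `substreams.yaml` sketch, following the manifest format used elsewhere in these docs (package names, URLs, and module names are illustrative):

```yaml
specVersion: v0.1.0
package:
  name: price_oracle
  version: v0.1.0

imports: # reuse community packages as building blocks
  alice_dex: https://example.com/alice-dex-prices-v0.1.0.spkg
  bob_volume: https://example.com/bob-volume-v0.1.0.spkg

modules:
  - name: map_oracle_price
    kind: map
    inputs: # cached upstream modules, consumed as-is
      - map: alice_dex:map_prices
      - map: bob_volume:map_volume
    output:
      type: proto:example.v1.OraclePrice
```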
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this GitHub repository](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/cs/substreams/sps/introduction.mdx b/website/src/pages/cs/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..4938d23102e4
--- /dev/null
+++ b/website/src/pages/cs/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in Graph Node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/cs/substreams/sps/triggers.mdx b/website/src/pages/cs/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..b0c4bea23f3d
--- /dev/null
+++ b/website/src/pages/cs/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. Looping over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/cs/substreams/sps/tutorial.mdx b/website/src/pages/cs/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..c1850bab04fa
--- /dev/null
+++ b/website/src/pages/cs/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest that imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
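Once the Subgraph is deployed, entities defined in this schema can be queried over GraphQL. A hypothetical example query, assuming graph-node's usual auto-generated camel-cased plural field name for the entity:

```graphql
{
  myTransfers(first: 5) {
    id
    amount
    source
    designation
    signers
  }
}
```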
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/cs/supported-networks.mdx b/website/src/pages/cs/supported-networks.mdx
index 863814948ba7..fefd918d2b78 100644
--- a/website/src/pages/cs/supported-networks.mdx
+++ b/website/src/pages/cs/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/cs/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/cs/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/cs/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g., transfers, mints & burns).
diff --git a/website/src/pages/cs/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/cs/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/cs/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/cs/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/cs/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships by wallet address.
diff --git a/website/src/pages/cs/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/cs/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/cs/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/cs/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/cs/token-api/evm/get-pools-evm.mdx b/website/src/pages/cs/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/cs/token-api/evm/get-swaps-evm.mdx b/website/src/pages/cs/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/cs/token-api/evm/get-transfers-evm.mdx b/website/src/pages/cs/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/cs/token-api/faq.mdx b/website/src/pages/cs/token-api/faq.mdx
index 83196959be14..d94b110ff682 100644
--- a/website/src/pages/cs/token-api/faq.mdx
+++ b/website/src/pages/cs/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## Obecný
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint will return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
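For illustration, the snippet below composes a request URL that targets a specific chain with `network_id`. The base URL matches the monitoring link above, but the `/balances/evm/{address}` path and the example wallet address are assumptions for this sketch, not documented here:

```bash
# Illustrative only: compose a Token API request targeting Base.
BASE="https://token-api.thegraph.com"
NETWORK="base"  # one of: mainnet, bsc, base, arbitrum-one, optimism, matic, unichain
ADDRESS="0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"  # example wallet address
URL="$BASE/balances/evm/$ADDRESS?network_id=$NETWORK"
echo "$URL"
# Send it with your JWT: curl -H "Authorization: Bearer $TOKEN" "$URL"
```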
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use the pagination parameters `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
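The paging arithmetic can be sketched as:

```bash
# Page arithmetic only — no request is sent.
LIMIT=50
PAGE=2                               # pages are 1-indexed
FIRST=$(( (PAGE - 1) * LIMIT + 1 ))  # first item on page 2 -> 51
LAST=$(( PAGE * LIMIT ))             # last item on page 2  -> 100
echo "page $PAGE covers items $FIRST-$LAST"
```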
### How do I fetch older transfer history?
diff --git a/website/src/pages/cs/token-api/monitoring/get-health.mdx b/website/src/pages/cs/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/cs/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/cs/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Provides the health status of the API.
diff --git a/website/src/pages/cs/token-api/monitoring/get-networks.mdx b/website/src/pages/cs/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..5aff52361e29 100644
--- a/website/src/pages/cs/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/cs/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Podporované sítě
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Provides the list of networks supported by the API.
diff --git a/website/src/pages/cs/token-api/monitoring/get-version.mdx b/website/src/pages/cs/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..f0d5f765305b 100644
--- a/website/src/pages/cs/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/cs/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Verze
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Provides the current version of the API.
diff --git a/website/src/pages/cs/token-api/quick-start.mdx b/website/src/pages/cs/token-api/quick-start.mdx
index 4083154b5a8b..8102fae89beb 100644
--- a/website/src/pages/cs/token-api/quick-start.mdx
+++ b/website/src/pages/cs/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Rychlé spuštění
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key. Each API key can generate a new JWT API token at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/de/about.mdx b/website/src/pages/de/about.mdx
index 30ff84ae06f0..d644545b115e 100644
--- a/website/src/pages/de/about.mdx
+++ b/website/src/pages/de/about.mdx
@@ -1,67 +1,46 @@
---
-title: Über The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## Was ist The Graph?
-The Graph ist ein leistungsstarkes dezentrales Protokoll, das eine nahtlose Abfrage und Indizierung von Blockchain-Daten ermöglicht. Es vereinfacht den komplexen Prozess der Abfrage von Blockchain-Daten und macht die App-Entwicklung schneller und einfacher.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Grundlagen verstehen
+Its data services include:
-Projekte mit komplexen Smart Contracts wie [Uniswap](https://uniswap.org/) und NFTs-Initiativen wie [Bored Ape Yacht Club](https://boredapeyachtclub.com/) speichern Daten auf der Ethereum-Blockchain, was es sehr schwierig macht, etwas anderes als grundlegende Daten direkt von der Blockchain zu lesen.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Herausforderungen ohne The Graph
+### Why Blockchain Data is Difficult to Query
-Im Fall des oben aufgeführten konkreten Beispiels, Bored Ape Yacht Club, können Sie grundlegende Leseoperationen auf [dem Vertrag](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) durchführen. Sie können den Besitzer eines bestimmten Ape auslesen, die Inhalts-URI eines Ape anhand seiner ID lesen oder das Gesamtangebot auslesen.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- Dies ist möglich, da diese Lesevorgänge direkt in den Smart Contract selbst programmiert sind. Allerdings sind fortgeschrittene, spezifische und reale Abfragen und Operationen wie Aggregation, Suche, Beziehungen und nicht-triviale Filterung **nicht möglich**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- Wenn Sie sich beispielsweise nach Apes erkundigen möchten, die einer bestimmten Adresse gehören, und Ihre Suche anhand eines bestimmten Merkmals verfeinern möchten, können Sie diese Informationen nicht durch direkte Interaktion mit dem Vertrag selbst erhalten.
+## How The Graph Solves This
-- Um mehr Daten zu erhalten, müsste man jedes einzelne [`Übertragungsereignis`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746), das jemals gesendet wurde, verarbeiten, die Metadaten aus IPFS unter Verwendung der Token-ID und des IPFS-Hashs lesen und dann zusammenfassen.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Warum ist das ein Problem?
+Find the perfect data service for you:
-Es würde **Stunden oder sogar Tage** dauern, bis eine dezentrale Anwendung (dapp), die in einem Browser läuft, eine Antwort auf diese einfachen Fragen erhält.
+### 1. Custom Real-Time Data Streams
-Alternativ können Sie einen eigenen Server einrichten, die Transaktionen verarbeiten, sie in einer Datenbank speichern und einen API-Endpunkt zur Abfrage der Daten erstellen. Diese Option ist jedoch [Ressourcen-intensiv](/resources/benefits/), muss gewartet werden, stellt einen Single Point of Failure dar und bricht wichtige Sicherheitseigenschaften, die für die Dezentralisierung erforderlich sind.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain-Eigenschaften wie Endgültigkeit, Umstrukturierung der Kette und nicht gesperrte Blöcke erhöhen die Komplexität des Prozesses und machen es zeitaufwändig und konzeptionell schwierig, genaue Abfrageergebnisse aus Blockchain-Daten zu erhalten.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph bietet eine Lösung
+### 2. Instant Token Data
-The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das die Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Heute gibt es ein dezentralisiertes Protokoll, das durch die Open-Source-Implementierung von [Graph Node](https://github.com/graphprotocol/graph-node) unterstützt wird und diesen Prozess ermöglicht.
+- [Start with Token API](/token-api/quick-start/)
-### Die Funktionsweise von The Graph
+### 3. Flexible Historical Queries
-Die Indexierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indizieren kann. Subgraphen sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können.
+**Use Case:** Dapp frontends, custom analytics.
-#### Besonderheiten
-
-- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph-Manifest innerhalb des Subgraphen bekannt sind.
-
-- Die Subgraph-Beschreibung beschreibt die Smart Contracts, die für einen Subgraphen von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren soll, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird.
-
-- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraphenmanifest schreiben.
-
-- Nachdem Sie das `Subgraphenmanifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung von Daten für diesen Subgraphen zu beginnen.
-
-Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph-Manifest mit Ethereum-Transaktionen bereitgestellt wurde.
-
-
-
-Der Ablauf ist wie folgt:
-
-1. Eine Dapp fügt Ethereum durch eine Transaktion auf einem Smart Contract Daten hinzu.
-2. Der Smart Contract gibt während der Verarbeitung der Transaktion ein oder mehrere Ereignisse aus.
-3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraph.
-4. Graph Node findet in diesen Blöcken Ethereum-Ereignisse für Ihren Subgraph und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert.
-5. Die Dapp fragt den Graph Node über den [GraphQL-Endpunkt](https://graphql.org/learn/) des Knotens nach Daten ab, die von der Blockchain indiziert wurden. Der Graph Node wiederum übersetzt die GraphQL-Abfragen in Abfragen für seinen zugrundeliegenden Datenspeicher, um diese Daten abzurufen, wobei er die Indexierungsfunktionen des Speichers nutzt. Die Dapp zeigt diese Daten in einer reichhaltigen Benutzeroberfläche für die Endnutzer an, mit der diese dann neue Transaktionen auf Ethereum durchführen können. Der Zyklus wiederholt sich.
-
-## Nächste Schritte
-
-In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage näher erläutert.
-
-Bevor Sie Ihren eigenen Subgraph schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits eingesetzten Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL-Spielwiese, mit der Sie seine Daten abfragen können.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/de/index.json b/website/src/pages/de/index.json
index b56ea56c5897..a16848e4fa2e 100644
--- a/website/src/pages/de/index.json
+++ b/website/src/pages/de/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Starten Sie Ihr Web3-Projekt mit den Tools zum Extrahieren, Transformieren und Laden von Blockchain-Daten.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "Funktionsweise von The Graph",
"cta2": "Erstellen Sie Ihren ersten Subgraphen"
},
@@ -19,10 +19,10 @@
"description": "Abrufen und Konsumieren von Blockchain-Daten mit paralleler Ausführung.",
"cta": "Entwickeln mit Substreams"
},
- "sps": {
- "title": "Substreams-getriebene Subgraphen",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Einrichten eines Substreams-powered Subgraphen"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph-Knoten",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extrahieren Sie Blockchain-Daten in flache Dateien, um die Synchronisierungszeiten und Streaming-Funktionen zu verbessern.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Erste Schritte mit Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "Netzwerke",
"completeThisForm": "füllen Sie dieses Formular aus"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/de/indexing/new-chain-integration.mdx b/website/src/pages/de/indexing/new-chain-integration.mdx
index eed49796a99f..fd936ac7b608 100644
--- a/website/src/pages/de/indexing/new-chain-integration.mdx
+++ b/website/src/pages/de/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ Damit Graph Node Daten aus einer EVM-Kette aufnehmen kann, muss der RPC-Knoten d
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in einem JSON-RPC-Batch-Antrag
-- `trace_filter` *(begrenztes Tracing und optional erforderlich für Graph Node)*
+- `trace_filter` _(begrenztes Tracing und optional erforderlich für Graph Node)_
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Die Konfiguration von Graph Node ist so einfach wie die Vorbereitung Ihrer lokal
> Ändern Sie nicht den Namen der Env-Variable selbst. Er muss `ethereum` bleiben, auch wenn der Netzwerkname anders lautet.
-3. Führen Sie einen IPFS-Knoten aus oder verwenden Sie den von The Graph verwendeten: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one hosted by The Graph: https://ipfs.thegraph.com
## Substreams-getriebene Subgraphen
diff --git a/website/src/pages/de/indexing/overview.mdx b/website/src/pages/de/indexing/overview.mdx
index 4635fbb7f2b9..38550ce65494 100644
--- a/website/src/pages/de/indexing/overview.mdx
+++ b/website/src/pages/de/indexing/overview.mdx
@@ -111,11 +111,11 @@ Indexierer können sich durch die Anwendung fortgeschrittener Techniken für die
- **Large** - Vorbereitet, um alle derzeit verwendeten Subgraphen zu indizieren und Anfragen für den entsprechenden Verkehr zu bedienen.
| Konfiguration | Postgres (CPUs) | Postgres (Speicher in GB) | Postgres (Festplatte in TB) | VMs (CPUs) | VMs (Speicher in GB) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| ------------- | :------------------: | :----------------------------: | :------------------------------: | :-------------: | :-----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### Was sind einige grundlegende Sicherheitsvorkehrungen, die ein Indexierer treffen sollte?
@@ -131,7 +131,7 @@ Im Zentrum der Infrastruktur eines Indexierers steht der Graph Node, der die ind
- **Datenendpunkt** - Bei EVM-kompatiblen Netzwerken muss der Graph Node mit einem Endpunkt verbunden sein, der eine EVM-kompatible JSON-RPC-API bereitstellt. Dabei kann es sich um einen einzelnen Client handeln oder um ein komplexeres Setup, das die Last auf mehrere Clients verteilt. Es ist wichtig, sich darüber im Klaren zu sein, dass bestimmte Subgraphen besondere Client-Fähigkeiten erfordern, wie z. B. den Archivmodus und/oder die Paritätsverfolgungs-API.
-- **IPFS-Knoten (Version kleiner als 5)** - Die Metadaten für die Subgraph-Bereitstellung werden im IPFS-Netzwerk gespeichert. Der Graph Node greift in erster Linie auf den IPFS-Knoten während der Bereitstellung des Subgraphen zu, um das Subgraphen-Manifest und alle verknüpften Dateien zu holen. Netzwerk-Indizierer müssen keinen eigenen IPFS-Knoten hosten, ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexierer-Dienst** - Erledigt alle erforderlichen externen Kommunikationen mit dem Netz. Teilt Kostenmodelle und Indizierungsstatus, leitet Abfrageanfragen von Gateways an einen Graph Node weiter und verwaltet die Abfragezahlungen über Statuskanäle mit dem Gateway.
@@ -147,26 +147,26 @@ Hinweis: Um eine flexible Skalierung zu unterstützen, wird empfohlen, Abfrage-
#### Graph-Knoten
-| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP Server (für Subgraph-Abfragen) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (für Subgraphen-Abonnements) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (zum Verwalten von Deployments) | / | \--admin-port | - |
-| 8030 | Status der Indizierung von Subgraphen API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | ------------------------------------------------ | ---------------------------------------------- | ------------------ | ----------------- |
+| 8000 | GraphQL HTTP Server (für Subgraph-Abfragen) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (für Subgraphen-Abonnements) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | Status der Indizierung von Subgraphen API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
#### Indexer-Service
-| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP Server (für bezahlte Subgraph-Abfragen) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus-Metriken | /metrics | \--metrics-port | - |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | --------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP Server (für bezahlte Subgraph-Abfragen) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus-Metriken | /metrics | \--metrics-port | - |
#### Indexierer-Agent
-| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
-| ---- | ----------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | Indexer-Verwaltungs-API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | ------------------------------- | ------ | -------------------------- | --------------------------------------- |
+| 8000 | Indexer-Verwaltungs-API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
### Einrichten einer Server-Infrastruktur mit Terraform auf Google Cloud
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Erste Schritte mit Docker
@@ -708,42 +708,6 @@ Beachten Sie, dass unterstützte Aktionstypen für das Allokationsmanagement unt
Kostenmodelle ermöglichen eine dynamische Preisgestaltung für Abfragen auf der Grundlage von Markt- und Abfrageattributen. Der Indexierer-Service teilt ein Kostenmodell mit den Gateways für jeden Subgraphen, für den sie beabsichtigen, auf Anfragen zu antworten. Die Gateways wiederum nutzen das Kostenmodell, um Entscheidungen über die Auswahl der Indexer pro Anfrage zu treffen und die Bezahlung mit den ausgewählten Indexern auszuhandeln.
-#### Agora
-
-Die Agora-Sprache bietet ein flexibles Format zur Deklaration von Kostenmodellen für Abfragen. Ein Agora-Preismodell ist eine Folge von Anweisungen, die für jede Top-Level-Abfrage in einer GraphQL-Abfrage nacheinander ausgeführt werden. Für jede Top-Level-Abfrage bestimmt die erste Anweisung, die ihr entspricht, den Preis für diese Abfrage.
-
-Eine Anweisung besteht aus einem Prädikat, das zum Abgleich von GraphQL-Abfragen verwendet wird, und einem Kostenausdruck, der bei der Auswertung die Kosten in dezimalen GRT ausgibt. Werte in der benannten Argumentposition einer Abfrage können im Prädikat erfasst und im Ausdruck verwendet werden. Globale Werte können auch gesetzt und durch Platzhalter in einem Ausdruck ersetzt werden.
-
-Beispielkostenmodell:
-
-```
-# Diese Anweisung erfasst den Wert „skip“,
-# verwendet einen booleschen Ausdruck im Prädikat, um mit bestimmten Abfragen übereinzustimmen, die `skip` verwenden
-# und einen Kostenausdruck, um die Kosten auf der Grundlage des `skip`-Wertes und des globalen SYSTEM_LOAD-Wertes zu berechnen
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# Diese Vorgabe passt auf jeden GraphQL-Ausdruck.
-# Sie verwendet ein Global, das in den Ausdruck eingesetzt wird, um die Kosten zu berechnen
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Beispiel für eine Abfragekostenberechnung unter Verwendung des obigen Modells:
-
-| Abfrage | Preis |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Anwendung des Kostenmodells
-
-Kostenmodelle werden über die Indexierer-CLI angewendet, die sie zum Speichern in der Datenbank an die Indexierer-Verwaltungs-API des Indexierer-Agenten übergibt. Der Indexierer-Service holt sie dann ab und stellt Gateways die Kostenmodelle zur Verfügung, jedesmal wenn sie danach fragen.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interaktion mit dem Netzwerk
### Einsatz im Protokoll
diff --git a/website/src/pages/de/indexing/tap.mdx b/website/src/pages/de/indexing/tap.mdx
index a3eec839d931..8d76412fd28b 100644
--- a/website/src/pages/de/indexing/tap.mdx
+++ b/website/src/pages/de/indexing/tap.mdx
@@ -45,19 +45,19 @@ Solange Sie `tap-agent` und `indexer-agent` ausführen, wird alles automatisch a
### Verträge
-| Vertrag | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) |
-| ------------------- | -------------------------------------------- | -------------------------------------------- |
-| TAP-Prüfer | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` |
-| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
-| Treuhandkonto | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |
+| Vertrag | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) |
+| -------------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP-Prüfer | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` |
+| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
+| Treuhandkonto | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |
### Gateway
-| Komponente | Edge- und Node-Mainnet (Arbitrum-Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
-| ------------- | --------------------------------------------- | --------------------------------------------- |
-| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
-| Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
-| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+| Komponente | Edge- und Node-Mainnet (Arbitrum-Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
+| ---------------- | ---------------------------------------------- | --------------------------------------------- |
+| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
### Voraussetzungen
diff --git a/website/src/pages/de/indexing/tooling/graph-node.mdx b/website/src/pages/de/indexing/tooling/graph-node.mdx
index 3c4cb903b165..cb8182a9b3c8 100644
--- a/website/src/pages/de/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/de/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ Während einige Subgraphen nur einen vollständigen Knoten benötigen, können e
### IPFS-Knoten
-Die Metadaten für den Einsatz von Subgraphen werden im IPFS-Netzwerk gespeichert. Der Graph Node greift während des Einsatzes von Subgraphen primär auf den IPFS-Knoten zu, um das Subgraphen-Manifest und alle verknüpften Dateien abzurufen. Netzwerkindizierer müssen keinen eigenen IPFS-Knoten hosten. Ein IPFS-Knoten für das Netzwerk wird auf https://ipfs.network.thegraph.com gehostet.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Prometheus-Metrikserver
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Erste Schritte mit Kubernetes
@@ -77,15 +77,20 @@ Eine vollständige Datenbeispiel-Konfiguration für Kubernetes ist im [indexer r
Wenn es ausgeführt wird, stellt Graph Node die folgenden Ports zur Verfügung:
-| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP Server (für Subgraph-Abfragen) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (für Subgraphen-Abonnements) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (zum Verwalten von Deployments) | / | \--admin-port | - |
-| 8030 | Status der Indizierung von Subgraphen API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
-
-> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den Graph Node JSON-RPC Endpunkt.
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | ------------------------------------------------ | ---------------------------------------------- | ------------------ | ----------------- |
+| 8000 | GraphQL HTTP Server (für Subgraph-Abfragen) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (für Subgraphen-Abonnements) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | Status der Indizierung von Subgraphen API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Erweiterte Graph-Knoten-Konfiguration
@@ -330,7 +335,7 @@ Datenbanktabellen, die Entitäten speichern, scheinen im Allgemeinen in zwei Var
Für kontoähnliche Tabellen kann `graph-node` Abfragen generieren, die sich die Details zunutze machen, wie Postgres Daten mit einer so hohen Änderungsrate speichert, nämlich dass alle Versionen für die jüngsten Blöcke in einem kleinen Teil des Gesamtspeichers für eine solche Tabelle liegen.
-Der Befehl `graphman stats show ` zeigt für jeden Entitätstyp/jede Tabelle in einem Einsatz an, wie viele unterschiedliche Entitäten und wie viele Entitätsversionen jede Tabelle enthält. Diese Daten beruhen auf Postgres-internen Schätzungen und sind daher notwendigerweise ungenau und können um eine Größenordnung abweichen. Ein `-1` in der Spalte `entities` bedeutet, dass Postgres davon ausgeht, dass alle Zeilen eine eindeutige Entität enthalten.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions each table contains. That data is based on Postgres-internal estimates and is therefore necessarily imprecise; it can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
Im Allgemeinen sind Tabellen, bei denen die Anzahl der unterschiedlichen Entitäten weniger als 1 % der Gesamtzahl der Zeilen/Entitätsversionen beträgt, gute Kandidaten für die kontoähnliche Optimierung. Wenn die Ausgabe von `graphman stats show` darauf hindeutet, dass eine Tabelle von dieser Optimierung profitieren könnte, führt `graphman stats show ` eine vollständige Zählung der Tabelle durch - das kann langsam sein, liefert aber ein genaues Maß für das Verhältnis von eindeutigen Entitäten zu den gesamten Entitätsversionen.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Entfernen von Subgraphen
-> This is new functionality, which will be available in Graph Node 0.29.x
-
Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraph entfernen. Dies kann einfach mit `graphman drop` gemacht werden, welches einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Name eines Subgraphen, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar.
diff --git a/website/src/pages/de/resources/claude-mcp.mdx b/website/src/pages/de/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5e1e68159023
--- /dev/null
+++ b/website/src/pages/de/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Voraussetzungen
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
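Before restarting Claude Desktop, it can help to confirm that the edited file is still valid JSON and that every server entry carries the fields Claude expects. A minimal Python sketch (the inline string stands in for the real file contents, and the field checks are an assumption for illustration, not part of the official setup):

```python
import json

# Stand-in for the contents of claude_desktop_config.json;
# in practice, read the real file from the OS-specific path above.
raw = """
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": { "ACCESS_TOKEN": "ACCESS_TOKEN" }
    }
  }
}
"""

config = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
for name, server in config["mcpServers"].items():
    missing = [key for key in ("command", "args") if key not in server]
    assert not missing, f"server '{name}' is missing: {missing}"
print("servers configured:", ", ".join(config["mcpServers"]))
```

A parse error here points at a stray comma or quote introduced while pasting in the API keys.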
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
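The first two checks above can also be scripted. A small hypothetical helper (not part of the official tooling) that confirms Node.js, which both `npx @pinax/mcp` and `npx mcp-remote` depend on, is available on the PATH:

```python
import shutil
import subprocess

# Both MCP servers in the config are launched via npx, so Node.js must be on the PATH.
node = shutil.which("node")
if node is None:
    msg = "Node.js not found - install it from nodejs.org"
else:
    version = subprocess.run([node, "-v"], capture_output=True, text=True).stdout.strip()
    msg = f"Node.js found: {version}"
print(msg)
```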
diff --git a/website/src/pages/de/subgraphs/_meta-titles.json b/website/src/pages/de/subgraphs/_meta-titles.json
index 1338cbaa797d..b2c9cd10eaee 100644
--- a/website/src/pages/de/subgraphs/_meta-titles.json
+++ b/website/src/pages/de/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Abfragen",
"developing": "Entwicklung",
"guides": "Anleitungen",
- "best-practices": "Bewährte Praktiken"
+ "best-practices": "Bewährte Praktiken",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
index e1245dcae9a8..38b0aead992e 100644
--- a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
@@ -8,11 +8,11 @@ Fügen Sie fortgeschrittene Subgraph-Funktionen hinzu und implementieren Sie sie
Ab `specVersion` `0.0.4` müssen Subgraph-Funktionen explizit im Abschnitt `features` auf der obersten Ebene der Manifestdatei unter Verwendung ihres `camelCase`-Namens deklariert werden, wie in der folgenden Tabelle aufgeführt:
-| Funktion | Name |
-| ------------------------------------------------- | ---------------- |
-| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
-| [Volltextsuche](#defining-fulltext-search-fields) | "Volltextsuche" |
-| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
+| Funktion | Name |
+| ----------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
| [Volltextsuche](#defining-fulltext-search-fields)     | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
Wenn ein Subgraph beispielsweise die Funktionen **Volltextsuche** und **Nicht fatale Fehler** verwendet, sollte das Feld „Features“ im Manifest lauten:
@@ -173,17 +173,17 @@ Ursprüngliche kombinierte Einheit:
```graphql
type Token @entity {
- id: ID!
- tokenID: BigInt!
- tokenURI: String!
- externalURL: String!
- ipfsURI: String!
- image: String!
- name: String!
- description: String!
- type: String!
- updatedAtTimestamp: BigInt
- owner: User!
+ id: ID!
+ tokenID: BigInt!
+ tokenURI: String!
+ externalURL: String!
+ ipfsURI: String!
+ image: String!
+ name: String!
+ description: String!
+ type: String!
+ updatedAtTimestamp: BigInt
+ owner: User!
}
```
@@ -191,20 +191,20 @@ Neu, geteilte Einheit:
```graphql
type Token @entity {
- id: ID!
- tokenID: BigInt!
- tokenURI: String!
- ipfsURI: TokenMetadata
- updatedAtTimestamp: BigInt
- owner: String!
+ id: ID!
+ tokenID: BigInt!
+ tokenURI: String!
+ ipfsURI: TokenMetadata
+ updatedAtTimestamp: BigInt
+ owner: String!
}
type TokenMetadata @entity {
- id: ID!
- image: String!
- externalURL: String!
- name: String!
- description: String!
+ id: ID!
+ image: String!
+ externalURL: String!
+ name: String!
+ description: String!
}
```
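Zur Veranschaulichung eine hypothetische Beispielabfrage gegen das geteilte Schema: Das Feld `ipfsURI` wird nun als Referenz auf die Entität `TokenMetadata` abgefragt.

```graphql
{
  tokens(first: 5) {
    id
    tokenURI
    ipfsURI {
      name
      image
      description
    }
  }
}
```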
@@ -528,7 +528,7 @@ subgraph.yaml„ unter Verwendung von “event.params
```yaml
calls:
- - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+ - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
```
### Grafting auf bestehende Subgraphen
@@ -542,8 +542,8 @@ Ein Subgraph wird auf einen Basis-Subgraph gepfropft, wenn das Subgraph-Manifest
```yaml
description: ...
graft:
- base: Qm ... # Subgraph ID des Basis-Subgraphen
- block: 7345624 # Blocknummer
+ base: Qm ... # Subgraph ID des Basis-Subgraphen
+ block: 7345624 # Blocknummer
```
Wenn ein Subgraph, dessen Manifest einen „graft“-Block enthält, bereitgestellt wird, kopiert Graph Node die Daten des ‚Basis‘-Subgraphen bis einschließlich des angegebenen „Blocks“ und fährt dann mit der Indizierung des neuen Subgraphen ab diesem Block fort. Der Basis-Subgraph muss auf der Ziel-Graph-Node-Instanz existieren und mindestens bis zum angegebenen Block indexiert sein. Aufgrund dieser Einschränkung sollte Grafting nur während der Entwicklung oder in Notfällen verwendet werden, um die Erstellung eines äquivalenten, nicht gepfropften Subgraphen zu beschleunigen.
diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 9dace9f39aaf..89111c270bba 100644
--- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch-Änderungen
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Geringfügige Änderungen
diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx
index c56511a3a35c..f6df454bbca8 100644
--- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ Die Bibliothek `@graphprotocol/graph-ts` bietet die folgenden APIs:
Die `apiVersion` im Subgraph-Manifest gibt die Mapping-API-Version an, die von Graph Node für einen bestimmten Subgraph ausgeführt wird.
-| Version | Hinweise zur Version |
-| :-: | --- |
-| 0.0.9 | Fügt neue Host-Funktionen hinzu [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Fügt eine Validierung für das Vorhandensein von Feldern im Schema beim Speichern einer Entität hinzu. |
-| 0.0.7 | Klassen `TransactionReceipt` und `Log` zu den Ethereum-Typen hinzugefügt<br />Feld `Receipt` zum Ethereum Event Objekt hinzugefügt |
-| 0.0.6 | Feld `nonce` zum Ethereum Transaction Objekt hinzugefügt<br />`baseFeePerGas` zum Ethereum Block Objekt hinzugefügt |
-| 0.0.5 | AssemblyScript wurde auf Version 0.19.10 aktualisiert (dies beinhaltet einige Änderungen, siehe [`Migrationsanleitung`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` umbenannt in `ethereum.transaction.gasLimit` |
-| 0.0.4 | Feld `functionSignature` zum Ethereum SmartContractCall Objekt hinzugefügt |
-| 0.0.3 | Feld `von` zum Ethereum Call Objekt hinzugefügt<br />`ethereum.call.address` umbenannt in `ethereum.call.to` |
-| 0.0.2 | Feld „Eingabe“ zum Ethereum-Transaktionsobjekt hinzugefügt |
+| Version | Hinweise zur Version |
+| :-----: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Fügt neue Host-Funktionen hinzu [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Fügt eine Validierung für das Vorhandensein von Feldern im Schema beim Speichern einer Entität hinzu. |
+| 0.0.7 | Klassen `TransactionReceipt` und `Log` zu den Ethereum-Typen hinzugefügt<br />Feld `Receipt` zum Ethereum Event Objekt hinzugefügt |
+| 0.0.6 | Feld `nonce` zum Ethereum Transaction Objekt hinzugefügt<br />`baseFeePerGas` zum Ethereum Block Objekt hinzugefügt |
+| 0.0.5 | AssemblyScript wurde auf Version 0.19.10 aktualisiert (dies beinhaltet einige Änderungen, siehe [`Migrationsanleitung`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` umbenannt in `ethereum.transaction.gasLimit` |
+| 0.0.4 | Feld `functionSignature` zum Ethereum SmartContractCall Objekt hinzugefügt |
+| 0.0.3 | Feld `von` zum Ethereum Call Objekt hinzugefügt<br />`ethereum.call.address` umbenannt in `ethereum.call.to` |
+| 0.0.2 | Feld „Eingabe“ zum Ethereum-Transaktionsobjekt hinzugefügt |
### Integrierte Typen
diff --git a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx
index c198baf1e1f1..a2f39804cff0 100644
--- a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Starten Sie den Prozess und erstellen Sie einen Subgraphen, der Ihren Anforderun
Erkunden Sie zusätzliche [Ressourcen für APIs](/subgraphs/developing/creating/graph-ts/README/) und führen Sie lokale Tests mit [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) durch.
-| Version | Hinweise zur Version |
-| :-: | --- |
-| 1.2.0 | Unterstützung für [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) hinzugefügt & `eth_call` erklärt |
-| 1.1.0 | Unterstützt [Timeseries & Aggregations](#timeseries-and-aggregations). Unterstützung für Typ `Int8` für `id` hinzugefügt. |
-| 1.0.0 | Unterstützt die Funktion [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) zum Beschneiden von Subgraphen |
-| 0.0.9 | Unterstützt `endBlock` Funktion |
-| 0.0.8 | Unterstützung für die Abfrage von [Block-Handlern](/developing/creating-a-subgraph/#polling-filter) und [Initialisierungs-Handlern](/developing/creating-a-subgraph/#once-filter) hinzugefügt. |
-| 0.0.7 | Unterstützung für [Dateidatenquellen](/developing/creating-a-subgraph/#file-data-sources) hinzugefügt. |
-| 0.0.6 | Unterstützt schnelle [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) Berechnungsvariante. |
-| 0.0.5 | Unterstützung für Event-Handler mit Zugriff auf Transaktionsbelege hinzugefügt. |
-| 0.0.4 | Unterstützung für die Verwaltung von Subgraphen-Features wurde hinzugefügt. |
+| Version | Hinweise zur Version |
+| :-----: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 1.2.0 | Unterstützung für [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) hinzugefügt & `eth_call` erklärt |
+| 1.1.0 | Unterstützt [Timeseries & Aggregations](#timeseries-and-aggregations). Unterstützung für Typ `Int8` für `id` hinzugefügt. |
+| 1.0.0 | Unterstützt die Funktion [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) zum Beschneiden von Subgraphen |
+| 0.0.9 | Unterstützt `endBlock` Funktion |
+| 0.0.8 | Unterstützung für die Abfrage von [Block-Handlern](/developing/creating-a-subgraph/#polling-filter) und [Initialisierungs-Handlern](/developing/creating-a-subgraph/#once-filter) hinzugefügt. |
+| 0.0.7 | Unterstützung für [Dateidatenquellen](/developing/creating-a-subgraph/#file-data-sources) hinzugefügt. |
+| 0.0.6 | Unterstützt schnelle [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) Berechnungsvariante. |
+| 0.0.5 | Unterstützung für Event-Handler mit Zugriff auf Transaktionsbelege hinzugefügt. |
+| 0.0.4 | Unterstützung für die Verwaltung von Subgraphen-Features wurde hinzugefügt. |
diff --git a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
index 357617cfce50..f8733d6ef561 100644
--- a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -155,7 +155,7 @@ Also you can check out the video series on ["How to use Matchstick to write unit
## Struktur der Tests
-WICHTIG: Die unten beschriebene Teststruktur hängt von der Version `matchstick-as` >=0.5.0\*\*\_ ab.
+WICHTIG: Die unten beschriebene Teststruktur hängt von der Version `matchstick-as` >=0.5.0 ab.
### describe()
@@ -728,12 +728,12 @@ import { addMetadata, assert, createMockedFunction, clearStore, test } from 'mat
import { Gravity } from '../../generated/Gravity/Gravity'
import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts'
-let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
+let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
let expectedResult = Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947')
-let bigIntParam = BigInt.fromString('1234')
+let bigIntParam = BigInt.fromString('1234')
createMockedFunction(contractAddress, 'gravatarToOwner', 'gravatarToOwner(uint256):(address)')
- .withArgs([ethereum.Value.fromSignedBigInt(bigIntParam)])
- .returns([ethereum.Value.fromAddress(Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947'))])
+  .withArgs([ethereum.Value.fromSignedBigInt(bigIntParam)])
+  .returns([ethereum.Value.fromAddress(Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947'))])
let gravity = Gravity.bind(contractAddress)
let result = gravity.gravatarToOwner(bigIntParam)
diff --git a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx
index 6db33ed6bf1e..9d918a953466 100644
--- a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Jeder Subgraph, der von dieser Richtlinie betroffen ist, hat die Möglichkeit, d
Wenn ein Subgraph erfolgreich synchronisiert wird, ist das ein gutes Zeichen dafür, dass er für immer gut laufen wird. Neue Auslöser im Netzwerk könnten jedoch dazu führen, dass Ihr Subgraph auf eine ungetestete Fehlerbedingung stößt, oder er könnte aufgrund von Leistungsproblemen oder Problemen mit den Knotenbetreibern ins Hintertreffen geraten.
-Graph Node stellt einen GraphQL-Endpunkt zur Verfügung, den Sie abfragen können, um den Status Ihres Subgraphen zu überprüfen. Auf dem gehosteten Dienst ist er unter `https://api.thegraph.com/index-node/graphql` verfügbar. Auf einem lokalen Knoten ist er standardmäßig auf Port `8030/graphql` verfügbar. Das vollständige Schema für diesen Endpunkt finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Hier ist ein Datenbeispiel für eine Abfrage, die den Status der aktuellen Version eines Subgraphen überprüft:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 4f784b4304b8..2428cc8eca51 100644
--- a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Sobald Sie bereit sind, können Sie Ihren Subgraph in Subgraph Studio bereitstellen.
> Wenn Sie einen Subgraphen mit der Befehlszeilenschnittstelle bereitstellen, wird er in das Studio übertragen, wo Sie ihn testen und die Metadaten aktualisieren können. Durch diese Aktion wird Ihr Subgraph nicht im dezentralen Netzwerk veröffentlicht.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Verwenden Sie den folgenden CLI-Befehl, um Ihren Subgraph zu verteilen:
@@ -104,6 +106,8 @@ Nach der Ausführung dieses Befehls wird die CLI nach einer Versionsbezeichnung
Nach dem Deployment können Sie Ihren Subgraph testen (entweder in Subgraph Studio oder in Ihrer eigenen Anwendung, mit der Deployment-Query-URL), eine weitere Version deployen, die Metadaten aktualisieren und im [Graph Explorer](https://thegraph.com/explorer) veröffentlichen, wenn Sie bereit sind.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Verwenden Sie Subgraph Studio, um die Protokolle auf dem Dashboard zu überprüfen und nach Fehlern in Ihrem Subgraphen zu suchen.
## Veröffentlichen Sie Ihren Subgraph
diff --git a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 2fa5e3654038..474081f817d5 100644
--- a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/de/subgraphs/explorer.mdx b/website/src/pages/de/subgraphs/explorer.mdx
index 3a386698a7d4..f98024526dd0 100644
--- a/website/src/pages/de/subgraphs/explorer.mdx
+++ b/website/src/pages/de/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Erschließen Sie die Welt der Subgraphen und Netzwerkdaten mit [Graph Explorer] (https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) and take full advantage of its core features.
## Überblick
-Graph Explorer besteht aus mehreren Teilen, in denen Sie mit [[Subgraphen]] (https://thegraph.com/explorer?chain=arbitrum-one) interagieren, [[delegieren]] (https://thegraph.com/explorer/delegate?chain=arbitrum-one), [[Teilnehmer]] (https://thegraph.com/explorer/participants?chain=arbitrum-one) einbeziehen, [[Netzwerkinformationen]] (https://thegraph.com/explorer/network?chain=arbitrum-one) anzeigen und auf Ihr Benutzerprofil zugreifen können.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-Nachfolgend finden Sie eine Übersicht über die wichtigsten Funktionen von Graph Explorer. Für zusätzliche Unterstützung können Sie sich den [Graph Explorer Video Guide](/subgraphs/explorer/#video-guide) ansehen.
+## Voraussetzungen
-### Subgraphen-Seite
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-Nachdem Sie Ihren Subgraph in Subgraph Studio bereitgestellt und veröffentlicht haben, gehen Sie zu [Graph Explorer] (https://thegraph.com/explorer) und klicken Sie auf den Link „[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)“ in der Navigationsleiste, um auf Folgendes zuzugreifen:
+## Navigating Graph Explorer
-- Ihre eigenen fertigen Subgraphen
-- Von anderen veröffentlichte Subgraphen
-- Den genauen Subgraphen, den Sie wünschen (basierend auf dem Erstellungsdatum, der Signalmenge oder dem Namen).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-Wenn Sie in einen Subgraphen klicken, können Sie Folgendes tun:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Testen Sie Abfragen auf dem Playground und nutzen Sie Netzwerkdetails, um fundierte Entscheidungen zu treffen.
-- Signalisieren Sie GRT auf Ihrem eigenen Subgraphen oder den Subgraphen anderer, um die Indexierer auf seine Bedeutung und Qualität aufmerksam zu machen.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, category (such as DeFi, NFTs, and DAOs), or sort order: **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`) to return all Subgraphs indexing that contract:
+
+
- - Dies ist von entscheidender Bedeutung, da die Signalisierung eines Subgraphen einen Anreiz darstellt, ihn zu indizieren, was bedeutet, dass er schließlich im Netzwerk auftaucht, um Abfragen zu bedienen.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Testen Sie Abfragen auf dem Playground und nutzen Sie Netzwerkdetails, um fundierte Entscheidungen zu treffen.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
Auf der speziellen Seite jedes Subgraphen können Sie Folgendes tun:
-- Signal/Un-Signal auf Subgraphen
-- Weitere Details wie Diagramme, aktuelle Bereitstellungs-ID und andere Metadaten anzeigen
-- Versionen wechseln, um frühere Iterationen des Subgraphen zu erkunden
- Abfrage von Subgraphen über GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Subgraphen auf dem Prüfstand testen
- Anzeigen der Indexierer, die auf einem bestimmten Subgraphen indexieren
- Subgraphen-Statistiken (Zuweisungen, Kuratoren, etc.)
-- Anzeigen der Entität, die den Subgraphen veröffentlicht hat
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegierten-Seite
+### Step 2. Delegate GRT
-Auf der [Delegierten-Seite] (https://thegraph.com/explorer/delegate?chain=arbitrum-one) finden Sie Informationen zum Delegieren, zum Erwerb von GRT und zur Auswahl eines Indexierers.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-Auf dieser Seite können Sie Folgendes sehen:
+Here, you can:
-- Indexierer, die die meisten Abfragegebühren erhoben haben
-- Indexierer mit dem höchsten geschätzten effektiven Jahreszins
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Darüber hinaus können Sie Ihren ROI berechnen und die besten Indexierer nach Name, Adresse oder Subgraph suchen.
+### Step 3. Monitor Participants in the Network
-### Teilnehmer-Seite
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-Diese Seite bietet einen Überblick über alle „Teilnehmer“, d. h. alle am Netzwerk beteiligten Personen wie Indexer, Delegatoren und Kuratoren.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexierer
+#### Indexer
-
+
Indexierer sind das Rückgrat des Protokolls. Sie setzen auf Subgraphen, indizieren sie und stellen allen, die Subgraphen konsumieren, Abfragen zur Verfügung.
-In der Tabelle Indizierer können Sie die Delegationsparameter eines Indizierers, seinen Einsatz, die Höhe seines Einsatzes für jeden Subgraphen und die Höhe seiner Einnahmen aus Abfragegebühren und Indizierungsprämien sehen.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Besonderheiten**
-- Abfragegebührenkürzung – der Prozentsatz der Abfragegebührenrabatte, den der Indexierer bei der Aufteilung mit Delegatoren teien.
-- Effektiver Reward Cut - der auf den Delegationspool angewandte Indexierungs-Reward Cut. Ist er negativ, bedeutet dies, dass der Indexierer einen Teil seiner Rewards abgibt. Ist er positiv, bedeutet dies, dass der Indexierer einen Teil seiner Rewards behält.
-- Verbleibende Abklingzeit - die verbleibende Zeit, bis der Indexierer die oben genannten Delegationsparameter ändern kann. Abklingzeiten werden von Indexierern festgelegt, wenn sie ihre Delegationsparameter aktualisieren.
-- Eigenkapital - Dies ist der hinterlegte Einsatz des Indexierers, der bei bösartigem oder falschem Verhalten gekürzt werden kann.
-- Delegiert - Einsätze von Delegatoren, die vom Indexierer zugewiesen werden können, aber nicht durchgeschnitten werden können.
-- Zugewiesen - Einsatz, den Indexierer aktiv den Subgraphen zuweisen, die sie indizieren.
-- Verfügbare Delegationskapazität - die Menge der delegierten Anteile, die die Indexierer noch erhalten können, bevor sie überdelegiert werden.
-- Maximale Delegationskapazität - der maximale Betrag an delegiertem Einsatz, den der Indexierer produktiv akzeptieren kann. Ein überschüssiger delegierter Einsatz kann nicht für Zuteilungen oder Belohnungsberechnungen verwendet werden.
-- Abfragegebühren - dies ist die Gesamtsumme der Gebühren, die Endnutzer über die gesamte Zeit für Abfragen von einem Indexierer bezahlt haben.
-- Indexierer Rewards - dies ist die Gesamtsumme der Indexierer Rewards, die der Indexierer und seine Delegatoren über die gesamte Zeit verdient haben. Indexierer Rewards werden durch die Ausgabe von GRTs ausgezahlt.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
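As a rough illustration of the reward-cut mechanics listed above (a simplified sketch, not the protocol's actual on-chain math; all function names, parameters, and numbers are hypothetical), the split of an indexing reward between an Indexer and its Delegators could be modeled as:

```python
# Simplified sketch of splitting an indexing reward between an Indexer and
# its Delegators, based on a reward cut parameter. Illustrative only; not
# The Graph's actual contract logic.

def split_indexing_rewards(total_rewards: float, reward_cut: float,
                           delegator_stake: float, delegated_pool: float):
    """Return (indexer_share, one_delegator_share) for a reward payout.

    reward_cut: fraction of rewards the Indexer keeps (0.0 to 1.0).
    The remainder is distributed to Delegators pro rata by stake.
    """
    indexer_share = total_rewards * reward_cut
    delegators_total = total_rewards - indexer_share
    delegator_share = delegators_total * (delegator_stake / delegated_pool)
    return indexer_share, delegator_share

# Example: 1,000 GRT of rewards, a 10% reward cut, and a Delegator holding
# 5,000 GRT of a 50,000 GRT delegation pool.
indexer, delegator = split_indexing_rewards(1000.0, 0.10, 5000.0, 50000.0)
print(indexer, delegator)  # 100.0 for the Indexer, 90.0 for this Delegator
```

A negative effective reward cut, as described above, would correspond to the Indexer distributing more than the nominal delegator portion.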
Indexierer können sowohl Abfragegebühren als auch Indexierungsprämien verdienen. Funktionell geschieht dies, wenn Netzwerkteilnehmer GRT an einen Indexierer delegieren. Dadurch können Indexierer je nach ihren Indexierer-Parametern Abfragegebühren und Belohnungen erhalten.
@@ -86,9 +106,9 @@ Indexierer können sowohl Abfragegebühren als auch Indexierungsprämien verdien
Um mehr darüber zu erfahren, wie man ein Indexierer wird, können Sie einen Blick auf die [offizielle Dokumentation](/indexing/overview/) oder [The Graph Academy Indexer guides](https://thegraph.academy/delegators/choosing-indexers/) werfen.
-
+
-#### 2. Kuratoren
+#### Curators
Kuratoren analysieren Subgraphen, um festzustellen, welche Subgraphen von höchster Qualität sind. Sobald ein Kurator einen potenziell hochwertigen Subgraphen gefunden hat, kann er ihn kuratieren, indem er seine Bindungskurve signalisiert. Auf diese Weise teilen die Kuratoren den Indexierern mit, welche Subgraphen von hoher Qualität sind und indiziert werden sollten.
@@ -102,11 +122,11 @@ In der unten aufgeführten Tabelle von The Curator können Sie sehen:
- Die Anzahl der hinterlegten GRT
- Die Anzahl der Anteile, die ein Kurator besitzt
-
+
Wenn Sie mehr über die Rolle des Kurators erfahren möchten, besuchen Sie [offizielle Dokumentation](/resources/roles/curating/) oder [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegatoren
+#### Delegatoren
Delegatoren spielen eine Schlüsselrolle bei der Aufrechterhaltung der Sicherheit und Dezentralisierung des Graph Network. Sie beteiligen sich am Netzwerk, indem sie GRT-Token an einen oder mehrere Indexierer delegieren (d.h. „staken“).
@@ -114,7 +134,7 @@ Delegatoren spielen eine Schlüsselrolle bei der Aufrechterhaltung der Sicherhei
- Die Delegatoren wählen die Indexierer auf der Grundlage einer Reihe von Variablen aus, wie z. B. frühere Leistungen, Indexierungsvergütungssätze und Senkung der Abfragegebühren.
- Die Reputation innerhalb der Community kann bei der Auswahl ebenfalls eine Rolle spielen. Es wird empfohlen, mit den ausgewählten Indexierern über [The Graph's Discord] (https://discord.gg/graphprotocol) oder [The Graph Forum] (https://forum.thegraph.com/) in Kontakt zu treten.
-
+
In der Tabelle „Delegatoren“ können Sie die aktiven Delegatoren in der Community und wichtige Metriken einsehen:
@@ -127,9 +147,9 @@ In der Tabelle „Delegatoren“ können Sie die aktiven Delegatoren in der Comm
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Netzwerk-Seite
+### Step 4. Analyze Network Performance
-Auf dieser Seite können Sie globale KPIs sehen und haben die Möglichkeit, auf eine Epochenbasis zu wechseln und die Netzwerkmetriken detaillierter zu analysieren. Diese Details geben Ihnen ein Gefühl dafür, wie sich das Netzwerk im Laufe der Zeit entwickelt.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Überblick
@@ -147,7 +167,7 @@ Ein paar wichtige Details sind zu beachten:
- **Die Abfragegebühren stellen die von den Verbrauchern** generierten Gebühren dar. Sie können von den Indexierern nach einem Zeitraum von mindestens 7 Epochen (siehe unten) eingefordert werden (oder auch nicht), nachdem ihre Zuweisungen zu den Subgraphen abgeschlossen wurden und die von ihnen gelieferten Daten von den Verbrauchern validiert wurden.
- **Die Indizierungs-Belohnungen stellen die Anzahl der Belohnungen dar, die die Indexierer während der Epoche von der Netzwerkausgabe beansprucht haben.** Obwohl die Protokollausgabe festgelegt ist, werden die Belohnungen erst geprägt, wenn die Indexierer ihre Zuweisungen zu den Subgraphen schließen, die sie indiziert haben. Daher variiert die Anzahl der Rewards pro Epoche (d. h. während einiger Epochen könnten Indexer kollektiv Zuweisungen geschlossen haben, die seit vielen Tagen offen waren).
-
+
#### Epochen
@@ -161,69 +181,77 @@ Im Abschnitt Epochen können Sie je nach Epochen Metriken analysieren:
- Die verteilenden Epochen sind die Epochen, in denen die Zustandskanäle für die Epochen abgerechnet werden und die Indexierer ihre Rückerstattung der Abfragegebühren beantragen können.
- Die abgeschlossenen Epochen sind die Epochen, für die die Indexierer keine Abfragegebühren-Rabatte mehr beanspruchen können.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Ihr Benutzerprofil
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Ihr persönliches Profil ist der Ort, an dem Sie Ihre Netzwerkaktivitäten sehen können, unabhängig von Ihrer Rolle im Netzwerk. Ihre Krypto- Wallet dient als Ihr Benutzerprofil, und im Benutzer-Dashboard können Sie die folgenden Registerkarten sehen:
+### Step 2. Explore the Tabs
-### Profil-Übersicht
+#### Profil-Übersicht
In diesem Abschnitt können Sie Folgendes sehen:
-- Jede Ihrer aktuellen Aktionen, die Sie durchgeführt haben.
-- Ihre Profilinformationen, Beschreibung und Website (falls Sie eine hinzugefügt haben).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Registerkarte "Subgraphen"
+#### Registerkarte "Subgraphen"
-Auf der Registerkarte "Subgraphen" sehen Sie Ihre veröffentlichten Subgraphen.
+The Subgraphs tab displays all your published Subgraphs.
-> Dies schließt keine Subgraphen ein, die mit dem CLI zu Testzwecken bereitgestellt wurden. Subgraphen werden erst angezeigt, wenn sie im dezentralen Netzwerk veröffentlicht werden.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Registerkarte "Indizierung"
+#### Registerkarte "Indizierung"
-Auf der Registerkarte "Indizierung" finden Sie eine Tabelle mit allen aktiven und historischen Zuweisungen zu Subgraphen. Hier finden Sie auch Diagramme, in denen Sie Ihre bisherige Leistung als Indexierer sehen und analysieren können.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-Dieser Abschnitt enthält auch Angaben zu Ihren Netto-Indexierer-Belohnungen und Netto-Abfragegebühren. Sie sehen die folgenden Metriken:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake - der Einsatz von Delegatoren, der von Ihnen zugewiesen werden kann, aber nicht reduziert werden kann
-- Gesamte Abfragegebühren - die gesamten Gebühren, die Nutzer im Laufe der Zeit für von Ihnen durchgeführte Abfragen bezahlt haben
-- Indexierer Rewards - der Gesamtbetrag der Indexierer Rewards, die Sie erhalten haben, in GRT
-- Gebührensenkung - der Prozentsatz der Rückerstattungen von Abfragegebühren, den Sie behalten, wenn Sie mit Delegatoren teilen
-- Rewardkürzung - der Prozentsatz der Indexierer-Rewards, den Sie behalten, wenn Sie mit Delegatoren teilen
-- Eigenkapital - Ihr hinterlegter Einsatz, der bei böswilligem oder falschem Verhalten gekürzt werden kann
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Registerkarte "Delegieren"
+
-Die Delegatoren sind wichtig für The Graph Network. Sie müssen ihr Wissen nutzen, um einen Indexierer auszuwählen, der eine gesunde Rendite abwirft.
+#### Registerkarte "Delegieren"
-Auf der Registerkarte "Delegatoren" finden Sie die Details Ihrer aktiven und historischen Delegationen sowie die Metriken der Indexierer, an die Sie delegiert haben.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-In der ersten Hälfte der Seite sehen Sie Ihr Delegationsdiagramm sowie das Diagramm „Nur Belohnungen“. Auf der linken Seite sehen Sie die KPIs, die Ihre aktuellen Delegationskennzahlen widerspiegeln.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-Auf dieser Registerkarte sehen Sie unter anderem die Delegator-Metriken:
+Top Section:
-- Delegationsprämien insgesamt
-- Unrealisierte Rewards insgesamt
-- Gesamte realisierte Rewards
+- View delegation and rewards-only charts
+- Track key metrics:
+ - Delegationsprämien insgesamt
+ - Unrealized rewards
+ - Realized Rewards
-In der zweiten Hälfte der Seite finden Sie die Tabelle der Delegationen. Hier sehen Sie die Indexierer, an die Sie delegiert haben, sowie deren Details (wie z. B. Belohnungskürzungen, Abklingzeit, usw.).
+Bottom Section:
-Mit den Schaltflächen auf der rechten Seite der Tabelle können Sie Ihre Delegierung verwalten - mehr delegieren, die Delegierung aufheben oder Ihre Delegierung nach der Auftauzeit zurückziehen.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Beachten Sie, dass dieses Diagramm horizontal gescrollt werden kann. Wenn Sie also ganz nach rechts scrollen, können Sie auch den Status Ihrer Delegation sehen (delegierend, nicht delegierend, zurückziehbar).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Registerkarte "Kuratieren"
+#### Registerkarte "Kuratieren"
-Auf der Registerkarte „Kuratierung“ finden Sie alle Subgraphen, für die Sie ein Signal geben (damit Sie Abfragegebühren erhalten). Mit der Signalisierung können Kuratoren den Indexierern zeigen, welche Subgraphen wertvoll und vertrauenswürdig sind und somit signalisieren, dass sie indiziert werden müssen.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy and should therefore be indexed.
Auf dieser Registerkarte finden Sie eine Übersicht über:
@@ -232,22 +260,22 @@ Auf dieser Registerkarte finden Sie eine Übersicht über:
- Abfragebelohnungen pro Subgraph
- Aktualisiert bei Datumsdetails
-
+
-### Ihre Profileinstellungen
+#### Ihre Profileinstellungen
In Ihrem Benutzerprofil können Sie Ihre persönlichen Profildaten verwalten (z. B. einen ENS-Namen einrichten). Wenn Sie ein Indexierer sind, haben Sie sogar noch mehr Zugang zu den Einstellungen, die Ihnen zur Verfügung stehen. In Ihrem Benutzerprofil können Sie Ihre Delegationsparameter und Operatoren einrichten.
- Operatoren führen im Namen des Indexierers begrenzte Aktionen im Protokoll durch, wie z. B. das Öffnen und Schließen von Allokationen. Operatoren sind in der Regel andere Ethereum-Adressen, die von ihrer Staking-Wallet getrennt sind und einen beschränkten Zugang zum Netzwerk haben, den Indexer persönlich festlegen können
- Mit den Delegationsparametern können Sie die Verteilung der GRT zwischen Ihnen und Ihren Delegatoren steuern.
-
+
Als Ihr offizielles Portal in die Welt der dezentralen Daten ermöglicht Ihnen der Graph Explorer eine Vielzahl von Aktionen, unabhängig von Ihrer Rolle im Netzwerk. Sie können zu Ihren Profileinstellungen gelangen, indem Sie das Dropdown-Menü neben Ihrer Adresse öffnen und dann auf die Schaltfläche Einstellungen klicken.

-## Zusätzliche Ressourcen
+### Zusätzliche Ressourcen
### Video-Leitfaden
diff --git a/website/src/pages/de/subgraphs/fair-use-policy.mdx b/website/src/pages/de/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..6ef14fc646f7
--- /dev/null
+++ b/website/src/pages/de/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Überblick
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that exhibit:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- You will receive multiple notifications and email alerts.
+- A grace period of 14 days will be provided to upgrade or reduce storage.
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits.
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it.
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged, or you have a unique use case (e.g. an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/de/subgraphs/guides/near.mdx b/website/src/pages/de/subgraphs/guides/near.mdx
index 3bb7e5af4796..586e97ebc5b9 100644
--- a/website/src/pages/de/subgraphs/guides/near.mdx
+++ b/website/src/pages/de/subgraphs/guides/near.mdx
@@ -185,8 +185,8 @@ Als kurze Einführung - der erste Schritt ist das „Erstellen“ Ihres Subgraph
Sobald Ihr Subgraph erstellt wurde, können Sie ihn mit dem CLI-Befehl `graph deploy` einsetzen:
```sh
-$ graph create --node # erstellt einen Subgraph auf einem lokalen Graph-Knoten (bei Subgraph Studio wird dies über die Benutzeroberfläche erledigt)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # lädt die Build-Dateien auf einen angegebenen IPFS-Endpunkt hoch und stellt den Subgraphen dann auf der Grundlage des manifestierten IPFS-Hashs auf einem angegebenen Graph-Knoten bereit
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
Die Knotenkonfiguration hängt davon ab, wo der Subgraph eingesetzt werden soll.
diff --git a/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx
index 900ecb8e636d..07c4381f7cda 100644
--- a/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ Während der Ausgangs-Subgraph ein Standard-Subgraph ist, verwendet der abhängi
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed from them cannot themselves use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
diff --git a/website/src/pages/de/subgraphs/mcp/claude.mdx b/website/src/pages/de/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..eac1b9e10c9c
--- /dev/null
+++ b/website/src/pages/de/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation in which you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get the aggregate query count over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
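
As a rough illustration of how these identifier shapes differ, a client could route an identifier to the matching tool based on its prefix. This is only a sketch: the `classifyIdentifier` helper and its heuristics are illustrative and not part of the MCP server.

```javascript
// Heuristic classifier for The Graph identifier types (illustrative only).
// Deployment IDs are 0x-prefixed 32-byte hex strings, IPFS manifest hashes
// are CIDv0 values starting with "Qm", and anything else is treated as a
// Subgraph ID here.
function classifyIdentifier(id) {
  if (/^0x[0-9a-fA-F]{64}$/.test(id)) return 'deployment-id'
  if (/^Qm[1-9A-HJ-NP-Za-km-z]{44}$/.test(id)) return 'ipfs-hash'
  return 'subgraph-id'
}

console.log(
  classifyIdentifier('0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3'),
) // deployment-id
```

A helper like this would pick `execute_query_by_deployment_id` for both deployment IDs and IPFS hashes, matching the gateway behavior described above.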
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/de/subgraphs/mcp/cline.mdx b/website/src/pages/de/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..803e5db99125
--- /dev/null
+++ b/website/src/pages/de/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/de/subgraphs/mcp/cursor.mdx b/website/src/pages/de/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..c8a7ae439298
--- /dev/null
+++ b/website/src/pages/de/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/de/subgraphs/querying/best-practices.mdx b/website/src/pages/de/subgraphs/querying/best-practices.mdx
index 50053b27f889..f7f0f8979879 100644
--- a/website/src/pages/de/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/de/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Best Practices für Abfragen
---
-The Graph bietet eine dezentrale Möglichkeit zur Abfrage von Daten aus Blockchains. Die Daten werden über eine GraphQL-API zugänglich gemacht, was die Abfrage mit der GraphQL-Sprache erleichtert.
-
-Lernen Sie die wesentlichen GraphQL-Sprachregeln und Best Practices, um Ihren Subgraph zu optimieren.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Lernen Sie die wesentlichen GraphQL-Sprachregeln und Best Practices, um Ihren Su
### Die Anatomie einer GraphQL-Abfrage
-Im Gegensatz zur REST-API basiert eine GraphQL-API auf einem Schema, das definiert, welche Abfragen durchgeführt werden können.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-Eine Abfrage zum Abrufen eines Tokens mit der Abfrage `token` sieht zum Beispiel wie folgt aus:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-die die folgende vorhersehbare JSON-Antwort zurückgibt (\_bei Übergabe des richtigen Variablenwerts `$id`):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ die die folgende vorhersehbare JSON-Antwort zurückgibt (\_bei Übergabe des ric
}
```
-GraphQL-Abfragen verwenden die GraphQL-Sprache, die nach [einer Spezifikation] (https://spec.graphql.org/) definiert ist.
-
Die obige `GetToken`-Abfrage besteht aus mehreren Sprachteilen (im Folgenden durch `[...]` Platzhalter ersetzt):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Regeln für das Schreiben von GraphQL-Abfragen
+### Rules for Writing GraphQL Queries
-- Jeder `queryName` darf nur einmal pro Vorgang verwendet werden.
-- Jedes `field` darf nur einmal in einer Auswahl verwendet werden (wir können `id` nicht zweimal unter `token`abfragen)
-- Einige `field`s oder Abfragen (wie `tokens`) geben komplexe Typen zurück, die eine Auswahl von Unterfeldern erfordern. Wird eine Auswahl nicht bereitgestellt, wenn sie erwartet wird (oder eine Auswahl bereitgestellt, wenn sie nicht erwartet wird - zum Beispiel bei `id`), wird ein Fehler ausgelöst. Um einen Feldtyp zu kennen, schauen Sie bitte im [Graph Explorer](/subgraphs/explorer/) nach.
-- Jede Variable, die einem Argument zugewiesen wird, muss ihrem Typ entsprechen.
-- In einer gegebenen Liste von Variablen muss jede von ihnen eindeutig sein.
-- Alle definierten Variablen müssen verwendet werden.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Hinweis: Die Nichtbeachtung dieser Regeln führt zu einer Fehlermeldung von The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when one is expected (or providing one when it isn't, for example on `id`) will raise an error. To know a field's type, refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
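
To make rule 5 concrete, a client can run a quick sanity check that every declared variable actually appears in the query body before sending it. This is a minimal sketch with a hypothetical `unusedVariables` helper; real validation is performed by The Graph API's GraphQL server.

```javascript
// Naive check that each declared variable ($id, $first, ...) is used in the
// query body — a sketch of rule 5, not a substitute for server-side validation.
const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

function unusedVariables(queryString) {
  const header = queryString.match(/query\s+\w*\s*\(([^)]*)\)/)
  if (!header) return []
  const declared = [...header[1].matchAll(/\$(\w+)/g)].map((m) => m[1])
  const body = queryString.slice(queryString.indexOf('{'))
  return declared.filter((name) => !body.includes('$' + name))
}

console.log(unusedVariables(query)) // []
```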
-### Senden einer Abfrage an eine GraphQL API
+### How to Send a Query to a GraphQL API
-GraphQL ist eine Sprache und ein Satz von Konventionen, die über HTTP transportiert werden.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-Das bedeutet, dass Sie eine GraphQL-API mit dem Standard `fetch` abfragen können (nativ oder über `@whatwg-node/fetch` oder `isomorphic-fetch`).
-
-Wie in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) erwähnt, wird jedoch empfohlen, den `graph-client` zu verwenden, der die folgenden einzigartigen Funktionen unterstützt:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Kettenübergreifende Behandlung von Subgraphen: Abfragen von mehreren Subgraphen in einer einzigen Abfrage
- [Automatische Blockverfolgung](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatische Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Vollständig typisiertes Ergebnis
-So wird The Graph mit `graph-client` abgefragt:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -93,22 +89,22 @@ const variables = { id: '1' }
async function main() {
const result = await execute(query, variables)
- // `result` ist vollständig typisiert!
- console.log(result)
+  // `result` is fully typed!
+ console.log(result)
}
main()
```
-Weitere GraphQL-Client-Alternativen werden in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) behandelt.
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Bewährte Praktiken
-### Schreiben Sie immer statische Abfragen
+### 1. Always Write Static Queries
-Eine gängige (schlechte) Praxis ist es, Abfragezeichenfolgen dynamisch wie folgt zu erstellen:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -122,14 +118,16 @@ query GetToken {
`
```
-Auch wenn das obige Snippet eine gültige GraphQL-Abfrage erzeugt, **hat es viele Nachteile**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g. linters or type generation tools).
-- es macht es **schwieriger**, die Abfrage als Ganzes zu verstehen
-- Die Entwickler sind **für die sichere Bereinigung der String-Interpolation verantwortlich**.
-- die Werte der Variablen nicht als Teil der Anforderungsparameter zu senden **eine mögliche Zwischenspeicherung auf der Server-Seite zu verhindern**
-- es **verhindert, dass Werkzeuge die Abfrage statisch analysieren** (z. B. Linter oder Werkzeuge zur Typgenerierung)
+Instead, it's recommended to **always write queries as static strings**.
-Aus diesem Grund ist es empfehlenswert, Abfragen immer als statische Strings zu schreiben:
+Example of a static query string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -151,18 +149,21 @@ const result = await execute(query, {
})
```
-Dies bringt **viele Vorteile**:
+Static strings have several **key advantages**:
-- **Einfach zu lesende und zu pflegende** Abfragen
-- Der GraphQL **Server kümmert sich um die Bereinigung von Variablen**
-- **Variablen können auf Server-Ebene zwischengespeichert werden**.
-- **Abfragen können von Tools statisch analysiert werden** (mehr dazu in den folgenden Abschnitten)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
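
This separation maps directly onto the standard GraphQL-over-HTTP request shape: the static query string and the variables travel as distinct fields of a JSON POST body. The sketch below uses an illustrative `buildGraphQLRequest` helper; it is not part of any client library.

```javascript
// Build the options object for a standard GraphQL HTTP POST request. Keeping
// `query` static and `variables` separate is what enables server-side caching.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

const request = buildGraphQLRequest(query, { id: '1' })
console.log(JSON.parse(request.body).variables) // { id: '1' }
```

An object of this shape can be passed straight to `fetch(endpointUrl, request)`, where `endpointUrl` is your Subgraph's query URL.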
-### Wie man Felder bedingt in statische Abfragen einbezieht
+### 2. Include Fields Conditionally in Static Queries
-Möglicherweise möchten Sie das Feld `owner` nur unter einer bestimmten Bedingung einbeziehen.
+Conditionally including fields in static queries keeps responses lightweight and improves performance by fetching data only when it's relevant.
-Dazu können Sie die Richtlinie `@include(if:...)` wie folgt nutzen:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -185,15 +186,11 @@ const result = await execute(query, {
})
```
-> Anmerkung: Die gegenteilige Direktive ist `@skip(if: ...)`.
-
-### Verlangen Sie, was Sie wollen
-
-GraphQL wurde durch den Slogan „Frag nach dem, was du willst“ bekannt.
+### 3. Ask Only For What You Want
-Aus diesem Grund gibt es in GraphQL keine Möglichkeit, alle verfügbaren Felder zu erhalten, ohne sie einzeln auflisten zu müssen.
+GraphQL is known for its “Ask for what you want” tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- Denken Sie bei der Abfrage von GraphQL-APIs immer daran, nur die Felder abzufragen, die tatsächlich verwendet werden.
+- When querying GraphQL APIs, always aim to query only the fields that will actually be used.
- Stellen Sie sicher, dass Abfragen nur so viele Entitäten abrufen, wie Sie tatsächlich benötigen. Standardmäßig rufen Abfragen 100 Entitäten in einer Sammlung ab, was in der Regel viel mehr ist, als tatsächlich verwendet wird, z. B. für die Anzeige für den Benutzer. Dies gilt nicht nur für die Top-Level-Sammlungen in einer Abfrage, sondern vor allem auch für verschachtelte Sammlungen von Entitäten.
Zum Beispiel in der folgenden Abfrage:
@@ -203,8 +200,7 @@ query listTokens {
tokens {
# wird bis zu 100 Tokens
id
- Transaktionen
- abrufen {
+ transactions {
# wird bis zu 100 Transaktionen abrufen
id
}
@@ -214,9 +210,9 @@ query listTokens {
Die Antwort könnte 100 Transaktionen für jedes der 100 Token enthalten.
-Wenn die Anwendung nur 10 Transaktionen benötigt, sollte die Abfrage explizit `first: 10` für das Feld „transactions“ festlegen.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
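
For instance, a bounded version of the query above could look like the following (a sketch reusing the field names from the earlier example):

```graphql
query listTokens {
  tokens(first: 10) {
    id
    transactions(first: 10) {
      id
    }
  }
}
```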
-### Verwenden Sie eine einzige Abfrage, um mehrere Datensätze abzufragen
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and the filter `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`.
@@ -248,7 +244,7 @@ query ManyRecords {
}
```
-### Mehrere Abfragen in einer einzigen Anfrage kombinieren
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -280,9 +276,9 @@ const [tokens, counters] = Promise.all(
)
```
-Diese Implementierung ist zwar durchaus sinnvoll, erfordert aber zwei Umläufe mit der GraphQL-API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Glücklicherweise ist es auch möglich, mehrere Abfragen in der gleichen GraphQL-Anfrage wie folgt zu senden:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -302,9 +298,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-Dieser Ansatz **verbessert die Gesamtleistung**, indem er die im Netz verbrachte Zeit reduziert (erspart Ihnen einen Hin- und Rückweg zur API) und bietet eine **präzisere Implementierung**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Nutzung von GraphQL-Fragmenten
+### 6. Leverage GraphQL Fragments
A helpful feature for writing GraphQL queries is the GraphQL Fragment.
@@ -333,7 +329,7 @@ Solche wiederholten Felder (`id`, `active`, `status`) bringen viele Probleme mit
- Larger queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-Eine überarbeitete Version der Abfrage würde wie folgt aussehen:
+An optimized version of the query would be the following:
```graphql
query {
@@ -357,15 +353,18 @@ fragment DelegateItem auf Transcoder {
}
```
-Die Verwendung von GraphQL `fragment` verbessert die Lesbarkeit (insbesondere bei Skalierung) und führt zu einer besseren TypeScript-Typengenerierung.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript type generation.
-Wenn Sie das Tool zur Generierung von Typen verwenden, wird die obige Abfrage einen geeigneten Typ `DelegateItemFragment` erzeugen (\_siehe letzter Abschnitt „Tools“).
+If you use the type generation tool, the above query will generate a proper `DelegateItemFragment` type (_see the last "Tools" section_).
-### GraphQL-Fragmente: Was man tun und lassen sollte
+## GraphQL Fragment Guidelines
-### Die Fragmentbasis muss ein Typ sein
+### Do's and Don'ts for Fragments
-Ein Fragment kann nicht auf einem nicht anwendbaren Typ basieren, kurz gesagt, **auf einem Typ, der keine Felder hat**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -373,11 +372,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` ist ein **Skalar** (nativer “einfacher" Typ), der nicht als Basis für ein Fragment verwendet werden kann.
-
-#### Wie man ein Fragment verbreitet
-
-Fragmente sind für bestimmte Typen definiert und sollten entsprechend in Abfragen verwendet werden.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Example:
@@ -386,9 +382,7 @@ query {
bondEvents {
id
newDelegate {
- ...VoteItem # Fehler! `VoteItem` kann nicht auf `Transcoder` Typ
- verteilt
- werden
+ ...VoteItem # Error! `VoteItem` cannot be spread on `Transcoder` type
}
oldDelegate {
...VoteItem
@@ -402,20 +396,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` und `oldDelegate` sind vom Typ `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-Es ist nicht möglich, ein Fragment des Typs `Vote` hier zu verbreiten.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Definition eines Fragments als atomare Geschäftseinheit von Daten
+---
-GraphQL `Fragment`s müssen entsprechend ihrer Verwendung definiert werden.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-Für die meisten Anwendungsfälle reicht es aus, ein Fragment pro Typ zu definieren (im Falle der Verwendung wiederholter Felder oder der Generierung von Typen).
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
-Hier ist eine Faustregel für die Verwendung von Fragmenten:
+Here is a rule of thumb for using fragments:
-- Wenn Felder desselben Typs in einer Abfrage wiederholt werden, gruppieren Sie sie in einem `Fragment`.
-- Wenn sich ähnliche, aber unterschiedliche Felder wiederholen, erstellen Sie z. B. mehrere Fragmente:
+- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listings)
@@ -438,35 +435,45 @@ fragment VoteWithPoll on Vote {
---
-## Die wichtigsten Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### Webbasierte GraphQL-Explorer
+### Setting up Workflow and IDE Tools
-Das Iterieren von Abfragen, indem Sie sie in Ihrer Anwendung ausführen, kann mühsam sein. Zögern Sie deshalb nicht, den [Graph Explorer] (https://thegraph.com/explorer) zu verwenden, um Ihre Abfragen zu testen, bevor Sie sie Ihrer Anwendung hinzufügen. Der Graph Explorer bietet Ihnen eine vorkonfigurierte GraphQL-Spielwiese zum Testen Ihrer Abfragen.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-Wenn Sie nach einer flexibleren Methode zum Debuggen/Testen Ihrer Abfragen suchen, gibt es ähnliche webbasierte Tools wie [Altair] (https://altairgraphql.dev/) und [GraphiQL] (https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL-Linting
+1. Install GraphQL ESLint
-Um die oben genannten Best Practices und syntaktischen Regeln einzuhalten, wird die Verwendung der folgenden Workflow- und IDE-Tools dringend empfohlen.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint] (https://the-guild.dev/graphql/eslint/docs/getting-started) hilft Ihnen dabei, mit null Aufwand auf dem neuesten Stand der GraphQL Best Practices zu bleiben.
+This will enforce essential rules such as:
-[Die „operations-recommended“](https://the-guild.dev/graphql/eslint/docs/configs) Konfiguration setzt wichtige Regeln wie z.B.:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: wird ein Feld auf einen richtigen Typ verwendet?
-- `@graphql-eslint/no-unused variables`: Soll eine bestimmte Variable unbenutzt bleiben?
-- und mehr!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
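
As a rough sketch, enabling the config in a legacy `.eslintrc` might look like this (the exact file name and format depend on your ESLint setup; consult the GraphQL ESLint docs rather than treating this as canonical):

```json
{
  "overrides": [
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "extends": "plugin:@graphql-eslint/operations-recommended"
    }
  ]
}
```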
-So können Sie **Fehler aufspüren, ohne Abfragen** auf dem Playground zu testen oder sie in der Produktion auszuführen!
+#### Use IDE plugins
-### IDE-Plugins
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode und GraphQL**
+1. VS Code
-Die [GraphQL VSCode-Erweiterung] (https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) ist eine hervorragende Ergänzung zu Ihrem Entwicklungs-Workflow zu bekommen:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +481,11 @@ Die [GraphQL VSCode-Erweiterung] (https://marketplace.visualstudio.com/items?ite
- Snippets
- Go to definition for fragments and input types
-Wenn Sie `graphql-eslint` verwenden, ist die [ESLint VSCode-Erweiterung] (https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) ein Muss, um Fehler und Warnungen in Ihrem Code korrekt zu visualisieren.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij und GraphQL**
+2. WebStorm/Intellij and GraphQL
-Das [JS GraphQL Plugin] (https://plugins.jetbrains.com/plugin/8097-graphql/) wird Ihre Erfahrung bei der Arbeit mit GraphQL erheblich verbessern, indem es Folgendes bietet:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/de/subgraphs/querying/graphql-api.mdx b/website/src/pages/de/subgraphs/querying/graphql-api.mdx
index effc56357802..7179e82c80cf 100644
--- a/website/src/pages/de/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/de/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL-API
---
-Erfahren Sie mehr über die GraphQL Query API, die in The Graph verwendet wird.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## Was ist GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL] (https://graphql.org/learn/) ist eine Abfragesprache für APIs und eine Laufzeitumgebung für die Ausführung dieser Abfragen mit Ihren vorhandenen Daten. The Graph verwendet GraphQL zur Abfrage von Subgraphen.
+The Graph uses GraphQL to query Subgraphs.
-Um die größere Rolle, die GraphQL spielt, zu verstehen, lesen Sie [Entwickeln](/subgraphs/entwickeln/einfuehrung/) und [Erstellen eines Subgraphen](/entwickeln/einen-subgraph-erstellen/).
+## Core Concepts
-## Abfragen mit GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
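A minimal entity definition sketch (the `Token` type and its fields here are illustrative, not part of any particular schema):

```graphql
type Token @entity {
  id: ID!
  name: String!
  owner: Bytes!
}
```
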
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In Ihrem Subgraph-Schema definieren Sie Typen namens `Entities`. Für jeden `Entity`-Typ werden `entity`- und `entities`-Felder auf der obersten Ebene des `Query`-Typs erzeugt.
+## Query Structure
-> Hinweis: Bei der Verwendung von The Graph muss `query` nicht am Anfang der `graphql`-Abfrage stehen.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Beispiele
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Abfrage nach einer einzelnen, in Ihrem Schema definierten Entität `Token`:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Abfrage nach einer einzelnen, in Ihrem Schema definierten Entität `Token`:
}
```
-> Hinweis: Bei der Abfrage einer einzelnen Entität ist das Feld `id` erforderlich und muss als String geschrieben werden.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Abfrage aller `Token`-Entitäten:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Abfrage aller `Token`-Entitäten:
}
```
-### Sortierung
+### Sorting Example
-Wenn Sie eine Sammlung abfragen, können Sie:
+Collection queries support the following sort parameters:
-- den Parameter `orderBy` verwenden, um nach einem bestimmten Attribut zu sortieren.
-- `orderDirection` verwenden, um die Sortierrichtung anzugeben, `asc` für aufsteigend oder `desc` für absteigend.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Beispiel
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ Wenn Sie eine Sammlung abfragen, können Sie:
}
```
-#### Beispiel für die Sortierung verschachtelter Entitäten
-
-Ab Graph Node [`v0.30.0`] (https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Entitäten auf der Basis von verschachtelten Entitäten sortiert werden.
-
-Im folgenden Beispiel werden die Token nach dem Namen ihres Besitzers sortiert:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ Im folgenden Beispiel werden die Token nach dem Namen ihres Besitzers sortiert:
}
```
-> Derzeit können Sie nach den Typen `String` oder `ID` auf den Feldern `@entity` und `@derivedFrom` sortieren. Leider wird die [Sortierung nach Schnittstellen auf Entitäten mit einer Tiefe von einer Ebene] (https://github.com/graphprotocol/graph-node/pull/4058), die Sortierung nach Feldern, die Arrays und verschachtelte Entitäten sind, noch nicht unterstützt.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Pagination
+### Pagination Example
-Wenn Sie eine Sammlung abfragen, ist es am besten, dies zu tun:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it is best to page through entities based on an attribute, as shown in the example above.
-#### Beispiel mit `first`
-
-Die Abfrage für die ersten 10 Token:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Die Abfrage für die ersten 10 Token:
}
```
-Um nach Gruppen von Entitäten in der Mitte einer Sammlung zu suchen, kann der Parameter `skip` in Verbindung mit dem Parameter `first` verwendet werden, um eine bestimmte Anzahl von Entitäten zu überspringen, beginnend am Anfang der Sammlung.
-
-#### Beispiel mit `first` und `skip`
-
-Abfrage von 10 „Token“-Entitäten, versetzt um 10 Stellen vom Beginn der Sammlung:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Abfrage von 10 „Token“-Entitäten, versetzt um 10 Stellen vom Beginn der Sam
}
```
-#### Beispiel mit `first` und `id_ge`
-
-Wenn ein Client eine große Anzahl von Entitäten abrufen muss, ist es leistungsfähiger, Abfragen auf ein Attribut zu stützen und nach diesem Attribut zu filtern. Zum Beispiel könnte ein Client mit dieser Abfrage eine große Anzahl von Token abrufen:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-Beim ersten Mal würde es die Abfrage mit `lastID = „“` senden, und bei nachfolgenden Anfragen würde es `lastID` auf das Attribut `id` der letzten Entität in der vorherigen Anfrage setzen. Dieser Ansatz ist wesentlich leistungsfähiger als die Verwendung steigender `skip`-Werte.
+For the first request, the client sends the query with `lastID = ""`; for subsequent requests, it sets `lastID` to the `id` attribute of the last entity in the previous page. This approach performs significantly better than using increasing `skip` values.
+
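A client-side loop driving the `manyTokens` query above could be sketched as follows, assuming the query filters with `id_gt: $lastID`; `runQuery` is a hypothetical stand-in for your GraphQL client call:

```typescript
// Cursor-based pagination: keep requesting pages until one comes back empty,
// advancing the cursor to the `id` of the last entity in each page.
type Token = { id: string };

async function fetchAllTokens(
  runQuery: (lastID: string) => Promise<Token[]>
): Promise<Token[]> {
  const all: Token[] = [];
  let lastID = ""; // the first request starts from the beginning
  while (true) {
    const page = await runQuery(lastID);
    if (page.length === 0) break;
    all.push(...page);
    lastID = page[page.length - 1].id; // advance the cursor
  }
  return all;
}
```

This avoids ever-growing `skip` values: each request filters on an indexed attribute instead.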
### Filtering
-- Sie können den Parameter `where` in Ihren Abfragen verwenden, um nach verschiedenen Eigenschaften zu filtern.
-- Sie können nach mehreren Werten innerhalb des Parameters `where` filtern.
-
-#### Beispiel mit `where`
+The `where` parameter filters entities based on specified conditions.
-Abfrage von Herausforderungen mit `failed`-Ergebnis:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Abfrage von Herausforderungen mit `failed`-Ergebnis:
}
```
-Sie können Suffixe wie `_gt`, `_lte` für den Wertevergleich verwenden:
-
-#### Beispiel für Range-Filterung
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ Sie können Suffixe wie `_gt`, `_lte` für den Wertevergleich verwenden:
}
```
-#### Beispiel für Block-Filterung
-
-Sie können auch Entitäten filtern, die in oder nach einem bestimmten Block mit `_change_block(number_gte: Int)` aktualisiert wurden.
-
-Dies kann nützlich sein, wenn Sie nur Entitäten abrufen möchten, die sich geändert haben, z. B. seit der letzten Abfrage. Oder es kann nützlich sein, um zu untersuchen oder zu debuggen, wie sich Entitäten in Ihrem Subgraphen ändern (wenn Sie dies mit einem Blockfilter kombinieren, können Sie nur Entitäten isolieren, die sich in einem bestimmten Block geändert haben).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ Dies kann nützlich sein, wenn Sie nur Entitäten abrufen möchten, die sich ge
}
```
-#### Beispiel für die Filterung verschachtelter Entitäten
-
-Die Filterung nach verschachtelten Entitäten ist in den Feldern mit dem Suffix `_`möglich.
-
-Dies kann nützlich sein, wenn Sie nur die Entitäten abrufen möchten, deren untergeordnete Entitäten die angegebenen Bedingungen erfüllen.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ Dies kann nützlich sein, wenn Sie nur die Entitäten abrufen möchten, deren un
}
```
-#### Logische Operatoren
-
-Seit Graph Node [`v0.30.0`] (https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Sie mehrere Parameter im selben `where`-Argument gruppieren, indem Sie die `und`- oder `oder`-Operatoren verwenden, um Ergebnisse nach mehr als einem Kriterium zu filtern.
+#### Logical Operators
-##### Operator `AND`
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ Das folgende Beispiel filtert nach Challenges mit `outcome` `succeeded` und `num
}
```
-> **Syntaktischer Zucker:** Sie können die obige Abfrage vereinfachen, indem Sie den „und“-Operator entfernen und einen durch Kommata getrennten Unterausdruck übergeben.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### Operator `OR`
-
-Das folgende Beispiel filtert nach Herausforderungen mit `outcome` `succeeded` oder `number` größer oder gleich `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and by passing a sub-expression separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,58 +221,36 @@ Das folgende Beispiel filtert nach Herausforderungen mit `outcome` `succeeded` o
}
```
-> Hinweis: Beim Erstellen von Abfragen ist es wichtig, die Auswirkungen der Verwendung des `or`-Operators auf die Leistung zu berücksichtigen. Obwohl `or` ein nützliches Tool zum Erweitern von Suchergebnissen sein kann, kann es auch erhebliche Kosten verursachen. Eines der Hauptprobleme mit `or` ist, dass Abfragen dadurch verlangsamt werden können. Dies liegt daran, dass `or` erfordert, dass die Datenbank mehrere Indizes durchsucht, was ein zeitaufwändiger Prozess sein kann. Um diese Probleme zu vermeiden, wird empfohlen, dass Entwickler `and`-Operatoren anstelle von `or` verwenden, wann immer dies möglich ist. Dies ermöglicht eine präzisere Filterung und kann zu schnelleren und genaueren Abfragen führen.
-
-#### Alle Filter
-
-Vollständige Liste der Parameter-Suffixe:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Bitte beachten Sie, dass einige Suffixe nur für bestimmte Typen unterstützt werden. So unterstützt `Boolean` nur `_not`, `_in` und `_not_in`, aber `_` ist nur für Objekt- und Schnittstellentypen verfügbar.
In addition, the following global filters are available as part of the `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### Time-travel-Anfragen
+### Time-travel Queries Example
-Sie können den Zustand Ihrer Entitäten nicht nur für den letzten Block abfragen, was der Standard ist, sondern auch für einen beliebigen Block in der Vergangenheit. Der Block, zu dem eine Abfrage erfolgen soll, kann entweder durch seine Blocknummer oder seinen Block-Hash angegeben werden, indem ein `block`-Argument in die Toplevel-Felder von Abfragen aufgenommen wird.
+Queries support historical state retrieval using the `block` parameter:
-Das Ergebnis einer solchen Abfrage wird sich im Laufe der Zeit nicht ändern, d.h. die Abfrage eines bestimmten vergangenen Blocks wird das gleiche Ergebnis liefern, egal wann sie ausgeführt wird, mit der Ausnahme, dass sich das Ergebnis bei einer Abfrage eines Blocks, der sehr nahe am Kopf der Kette liegt, ändern kann, wenn sich herausstellt, dass dieser Block **nicht** in der Hauptkette ist und die Kette umorganisiert wird. Sobald ein Block als endgültig betrachtet werden kann, wird sich das Ergebnis der Abfrage nicht mehr ändern.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell that a given block hash is not on the main chain at all, or whether a query result by block hash for a block that is not yet considered final might be affected by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains these limitations in detail.
-#### Beispiel
+#### Block Number Query Example
```graphql
{
@@ -315,9 +264,7 @@ Das Ergebnis einer solchen Abfrage wird sich im Laufe der Zeit nicht ändern, d.
}
```
-Diese Abfrage gibt die `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten so zurück, wie sie unmittelbar nach der Verarbeitung von Block Nummer 8.000.000 bestanden.
-
-#### Beispiel
+#### Block Hash Query Example
```graphql
{
@@ -331,28 +278,26 @@ Diese Abfrage gibt die `Challenge`-Entitäten und die zugehörigen `Application`
}
```
-Diese Abfrage gibt `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten zurück, wie sie unmittelbar nach der Verarbeitung des Blocks mit dem angegebenen Hash vorhanden waren.
-
-### Volltext-Suchanfragen
+### Full-Text Search Example
-Volltextsuchabfrage-Felder bieten eine aussagekräftige Textsuch-API, die dem Subgraph-Schema hinzugefügt und angepasst werden kann. Siehe [Definieren von Volltext-Suchfeldern](/developing/creating-a-subgraph/#defining-fulltext-search-fields), um die Volltextsuche zu Ihrem Subgraph hinzuzufügen.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Volltextsuchanfragen haben ein erforderliches Feld, `text`, für die Eingabe von Suchbegriffen. Mehrere spezielle Volltext-Operatoren sind verfügbar, die in diesem `text`-Suchfeld verwendet werden können.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Volltext-Suchanfragen:
+The following operators can be used in the `text` search field:
-| Symbol | Operator | Beschreibung |
-| --- | --- | --- |
-| `&` | `And` | Zum Kombinieren mehrerer Suchbegriffe zu einem Filter für Entitäten, die alle bereitgestellten Begriffe enthalten |
-| | | `Or` | Abfragen mit mehreren durch den Operator or getrennten Suchbegriffen geben alle Entitäten mit einer Übereinstimmung mit einem der bereitgestellten Begriffe zurück |
-| `<->` | `Follow by` | Geben Sie den Abstand zwischen zwei Wörtern an. |
-| `:*` | `Prefix` | Verwenden Sie den Präfix-Suchbegriff, um Wörter zu finden, deren Präfix übereinstimmt (2 Zeichen erforderlich) |
+| Operator | Symbol | Description |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
| Or        | `\|`   | Matches entities containing any of the provided terms           |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Beispiele
+#### Search Examples
-Mit dem Operator `or` filtert diese Abfrage nach Blog-Entitäten mit Variationen von entweder "anarchism" oder „crumpet“ in ihren Volltextfeldern.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -363,7 +308,7 @@ Mit dem Operator `or` filtert diese Abfrage nach Blog-Entitäten mit Variationen
}
```
-Der Operator `follow by` gibt Wörter an, die in den Volltextdokumenten einen bestimmten Abstand zueinander haben. Die folgende Abfrage gibt alle Blogs mit Variationen von „decentralize“ gefolgt von „philosophy“ zurück
+Follow by operator:
```graphql
{
@@ -376,7 +321,7 @@ Der Operator `follow by` gibt Wörter an, die in den Volltextdokumenten einen be
}
```
-Kombinieren Sie Volltextoperatoren, um komplexere Filter zu erstellen. Mit einem Präfix-Suchoperator in Kombination mit "follow by" von dieser Beispielabfrage werden alle Blog-Entitäten mit Wörtern abgeglichen, die mit „lou“ beginnen, gefolgt von „music“.
+Combined operators:
```graphql
{
@@ -389,29 +334,19 @@ Kombinieren Sie Volltextoperatoren, um komplexere Filter zu erstellen. Mit einem
}
```
-### Validierung
+### Schema Definition
-Graph Node implementiert die [spezifikationsbasierte](https://spec.graphql.org/October2021/#sec-Validation) Validierung der empfangenen GraphQL-Abfragen mit [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), die auf der [graphql-js-Referenzimplementierung](https://github.com/graphql/graphql-js/tree/main/src/validation) basiert. Abfragen, die eine Validierungsregel nicht erfüllen, werden mit einem Standardfehler angezeigt - besuchen Sie die [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation), um mehr zu erfahren.
-
-## Schema
-
-Das Schema Ihrer Datenquellen, d. h. die Entitätstypen, Werte und Beziehungen, die zur Abfrage zur Verfügung stehen, werden über die [GraphQL Interface Definition Language (IDL)] (https://facebook.github.io/graphql/draft/#sec-Type-System) definiert.
-
-GraphQL-Schemata definieren im Allgemeinen Wurzeltypen für „Abfragen“, „Abonnements“ und „Mutationen“. The Graph unterstützt nur `Abfragen`. Der Root-Typ „Abfrage“ für Ihren Subgraph wird automatisch aus dem GraphQL-Schema generiert, das in Ihrem [Subgraph-Manifest] enthalten ist (/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Hinweis: Unsere API stellt keine Mutationen zur Verfügung, da von den Entwicklern erwartet wird, dass sie aus ihren Anwendungen heraus Transaktionen direkt gegen die zugrunde liegende Blockchain durchführen.
-
-### Entitäten
+Entity types require:
-Alle GraphQL-Typen mit `@entity`-Direktiven in Ihrem Schema werden als Entitäten behandelt und müssen ein `ID`-Feld haben.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Hinweis:** Derzeit müssen alle Typen in Ihrem Schema eine `@entity`-Direktive haben. In Zukunft werden wir Typen ohne `@entity`-Direktive als Wertobjekte behandeln, aber dies wird noch nicht unterstützt.
+### Subgraph Metadata Example
-### Subgraph-Metadaten
+The `_Meta_` object provides subgraph metadata:
-Alle Subgraphen haben ein automatisch generiertes `_Meta_`-Objekt, das Zugriff auf die Metadaten des Subgraphen bietet. Dieses kann wie folgt abgefragt werden:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -425,14 +360,49 @@ Alle Subgraphen haben ein automatisch generiertes `_Meta_`-Objekt, das Zugriff a
}
```
-Wenn ein Block angegeben wird, gelten die Metadaten ab diesem Block, andernfalls wird der zuletzt indizierte Block verwendet. Falls angegeben, muss der Block nach dem Startblock des Subgraphen liegen und kleiner oder gleich dem zuletzt indizierten Block sein.
-
-`deployment` ist eine eindeutige ID, die der IPFS CID der Datei `subgraph.yaml` entspricht.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator                  | Description                                                       | Example                                              |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on fields of a related (nested) entity                    | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt`                     | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                     |
+| `_gte`                    | Greater than or equal to (>=)                                     | `{ where: { price_gte: "100" } }`                    |
+| `_lte`                    | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                    |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
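For instance, several of the operators above can be combined under an explicit `and` (the entity and field names here are hypothetical):

```graphql
{
  tokens(
    where: {
      and: [
        { price_gte: "100" }
        { name_contains_nocase: "token" }
        { category_in: ["Art", "Music"] }
      ]
    }
  ) {
    id
    name
  }
}
```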
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean indicating whether the Subgraph encountered indexing errors in some past block.
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule are rejected with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
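A rejected query comes back in the standard GraphQL error response shape defined by the spec; the exact message text varies, so this is only an illustrative example:

```json
{
  "errors": [
    {
      "message": "Type `Token` has no field `colour`",
      "locations": [{ "line": 2, "column": 3 }]
    }
  ]
}
```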
diff --git a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
index cc71c6e7afd0..16dd2228d3d7 100644
--- a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API Keys
+title: How to Manage API Keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Overview
-API keys are required for querying Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
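Once created, a key is embedded directly in the query endpoint URL. As a sketch (the key is a placeholder — generate a real one in Subgraph Studio; the gateway URL format matches the example endpoints used elsewhere in these docs):

```sh
API_KEY="your-api-key"   # placeholder: create a real key in Subgraph Studio
SUBGRAPH_ID="FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"

# The key becomes part of the gateway endpoint URL
ENDPOINT="https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}"
echo "$ENDPOINT"

# Queries are then POSTed as JSON (requires a valid key and network access):
# curl -X POST "$ENDPOINT" \
#   -H 'Content-Type: application/json' \
#   -d '{"query": "{ _meta { block { number } } }"}'
```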
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a spending limit per billing period
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename the API key
-- Regenerate the API key
-- Delete the API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view its Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
-   - Edit your key's name
-   - Regenerate the API key
-   - View the API key's current usage with statistics:
-     - Number of queries
-     - Amount of GRT spent
-2. Under the **Security** section, you can apply security settings depending on the level of control you need. Specifically, you can:
-   - View and manage the domain names authorized to use your API key
-   - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index b35d7d952215..4ab2328b364d 100644
--- a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs. Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally recommended to use the Deployment ID because it can specify a particular version of a Subgraph.
-Here are some key differences between the two IDs:
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, a specific version of that Subgraph is being queried. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over which Subgraph version is being queried. However, this means that query code must be updated manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Beispiel für einen Endpunkt, der die Bereitstellungs-ID verwendet:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ Die Subgraph-ID ist ein eindeutiger Bezeichner für einen Subgraphen. Sie bleibt
Be aware that queries using the Subgraph ID may be answered by an older version of the Subgraph, since the new version needs time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses the Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
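The trade-off above shows up directly in how the two endpoint URLs are built. As a sketch (the gateway URL formats come from the example endpoints in this page; the API key is a placeholder):

```sh
API_KEY="api-key"   # placeholder
GATEWAY="https://gateway-arbitrum.network.thegraph.com/api"

# Deployment ID endpoint: pins queries to one specific version (production)
DEPLOYMENT_URL="$GATEWAY/$API_KEY/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB"

# Subgraph ID endpoint: always resolves to the latest published version (development)
SUBGRAPH_URL="$GATEWAY/$API_KEY/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"

echo "$DEPLOYMENT_URL"
echo "$SUBGRAPH_URL"
```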
diff --git a/website/src/pages/de/subgraphs/quick-start.mdx b/website/src/pages/de/subgraphs/quick-start.mdx
index 4608dc407ca7..120831ea9ab2 100644
--- a/website/src/pages/de/subgraphs/quick-start.mdx
+++ b/website/src/pages/de/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily create, publish, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn`, or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Create a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Verwendung von [yarn] (https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize Your Subgraph
+Verify install:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command automatically creates a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize Your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective block scanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will be created automatically in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ Wenn Sie Ihren Subgraphen initialisieren, werden Sie von der CLI nach den folgen
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the address of the smart contract you'd like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block in which your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract name**: Input the name of your contract.
- **Index contract events as entities**: It is recommended to set this to "true", as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit Your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines which data sources your Subgraph will index.
@@ -82,9 +86,7 @@ Eine detaillierte Aufschlüsselung, wie Sie Ihren Subgraphen schreiben, finden S
### 5. Deploy Your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage, and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer operated by Edge & Node, rather than the many decentralized Indexers on The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage, and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It is strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review Your Subgraph
If you'd like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ Wenn Ihr Subgraph bereit für eine Produktionsumgebung ist, können Sie ihn im d
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The more GRT you and others curate on your Subgraph, the more Indexers are incentivized to index it, improving quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the "Publish" button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you'd like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
-   - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the amount signaled.
-
-   - It is recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curating, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction that you publish it by selecting this option:
diff --git a/website/src/pages/de/subgraphs/upgrade-indexer.mdx b/website/src/pages/de/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..8675a7e47fb6
--- /dev/null
+++ b/website/src/pages/de/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation on the Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/de/substreams/_meta-titles.json b/website/src/pages/de/substreams/_meta-titles.json
index cf75f2729d64..5050d4df9e35 100644
--- a/website/src/pages/de/substreams/_meta-titles.json
+++ b/website/src/pages/de/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Entwicklung"
+ "developing": "Entwicklung",
+ "sps": "Substreams-getriebene Subgraphen"
}
diff --git a/website/src/pages/de/substreams/developing/sinks.mdx b/website/src/pages/de/substreams/developing/sinks.mdx
index 9902c99e2b3d..f5f20d57336a 100644
--- a/website/src/pages/de/substreams/developing/sinks.mdx
+++ b/website/src/pages/de/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Wählen Sie ein Becken, das den Anforderungen Ihres Projekts entspricht.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file, or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/einfuehrung/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community-maintained sinks.
@@ -26,26 +25,26 @@ Senken sind Integrationen, die es Ihnen ermöglichen, die extrahierten Daten an
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name       | Support | Maintainer    | Source Code                                                                               |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name       | Support | Maintainer | Source Code                                                                                |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/de/substreams/quick-start.mdx b/website/src/pages/de/substreams/quick-start.mdx
index 6d82be0f8ac1..6c03ea9762a7 100644
--- a/website/src/pages/de/substreams/quick-start.mdx
+++ b/website/src/pages/de/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ Wenn Sie kein Substreams-Paket finden können, das Ihren speziellen Anforderunge
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from scratch, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/de/substreams/sps/faq.mdx b/website/src/pages/de/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..705188578529
--- /dev/null
+++ b/website/src/pages/de/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+More specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/) and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered subgraphs?
+
+[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+
+If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer, with all the subgraph benefits, such as providing a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered subgraphs different from subgraphs?
+
+Subgraphs are made up of data sources which specify onchain events and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which they occur onchain.
+
+By contrast, Substreams-powered subgraphs have a single data source which references a Substreams package that is processed by Graph Node. Compared to conventional subgraphs, Substreams have access to additional granular onchain data and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered subgraphs?
+
+Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data wherever you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, perform transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data which is not available as part of the JSON RPC
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for high availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: The best data model, including balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) allows you to bootstrap a Substreams project without writing any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See the [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and can be queried by consumers.
+
+## How can you build and deploy a Substreams-powered subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs.
+
+## What do Substreams and Substreams-powered subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/de/substreams/sps/introduction.mdx b/website/src/pages/de/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..396c53077fd1
--- /dev/null
+++ b/website/src/pages/de/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into the subgraph. This method creates the subgraph entities directly in the subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+
+You can choose where to place your logic, either in the subgraph or in Substreams. However, consider what aligns with your data needs, as Substreams uses a parallelized model while triggers are consumed linearly in graph-node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/de/substreams/sps/triggers.mdx b/website/src/pages/de/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..792dee351596
--- /dev/null
+++ b/website/src/pages/de/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to make full use of the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+import { log } from '@graphprotocol/graph-ts'
+import { Transaction } from '../generated/schema'
+
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+You'll see this in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. Looping over the transactions
+3. Creating a new subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Dev Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/de/substreams/sps/tutorial.mdx b/website/src/pages/de/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..598a1f340089
--- /dev/null
+++ b/website/src/pages/de/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial)
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+  name: my_project_sol
+  version: v0.1.0
+
+imports: # Pass your spkg of interest
+  solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+  - name: map_spl_transfers
+    use: solana:map_block # Select corresponding modules available within your spkg
+    initialBlock: 260000082
+
+  - name: map_transactions_by_programid
+    use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+  # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+  map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/de/supported-networks.mdx b/website/src/pages/de/supported-networks.mdx
index 1ae4bd5d095b..698073a1d456 100644
--- a/website/src/pages/de/supported-networks.mdx
+++ b/website/src/pages/de/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, e.g. JSON-RPC, Firehose, and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/de/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/de/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/de/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/de/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT Activities (ex: transfers, mints & burns).
diff --git a/website/src/pages/de/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/de/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/de/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/de/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/de/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/de/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT Ownerships for Account.
diff --git a/website/src/pages/de/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/de/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/de/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/de/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/de/token-api/evm/get-pools-evm.mdx b/website/src/pages/de/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/de/token-api/evm/get-swaps-evm.mdx b/website/src/pages/de/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/de/token-api/evm/get-transfers-evm.mdx b/website/src/pages/de/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/de/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/de/token-api/faq.mdx b/website/src/pages/de/token-api/faq.mdx
index c90af204668f..16e661cf61ec 100644
--- a/website/src/pages/de/token-api/faq.mdx
+++ b/website/src/pages/de/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
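+A minimal TypeScript sketch of the authentication step described above: the access token from The Graph Market is attached as a `Bearer` header. The endpoint path and wallet address here are illustrative placeholders, not part of the FAQ answer.

```typescript
// Attach the Token API access token as a Bearer header.
// Omitting the "Bearer " prefix is a common cause of 401 errors.
const BASE_URL = "https://token-api.thegraph.com";

export function authHeaders(accessToken: string): Record<string, string> {
  return { Authorization: `Bearer ${accessToken}` };
}

// Usage sketch (network call commented out so the example stays self-contained;
// the path and address are hypothetical):
// const res = await fetch(`${BASE_URL}/balances/evm/0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045`, {
//   headers: authHeaders(process.env.GRAPH_TOKEN ?? ""),
// });

console.log(authHeaders("my-token").Authorization);
```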
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint will return a `bad_database_response`, `Endpoint is currently not supported for this network` error. Databases that are in the process of ingestion will produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
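A small sketch of building a request URL with the optional `network_id` parameter. The endpoint path shown is a placeholder for illustration; only the `network_id` parameter and its values come from the text above:

```python
from urllib.parse import urlencode

TOKEN_API_BASE = "https://token-api.thegraph.com"

def balances_url(address, network_id=None):
    """Build a query URL for a wallet's balances (hypothetical endpoint path).

    Omitting `network_id` defaults to Ethereum mainnet, per the docs above.
    """
    path = f"{TOKEN_API_BASE}/balances/evm/{address}"
    if network_id:
        return f"{path}?{urlencode({'network_id': network_id})}"
    return path
```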
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
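The page arithmetic above can be captured in two small helpers (a sketch; parameter names `limit` and `page` are from the docs above):

```python
def item_range(page, limit=10):
    """First and last 1-indexed item positions returned for a given page and limit."""
    return ((page - 1) * limit + 1, page * limit)

def page_for_item(index, limit=10):
    """The 1-indexed page that contains the 1-indexed item `index`."""
    return (index - 1) // limit + 1
```

For example, `item_range(2, 50)` confirms that `page=2` with `limit=50` covers items 51-100.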
### How do I fetch older transfer history?
diff --git a/website/src/pages/de/token-api/monitoring/get-health.mdx b/website/src/pages/de/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/de/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/de/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Get health status of the API
diff --git a/website/src/pages/de/token-api/monitoring/get-networks.mdx b/website/src/pages/de/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..addfc546ce40 100644
--- a/website/src/pages/de/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/de/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Unterstützte Netzwerke
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Get supported networks of the API
diff --git a/website/src/pages/de/token-api/monitoring/get-version.mdx b/website/src/pages/de/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..fa0040807854 100644
--- a/website/src/pages/de/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/de/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Get the version of the API
diff --git a/website/src/pages/de/token-api/quick-start.mdx b/website/src/pages/de/token-api/quick-start.mdx
index b84fad5f665a..4b792fc2b85b 100644
--- a/website/src/pages/de/token-api/quick-start.mdx
+++ b/website/src/pages/de/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Schnellstart
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Voraussetzungen
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key. You can generate a new JWT API token from any of your API keys at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer <token>`.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
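A hedged sketch of this, using only the Python standard library (the formatting helper and function names are illustrative, not part of the API):

```python
import urllib.error
import urllib.request

def format_error(status, headers, body):
    """Render a failed response as one readable block for debugging."""
    header_lines = "\n".join(f"{k}: {v}" for k, v in headers.items())
    return f"HTTP {status}\n{header_lines}\n{body}"

def debug_get(url, token):
    """GET `url` with a Bearer token; on failure, print the full error response."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as err:
        # The error response often carries a JSON body explaining the failure
        # (e.g. an authentication or bad_database_response message).
        print(format_error(err.code, dict(err.headers), err.read().decode()))
        return err.code, None
```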
diff --git a/website/src/pages/es/about.mdx b/website/src/pages/es/about.mdx
index ffa133b4e0b7..17ffa7c8df83 100644
--- a/website/src/pages/es/about.mdx
+++ b/website/src/pages/es/about.mdx
@@ -1,67 +1,46 @@
---
-title: Acerca de The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## Que es The Graph?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-El flujo sigue estos pasos:
-
-1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente.
-2. El contrato inteligente emite uno o más eventos mientras procesa la transacción.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite.
-
-## Próximos puntos
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/es/index.json b/website/src/pages/es/index.json
index 2c1eeb105f26..040f5142456c 100644
--- a/website/src/pages/es/index.json
+++ b/website/src/pages/es/index.json
@@ -2,7 +2,7 @@
"title": "Inicio",
"hero": {
"title": "Documentación de The Graph",
- "description": "Inicia tu proyecto web3 con las herramientas para extraer, transformar y cargar datos de blockchain.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "Cómo funciona The Graph",
"cta2": "Crea tu primer subgrafo"
},
@@ -19,10 +19,10 @@
"description": "Obtén y consume datos de blockchain con ejecución paralela.",
"cta": "Desarrolla con Substreams"
},
- "sps": {
- "title": "Subgrafos impulsados por Substreams",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Configura un subgrafo impulsado por Substreams"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extrae datos de blockchain en archivos planos para mejorar los tiempos de sincronización y las capacidades de transmisión.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Comienza con Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "completa este formulario"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgrafos",
"substreams": "Corrientes secundarias",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Corrientes secundarias",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/es/indexing/new-chain-integration.mdx b/website/src/pages/es/indexing/new-chain-integration.mdx
index 5e56afca4d75..b496228929c7 100644
--- a/website/src/pages/es/indexing/new-chain-integration.mdx
+++ b/website/src/pages/es/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/es/indexing/overview.mdx b/website/src/pages/es/indexing/overview.mdx
index cf592d9ad7e4..600f39e5662e 100644
--- a/website/src/pages/es/indexing/overview.mdx
+++ b/website/src/pages/es/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/es/indexing/tooling/graph-node.mdx b/website/src/pages/es/indexing/tooling/graph-node.mdx
index 4563bd8444bb..bbebb84bd22d 100644
--- a/website/src/pages/es/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/es/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### Nodos IPFS
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Servidor de métricas Prometheus
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Introducción a Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
Cuando está funcionando, Graph Node muestra los siguientes puertos:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose ports 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Configuración avanzada de Graph Node
@@ -330,7 +335,7 @@ Las tablas de bases de datos que almacenan entidades suelen ser de dos tipos: La
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show <sgdNNN>` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities is less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show <sgdNNN> <table>` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> Se trata de una nueva funcionalidad, que estará disponible en Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/es/resources/claude-mcp.mdx b/website/src/pages/es/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/es/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
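Before restarting Claude Desktop, a small script can sanity-check the edited config file's structure (a hypothetical helper, not part of Claude Desktop or The Graph's tooling):

```python
import json
import pathlib
import tempfile

def check_mcp_config(path: str) -> list[str]:
    """Sanity-check a claude_desktop_config.json: confirm it parses as
    JSON and that every MCP server entry defines a command and args.
    Purely illustrative."""
    problems: list[str] = []
    cfg = json.loads(pathlib.Path(path).read_text())
    servers = cfg.get("mcpServers", {})
    if not servers:
        problems.append("no mcpServers configured")
    for name, server in servers.items():
        for key in ("command", "args"):
            if key not in server:
                problems.append(f"{name}: missing '{key}'")
    return problems

# Example: validate a config written to a temporary file.
config = {
    "mcpServers": {
        "token-api": {
            "command": "npx",
            "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
            "env": {"ACCESS_TOKEN": "ACCESS_TOKEN"},
        }
    }
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f)
print(check_mcp_config(f.name))  # []
```

An empty list means the file parses and every server entry has the required keys; any problems are returned as human-readable strings.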
diff --git a/website/src/pages/es/subgraphs/_meta-titles.json b/website/src/pages/es/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/es/subgraphs/_meta-titles.json
+++ b/website/src/pages/es/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx
index 7673a925ad21..4479673b2af3 100644
--- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Version | Notas del lanzamiento |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Notas del lanzamiento |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
### Tipos Incorporados
diff --git a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx
index aad5349fb149..669a29583ee8 100644
--- a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Version | Notas del lanzamiento |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| Version | Notas del lanzamiento |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
index a96efc430a61..33807eefc5be 100644
--- a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 29eed7358005..82b865a94db2 100644
--- a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 67c076d0a156..ac0a6b970883 100644
--- a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/es/subgraphs/explorer.mdx b/website/src/pages/es/subgraphs/explorer.mdx
index e7d1980ac05d..07b1026ec356 100644
--- a/website/src/pages/es/subgraphs/explorer.mdx
+++ b/website/src/pages/es/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) and take full advantage of its core features.
## Descripción
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, specific categories (such as DeFi, NFTs, and DAOs), and **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for Subgraphs that index specific contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one), which indexes the contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- Ver más detalles como gráficos, ID de implementación actual y otros metadatos
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- Estadísticas de subgrafo (asignaciones, Curadores, etc.)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexadores
+#### Indexadores
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de forma productiva. Un exceso de participación delegada no puede utilizarse para asignaciones o cálculos de recompensas.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Indexer Rewards: este es el total de recompensas del Indexador obtenidas por el Indexador y sus Delegadores durante todo el tiempo que trabajaron en conjunto. Las recompensas de los Indexadores se pagan mediante la emisión de GRT.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
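The Query Fee Cut parameter above determines how rebates are split; a simplified sketch (not the exact protocol formula, and the figures are illustrative):

```python
def split_query_fees(rebates_grt: float, query_fee_cut: float) -> tuple[float, float]:
    """Split query fee rebates between an Indexer and its delegation
    pool. `query_fee_cut` is the fraction the Indexer keeps (e.g. 0.10
    for a 10% cut); the remainder goes to Delegators."""
    indexer_share = rebates_grt * query_fee_cut
    return indexer_share, rebates_grt - indexer_share

# 1,000 GRT of rebates with a 10% Query Fee Cut:
indexer_grt, delegator_grt = split_query_fees(1_000.0, 0.10)
print(indexer_grt, delegator_grt)  # 100.0 900.0
```

In practice each Delegator's portion of the 900 GRT is further divided pro rata by their share of the delegation pool.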
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Curadores
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
@@ -102,11 +122,11 @@ In the The Curator table listed below you can see:
- El número de GRT que se depositaron
- El número de participaciones que posee un Curador
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegadores
+#### Delegadores
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Descripción
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### Ciclos (epochs)
@@ -161,69 +181,77 @@ En la sección de Epochs, puedes analizar, por cada epoch, métricas como:
- Los ciclos de distribución son los ciclos en los que los canales correspondientes a los ciclos son establecidos y los Indexadores pueden reclamar sus reembolsos correspondientes a las tarifas de consulta.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Tu perfil de usuario
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Información general del perfil
+#### Información general del perfil
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Pestaña de subgrafos
+#### Pestaña de subgrafos
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Pestaña de indexación
+#### Pestaña de indexación
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. Verás las siguientes métricas:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake: la participación de los Delegados que puedes asignar pero que no se puede recortar
-- Total Query Fees: las tarifas totales que los usuarios han pagado por las consultas que has atendido durante tu participación
-- Indexer Rewards: la cantidad total de recompensas que el Indexador ha recibido, se valora en GRT
-- Fee Cut: es el porcentaje que obtendrás por las consultas que has atendido, estos se distribuyen al cerrar un ciclo o cuando te separes de tus delegadores
-- Rewards Cut: este es el porcentaje de recompensas que dividirás con tus delegadores una vez se cierre el ciclo
-- Owned: tu stake depositado, que podría reducirse por un comportamiento malicioso o incorrecto en la red
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Pestaña de delegación
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Pestaña de delegación
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-En la primera mitad de la página, puedes ver tu gráfico de delegación, así como el gráfico de recompensas históricas. A la izquierda, puedes ver los KPI que reflejan tus métricas de delegación actuales.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-Las métricas de Delegador que verás aquí en esta pestaña incluyen:
+Top Section:
-- Recompensas totales de delegación (Total delegation rewards)
-- Recompensas totales no realizadas (Total unrealized rewards)
-- Recompensas totales realizadas (Total realized rewards)
+- View delegation and rewards-only charts
+- Track key metrics:
+ - Total delegation rewards
+ - Unrealized rewards
+ - Realized rewards
-En la segunda mitad de la página, tienes la tabla de delegaciones. Aquí puedes ver los Indexadores a los que delegaste, así como sus detalles (como recortes de recompensas, tiempo de enfriamiento, etc.).
+Bottom Section:
-Con los botones en el lado derecho de la tabla, puedes administrar tu delegación: delegar más, quitar tu delegación o retirar tu delegación después del período de descongelación.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Con los botones situados al lado derecho de la tabla, puedes administrar tu delegación: delegar más, anular la delegación actual o retirar tu delegación después del período de descongelación.
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Pestaña de curación
+#### Curation Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.
Dentro de esta pestaña, encontrarás una descripción general de:
@@ -232,22 +260,22 @@ Dentro de esta pestaña, encontrarás una descripción general de:
- Query rewards per Subgraph
- Actualizaciones de los subgrafos
-
+
-### Configuración de tu perfil
+#### Profile Settings
Dentro de tu perfil de usuario, podrás administrar los detalles de tu perfil personal (como configurar un nombre de ENS). Si eres un Indexador, tienes aún más acceso a la configuración al alcance de tu mano. En tu perfil de usuario, podrás configurar los parámetros y operadores de tu delegación.
- Los operadores toman acciones limitadas en el protocolo en nombre del Indexador, como abrir y cerrar asignaciones. Los operadores suelen ser otras direcciones de Ethereum, separadas de su billetera de staking, con acceso cerrado a la red que los Indexadores pueden configurar personalmente
- Los parámetros de delegación te permiten controlar la distribución de GRT entre tu y tus Delegadores.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Recursos Adicionales
+### Additional Resources
### Video Guide
diff --git a/website/src/pages/es/subgraphs/fair-use-policy.mdx b/website/src/pages/es/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..14562ee0e5ec
--- /dev/null
+++ b/website/src/pages/es/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines the storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take action on Subgraphs that show:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g. an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/es/subgraphs/guides/near.mdx b/website/src/pages/es/subgraphs/guides/near.mdx
index f22a497db7e1..29c6ae49614c 100644
--- a/website/src/pages/es/subgraphs/guides/near.mdx
+++ b/website/src/pages/es/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
index 3f5fc5e44cca..6fa236b8b7e4 100644
--- a/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
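For orientation, a composed Subgraph declares its sources as `subgraph`-kind data sources in its manifest. The sketch below is illustrative only — all values are placeholders, and the exact field shape should be verified against the example repository and the graph-node v0.37.0 release notes:

```yaml
# Illustrative sketch of a composed Subgraph manifest (placeholder values;
# verify field names against the example-composable-subgraph repository).
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph datasource instead of an onchain one
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceDeploymentHash' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - ComposedEntity
      handlers:
        - handler: handleSourceEntity
          entity: SourceEntity # must be an immutable entity in the source
```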
diff --git a/website/src/pages/es/subgraphs/mcp/claude.mdx b/website/src/pages/es/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/es/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
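The last two steps can be combined into one command, run from the root of your `subgraph-mcp` checkout:

```shell
# Print the full path to the compiled binary by combining the
# current directory with the relative binary path.
echo "$(pwd)/target/release/subgraph-mcp"
```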
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
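Assuming the npx configuration from the installation steps, the `args` array with verbose logging enabled would look roughly like this (the exact flag position within the array is an assumption):

```json
{
  "mcpServers": {
    "subgraph": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "--verbose", "true",
        "--header", "Authorization:${AUTH_HEADER}",
        "https://subgraphs.mcp.thegraph.com/sse"
      ],
      "env": {
        "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
      }
    }
  }
}
```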
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get the aggregate query count over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/es/subgraphs/mcp/cline.mdx b/website/src/pages/es/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/es/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/es/subgraphs/mcp/cursor.mdx b/website/src/pages/es/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/es/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/es/subgraphs/querying/best-practices.mdx b/website/src/pages/es/subgraphs/querying/best-practices.mdx
index eb9567990435..a5f468b8ee73 100644
--- a/website/src/pages/es/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/es/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Mejores Prácticas para Consultas
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-A diferencia de la API REST, una API GraphQL se basa en un esquema que define las consultas que se pueden realizar.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Cualquier variable asignada a un argumento debe coincidir con su tipo.
-- En una lista dada de variables, cada una de ellas debe ser única.
-- Deben utilizarse todas las variables definidas.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, for example on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
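To illustrate rules 1, 2, and 5 concretely (reusing the `token` field from the earlier example):

```graphql
# Invalid: `id` appears twice in the same selection set,
# and `$unused` is defined but never used.
query BadQuery($id: ID!, $unused: String) {
  token(id: $id) {
    id
    id
  }
}

# Valid: each field selected once, every variable defined and used.
query GetToken($id: ID!) {
  token(id: $id) {
    id
    owner
  }
}
```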
-### Envío de una consulta a una API GraphQL
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
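As a minimal sketch of querying over plain HTTP with `fetch` (the gateway URL, API key, and Subgraph ID below are placeholders, not working values):

```javascript
// Sketch: querying a Subgraph over plain HTTP with fetch.
// Replace the placeholders with your own gateway URL and identifiers.
const GATEWAY_URL = 'https://gateway.thegraph.com/api/YOUR_API_KEY/subgraphs/id/YOUR_SUBGRAPH_ID'

const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// GraphQL over HTTP is conventionally a POST with a JSON body
// containing "query" and "variables".
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

async function main() {
  const response = await fetch(GATEWAY_URL, buildGraphQLRequest(query, { id: '1' }))
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  console.log(data.token)
}

// main() // uncomment with real values to run against a live gateway
```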
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Resultado completamente tipificado
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### Escribe siempre consultas estáticas
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g. linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of a static query:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if: ...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using the `owner` field with the `@include(if: ...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
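For the inverse condition, the same field can be controlled with `@skip`. A minimal sketch mirroring the example above (the variable name is illustrative):

```graphql
query GetTokens($hideOwner: Boolean!) {
  tokens {
    id
    owner @skip(if: $hideOwner)
  }
}
```

When `$hideOwner` is `true`, the `owner` field is omitted from the response.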
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Aprovechar los GraphQL Fragments
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### Qué hacer y qué no hacer con los GraphQL Fragments
+## GraphQL Fragment Guidelines
-### Fragment base must be a type
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### How to spread a Fragment
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Ejemplo:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Define Fragment as an atomic business unit of data
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### Exploradores web GraphQL
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
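A minimal legacy `.eslintrc.json` sketch for this setup (file globs are illustrative; see the GraphQL ESLint docs for your project's exact configuration):

```json
{
  "overrides": [
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "plugins": ["@graphql-eslint"],
      "extends": "plugin:@graphql-eslint/operations-recommended"
    }
  ]
}
```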
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- ¡y mucho más!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### Plugins IDE
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ and GraphQL
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/es/subgraphs/querying/graphql-api.mdx b/website/src/pages/es/subgraphs/querying/graphql-api.mdx
index 374291ce0a88..a52cfaed18ca 100644
--- a/website/src/pages/es/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/es/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: API GraphQL
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
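For example, a minimal schema with one entity might look like this (the type and its fields are illustrative):

```graphql
type Token @entity {
  id: ID!
  name: String!
  owner: Bytes!
}
```

This single `Token` type auto-generates `token` and `tokens` fields on the root `Query` type.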
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Ejemplos
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Clasificación
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Ejemplo
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Ejemplo de filtrado de entidades anidadas
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Paginación
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute, as shown in the cursor-based pagination example below.
-#### Example using `first`
-
-Consulta los primeros 10 tokens:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Consulta los primeros 10 tokens:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
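The client-side loop that drives this query starts with `lastID = ""` and, on each request, advances `lastID` to the `id` of the last entity in the previous page. A sketch (the `execute` client shape is illustrative, not a specific library API):

```typescript
type Token = { id: string }
type Execute = (query: string, vars: { lastID: string }) => Promise<{ tokens: Token[] }>

// Page through all tokens by advancing the `lastID` cursor until a page comes back empty.
async function fetchAllTokens(execute: Execute, query: string): Promise<Token[]> {
  const all: Token[] = []
  let lastID = '' // every id sorts after the empty string
  while (true) {
    const { tokens } = await execute(query, { lastID })
    if (tokens.length === 0) break
    all.push(...tokens)
    lastID = tokens[tokens.length - 1].id // cursor = last id of the previous page
  }
  return all
}
```

This approach performs significantly better than increasing `skip` values, because each page is served by an indexed `id` comparison.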
### Filtrado
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Ejemplo de filtrado de rango
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Ejemplo de filtrado de bloques
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Ejemplo de filtrado de entidades anidadas
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-Esto puede ser útil si estás buscando obtener solo entidades cuyas entidades de nivel inicial cumplan con las condiciones proporcionadas.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ Esto puede ser útil si estás buscando obtener solo entidades cuyas entidades d
}
```
-#### Operadores lógicos
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing a comma-separated sub-expression:
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### Todos los filtros
-
-Lista completa de sufijos de parámetros:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+In addition, the following global filter is available as part of the `where` argument:
```graphql
_change_block(number_gte: Int)
```
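Used inside `where`, it filters for entities updated in or after the given block. A sketch (the entity name is illustrative):

```graphql
{
  tokens(where: { _change_block: { number_gte: 8000000 } }) {
    id
  }
}
```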
-### Consultas sobre Time-travel
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### Ejemplo
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### Ejemplo
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Consultas de Búsqueda de Texto Completo
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Operadores de búsqueda de texto completo:
+Full-text search fields use the required `text` parameter with the following operators:
-| Símbolo | Operador | Descripción |
-| --- | --- | --- |
-| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados |
-| | | `Or` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados |
-| `<->` | `Follow by` | Especifica la distancia entre dos palabras. |
-| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) |
+| Operator  | Symbol | Description                                           |
+| --------- | ------ | ----------------------------------------------------- |
+| And       | `&`    | Matches entities containing all of the provided terms |
+| Or        | `\|`   | Matches entities containing any of the provided terms |
+| Follow by | `<->`  | Matches terms within a specified distance             |
+| Prefix    | `:*`   | Matches word prefixes (minimum 2 characters)          |
-#### Ejemplos
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Combina operadores de texto completo para crear filtros más complejos. Con un operador de búsqueda de pretexto combinado con una consulta de seguimiento de este ejemplo, se buscarán coincidencias con todas las entidades del blog con palabras que comiencen con "lou" seguido de "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Combina operadores de texto completo para crear filtros más complejos. Con un o
}
```
-### Validación
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## Esquema
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entidades
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Metadatos del subgrafo
+The auto-generated `_Meta_` object provides access to Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator                  | Description                                                       | Example                                              |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on the fields of a nested entity                          | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt`                     | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                     |
+| `_gte` | Greater than or equal to (>=) | `{ where: { price_gte: "100" } }` |
+| `_lte`                    | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                    |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
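Putting the table together, a single `where` clause can mix comparison, string, and nested-entity operators under a logical operator (entity and field names below are illustrative):

```graphql
{
  tokens(
    where: {
      and: [
        { price_gte: "100" }
        { name_starts_with_nocase: "crypto" }
        { owner_: { name: "Alice" } }
      ]
    }
  ) {
    id
    name
    price
  }
}
```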
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: el hash del bloque
-- número: el número de bloque
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
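For example, a query that selects a field not defined in the schema is rejected before execution with a standard validation error (the entity and field names here are illustrative):

```graphql
{
  tokens {
    id
    # Fails validation: "nonExistentField" is not defined on type "Token"
    nonExistentField
  }
}
```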
diff --git a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx
index 50c2fbab7883..258f9d677c8e 100644
--- a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API Keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Descripción
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
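In practice, the API key is embedded in the query URL sent to The Graph's gateway. A minimal sketch — the key and Subgraph ID below are placeholders, and the `curl` call is left commented out so real values can be substituted:

```sh
# Placeholders — replace with values from Subgraph Studio
API_KEY="your-api-key"
SUBGRAPH_ID="your-subgraph-id"

# The API key is part of the gateway URL path
QUERY_URL="https://gateway-arbitrum.network.thegraph.com/api/$API_KEY/subgraphs/id/$SUBGRAPH_ID"
echo "$QUERY_URL"

# Send a GraphQL query (uncomment once real values are in place):
# curl -X POST "$QUERY_URL" \
#   -H 'Content-Type: application/json' \
#   -d '{"query": "{ _meta { block { number } } }"}'
```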
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: Enable a spending limit for the billing period
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - Editar el nombre de tu clave
- - Regenerar las claves API
- - Ver el uso actual de la clave API con estadísticas:
- - Número de consultas
- - Cantidad de GRT gastado
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - Ver y administrar los nombres de dominio autorizados a utilizar tu clave API
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
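Because sync status is critical when querying by Subgraph ID, a common pattern is to check the `_meta` object alongside your data to confirm how fresh the indexed data is:

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```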
diff --git a/website/src/pages/es/subgraphs/quick-start.mdx b/website/src/pages/es/subgraphs/quick-start.mdx
index 57d13e479ba2..ae0d63e54b9e 100644
--- a/website/src/pages/es/subgraphs/quick-start.mdx
+++ b/website/src/pages/es/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Comienzo Rapido
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- Una wallet crypto
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and scaffolds mappings for its events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
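As a sketch of what "Index contract events as entities" produces, an event such as `Transfer(address,address,uint256)` typically yields a generated entity along these lines (exact fields depend on your CLI version):

```graphql
type Transfer @entity(immutable: true) {
  id: Bytes!
  from: Bytes! # address
  to: Bytes! # address
  value: BigInt! # uint256
  blockNumber: BigInt!
  blockTimestamp: BigInt!
  transactionHash: Bytes!
}
```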
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/es/subgraphs/upgrade-indexer.mdx b/website/src/pages/es/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..756f4b15f029
--- /dev/null
+++ b/website/src/pages/es/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Today, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate their Subgraphs to attract independent Indexers for the long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Does not support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/es/substreams/_meta-titles.json b/website/src/pages/es/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/es/substreams/_meta-titles.json
+++ b/website/src/pages/es/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/es/substreams/developing/sinks.mdx b/website/src/pages/es/substreams/developing/sinks.mdx
index 44e6368c9c7b..2f1b4260ba36 100644
--- a/website/src/pages/es/substreams/developing/sinks.mdx
+++ b/website/src/pages/es/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](./sps/introduction.mdx): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Nombre | Soporte | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name       | Support | Maintainer    | Source Code                                                                                |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Nombre | Soporte | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name       | Support | Maintainer | Source Code                                                                                |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C       | Community  | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus)  |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/es/substreams/quick-start.mdx b/website/src/pages/es/substreams/quick-start.mdx
index 4e6ec88c0c0e..44847485b432 100644
--- a/website/src/pages/es/substreams/quick-start.mdx
+++ b/website/src/pages/es/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/es/substreams/sps/faq.mdx b/website/src/pages/es/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..dd7685e1a4be
--- /dev/null
+++ b/website/src/pages/es/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
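A Substreams-powered Subgraph's manifest, for instance, points at a Substreams package instead of contract ABIs and event handlers. A rough sketch (the module and file names are illustrative):

```yaml
specVersion: 1.0.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: substreams
    name: my_substreams # illustrative name
    network: mainnet
    source:
      package:
        moduleName: graph_out # module emitting entity changes
        file: my-package.spkg # illustrative package file
    mapping:
      kind: substreams/graph-entities
      apiVersion: 0.0.5
```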
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, perform transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data which is not available as part of the JSON RPC.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies, and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, link them together, and offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/es/substreams/sps/introduction.mdx b/website/src/pages/es/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..4340733cfc84
--- /dev/null
+++ b/website/src/pages/es/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/es/substreams/sps/triggers.mdx b/website/src/pages/es/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..16db4057a732
--- /dev/null
+++ b/website/src/pages/es/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. Looping over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/es/substreams/sps/tutorial.mdx b/website/src/pages/es/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..d989932c87e1
--- /dev/null
+++ b/website/src/pages/es/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Configurar un Subgrafo Potenciado por Substreams en Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial)
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts, such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+This will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/es/supported-networks.mdx b/website/src/pages/es/supported-networks.mdx
index 93a003ce8005..5baae1d7e76c 100644
--- a/website/src/pages/es/supported-networks.mdx
+++ b/website/src/pages/es/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/es/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/es/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/es/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/es/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT Activities (e.g. transfers, mints & burns).
diff --git a/website/src/pages/es/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/es/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/es/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/es/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/es/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/es/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT Ownerships for an account.
diff --git a/website/src/pages/es/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/es/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/es/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/es/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/es/token-api/evm/get-pools-evm.mdx b/website/src/pages/es/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/es/token-api/evm/get-swaps-evm.mdx b/website/src/pages/es/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/es/token-api/evm/get-transfers-evm.mdx b/website/src/pages/es/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/es/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/es/token-api/faq.mdx b/website/src/pages/es/token-api/faq.mdx
index 6178aee33e86..3bf60c0cda8f 100644
--- a/website/src/pages/es/token-api/faq.mdx
+++ b/website/src/pages/es/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
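
As a minimal sketch, an `ipfs://` URI taken from an NFT item response can be rewritten into a fetchable gateway URL (the URI value and the public `ipfs.io` gateway below are illustrative assumptions, not values returned by the API):

```shell
# Convert an ipfs:// URI from an NFT item response into an HTTP gateway URL.
# TOKEN_URI is a made-up example; any IPFS gateway can be substituted.
TOKEN_URI="ipfs://QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/metadata.json"
GATEWAY_URL="https://ipfs.io/ipfs/${TOKEN_URI#ipfs://}"
echo "$GATEWAY_URL"
```

The resulting URL can then be fetched with any HTTP client to retrieve the off-chain metadata.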
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
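
As a sketch, a historical-balances request URL can be assembled from the endpoint path quoted above (the wallet address is an arbitrary example):

```shell
# Build a historical balances request URL; ADDRESS is an example wallet.
ADDRESS="0xd8dA6BF26964aF9D7eEd9e03E44415665a86c565"
URL="https://token-api.thegraph.com/historical/balances/evm/${ADDRESS}"
echo "$URL"
# With a JWT API token exported as $JWT, fetch it with:
#   curl -s -H "Authorization: Bearer $JWT" "$URL"
```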
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
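
A minimal sketch of a correctly formed header (the JWT value is a placeholder; `/networks` is the monitoring endpoint):

```shell
# Assemble the Authorization header exactly as the API expects it.
JWT="<YOUR_JWT_TOKEN>"
AUTH_HEADER="Authorization: Bearer ${JWT}"
echo "$AUTH_HEADER"
# Example request:
#   curl -s -H "$AUTH_HEADER" "https://token-api.thegraph.com/networks"
```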
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint will return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still in the process of ingestion will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
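
For instance, assuming a balances endpoint path of `/balances/evm/{address}` (the address is an arbitrary example), targeting Polygon looks like:

```shell
# Append the optional network_id parameter to target a specific chain.
BASE="https://token-api.thegraph.com/balances/evm/0xd8dA6BF26964aF9D7eEd9e03E44415665a86c565"
URL="${BASE}?network_id=matic"
echo "$URL"
```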
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
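
For example, assuming a transfers endpoint path of `/transfers/evm`, requesting items 51-100 combines both parameters:

```shell
# Page through results 50 at a time; page is 1-indexed.
BASE="https://token-api.thegraph.com/transfers/evm"
URL="${BASE}?limit=50&page=2"
echo "$URL"
```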
### How do I fetch older transfer history?
diff --git a/website/src/pages/es/token-api/monitoring/get-health.mdx b/website/src/pages/es/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/es/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/es/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Get health status of the API
diff --git a/website/src/pages/es/token-api/monitoring/get-networks.mdx b/website/src/pages/es/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..24156f36f74d 100644
--- a/website/src/pages/es/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/es/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Supported Networks
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Get supported networks of the API
diff --git a/website/src/pages/es/token-api/monitoring/get-version.mdx b/website/src/pages/es/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..fa0040807854 100644
--- a/website/src/pages/es/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/es/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Get the version of the API
diff --git a/website/src/pages/es/token-api/quick-start.mdx b/website/src/pages/es/token-api/quick-start.mdx
index 8488268e1356..09e35405aaa3 100644
--- a/website/src/pages/es/token-api/quick-start.mdx
+++ b/website/src/pages/es/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Comienzo Rapido
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API Token, not the API key. Each API key can generate a new JWT API Token at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
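+
+If you suspect a chain is still ingesting, you can check which networks are currently served before querying. A minimal check against the networks endpoint documented above (replace `<JWT_TOKEN>` with your own token):
+
+```bash
+# List the networks the Token API currently supports
+curl -s "https://token-api.thegraph.com/networks" \
+  -H "Authorization: Bearer <JWT_TOKEN>"
+```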
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/fr/about.mdx b/website/src/pages/fr/about.mdx
index 1cce1a4218ea..3b1148c8da43 100644
--- a/website/src/pages/fr/about.mdx
+++ b/website/src/pages/fr/about.mdx
@@ -1,67 +1,46 @@
---
-title: À propos de The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## Qu’est-ce que The Graph ?
-The Graph est un puissant protocole décentralisé qui permet d'interroger et d'indexer facilement les données de la blockchain. Il simplifie le processus complexe de requête des données blockchain, rendant ainsi le développement des applications décentralisées (dapps) plus rapide et plus simple.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Comprendre les fondamentaux
+Its data services include:
-Des projets dotés de contrats intelligents complexes tels que [Uniswap](https://uniswap.org/) et les initiatives NFT comme [Bored Ape Yacht Club](https://boredapeyachtclub.com/) stockent leurs données sur la blockchain Ethereum, rendant très difficile la lecture directe de données autres que les données de base depuis la blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Défis sans The Graph
+### Why Blockchain Data is Difficult to Query
-Dans le cas de l'exemple mentionné ci-dessus, Bored Ape Yacht Club, vous pouvez effectuer de simples opérations de lecture sur [le contrat](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). Vous pouvez voir le propriétaire d'un certain Ape, lire l'URI du contenu d'un Ape en fonction de son ID, ou connaître l'offre totale en circulation.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- Cela est possible car ces opérations de lecture sont programmées directement dans le contrat intelligent lui-même. Cependant, des requêtes et des opérations plus avancées, spécifiques et concrètes, telles que l'agrégation, la recherche, l'établissement de relations ou le filtrage complexe **ne sont pas possibles**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- Par exemple, si vous souhaitez identifier les Apes détenus par une adresse spécifique et affiner votre recherche en fonction d'une caractéristique particulière, il serait impossible d'obtenir cette information en interagissant directement avec le contrat.
+## How The Graph Solves This
-- Pour obtenir plus de données, vous devriez traiter chaque événement de [`transfert`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) jamais émis, lire les métadonnées d'IPFS en utilisant l'ID du Token et le hash IPFS, puis les agréger.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Pourquoi est-ce un problème ?
+Find the perfect data service for you:
-Il faudrait des **heures, voire des jours,** pour qu'une application décentralisée (dapp) fonctionnant dans un navigateur obtienne une réponse à ces questions simples.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Les spécificités de la blockchain, comme la finalité des transactions, les réorganisations de chaîne et les blocs oncles (blocs rejetés lorsque deux blocs sont créés simultanément, ce qui entraîne l'omission d'un bloc de la blockchain.), ajoutent de la complexité au processus, rendant longue et conceptuellement difficile la récupération de résultats précis à partir des données de la blockchain.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph apporte une solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Aujourd'hui, il existe un protocole décentralisé soutenu par l'implémentation open source de [Graph Node](https://github.com/graphprotocol/graph-node) qui permet ce processus.
+- [Start with Token API](/token-api/quick-start/)
-### Comment fonctionne The Graph
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Spécificités
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-La description des étapes du flux :
-
-1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent.
-2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète.
-
-## Les Étapes suivantes
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/fr/index.json b/website/src/pages/fr/index.json
index ee19877c78e6..c2e493892d62 100644
--- a/website/src/pages/fr/index.json
+++ b/website/src/pages/fr/index.json
@@ -2,7 +2,7 @@
"title": "Accueil",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Construisez votre premier subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Nœud de The Graph",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/fr/indexing/new-chain-integration.mdx b/website/src/pages/fr/indexing/new-chain-integration.mdx
index ab70ce6efb3a..f21653ce39fd 100644
--- a/website/src/pages/fr/indexing/new-chain-integration.mdx
+++ b/website/src/pages/fr/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ Afin que Graph Node puisse ingérer des données provenant d'une chaîne EVM, le
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(traçage limité et optionnellement requis pour Graph Node)*
+- `trace_filter` _(traçage limité et optionnellement requis pour Graph Node)_
### 2. Intégration Firehose
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Ne changez pas le nom de la var env elle-même. Il doit rester `ethereum` même si le nom du réseau est différent.
-3. Exécutez un nœud IPFS ou utilisez celui utilisé par The Graph : https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Subgraphs alimentés par des substreams
diff --git a/website/src/pages/fr/indexing/overview.mdx b/website/src/pages/fr/indexing/overview.mdx
index 1c0d0f9c7221..af6e461af61a 100644
--- a/website/src/pages/fr/indexing/overview.mdx
+++ b/website/src/pages/fr/indexing/overview.mdx
@@ -111,11 +111,11 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
| Configuration | Postgres (CPUs) | Postgres (mémoire en Go) | Postgres (disque en To) | VMs (CPUs) | VMs (mémoire en Go) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Petit | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Moyen | 16 | 64 | 2 | 32 | 64 |
-| Grand | 72 | 468 | 3.5 | 48 | 184 |
+| ------------- | :------------------: | :---------------------------: | :--------------------------: | :-------------: | :----------------------: |
+| Petit | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Moyen | 16 | 64 | 2 | 32 | 64 |
+| Grand | 72 | 468 | 3.5 | 48 | 184 |
### Quelles sont les précautions de sécurité de base qu'un Indexeur doit prendre ?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Service de l'Indexeur** - Gère toutes les communications externes requises avec le réseau. Il partage les modèles de coûts et les états d'indexation, transmet les requêtes des passerelles à un Graph Node et gère les paiements des requêtes via des canaux d'état avec la passerelle.
@@ -147,26 +147,26 @@ Remarque : pour permettre une mise à l'échelle souple, il est recommandé de s
#### Nœud de The Graph
-| Port | Objectif | Routes | Argument CLI | Variable d'Environment |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (pour gérer les déploiements) | / | \--admin-port | - |
-| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - |
-| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - |
+| Port | Objectif | Routes | Argument CLI | Variable d'Environment |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (pour gérer les déploiements) | / | \--admin-port | - |
+| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - |
+| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - |
#### Service d'Indexeur
-| Port | Objectif | Routes | Argument CLI | Variable D'environment |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Métriques Prometheus | /metrics | \--metrics-port | - |
+| Port | Objectif | Routes | Argument CLI | Variable D'environment |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Métriques Prometheus | /metrics | \--metrics-port | - |
#### Indexer Agent
-| Port | Objectif | Routes | Argument CLI | Variable D'environment |
-| ---- | ---------------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | API de gestion des Indexeurs | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| Port | Objectif | Routes | Argument CLI | Variable D'environment |
+| ---- | ---------------------------- | ------ | -------------------------- | ----------------------------------------- |
+| 8000 | API de gestion des Indexeurs | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
### Mise en place d'une infrastructure de serveurs à l'aide de Terraform sur Google Cloud
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Commencer à utiliser Docker
@@ -708,42 +708,6 @@ Notez que les types d'action pris en charge pour la gestion de l'allocation ont
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-Le langage Agora fournit un format flexible pour déclarer des modèles de coûts pour les requêtes. Un modèle de prix Agora est une séquence d'instructions qui s'exécutent dans l'ordre pour chaque requête de niveau supérieur dans une requête GraphQL. Pour chaque requête de niveau supérieur, la première instruction qui lui correspond détermine le prix de cette requête.
-
-Une déclaration est composée d'un prédicat, qui est utilisé pour faire correspondre les requêtes GraphQL, et d'une expression de coût qui, lorsqu'elle est évaluée, produit un coût en GRT décimal. Les valeurs de l'argument nommé d'une requête peuvent être capturées dans le prédicat et utilisées dans l'expression. Les globaux peuvent également être définis et remplacés par des espaces réservés dans une expression.
-
-Exemple de modèle de coût :
-
-```
-# Cette instruction capture la valeur du saut,
-# utilise une expression booléenne dans le prédicat pour faire correspondre les requêtes spécifiques qui utilisent `skip`
-# et une expression de coût pour calculer le coût en fonction de la valeur `skip` et de la valeur globale SYSTEM_LOAD
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# Cette valeur par défaut correspondra à n'importe quelle expression GraphQL.
-# Il utilise un Global substitué dans l'expression pour calculer le coût.
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Exemple de calcul des coûts d'une requête à l'aide du modèle ci-dessus :
-
-| Requête | Prix |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Application du modèle de coût
-
-Les modèles de coûts sont appliqués via l'Indexer CLI, qui les transmet à l'API de gestion de l'Indexer agent pour qu'ils soient stockés dans la base de données. L'Indexer Service les récupère ensuite et fournit les modèles de coûts aux passerelles chaque fois qu'elles le demandent.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interaction avec le réseau
### Staker dans le protocol
diff --git a/website/src/pages/fr/indexing/tap.mdx b/website/src/pages/fr/indexing/tap.mdx
index 68a0b79a2e6f..1986db8769ee 100644
--- a/website/src/pages/fr/indexing/tap.mdx
+++ b/website/src/pages/fr/indexing/tap.mdx
@@ -45,19 +45,19 @@ Tant que vous exécutez `tap-agent` et `indexer-agent`, tout sera exécuté auto
### Contrats
-| Contrat | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) |
-| --- | --- | --- |
-| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` |
-| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
-| Tiers de confiance (Escrow) | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |
+| Contrat | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) |
+| ---------------------------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` |
+| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
+| Tiers de confiance (Escrow) | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |
### Passerelle (Gateway)
-| Composant | Mainnet Node et Edge (Arbitrum Mainnet) | Testnet Node et Edge (Arbitrum Mainnet) |
-| ----------- | --------------------------------------------- | --------------------------------------------- |
-| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
-| Signataires | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
-| Aggregateur | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+| Composant | Mainnet Node et Edge (Arbitrum Mainnet) | Testnet Node et Edge (Arbitrum Mainnet) |
+| -------------- | --------------------------------------------- | --------------------------------------------- |
+| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| Signataires | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| Aggregateur | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
### Prérequis
diff --git a/website/src/pages/fr/indexing/tooling/graph-node.mdx b/website/src/pages/fr/indexing/tooling/graph-node.mdx
index ea35e2fb9680..778c72488582 100644
--- a/website/src/pages/fr/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/fr/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### Nœuds IPFS
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Serveur de métriques Prometheus
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Bien démarrer avec Kubernetes
@@ -77,15 +77,20 @@ Un exemple complet de configuration Kubernetes se trouve dans le [dépôt d'Inde
Lorsqu'il est en cours d'exécution, Graph Node expose les ports suivants :
-| Port | Objectif | Routes | Argument CLI | Variable d'Environment |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (pour gérer les déploiements) | / | \--admin-port | - |
-| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - |
-| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - |
-
-> **Important** : Soyez prudent lorsque vous exposez des ports publiquement - les **ports d'administration** doivent être verrouillés. Ceci inclut l'endpoint JSON-RPC de Graph Node.
+| Port | Objectif | Routes | Argument CLI | Variable d'Environment |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (pour gérer les déploiements) | / | \--admin-port | - |
+| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - |
+| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Configuration avancée du nœud graph
@@ -330,7 +335,7 @@ Les tables de base de données qui stockent les entités semblent généralement
Pour les tables de type compte, `graph-node` peut générer des requêtes qui tirent parti des détails de la façon dont Postgres stocke les données avec un taux de changement aussi élevé, à savoir que toutes les versions des blocs récents se trouvent dans une petite sous-section du stockage global d'une telle table.
-La commande `graphman stats show montre, pour chaque type/table d'entité dans un déploiement, combien d'entités distinctes, et combien de versions d'entités chaque table contient. Ces données sont basées sur des estimations internes à Postgres, et sont donc nécessairement imprécises, et peuvent être erronées d'un ordre de grandeur. Un `-1` dans la colonne `entités` signifie que Postgres pense que toutes les lignes contiennent une entité distincte.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
En général, les tables où le nombre d'entités distinctes est inférieur à 1% du nombre total de versions de lignes/d'entités sont de bons candidats pour l'optimisation de type compte. Lorsque la sortie de `graphman stats show` indique qu'une table pourrait bénéficier de cette optimisation, l'exécution de `graphman stats show ` effectuera un comptage complet de la table - ce qui peut être lent, mais donne une mesure précise du ratio d'entités distinctes par rapport au nombre total de versions d'entités.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> Il s'agit d'une nouvelle fonctionnalité qui sera disponible dans Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/fr/resources/claude-mcp.mdx b/website/src/pages/fr/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..ef0b3c7a0d43
--- /dev/null
+++ b/website/src/pages/fr/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
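
As a quick sanity check before restarting Claude Desktop, the shape of the config above can be verified programmatically. This is an illustrative sketch, not official tooling: the server names and `env` keys mirror the example config, and the placeholder-detection logic is an assumption.

```typescript
// Sketch: detect placeholder API keys left in an MCP config object.
// The object mirrors the example claude_desktop_config.json above.
const config = {
  mcpServers: {
    "token-api": {
      command: "npx",
      args: ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      env: { ACCESS_TOKEN: "ACCESS_TOKEN" },
    },
    subgraph: {
      command: "npx",
      args: [
        "mcp-remote",
        "--header",
        "Authorization:${AUTH_HEADER}",
        "https://subgraphs.mcp.thegraph.com/sse",
      ],
      env: { AUTH_HEADER: "Bearer GATEWAY_API_KEY" },
    },
  },
};

// Placeholders that must be replaced with real keys (see step 3).
const placeholders = ["ACCESS_TOKEN", "GATEWAY_API_KEY"];
const unreplaced: string[] = [];
for (const [name, server] of Object.entries(config.mcpServers)) {
  for (const value of Object.values(server.env)) {
    if (placeholders.some((p) => (value as string).includes(p))) {
      unreplaced.push(name);
    }
  }
}

console.log(`servers: ${Object.keys(config.mcpServers).length}`);
console.log(`entries still using placeholder keys: ${unreplaced.join(", ")}`);
```

If both server names are reported, the API keys from step 3 still need to be filled in.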
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+#### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+#### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
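
For example, applying tip 3 to the `token-api` server from step 2, the `args` line would become the following (a sketch; it assumes the flag and its value are separate array entries, matching the `--verbose true` form described above):

```json
"args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"]
```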
diff --git a/website/src/pages/fr/subgraphs/_meta-titles.json b/website/src/pages/fr/subgraphs/_meta-titles.json
index e10948c648a1..d4c4fd0c68b3 100644
--- a/website/src/pages/fr/subgraphs/_meta-titles.json
+++ b/website/src/pages/fr/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "Guides pratiques",
- "best-practices": "Les meilleures pratiques"
+ "best-practices": "Les meilleures pratiques",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx
index 5992294de057..b64f4462d9d3 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx
@@ -8,11 +8,11 @@ Ajouter et mettre en œuvre des fonctionnalités avancées de subgraph pour amé
Starting from `specVersion` `0.0.4`, Subgraph features must be declared explicitly in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
-| Fonctionnalité | Nom |
-| --------------------------------------------------------- | ---------------- |
-| [Erreurs non fatales](#non-fatal-errors) | `nonFatalErrors` |
-| [Recherche plein texte](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [Greffage](#grafting-onto-existing-subgraphs) | `grafting` |
+| Feature                                                   | Name             |
+| --------------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors)                     | `nonFatalErrors` |
+| [Full-text search](#defining-fulltext-search-fields)      | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs)             | `grafting`       |
For example, if a Subgraph uses the **Full-Text Search** and **Non-fatal Errors** features, the `features` field in the manifest should be:
diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index e1411a2c1465..04cb4a8d91ec 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx
index 90bc58c98943..63c7591d8398 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ La bibliothèque `@graphprotocol/graph-ts` fournit les API suivantes :
The `apiVersion` in the Subgraph manifest specifies the mapping API version that Graph Node runs for a given Subgraph.
-| Version | Notes de version |
-| :-: | --- |
-| 0.0.9 | Ajout de nouvelles fonctions hôtes [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Ajout de la validation pour l'existence des champs dans le schéma lors de l'enregistrement d'une entité. |
-| 0.0.7 | Ajout des classes `TransactionReceipt` et `Log`aux types Ethereum Ajout du champ `receipt` à l'objet Ethereum Event |
-| 0.0.6 | Ajout du champ `nonce` à l'objet Ethereum Transaction Ajout de `baseFeePerGas` à l'objet Ethereum Block |
-| 0.0.5 | AssemblyScript mis à jour vers la version 0.19.10 (cela inclut des changements de rupture, veuillez consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renommé en `ethereum.transaction.gasLimit` |
-| 0.0.4 | Ajout du champ `functionSignature` à l'objet Ethereum SmartContractCall |
-| 0.0.3 | Ajout du champ `from` à l'objet Ethereum Call `ethereum.call.address` renommé en `ethereum.call.to` |
-| 0.0.2 | Ajout du champ `input` à l'objet Ethereum Transaction |
+| Version | Release notes |
+| :-----: | ------------- |
+|  0.0.9  | Added new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+|  0.0.8  | Added validation for the existence of fields in the schema when saving an entity. |
+|  0.0.7  | Added `TransactionReceipt` and `Log` classes to the Ethereum types; added `receipt` field to the Ethereum Event object |
+|  0.0.6  | Added `nonce` field to the Ethereum Transaction object; added `baseFeePerGas` to the Ethereum Block object |
+|  0.0.5  | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)); `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+|  0.0.4  | Added `functionSignature` field to the Ethereum SmartContractCall object |
+|  0.0.3  | Added `from` field to the Ethereum Call object; `ethereum.call.address` renamed to `ethereum.call.to` |
+|  0.0.2  | Added `input` field to the Ethereum Transaction object |
### Built-in Types
@@ -770,44 +770,44 @@ Lorsque le type d'une valeur est certain, il peut être converti en un [type int
### Type Conversion Reference
-| Source(s) | Destination | Fonctions de conversion |
-| -------------------- | -------------------- | ---------------------------- |
-| Address | Bytes | aucune |
-| Address | String | s.toHexString() |
-| BigDecimal | String | s.toString() |
-| BigInt | BigDecimal | s.toBigDecimal() |
-| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() |
-| BigInt | String (unicode) | s.toString() |
-| BigInt | i32 | s.toI32() |
-| Boolean | Boolean | aucune |
-| Bytes (signé) | BigInt | BigInt.fromSignedBytes(s) |
-| Bytes (non signé) | BigInt | BigInt.fromUnsignedBytes(s) |
-| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() |
-| Bytes | String (unicode) | s.toString() |
-| Bytes | String (base58) | s.toBase58() |
-| Bytes | i32 | s.toI32() |
-| Bytes | u32 | s.toU32() |
-| Bytes | JSON | json.fromBytes(s) |
-| int8 | i32 | aucune |
-| int32 | i32 | aucune |
-| int32 | BigInt | BigInt.fromI32(s) |
-| uint24 | i32 | aucune |
-| int64 - int256 | BigInt | aucune |
-| uint32 - uint256 | BigInt | aucune |
-| JSON | boolean | s.toBool() |
-| JSON | i64 | s.toI64() |
-| JSON | u64 | s.toU64() |
-| JSON | f64 | s.toF64() |
-| JSON | BigInt | s.toBigInt() |
-| JSON | string | s.toString() |
-| JSON | Array | s.toArray() |
-| JSON | Object | s.toObject() |
-| String | Address | Address.fromString(s) |
-| Bytes | Address | Address.fromBytes(s) |
-| String | BigInt | BigInt.fromString(s) |
-| String | BigDecimal | BigDecimal.fromString(s) |
-| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
-| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
+| Source(s)            | Destination          | Conversion function(s)       |
+| -------------------- | -------------------- | ---------------------------- |
+| Address              | Bytes                | none                         |
+| Address              | String               | s.toHexString()              |
+| BigDecimal           | String               | s.toString()                 |
+| BigInt               | BigDecimal           | s.toBigDecimal()             |
+| BigInt               | String (hexadecimal) | s.toHexString() or s.toHex() |
+| BigInt               | String (unicode)     | s.toString()                 |
+| BigInt               | i32                  | s.toI32()                    |
+| Boolean              | Boolean              | none                         |
+| Bytes (signed)       | BigInt               | BigInt.fromSignedBytes(s)    |
+| Bytes (unsigned)     | BigInt               | BigInt.fromUnsignedBytes(s)  |
+| Bytes                | String (hexadecimal) | s.toHexString() or s.toHex() |
+| Bytes                | String (unicode)     | s.toString()                 |
+| Bytes                | String (base58)      | s.toBase58()                 |
+| Bytes                | i32                  | s.toI32()                    |
+| Bytes                | u32                  | s.toU32()                    |
+| Bytes                | JSON                 | json.fromBytes(s)            |
+| int8                 | i32                  | none                         |
+| int32                | i32                  | none                         |
+| int32                | BigInt               | BigInt.fromI32(s)            |
+| uint24               | i32                  | none                         |
+| int64 - int256       | BigInt               | none                         |
+| uint32 - uint256     | BigInt               | none                         |
+| JSON                 | boolean              | s.toBool()                   |
+| JSON                 | i64                  | s.toI64()                    |
+| JSON                 | u64                  | s.toU64()                    |
+| JSON                 | f64                  | s.toF64()                    |
+| JSON                 | BigInt               | s.toBigInt()                 |
+| JSON                 | string               | s.toString()                 |
+| JSON                 | Array                | s.toArray()                  |
+| JSON                 | Object               | s.toObject()                 |
+| String               | Address              | Address.fromString(s)        |
+| Bytes                | Address              | Address.fromBytes(s)         |
+| String               | BigInt               | BigInt.fromString(s)         |
+| String               | BigDecimal           | BigDecimal.fromString(s)     |
+| String (hexadecimal) | Bytes                | ByteArray.fromHexString(s)   |
+| String (UTF-8)       | Bytes                | ByteArray.fromUTF8(s)        |
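
To make the table concrete, here is a short illustrative snippet of AssemblyScript mapping code; the values are hypothetical, and each call corresponds to a row above:

```typescript
import { Address, BigInt, ByteArray } from '@graphprotocol/graph-ts'

// String -> Address (row: Address.fromString(s))
let addr = Address.fromString('0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec')

// Address -> String (hexadecimal) (row: s.toHexString())
let hex = addr.toHexString()

// int32 -> BigInt, then BigInt -> String (unicode)
let total = BigInt.fromI32(42).toString()

// String (hexadecimal) -> ByteArray (row: ByteArray.fromHexString(s))
let raw = ByteArray.fromHexString('0x1234')
```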
### Data Source Metadata
diff --git a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx
index 247c5e721c94..2e161787acff 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Commencez le processus et construisez un subgraph qui correspond à vos besoins
Explore more [API resources](/subgraphs/developing/creating/graph-ts/README/) and run local tests with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Version | Notes de version |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supporte la fonctionnalité [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) pour élaguer les subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| Version | Notes de version |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0   | Supports the [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx
index 61b209325211..44f1d8adb180 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -1216,8 +1216,8 @@ type TokenLockMetadata @entity {
##### Example handler
```typescript
-export function handleMetadata(content: Bytes): void {
- // dataSource.stringParams() renvoie le CID du fichier de la source de données
+export function handleMetadata(content: Bytes): void {
+  // dataSource.stringParam() returns the file CID of the data source
  // stringParam() will be mocked in the handler test
  // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files
let tokenMetadata = new TokenLockMetadata(dataSource.stringParam())
diff --git a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx
index 2916c6fa07ad..5e4b27c8b13b 100644
--- a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx
@@ -211,7 +211,7 @@ Chaque Subgraph concerné par cette politique a la possibilité de rétablir la
If a Subgraph syncs successfully, that is a good sign that it will keep running well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition, or it may start to fall behind due to performance issues or problems with the node operators.
-Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre subgraph. Sur le service hébergé, il est disponible à `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de ce point d'accès peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie le statut de la version actuelle d'un subgraph :
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 4582f8643eb7..e1a4559b28bf 100644
--- a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ Après avoir exécuté cette commande, la CLI demandera une étiquette de versio
After deployment, you can test your Subgraph (either in Subgraph Studio or in your own app, using the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish your Subgraph
diff --git a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 88b91fcd179c..3d936960605a 100644
--- a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -47,18 +47,18 @@ Depuis la version 0.73.0, vous pouvez également publier votre Subgraph avec [`g
You can upload your Subgraph to a specific IPFS node and further customize your deployment with the following flags:
```
-UTILISATION
+USAGE
$ graph publish [SUBGRAPH-MANIFEST] [-h] [--protocol-network arbitrum-one|arbitrum-sepolia --subgraph-id ] [-i ] [--ipfs-hash ] [--webapp-url
]
FLAGS
- -h, --help Affiche l'aide CLI.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Charge les résultats du build sur un noeud IPFS.
- --ipfs-hash= hash IPFS du manifeste du subgraph à déployer.
- --protocol-network= [default: arbitrum-one] Le réseau à utiliser pour le déploiement du subgraph.
+ -h, --help Show CLI help.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
+ --ipfs-hash= IPFS hash of the subgraph manifest to deploy.
+ --protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
- --subgraph-id= ID du subgraph vers lequel publier.
- --webapp-url= [default: https://cli.thegraph.com/publish] URL de l'interface web que vous souhaitez utiliser pour le déploiement.
+ --subgraph-id= Subgraph ID to publish to.
+ --webapp-url= [default: https://cli.thegraph.com/publish] URL of the web UI you want to use to deploy.
```
diff --git a/website/src/pages/fr/subgraphs/explorer.mdx b/website/src/pages/fr/subgraphs/explorer.mdx
index 7a7cf7e972db..f1a59e1c9c43 100644
--- a/website/src/pages/fr/subgraphs/explorer.mdx
+++ b/website/src/pages/fr/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Découvrez le monde des subgraphs et des données de réseau avec [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) and take full advantage of its core features.
## Overview
-Graph Explorer se compose de plusieurs parties où vous pouvez interagir avec les [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [déléguer](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engager les [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), voir les [informations sur le réseau](https://thegraph.com/explorer/network?chain=arbitrum-one) et accéder à votre profil d'utilisateur.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## À l'intérieur de l'Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-Vous trouverez ci-dessous une liste de toutes les fonctionnalités clés de Graph Explorer. Pour obtenir une assistance supplémentaire, vous pouvez regarder le [guide vidéo de Graph Explorer](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Page des subgraphs
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+  > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-Après avoir déployé et publié votre subgraph dans Subgraph Studio, allez sur [Graph Explorer](https://thegraph.com/explorer) et cliquez sur le lien "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" dans la barre de navigation pour accéder à ce qui suit :
+## Navigating Graph Explorer
-- Vos propres subgraphs finis
-- Les subgraphs publiés par d'autres
-- Le Subgraph exact que vous souhaitez (sur la base de la date de création, de la quantité de signal ou du nom).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-Lorsque vous cliquez sur un subgraph, vous pouvez effectuer les opérations suivantes :
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Tester des requêtes dans le l'environnement de test et utiliser les détails du réseau pour prendre des décisions éclairées.
-- Signalez des GRT sur votre propre subgraph ou sur les subgraphs d'autres personnes afin de sensibiliser les Indexeurs sur son importance et sa qualité.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, specific categories (such as DeFi, NFTs, and DAOs), and **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - Ce point est essentiel, car le fait de signaler un subgraph incite à être indexé, ce qui signifie qu'il finira par faire surface sur le réseau pour répondre aux requêtes.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and use network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph's dedicated page, you can:
-- Signaler/Dé-signaler sur les subgraphs
-- Afficher plus de détails tels que des graphs, l'ID de déploiement actuel et d'autres métadonnées
-- Passer d'une version à l'autre pour explorer les itérations passées du subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that index a given Subgraph
- Subgraph stats (allocations, Curators, etc.)
-- Afficher l'entité qui a publié le subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Page de Délégué
+### Step 2. Delegate GRT
-Sur la [page de Délégué](https://thegraph.com/explorer/delegate?chain=arbitrum-one), vous trouverez des informations sur la délégation, l'acquisition de GRT et le choix d'un Indexeur.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-Sur cette page, vous pouvez voir les éléments suivants :
+Here, you can:
-- Indexeurs ayant perçu le plus de frais de requête
-- Indexeurs avec l'APR estimé le plus élevé
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-En outre, vous pouvez calculer votre retour sur investissement et rechercher les meilleurs Indexeurs par nom, adresse ou subgraph.
+### Step 3. Monitor Participants in the Network
-### Page des participants
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-Cette page offre une vue d'ensemble de tous les "participants," c'est-à-dire de toutes les personnes qui participent au réseau, telles que les Indexeurs, les Déléguateurs et les Curateurs.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexeurs
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation d'un Indexeur, son staking, le montant qu'il a staké sur chaque subgraph et le revenu qu'il a tiré des frais de requête et des récompenses d'indexation.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
Specifics
-- Query Fee Cut - le % des frais de requête que l'Indexeur conserve lors de la répartition avec les Délégateurs.
-- Effective Reward Cut - la réduction des récompenses d'indexation appliquée au pool de délégation. Si elle est négative, cela signifie que l'Indexeur donne une partie de ses récompenses. Si elle est positive, cela signifie que l'Indexeur garde une partie de ses récompenses.
-- Cooldown Remaining - le temps restant avant que l'Indexeur puisse modifier les paramètres de délégation ci-dessus. Les périodes de cooldown sont définies par les Indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation.
-- Owned - Il s'agit du staking de l'Indexeur, qui peut être partiellement confisquée en cas de comportement malveillant ou incorrect.
-- Delegated - Le staking des Délégateurs qui peut être allouée par l'Indexeur, mais ne peut pas être confisquée.
-- Alloué - Le Staking que les Indexeurs allouent activement aux subgraphs qu'ils indexent.
-- Available Delegation Capacity - le staking délégué que les Indexeurs peuvent encore recevoir avant d'être sur-délégués.
-- Capacité de délégation maximale : montant maximum de participation déléguée que l'indexeur peut accepter de manière productive. Une mise déléguée excédentaire ne peut pas être utilisée pour le calcul des allocations ou des récompenses.
-- Query Fees - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un Indexeur au fil du temps.
-- Récompenses de l'indexeur - il s'agit du total des récompenses de l'indexeur gagnées par l'indexeur et ses délégués sur toute la durée. Les récompenses des indexeurs sont payées par l'émission de GRT.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
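The capacity metrics above can be made concrete with a small sketch (illustrative only, not protocol code; the 16x delegation ratio is a protocol parameter and is assumed here):

```python
# Illustrative sketch of the delegation capacity figures shown in Explorer.
# The 16x ratio is an assumed protocol parameter, not hardcoded truth.
DELEGATION_RATIO = 16

def max_delegation_capacity(owned_stake: float) -> float:
    # The most delegated stake an Indexer can productively accept.
    return owned_stake * DELEGATION_RATIO

def available_delegation_capacity(owned_stake: float, delegated: float) -> float:
    # Delegation beyond the maximum earns nothing, so available capacity
    # bottoms out at zero once an Indexer is over-delegated.
    return max(0.0, max_delegation_capacity(owned_stake) - delegated)

# An Indexer with 100k GRT self-stake can productively accept 1.6M GRT:
print(max_delegation_capacity(100_000))                   # 1600000
print(available_delegation_capacity(100_000, 1_200_000))  # 400000
```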
Les Indexeurs peuvent gagner à la fois des frais de requête et des récompenses d'indexation. Fonctionnellement, cela se produit lorsque les participants au réseau délèguent des GRT à un Indexeur. Cela permet aux Indexeurs de recevoir des frais de requête et des récompenses en fonction de leurs paramètres d'Indexeur.
@@ -86,9 +106,9 @@ Les Indexeurs peuvent gagner à la fois des frais de requête et des récompense
Pour en savoir plus sur la façon de devenir Indexeur, vous pouvez consulter la [documentation officielle](/indexing/overview/) ou les [guides de l'Indexeur de The Graph Academy](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Curateurs
+#### Curators
Les Curateurs analysent les subgraphs afin d'identifier ceux qui sont de la plus haute qualité. Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en signalant sa courbe de liaison. Ce faisant, les Curateurs indiquent aux Indexeurs quels subgraphs sont de haute qualité et devraient être indexés.
@@ -102,11 +122,11 @@ Dans le tableau des Curateurs ci-dessous, vous pouvez voir :
- Le nombre de GRT déposés
- Nombre d'actions détenues par un curateur
-
+
Si vous souhaitez en savoir plus sur le rôle de Curateur, vous pouvez consulter la [documentation officielle](/resources/roles/curating/) ou [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Délégués
+#### Delegators
Les Délégateurs jouent un rôle clé dans le maintien de la sécurité et de la décentralisation de The Graph Network. Ils participent au réseau en déléguant (c'est-à-dire en "stakant") des jetons GRT à un ou plusieurs Indexeurs.
@@ -114,7 +134,7 @@ Les Délégateurs jouent un rôle clé dans le maintien de la sécurité et de l
- Les Délégateurs sélectionnent leurs Indexeurs selon divers critères, telles que les performances passées, les taux de récompense d'indexation et le partage des frais.
- La réputation au sein de la communauté peut également jouer un rôle dans le processus de sélection. Il est recommandé d'entrer en contact avec les Indexeurs sélectionnés via le [Discord de The Graph](https://discord.gg/graphprotocol) ou le [Forum de The Graph] (https://forum.thegraph.com/).
-
+
Dans le tableau des Délégateurs, vous pouvez voir les Délégateurs actifs dans la communauté et les métriques importantes :
@@ -127,9 +147,9 @@ Dans le tableau des Délégateurs, vous pouvez voir les Délégateurs actifs dan
Si vous souhaitez en savoir plus sur la façon de devenir Déléguateur, consultez la [documentation officielle](/resources/roles/delegating/delegating/) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Page de réseau
+### Step 4. Analyze Network Performance
-Sur cette page, vous pouvez voir les KPIs globaux et avoir la possibilité de passer à une base par époque et d'analyser les métriques du réseau plus en détail. Ces détails vous donneront une idée des performances du réseau au fil du temps.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Aperçu
@@ -147,7 +167,7 @@ Quelques détails clés à noter :
- **Les frais de requête représentent les frais générés par les consommateurs**. Ils peuvent être réclamés (ou non) par les Indexeurs après une période d'au moins 7 époques (voir ci-dessous) après que leurs allocations vers les subgraphs ont été clôturées et que les données qu'ils ont servies ont été validées par les consommateurs.
- **Les récompenses d'indexation représentent le montant des récompenses que les Indexeurs ont réclamé de l'émission du réseau au cours de l'époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont mintées qu'une fois que les Indexeurs ont fermé leurs allocations vers les subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (par exemple, au cours de certaines époques, les Indexeurs peuvent avoir fermé collectivement des attributions qui étaient ouvertes depuis plusieurs jours).
-
+
#### Époques
@@ -161,69 +181,77 @@ Dans la section Époques, vous pouvez analyser, époque par époque, des métriq
- Les époques de distribution sont les époques au cours desquelles les canaux d'État pour les époques sont réglés et les indexeurs peuvent réclamer leurs remises sur les frais de requête.
- Les époques finalisées sont les époques qui n'ont plus de remboursements de frais de requête à réclamer par les Indexeurs.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Votre profil d'utilisateur
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Votre profil personnel est l'endroit où vous pouvez voir votre activité sur le réseau, quel que soit votre rôle sur le réseau. Votre portefeuille crypto agira comme votre profil utilisateur, et avec le tableau de bord utilisateur, vous pourrez voir les onglets suivants :
+### Step 2. Explore the Tabs
-### Aperçu du profil
+#### Profile Overview
Dans cette section, vous pouvez voir ce qui suit :
-- Toutes les actions en cours que vous avez effectuées.
-- Les informations de votre profil, description et site web (si vous en avez ajouté un).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, and delegated stake
-
+
-### Onglet Subgraphs
+#### Subgraphs Tab
-Dans l'onglet Subgraphs, vous verrez les subgraphs publiés.
+The Subgraphs tab displays all your published Subgraphs.
-> Cela n'inclut pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils seront publiés sur le réseau décentralisé.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Onglet Indexation
+#### Indexing Tab
-Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vers les subgraphs. Vous trouverez également des graphiques qui vous permettront de voir et d'analyser vos performances passées en tant qu'Indexeur.
+> If you haven't indexed yet, you will see links to stake toward indexing Subgraphs and to browse Subgraphs on Graph Explorer.
-Cette section comprendra également des détails sur vos récompenses nettes d'indexeur et vos frais de requête nets. Vous verrez les métriques suivantes :
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Participation déléguée - la participation des délégués qui peut être allouée par vous mais ne peut pas être réduite
-- Total des frais de requête - le total des frais payés par les utilisateurs pour les requêtes que vous leur avez adressées au fil du temps
-- Récompenses de l'indexeur - le montant total des récompenses de l'indexeur que vous avez reçues, en GRT
-- Réduction des frais : % de remise sur les frais de requête que vous conserverez lors de votre séparation avec les délégués
-- Récompenses de l'indexeur - le montant total des récompenses de l'indexeur que vous avez reçues, en GRT
-- Possédé : votre mise déposée, qui pourrait être réduite en cas de comportement malveillant ou incorrect
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
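The cut parameters above can be read as a simple percentage split; a minimal illustrative sketch (names are hypothetical, not protocol code):

```python
# Hypothetical illustration of how a Fee Cut / Rewards Cut percentage
# splits value between an Indexer and their Delegators.
def split(amount: float, cut_pct: float) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a given cut percentage."""
    indexer_share = amount * cut_pct / 100
    return indexer_share, amount - indexer_share

# With a 10% query fee cut, 1,000 GRT of fees splits 100 / 900:
print(split(1_000, 10))  # (100.0, 900.0)
```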
-### Onglet Délégation
+
-Les Délégateurs sont importants pour The Graph Network. Ils doivent utiliser leurs connaissances pour choisir un Indexeur qui fournira un bon rendement sur les récompenses.
+#### Delegation Tab
-Dans l'onglet Délégateurs, vous pouvez trouver les détails de vos délégations actives et historiques, ainsi que les métriques des Indexeurs vers lesquels vous avez délégué.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-Dans la première moitié de la page, vous pouvez voir votre diagramme de délégation, ainsi que le diagramme des récompenses uniquement. À gauche, vous pouvez voir les indicateurs clés de performance qui reflètent vos paramètres de délégation actuels.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-Les métriques du délégué que vous verrez ici dans cet onglet incluent :
+Top Section:
-- Récompenses totales de la délégation
-- Récompenses totales non réalisées
-- Récompenses totales réalisées
+- View delegation and rewards-only charts
+- Track key metrics:
+  - Total Delegation Rewards
+  - Unrealized Rewards
+  - Realized Rewards
-Dans la seconde moitié de la page, vous avez le tableau des délégations. Ici, vous pouvez voir les indexeurs auxquels vous avez délégué, ainsi que leurs détails (tels que les réductions de récompenses, le temps de recharge, etc.).
+Bottom Section:
-Les boutons situés à droite du tableau vous permettent de gérer votre délégation - déléguer davantage, dé-déléguer ou retirer votre délégation après la période de dégel.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Gardez à l'esprit que ce graph peut être parcouru horizontalement, donc si vous le faites défiler jusqu'à la droite, vous pouvez également voir le statut de votre délégation (en cours de délégation, non-déléguée, en cours de retrait).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Onglet Conservation
+#### Curation Tab
-Dans l'onglet Curation, vous trouverez tous les subgraphs que vous signalez (ce qui vous permet de recevoir des frais de requête). La signalisation permet aux Curateurs d'indiquer aux Indexeurs les subgraphs qui ont de la valeur et qui sont dignes de confiance, signalant ainsi qu'ils doivent être indexés.
+The Curation tab displays all the Subgraphs you’re signaling on (which enables you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.
Dans cet onglet, vous trouverez un aperçu de :
@@ -232,22 +260,22 @@ Dans cet onglet, vous trouverez un aperçu de :
- Récompenses pour les requêtes par subgraph
- Détails mis à jour
-
+
-### Paramètres de votre profil
+#### Your Profile Settings
Dans votre profil utilisateur, vous pourrez gérer les détails de votre profil personnel (comme la configuration d'un nom ENS). Si vous êtes un indexeur, vous avez encore plus accès aux paramètres à portée de main. Dans votre profil utilisateur, vous pourrez configurer vos paramètres et opérateurs de délégation.
- Les opérateurs effectuent des actions limitées dans le protocole au nom de l'indexeur, telles que l'ouverture et la clôture des allocations. Les opérateurs sont généralement d'autres adresses Ethereum, distinctes de leur portefeuille de jalonnement, avec un accès sécurisé au réseau que les indexeurs peuvent définir personnellement
- Les paramètres de délégation vous permettent de contrôler la répartition des GRT entre vous et vos délégués.
-
+
En tant que portail officiel dans le monde des données décentralisées, Graph Explorer vous permet de prendre diverses actions, quel que soit votre rôle dans le réseau. Vous pouvez accéder aux paramètres de votre profil en ouvrant le menu déroulant à côté de votre adresse, puis en cliquant sur le bouton Paramètres.

-## Ressources supplémentaires
+### Additional Resources
### Guide Vidéo
diff --git a/website/src/pages/fr/subgraphs/fair-use-policy.mdx b/website/src/pages/fr/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..df3d0b2187b2
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
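Pruning is typically enabled via the `indexerHints` section of the Subgraph manifest; a minimal sketch (see the pruning guide linked above for the full set of options):

```yaml
# subgraph.yaml (excerpt): keep only recent history instead of the full
# entity change log, which can significantly reduce storage
indexerHints:
  prune: auto # or `never`, or a number of blocks of history to retain
```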
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged, or you have a unique use case (e.g. an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/fr/subgraphs/guides/near.mdx b/website/src/pages/fr/subgraphs/guides/near.mdx
index 71baadc8ba82..8d4845dd630b 100644
--- a/website/src/pages/fr/subgraphs/guides/near.mdx
+++ b/website/src/pages/fr/subgraphs/guides/near.mdx
@@ -77,12 +77,12 @@ dataSources:
```yaml
accounts:
- préfixes:
- - application
- - bien
- suffixes:
- - matin.près
- - matin.testnet
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
```
Les fichiers de données NEAR prennent en charge deux types de gestionnaires :
@@ -185,8 +185,8 @@ Pour commencer, la première étape consiste à "créer" votre subgraph, ce qui
Une fois votre subgraph créé, vous pouvez le déployer en utilisant la commande CLI `graph deploy` :
```sh
-$ graph create --node # crée un subgrpah sur un Graph Node local (sur Subgraph Studio, cela se fait via l'interface utilisateur)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # upload les fichiers de build vers un endpoint IPFS spécifié, puis déploie le subgraph vers un Graph Node spécifié sur la base du hash IPFS du manifeste
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
La configuration des nœuds dépend de l'endroit où le subgraph est déployé.
@@ -258,8 +258,8 @@ If an `account` is specified, that will only match the exact account name. It is
```yaml
accounts:
- suffixes:
- - mintbase1.near
+  suffixes:
+    - mintbase1.near
```
### Les subgraphs NEAR peuvent-ils faire des appels de vue aux comptes NEAR pendant les mappages ?
diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx
index ccf1f043fcb1..00c4238f168a 100644
--- a/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ Alors que le subgraph source est un subgraph standard, le subgraph dépendant ut
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs.**
+- Composed Subgraphs can only use **datasources from the same chain.**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed from them cannot use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
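A composed Subgraph declares its sources as data sources of `kind: subgraph`. A minimal manifest sketch follows (field names mirror the composition examples in the release notes above; names and values such as `SourceA`, `apiVersion`, and the handler are illustrative):

```yaml
specVersion: 1.3.0
dataSources:
  - kind: subgraph # a source Subgraph, not an onchain data source
    name: SourceA
    network: mainnet
    source:
      address: 'Qm...' # deployment ID of the published source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - MirroredEntity
      handlers:
        - handler: handleSourceEntity # triggered when the source stores SourceEntity
          entity: SourceEntity
      file: ./src/mapping.ts
```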
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
diff --git a/website/src/pages/fr/subgraphs/mcp/claude.mdx b/website/src/pages/fr/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..2249aa840560
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prérequis
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get aggregate query counts over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
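For instance, a question about top-volume pairs might be translated into a query along these lines (entity and field names such as `pairs` and `volumeUSD` assume a Uniswap-v2-style schema and are illustrative; the actual query depends on the Subgraph's schema):

```graphql
{
  pairs(first: 5, orderBy: volumeUSD, orderDirection: desc) {
    id
    token0 { symbol }
    token1 { symbol }
    volumeUSD
  }
}
```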
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/fr/subgraphs/mcp/cline.mdx b/website/src/pages/fr/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..a97d7b267247
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prérequis
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/fr/subgraphs/mcp/cursor.mdx b/website/src/pages/fr/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..051be3d8fa6d
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prérequis
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/fr/subgraphs/querying/best-practices.mdx b/website/src/pages/fr/subgraphs/querying/best-practices.mdx
index 9b5bddd7d439..5c008cc2b427 100644
--- a/website/src/pages/fr/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/fr/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Bonnes pratiques d'interrogation
---
-The Graph offre un moyen décentralisé d'interroger les données des blockchains. Ses données sont exposées par le biais d'une API GraphQL, ce qui facilite l'interrogation avec le langage GraphQL.
-
-Apprenez les règles essentielles du langage GraphQL et les meilleures pratiques pour optimiser votre subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Apprenez les règles essentielles du langage GraphQL et les meilleures pratiques
### Anatomie d'une requête GraphQL
-Contrairement à l'API REST, une API GraphQL repose sur un schéma qui définit les requêtes qui peuvent être effectuées.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-Par exemple, une requête pour obtenir un jeton en utilisant la requête `token` ressemblera à ce qui suit :
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-qui retournera la réponse JSON prévisible suivante (\_en passant la bonne valeur de la variable `$id`):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ qui retournera la réponse JSON prévisible suivante (\_en passant la bonne vale
}
```
-Les requêtes GraphQL utilisent le langage GraphQL, qui est défini dans [une spécification](https://spec.graphql.org/).
-
La requête `GetToken` ci-dessus est composée de plusieurs parties de langage (remplacées ci-dessous par des espaces réservés `[...]`) :
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Règles d'écriture des requêtes GraphQL
+### Rules for Writing GraphQL Queries
-- Chaque `queryName` ne doit être utilisé qu'une seule fois par opération.
-- Chaque `champ` ne doit être utilisé qu'une seule fois dans une sélection (nous ne pouvons pas interroger `id` deux fois sous `token`)
-- Certains `champs` ou certaines requêtes (comme `tokens`) renvoient des types complexes qui nécessitent une sélection de sous-champs. Ne pas fournir de sélection quand cela est attendu (ou en fournir une quand cela n'est pas attendu - par exemple, sur `id`) lèvera une erreur. Pour connaître un type de champ, veuillez vous référer à [Graph Explorer] (/subgraphs/explorer/).
-- Toute variable affectée à un argument doit correspondre à son type.
-- Dans une liste de variables donnée, chacune d’elles doit être unique.
-- Toutes les variables définies doivent être utilisées.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Remarque : le non-respect de ces règles entraînera une erreur de la part de The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+   - For example, some `fields` or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when one is expected (or providing one when none is expected, for example on `id`) will raise an error. To know a field's type, please refer to [Graph Explorer](/subgraphs/explorer/).
+4. Each variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
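+
+As an illustration, the following query against a hypothetical token schema satisfies all five rules above (a unique operation name, each field selected once, sub-field selections on complex types, a matching variable type, and every declared variable used):
+
+```graphql
+query GetTokens($first: Int!) {
+  tokens(first: $first) {
+    id
+    owner {
+      id
+    }
+  }
+}
+```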
-Pour une liste complète des règles avec des exemples de code, consultez le [Guide des validations GraphQL](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### Envoi d'une requête à une API GraphQL
+### How to Send a Query to a GraphQL API
-GraphQL est un langage et un ensemble de conventions qui se transportent sur HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-Cela signifie que vous pouvez interroger une API GraphQL en utilisant le standard `fetch` (nativement ou via `@whatwg-node/fetch` ou `isomorphic-fetch`).
-
-Cependant, comme mentionné dans ["Interrogation à partir d'une application"](/subgraphs/querying/from-an-application/), il est recommandé d'utiliser `graph-client`, qui supporte les caractéristiques uniques suivantes :
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Traitement des subgraphs multi-chaînes : Interrogation de plusieurs subgraphs en une seule requête
- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Pagination automatique](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Résultat entièrement typé
-Voici comment interroger The Graph avec `graph-client` :
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-D'autres alternatives au client GraphQL sont abordées dans ["Requête à partir d'une application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Les meilleures pratiques
-### Écrivez toujours des requêtes statiques
+### 1. Always Write Static Queries
-Une (mauvaise) pratique courante consiste à construire dynamiquement des chaînes de requête comme suit :
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-Bien que l'extrait ci-dessus produise une requête GraphQL valide, **il présente de nombreux inconvénients** :
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g., linters or type generation tools).
-- cela rend **plus difficile la compréhension** de la requête dans son ensemble
-- les développeurs sont **responsables de l'assainissement de l'interpolation de la chaîne de caractères**
-- ne pas envoyer les valeurs des variables dans le cadre des paramètres de la requête **empêcher la mise en cache éventuelle côté serveur**
-- il **empêche les outils d'analyser statiquement la requête** (ex : Linter, ou les outils de génération de types)
+Instead, it's recommended to **always write queries as static strings**.
-C'est pourquoi il est recommandé de toujours écrire les requêtes sous forme de chaînes de caractères statiques :
+Example of a static query string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Cela présente de **nombreux avantages** :
+Static strings have several **key advantages**:
-- **Facile à lire et à entretenir** les requêtes
-- Le **serveur GraphQL s’occupe de la validation des variables**
-- **Les variables peuvent être mises en cache** au niveau du serveur
-- **Les requêtes peuvent être analysées statiquement par des outils** (plus d'informations à ce sujet dans les sections suivantes)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### Comment inclure des champs de manière conditionnelle dans des requêtes statiques
+### 2. Include Fields Conditionally in Static Queries
-Il se peut que vous souhaitiez inclure le champ `owner` uniquement pour une condition particulière.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-Pour cela, vous pouvez utiliser la directive `@include(if :...)` comme suit :
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using the `owner` field with the `@include(if: ...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
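+
+The opposite directive is `@skip(if: ...)`. A minimal sketch (the `$hideOwner` variable name is illustrative):
+
+```graphql
+query GetToken($id: ID!, $hideOwner: Boolean!) {
+  token(id: $id) {
+    id
+    owner @skip(if: $hideOwner)
+  }
+}
+```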
-> Note : La directive opposée est `@skip(if : ...)`.
-
-### Demandez ce que vous voulez
-
-GraphQL est devenu célèbre grâce à son slogan "Ask for what you want" (demandez ce que vous voulez).
+### 3. Ask Only For What You Want
-Pour cette raison, il n'existe aucun moyen, dans GraphQL, d'obtenir tous les champs disponibles sans avoir à les lister individuellement.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- Lorsque vous interrogez les API GraphQL, pensez toujours à interroger uniquement les champs qui seront réellement utilisés.
+- When querying GraphQL APIs, always query only the fields that will actually be used.
- Assurez-vous que les requêtes ne récupèrent que le nombre d'entités dont vous avez réellement besoin. Par défaut, les requêtes récupèrent 100 entités dans une collection, ce qui est généralement beaucoup plus que ce qui sera réellement utilisé, par exemple pour l'affichage à l'utilisateur. Cela s'applique non seulement aux collections de premier niveau d'une requête, mais plus encore aux collections imbriquées d'entités.
Par exemple, dans la requête suivante :
@@ -215,9 +212,9 @@ query listTokens {
La réponse pourrait contenir 100 transactions pour chacun des 100 jetons.
-Si l'application n'a besoin que de 10 transactions, la requête doit explicitement définir `first: 10` dans le champ transactions.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
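+
+Assuming the same illustrative schema, the trimmed query would look like this:
+
+```graphql
+query listTokens {
+  tokens(first: 100) {
+    id
+    transactions(first: 10) {
+      id
+    }
+  }
+}
+```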
-### Utiliser une seule requête pour demander plusieurs enregistrements
+### 4. Use a Single Query to Request Multiple Records
Par défaut, les subgraphs ont une entité singulière pour un enregistrement. Pour plusieurs enregistrements, utilisez les entités plurielles et le filtre : `where: {id_in:[X,Y,Z]}` ou `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combiner plusieurs requêtes en une seule
+### 5. Combine Multiple Queries in a Single Request
Votre application peut nécessiter l'interrogation de plusieurs types de données, comme suit :
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-Bien que cette mise en œuvre soit tout à fait valable, elle nécessitera deux allers-retours avec l'API GraphQL.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Heureusement, il est également possible d'envoyer plusieurs requêtes dans la même requête GraphQL, comme suit :
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-Cette approche **améliore les performances globales** en réduisant le temps passé sur le réseau (vous évite un aller-retour vers l'API) et fournit une **mise en œuvre plus concise**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Tirer parti des fragments GraphQL
+### 6. Leverage GraphQL Fragments
Une fonctionnalité utile pour écrire des requêtes GraphQL est GraphQL Fragment.
@@ -335,7 +332,7 @@ Ces champs répétés (`id`, `active`, `status`) posent de nombreux problèmes :
- Les requêtes plus longues deviennent plus difficiles à lire.
- Lorsque l'on utilise des outils qui génèrent des types TypeScript basés sur des requêtes (_plus d'informations à ce sujet dans la dernière section_), `newDelegate` et `oldDelegate` donneront lieu à deux interfaces inline distinctes.
-Une version remaniée de la requête serait la suivante :
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-L'utilisation de GraphQL `fragment` améliorera la lisibilité (en particulier à grande échelle) et permettra une meilleure génération de types TypeScript.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript type generation.
-Lorsque l'on utilise l'outil de génération de types, la requête ci-dessus génère un type `DelegateItemFragment` approprié (\_voir la dernière section "Outils").
+When using the type generation tool, the above query will generate a proper `DelegateItemFragment` type (_see the last "Tools" section_).
-### Bonnes pratiques et erreurs à éviter avec les fragments GraphQL
+## GraphQL Fragment Guidelines
-### La base du fragment doit être un type
+### Do's and Don'ts for Fragments
-Un fragment ne peut pas être basé sur un type non applicable, en bref, **sur un type n'ayant pas de champs** :
+1. Fragments cannot be based on a non-applicable type (a type without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` est un **scalaire** (type natif "plain" ) qui ne peut pas être utilisé comme base d'un fragment.
-
-#### Comment diffuser un fragment
-
-Les fragments sont définis pour des types spécifiques et doivent être utilisés en conséquence dans les requêtes.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
L'exemple:
@@ -388,7 +385,7 @@ query {
bondEvents {
id
newDelegate {
- ...VoteItem # Erreur ! `VoteItem` ne peut pas être étendu sur le type `Transcoder`
+ ...VoteItem # Error! `VoteItem` cannot be spread on `Transcoder` type
}
oldDelegate {
...VoteItem
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` et `oldDelegate` sont de type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
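+
+A corrected version spreads a fragment that is defined on the matching `Transcoder` type, as in the earlier `DelegateItem` example:
+
+```graphql
+query {
+  bondEvents {
+    id
+    newDelegate {
+      ...DelegateItem
+    }
+    oldDelegate {
+      ...DelegateItem
+    }
+  }
+}
+
+fragment DelegateItem on Transcoder {
+  id
+  active
+  status
+}
+```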
-Il n'est pas possible de diffuser un fragment de type `Vote` ici.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Définir Fragment comme une unité commerciale atomique de données
+---
-Les `Fragment` GraphQL doivent être définis en fonction de leur utilisation.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-Pour la plupart des cas d'utilisation, la définition d'un fragment par type (dans le cas de l'utilisation répétée de champs ou de la génération de types) est suffisante.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
-Voici une règle empirique pour l'utilisation des fragments :
+Here is a rule of thumb for using fragments:
-- Lorsque des champs de même type sont répétés dans une requête, ils sont regroupés dans un `Fragment`.
-- Lorsque des champs similaires mais différents se répètent, créer plusieurs fragments, par exemple :
+- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# fragment de base (utilisé principalement pour les listes)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## Les outils essentiels
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### Explorateurs Web GraphQL
+### Setting up Workflow and IDE Tools
-Itérer sur des requêtes en les exécutant dans votre application peut s'avérer fastidieux. Pour cette raison, n'hésitez pas à utiliser [Graph Explorer](https://thegraph.com/explorer) pour tester vos requêtes avant de les ajouter à votre application. Graph Explorer vous fournira un terrain de jeu GraphQL préconfiguré pour tester vos requêtes.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-Si vous recherchez un moyen plus souple de déboguer/tester vos requêtes, d'autres outils web similaires sont disponibles, tels que [Altair](https://altairgraphql.dev/) et [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### Linting GraphQL
+1. Install GraphQL ESLint
-Afin de respecter les meilleures pratiques et les règles syntaxiques mentionnées ci-dessus, il est fortement recommandé d'utiliser les outils de workflow et d'IDE suivants.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) vous aidera à rester au fait des meilleures pratiques GraphQL sans effort.
+This will enforce essential rules such as:
-[La configuration "opérations-recommandées"](https://the-guild.dev/graphql/eslint/docs/configs) permet d'appliquer des règles essentielles telles que:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type` : un champ est-il utilisé sur un type correct ?
-- `@graphql-eslint/no-unused variables` : une variable donnée doit-elle rester inutilisée ?
-- et plus !
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
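+
+A minimal configuration sketch for enabling this config (the exact file name and layout depend on your ESLint setup; treat this as an assumption and check the linked guide for authoritative instructions):
+
+```json
+{
+  "overrides": [
+    {
+      "files": ["*.graphql"],
+      "parser": "@graphql-eslint/eslint-plugin",
+      "plugins": ["@graphql-eslint"],
+      "extends": ["plugin:@graphql-eslint/operations-recommended"]
+    }
+  ]
+}
+```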
-Cela vous permettra de **récupérer les erreurs sans même tester les requêtes** sur le terrain de jeu ou les exécuter en production !
+#### Use IDE plugins
-### Plugins IDE
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode et GraphQL**
+1. VS Code
-L'[extension GraphQL VSCode] (https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) est un excellent complément à votre workflow de développement :
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Mise en évidence des syntaxes
- Suggestions d'auto-complétion
@@ -474,11 +484,11 @@ L'[extension GraphQL VSCode] (https://marketplace.visualstudio.com/items?itemNam
- Snippets
- Aller à la définition des fragments et des types d'entrée
-Si vous utilisez `graphql-eslint`, l'extension [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) est indispensable pour visualiser correctement les erreurs et les avertissements dans votre code.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to correctly visualize errors and warnings inline in your code.
-**WebStorm/Intellij et GraphQL**
+2. WebStorm/IntelliJ
-Le [JS GraphQL plugin] (https://plugins.jetbrains.com/plugin/8097-graphql/) améliorera considérablement votre expérience lorsque vous travaillez avec GraphQL en fournissant :
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Mise en évidence des syntaxes
- Suggestions d'auto-complétion
diff --git a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx
index ae81b6e5427c..28a3f51074fc 100644
--- a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: API GraphQL
---
-Découvrez l'API de requête GraphQL utilisée dans The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## Qu'est-ce que GraphQL ?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) est un langage d'interrogation pour les API et un moteur d'exécution pour l'exécution de ces requêtes avec vos données existantes. Le graphe utilise GraphQL pour interroger les subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-Pour comprendre le rôle plus important joué par GraphQL, consultez [développer](/subgraphs/developing/introduction/) et [créer un subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Requêtes avec GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
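+
+As a minimal sketch, a schema entity like the following (field names are illustrative) generates matching `token` and `tokens` fields on the root `Query` type:
+
+```graphql
+type Token @entity {
+  id: ID!
+  owner: Bytes!
+}
+```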
-Dans votre schéma Subgraph, vous définissez des types appelés `Entities`. Pour chaque type `Entity`, les champs `entity` et `entities` seront générés sur le type `Query` de premier niveau.
+## Query Structure
-> Note : `query` n'a pas besoin d'être inclus au début de la requête `graphql` lors de l'utilisation de The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Exemples
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Requête pour une seule entité `Token` définie dans votre schéma :
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Requête pour une seule entité `Token` définie dans votre schéma :
}
```
-> Note : Lors de l'interrogation d'une seule entité, le champ `id` est obligatoire et doit être écrit sous forme de chaîne de caractères.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Interroge toutes les entités `Token` :
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Interroge toutes les entités `Token` :
}
```
-### Tri
+### Sorting Example
-Lors de l'interrogation d'une collection, vous pouvez :
+Collection queries support the following sort parameters:
-- Utilisez le paramètre `orderBy` pour trier les données en fonction d'un attribut spécifique.
-- Utilisez `orderDirection` pour spécifier la direction du tri, `asc` pour ascendant ou `desc` pour descendant.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Exemple
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ Lors de l'interrogation d'une collection, vous pouvez :
}
```
-#### Exemple de tri d'entités imbriquées
-
-Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), les entités peuvent être triées sur la base des entités imbriquées.
-
-L'exemple suivant montre des jetons triés par le nom de leur propriétaire :
+#### Nested Entity Sorting Example
```graphql
{
@@ -79,20 +91,18 @@ L'exemple suivant montre des jetons triés par le nom de leur propriétaire :
}
```
-> Actuellement, vous pouvez trier par type `String` ou `ID` à "un" niveau de profondeur sur les champs `@entity` et `@derivedFrom`. Malheureusement, le [tri par interfaces sur des entités d'un niveau de profondeur] (https://github.com/graphprotocol/graph-node/pull/4058), le tri par champs qui sont des tableaux et des entités imbriquées n'est pas encore prit en charge.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Pagination
+### Pagination Example
-Lors de l'interrogation d'une collection, il est préférable de :
+When querying a collection, it is best to:
- Utilisez le paramètre `first` pour paginer à partir du début de la collection.
- L'ordre de tri par défaut est le tri par `ID` dans l'ordre alphanumérique croissant, **non** par heure de création.
- Utilisez le paramètre `skip` pour sauter des entités et paginer. Par exemple, `first:100` affiche les 100 premières entités et `first:100, skip:100` affiche les 100 entités suivantes.
- Évitez d'utiliser les valeurs `skip` dans les requêtes car elles sont généralement peu performantes. Pour récupérer un grand nombre d'éléments, il est préférable de parcourir les entités en fonction d'un attribut, comme indiqué dans l'exemple précédent.
-#### Exemple d'utilisation de `first`
-
-Interroger les 10 premiers tokens :
+#### Standard Pagination Example
```graphql
{
@@ -103,11 +113,7 @@ Interroger les 10 premiers tokens :
}
```
-Pour rechercher des groupes d'entités au milieu d'une collection, le paramètre `skip` peut être utilisé en conjonction avec le paramètre `first` pour sauter un nombre spécifié d'entités en commençant par le début de la collection.
-
-#### Exemple utilisant `first` et `skip`
-
-Interroger 10 entités `Token`, décalées de 10 places par rapport au début de la collection :
+#### Offset Pagination Example
```graphql
{
@@ -118,9 +124,7 @@ Interroger 10 entités `Token`, décalées de 10 places par rapport au début de
}
```
-#### Exemple utilisant `first` et `id_ge`
-
-Si un client a besoin de récupérer un grand nombre d'entités, il est plus performant de baser les requêtes sur un attribut et de filtrer par cet attribut. Par exemple, un client pourrait récupérer un grand nombre de jetons en utilisant cette requête :
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -131,16 +135,11 @@ query manyTokens($lastID: String) {
}
```
-La première fois, il enverra la requête avec `lastID = ""`, et pour les requêtes suivantes, il fixera `lastID` à l'attribut `id` de la dernière entité dans la requête précédente. Cette approche est nettement plus performante que l'utilisation de valeurs `skip` croissantes.
-
+On the first request, the client sends the query with `lastID = ""`. For subsequent requests, it sets `lastID` to the `id` attribute of the last entity in the previous result. This approach performs significantly better than using increasing `skip` values.
+
### Filtration
-- Vous pouvez utiliser le paramètre `where` dans vos requêtes pour filtrer les différentes propriétés.
-- Vous pouvez filtrer sur plusieurs valeurs dans le paramètre `where`.
-
-#### Exemple d'utilisation de `where`
+The `where` parameter filters entities based on specified conditions.
-Défis de la requête avec un résultat `failed` :
+#### Basic Filtering Example
```graphql
{
@@ -154,9 +153,7 @@ Défis de la requête avec un résultat `failed` :
}
```
-Vous pouvez utiliser des suffixes comme `_gt`, `_lte` pour comparer les valeurs :
-
-#### Exemple de filtrage de plage
+#### Numeric Comparison Example
```graphql
{
@@ -168,11 +165,7 @@ Vous pouvez utiliser des suffixes comme `_gt`, `_lte` pour comparer les valeurs
}
```
-#### Exemple de filtrage par bloc
-
-Vous pouvez également filtrer les entités qui ont été mises à jour dans ou après un bloc spécifié avec `_change_block(number_gte : Int)`.
-
-Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou encore, elle peut être utile pour étudier ou déboguer la façon dont les entités changent dans votre subgraph (si elle est combinée à un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique).
+#### Block-based Filtering Example
```graphql
{
@@ -184,11 +177,7 @@ Cela peut être utile si vous cherchez à récupérer uniquement les entités qu
}
```
-#### Exemple de filtrage d'entités imbriquées
-
-Le filtrage sur la base d'entités imbriquées est possible dans les champs avec le suffixe `_`.
-
-Cela peut être utile si vous souhaitez récupérer uniquement les entités dont les entités au niveau enfant remplissent les conditions fournies.
+#### Nested Entity Filtering Example
```graphql
{
@@ -202,11 +191,9 @@ Cela peut être utile si vous souhaitez récupérer uniquement les entités dont
}
```
-#### Opérateurs logiques
-
-Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), vous pouvez regrouper plusieurs paramètres dans le même argument `where` en utilisant les opérateurs `and` ou `or` pour filtrer les résultats en fonction de plusieurs critères.
+#### Logical Operators
-##### L'opérateur `AND`
+##### AND Operations Example
-L'exemple suivant filtre les défis avec `outcome` `succeeded` et `number` supérieur ou égal à `100`.
+The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -222,27 +209,11 @@ L'exemple suivant filtre les défis avec `outcome` `succeeded` et `number` supé
}
```
-> **Sucre syntaxique:** Vous pouvez simplifier la requête ci-dessus en supprimant l'opérateur \`and\`\` et en passant une sous-expression séparée par des virgules.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### L'opérateur `OR`
-
-L'exemple suivant filtre les défis avec `outcome` `succeeded` ou `number` supérieur ou égal à `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and by passing a sub-expression separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -252,52 +223,36 @@ L'exemple suivant filtre les défis avec `outcome` `succeeded` ou `number` supé
}
```
-> **Note** : Lors de l'élaboration des requêtes, il est important de prendre en compte l'impact sur les performances de l'utilisation de l'opérateur `or`. Si `or` peut être un outil utile pour élargir les résultats d'une recherche, il peut aussi avoir des coûts importants. L'un des principaux problèmes de l'opérateur `or` est qu'il peut ralentir les requêtes. En effet, `or` oblige la base de données à parcourir plusieurs index, ce qui peut prendre beaucoup de temps. Pour éviter ces problèmes, il est recommandé aux développeurs d'utiliser les opérateurs and au lieu de or chaque fois que cela est possible. Cela permet un filtrage plus précis et peut conduire à des requêtes plus rapides et plus précises.
-
-#### Tous les filtres
-
-Liste complète des suffixes de paramètres :
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Veuillez noter que certains suffixes ne sont supportés que pour des types spécifiques. Par exemple, `Boolean` ne supporte que `_not`, `_in`, et `_not_in`, mais `_` n'est disponible que pour les types objet et interface.
-En outre, les filtres globaux suivants sont disponibles en tant que partie de l'argument `where` :
+Global filter parameter:
```graphql
-_change_block(numéro_gte : Int)
+_change_block(number_gte: Int)
```
-### Interrogation des états précédents
+### Time-travel Queries Example
-Vous pouvez interroger l'état de vos entités non seulement pour le dernier bloc, ce qui est le cas par défaut, mais aussi pour un bloc arbitraire dans le passé. Le bloc auquel une requête doit se produire peut être spécifié soit par son numéro de bloc, soit par son hash de bloc, en incluant un argument `block` dans les champs de niveau supérieur des requêtes.
+Queries support historical state retrieval using the `block` parameter:
-Le résultat d'une telle requête ne changera pas au fil du temps, c'est-à-dire qu'une requête portant sur un certain bloc passé renverra le même résultat quel que soit le moment où elle est exécutée, à l'exception d'une requête portant sur un bloc très proche de la tête de la chaîne, dont le résultat pourrait changer s'il s'avérait que ce bloc ne figurait **pas** sur la chaîne principale et que la chaîne était réorganisée. Une fois qu'un bloc peut être considéré comme définitif, le résultat de la requête ne changera pas.
+- `number`: Integer block number
+- `hash`: String block hash
-> Remarque : l'implémentation actuelle est encore sujette à certaines limitations qui pourraient violer ces garanties. L'implémentation ne permet pas toujours de déterminer si un bloc donné n'est pas du tout sur la chaîne principale ou si le résultat d'une requête par bloc pour un bloc qui n'est pas encore considéré comme final peut être influencé par une réorganisation du bloc qui a lieu en même temps que la requête. Elles n'affectent pas les résultats des requêtes par hash de bloc lorsque le bloc est final et que l'on sait qu'il se trouve sur la chaîne principale. [Ce numéro](https://github.com/graphprotocol/graph-node/issues/1405) explique ces limitations en détail.
+> Note: The current implementation is still subject to certain limitations that could violate these guarantees. The implementation cannot always tell whether a given block is not on the main chain at all, or whether the result of a query by block number for a block that is not yet considered final could be affected by a chain reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains the limitations in detail.
-#### Exemple
+#### Block Number Query Example
```graphql
{
@@ -311,9 +266,7 @@ Le résultat d'une telle requête ne changera pas au fil du temps, c'est-à-dire
}
```
-Cette requête renverra les entités `Challenge` et les entités `Application` qui leur sont associées, telles qu'elles existaient directement après le traitement du bloc numéro 8 000 000.
-
-#### Exemple
+#### Block Hash Query Example
```graphql
{
@@ -327,28 +280,26 @@ Cette requête renverra les entités `Challenge` et les entités `Application` q
}
```
-Cette requête renverra les entités `Challenge`, et leurs entités `Application` associées, telles qu'elles existaient directement après le traitement du bloc avec le hash donné.
-
-### Requêtes de recherche en texte intégral
+### Full-Text Search Example
-Les champs de recherche intégralement en texte fournissent une API de recherche textuelle expressive qui peut être ajoutée au schéma du subgraph et personnalisée. Reportez-vous à [Définir des champs de recherche en texte intégral](/developing/creating-a-subgraph/#defining-fulltext-search-fields) pour ajouter la recherche intégralement en texte à votre subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Les requêtes de recherche en texte intégral comportent un champ obligatoire, `text`, pour fournir les termes de la recherche. Plusieurs opérateurs spéciaux de texte intégral peuvent être utilisés dans ce champ de recherche `text`.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Opérateurs de recherche en texte intégral :
+The `text` search field supports the following operators:
-| Symbole | Opérateur | Description |
-| --- | --- | --- |
-| `&` | `And` | Pour combiner plusieurs termes de recherche dans un filtre pour les entités incluant tous les termes fournis |
-| | | `Or` | Les requêtes comportant plusieurs termes de recherche séparés par l'opérateur ou renverront toutes les entités correspondant à l'un des termes fournis |
-| `<->` | `Follow by` | Spécifiez la distance entre deux mots. |
-| `:*` | `Prefix` | Utilisez le terme de recherche de préfixe pour trouver les mots dont le préfixe correspond (2 caractères requis.) |
+| Operator | Symbol | Description |
+| --------- | ------- | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
+| Or | `\|` | Matches entities containing any of the provided terms |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Exemples
+#### Search Examples
-En utilisant l'opérateur `ou`, cette requête filtrera les entités de blog ayant des variations d' "anarchism" ou "crumpet" dans leurs champs de texte intégral.
+OR operator:
```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -359,7 +310,7 @@ En utilisant l'opérateur `ou`, cette requête filtrera les entités de blog aya
}
```
-L'opérateur `follow by` spécifie un mot à une distance spécifique dans les documents en texte intégral. La requête suivante renverra tous les blogs contenant des variations de "decentralize" suivies de "philosophy"
+Follow by operator:
```graphql
{
@@ -372,7 +323,7 @@ L'opérateur `follow by` spécifie un mot à une distance spécifique dans les d
}
```
-Combinez des opérateurs de texte intégral pour créer des filtres plus complexes. Avec un opérateur de recherche de prétexte combiné à un suivi de cet exemple, la requête fera correspondre toutes les entités de blog avec des mots commençant par « lou » suivi de « musique ».
+Combined operators:
```graphql
{
@@ -385,29 +336,19 @@ Combinez des opérateurs de texte intégral pour créer des filtres plus complex
}
```
-### Validation
+### Schema Definition
-Graph Node met en œuvre une validation [basée sur les spécifications](https://spec.graphql.org/October2021/#sec-Validation) des requêtes GraphQL qu'il reçoit à l'aide de [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), qui est basée sur l'implémentation de référence [graphql-js](https://github.com/graphql/graphql-js/tree/main/src/validation). Les requêtes qui échouent à une règle de validation sont accompagnées d'une erreur standard - consultez les [spécifications GraphQL](https://spec.graphql.org/October2021/#sec-Validation) pour en savoir plus.
-
-## Schema
-
-Le schéma de vos sources de données, c'est-à-dire les types d'entités, les valeurs et les relations qui peuvent être interrogés, est défini dans le [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-Les schémas GraphQL définissent généralement des types racines pour les `queries`, les `subscriptions` et les `mutations`. The Graph ne prend en charge que les `requêtes`. Le type racine `Query` pour votre subgraph est automatiquement généré à partir du schéma GraphQL qui est inclus dans votre [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Remarque : notre API n'expose pas les mutations car les développeurs sont censés émettre des transactions directement contre la blockchain sous-jacente à partir de leurs applications.
-
-### Entities
+Entity types require:
-Tous les types GraphQL avec des directives `@entity` dans votre schéma seront traités comme des entités et doivent avoir un champ `ID`.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Note:** Actuellement, tous les types de votre schéma doivent avoir une directive `@entity`. Dans le futur, nous traiterons les types n'ayant pas la directive `@entity` comme des objets de valeur, mais cela n'est pas encore pris en charge.
+### Subgraph Metadata Example
-### Métadonnées du Subgraph
+The `_Meta_` object provides Subgraph metadata:
-Tous les subgraphs ont un objet `_Meta_` auto-généré, qui permet d'accéder aux métadonnées du subgraph. Cet objet peut être interrogé comme suit :
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -421,14 +362,49 @@ Tous les subgraphs ont un objet `_Meta_` auto-généré, qui permet d'accéder a
}
```
-Si un bloc est fourni, les métadonnées sont celles de ce bloc, sinon le dernier bloc indexé est utilisé. S'il est fourni, le bloc doit être postérieur au bloc de départ du subgraph et inférieur ou égal au dernier bloc indexé.
-
-`deployment` est un ID unique, correspondant au IPFS CID du fichier `subgraph.yaml`.
+Metadata fields:
+
+- `deployment`: The IPFS CID of the `subgraph.yaml` manifest file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator | Description | Example |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_` | Filters on fields of a related (nested) entity | `{ where: { owner_: { name: "Alice" } } }` |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (`>`) | `{ where: { price_gt: "100" } }` |
+| `_lt` | Less than (`<`) | `{ where: { price_lt: "100" } }` |
+| `_gte` | Greater than or equal to (`>=`) | `{ where: { price_gte: "100" } }` |
+| `_lte` | Less than or equal to (`<=`) | `{ where: { price_lte: "100" } }` |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
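To illustrate that last point, the suffix operators compose with `and`/`or` inside a single `where` argument. A sketch reusing the `challenges` entity from the earlier examples (the `challenger_contains` condition is illustrative and assumes `challenger` is a string field):

```graphql
{
  challenges(
    where: {
      or: [
        { and: [{ number_gte: 100 }, { outcome: "succeeded" }] }
        { challenger_contains: "0x" }
      ]
    }
  ) {
    challenger
    outcome
  }
}
```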
-`block` fournit des informations sur le dernier bloc (en tenant compte des contraintes de bloc passées à `_meta`) :
-
-- hash : le hash du bloc
-- number: the block number
-- horodatage : l'horodatage du bloc, s'il est disponible (pour l'instant, cette information n'est disponible que pour les subgraphs indexant les réseaux EVM)
+### Validation
-`hasIndexingErrors` est un booléen indiquant si le subgraph a rencontré des erreurs d'indexation à un moment donné
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries that fail a validation rule are answered with a standard error; see the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
index 644b58ccf482..cd7130a391fc 100644
--- a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Gestion des clés API
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
-## Aperçu
+## Overview
-Les clés API sont nécessaires pour interroger les subgraphs. Elles garantissent que les connexions entre les services d'application sont valides et autorisées, y compris l'authentification de l'utilisateur final et de l'appareil utilisant l'application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Créer et gérer des clés API
+### Monitoring Usage
-Allez sur [Subgraph Studio](https://thegraph.com/studio/) et cliquez sur l'onglet **API Keys** pour créer et gérer vos clés API pour des subgraphs spécifiques.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-Le tableau "Clés API" répertorie les clés API existantes et vous permet de les gérer ou de les supprimer. Pour chaque clé, vous pouvez voir son statut, le coût pour la période en cours, la limite de dépenses pour la période en cours et le nombre total de requêtes.
+### Restricting Domain Access
-Vous pouvez cliquer sur le "menu à trois points" à droite d'une clé API donnée pour :
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Renommer la clé API
-- Régénérer la clé API
-- Supprimer la clé API
-- Gérer la limite de dépenses : il s'agit d'une limite de dépenses mensuelle facultative pour une clé API donnée, en USD. Cette limite s'applique à chaque période de facturation (mois civil).
+### Limiting Subgraph Access
-### Détails de la clé API
+1. In the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-Vous pouvez cliquer sur une clé API individuelle pour afficher la page des détails :
+## Additional Resources
-1. Dans la section **Aperçu**, vous pouvez :
- - Modifiez le nom de votre clé
- - Régénérer les clés API
- - Affichez l'utilisation actuelle de la clé API avec les statistiques :
- - Nombre de requêtes
- - Montant de GRT dépensé
-2. Dans la section **Sécurité**, vous pouvez choisir des paramètres de sécurité en fonction du niveau de contrôle que vous souhaitez avoir. Plus précisément, vous pouvez :
- - Visualisez et gérez les noms de domaine autorisés à utiliser votre clé API
- - Attribuer des subgraphs qui peuvent être interrogés avec votre clé API
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index acd40aface24..9b1be35ab167 100644
--- a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Identifiant du Subgraph VS. Identifiant de déploiement
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
Un subgraph est identifié par un ID de subgraph, et chaque version du subgraph est identifiée par un ID de déploiement.
Lors de l'interrogation d'un subgraph, l'un ou l'autre ID peut être utilisé, bien qu'il soit généralement suggéré d'utiliser l'ID de déploiement en raison de sa capacité à spécifier une version spécifique d'un subgraph.
-Voici les principales différences entre les deux ID : 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Identifiant de déploiement
-L'ID de déploiement est le hash IPFS du fichier manifeste compilé, qui fait référence à d'autres fichiers sur IPFS au lieu d'URL relatives sur l'ordinateur. Par exemple, le manifeste compilé est accessible via : `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Pour modifier l'ID de déploiement, il suffit de mettre à jour le fichier de manifeste, en modifiant par exemple le champ de description comme décrit dans la [documentation du manifeste du subgraph](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
Lorsque des requêtes sont effectuées à l'aide de l'ID de déploiement d'un subgraph, nous spécifions une version de ce subgraph à interroger. L'utilisation de l'ID de déploiement pour interroger une version spécifique du subgraph donne lieu à une configuration plus sophistiquée et plus robuste, car il y a un contrôle total sur la version du subgraph interrogée. Toutefois, cela implique la nécessité de mettre à jour manuellement le code d'interrogation chaque fois qu'une nouvelle version du subgraph est publiée.
@@ -18,6 +22,12 @@ Exemple d'endpoint utilisant l'identifiant de déploiement:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
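Since the two endpoint forms differ only in one path segment, the choice between them can be isolated in a small helper. A hedged sketch (this helper is illustrative, not part of any official SDK; the gateway base URL is taken from the example endpoints on this page):

```python
GATEWAY_BASE = "https://gateway-arbitrum.network.thegraph.com/api"

def gateway_url(api_key: str, identifier: str, pin_version: bool) -> str:
    """Build a Graph gateway query URL.

    pin_version=True  -> query a specific version via its Deployment ID
    pin_version=False -> query the latest version via the Subgraph ID
    """
    path = "deployments/id" if pin_version else "subgraphs/id"
    return f"{GATEWAY_BASE}/{api_key}/{path}/{identifier}"
```

Keeping this decision in one place makes it easy to pin a Deployment ID in production while resolving the latest version in development.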
+
## Identifiant du Subgraph
L'ID du subgraph est un ID unique pour un subgraph. Il reste constant dans toutes les versions d'un subgraph. Il est recommandé d'utiliser l'ID du subgraph pour demander la dernière version d'un subgraph, bien qu'il y ait quelques mises en garde.
@@ -25,3 +35,20 @@ L'ID du subgraph est un ID unique pour un subgraph. Il reste constant dans toute
Sachez que l'interrogation à l'aide de l'ID du Subgraph peut entraîner la réponse à des requêtes par une version plus ancienne du Subgraph, la nouvelle version ayant besoin d'un certain temps pour se synchroniser. De plus, les nouvelles versions peuvent introduire des changements de schéma radicaux.
Exemple d'endpoint utilisant l'ID du subgraph : `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/fr/subgraphs/quick-start.mdx b/website/src/pages/fr/subgraphs/quick-start.mdx
index c227ec40ccc7..23c88a13a305 100644
--- a/website/src/pages/fr/subgraphs/quick-start.mdx
+++ b/website/src/pages/fr/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Démarrage rapide
---
-Apprenez à construire, publier et interroger facilement un [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) sur The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prérequis
- Un portefeuille crypto
-- Une adresse de contrat intelligent sur un [réseau pris en charge](/supported-networks/)
-- [Node.js](https://nodejs.org/) installé
-- Un gestionnaire de package de votre choix (`npm`, `yarn` ou `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## Comment construire un subgraph
### 1. Créer un subgraph dans Subgraph Studio
-Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille.
-
-Subgraph Studio vous permet de créer, de gérer, de déployer et de publier des subgraphs, ainsi que de créer et de gérer des clés API.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Installez la CLI Graph
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialiser votre subgraph
+Verify the installation:
-> Vous trouverez les commandes pour votre subgraph spécifique sur la page du subgraph dans [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-La commande `graph init` créera automatiquement un échafaudage d'un subgraph basé sur les événements de votre contrat.
+### 3. Initialiser votre subgraph
-La commande suivante initialise votre subgraph à partir d'un contrat existant :
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-Si votre contrat est vérifié sur le scanner de blocs où il est déployé (comme [Etherscan](https://etherscan.io/)), l'ABI sera automatiquement créé dans le CLI.
-
Lorsque vous initialisez votre subgraph, la CLI vous demande les informations suivantes :
- **Protocole** : Choisissez le protocole à partir duquel votre subgraph indexera les données.
@@ -59,19 +65,17 @@ Lorsque vous initialisez votre subgraph, la CLI vous demande les informations su
- **Réseau Ethereum** (optionnel) : Vous pouvez avoir besoin de spécifier le réseau compatible EVM à partir duquel votre subgraph indexera les données.
- **Adresse du contrat** : Localisez l'adresse du contrat intelligent dont vous souhaitez interroger les données.
- **ABI** : Si l'ABI n'est pas renseigné automatiquement, vous devrez le saisir manuellement sous la forme d'un fichier JSON.
-- **Bloc de départ** : Vous devez saisir le bloc de départ pour optimiser l'indexation du Subgraph des données de la blockchain. Localisez le bloc de départ en trouvant le bloc où votre contrat a été déployé.
+- **Start Block**: Enter the block in which the contract was deployed; indexing then skips earlier blocks, which optimizes Subgraph indexing of blockchain data.
- **Nom du contrat** : Saisissez le nom de votre contrat.
- **Indexer les événements contractuels comme des entités** : Il est conseillé de mettre cette option à true, car elle ajoutera automatiquement des mappages à votre subgraph pour chaque événement émis.
- **Ajouter un autre contrat** (facultatif) : Vous pouvez ajouter un autre contrat.
-La capture d'écran suivante donne un exemple de ce à quoi on peut s'attendre lors de l'initialisation du subgraph :
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Modifiez votre subgraph
-La commande `init` de l'étape précédente crée un Subgraph d'échafaudage que vous pouvez utiliser comme point de départ pour construire votre Subgraph.
-
Lorsque vous modifiez le Subgraph, vous travaillez principalement avec trois fichiers :
- Manifest (`subgraph.yaml`) - définit les sources de données que votre Subgraph indexera.
@@ -82,9 +86,7 @@ Pour une description détaillée de la manière d'écrire votre Subgraph, consul
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage, and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers on The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It is strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you would like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ Lorsque votre subgraph est prêt pour un environnement de production, vous pouve
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The more GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index it, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves the quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards proportional to the amount signaled.
-
- - It is recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can create your Subgraph in the same transaction in which you publish it by selecting this option:
diff --git a/website/src/pages/fr/subgraphs/upgrade-indexer.mdx b/website/src/pages/fr/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..64f2972e0c5e
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/fr/substreams/_meta-titles.json b/website/src/pages/fr/substreams/_meta-titles.json
index bd6a51423076..1f11f4426932 100644
--- a/website/src/pages/fr/substreams/_meta-titles.json
+++ b/website/src/pages/fr/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Développement"
+ "developing": "Développement",
+ "sps": "Subgraphs alimentés par des substreams"
}
diff --git a/website/src/pages/fr/substreams/developing/sinks.mdx b/website/src/pages/fr/substreams/developing/sinks.mdx
index c56d379e996d..cd4a38843c65 100644
--- a/website/src/pages/fr/substreams/developing/sinks.mdx
+++ b/website/src/pages/fr/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choisissez un sink qui répond aux besoins de votre projet.
Once you have found a package that meets your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file, or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: some sinks are officially supported by the StreamingFast core development team (i.e. they receive active support), while other sinks are community-driven and support isn't guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to fit your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community-maintained sinks.
@@ -26,26 +25,26 @@ Les sinks sont des intégrations qui vous permettent d'envoyer les données extr
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| SDK Go | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| SDK Rust | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| SDK JS | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| Store KV | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name       | Support | Maintainer    | Source Code                                                                               |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL        | O       | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql)               |
+| Go SDK     | O       | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink)                       |
+| Rust SDK   | O       | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust)             |
+| JS SDK     | O       | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js)                           |
+| KV Store   | O       | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv)                 |
+| Prometheus | O       | Pinax         | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook    | O       | Pinax         | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook)       |
+| CSV        | O       | Pinax         | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv)               |
+| PubSub     | O       | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub)         |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name       | Support | Maintainer | Source Code                                                                               |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB    | C       | Community  | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb)       |
+| Files      | C       | Community  | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files)           |
+| KV Store   | C       | Community  | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv)                 |
+| Prometheus | C       | Community  | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official support (by one of the main Substreams providers)
- C = Community support
diff --git a/website/src/pages/fr/substreams/quick-start.mdx b/website/src/pages/fr/substreams/quick-start.mdx
index 75da28206cb5..763e73918164 100644
--- a/website/src/pages/fr/substreams/quick-start.mdx
+++ b/website/src/pages/fr/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ Si vous ne trouvez pas de package Substreams qui réponde à vos besoins spécif
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from scratch, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/fr/substreams/sps/faq.mdx b/website/src/pages/fr/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..9519360ba265
--- /dev/null
+++ b/website/src/pages/fr/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer, with all the Subgraph benefits, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data which is not available as part of the JSON-RPC
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Offering a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to resume where you left off in any condition.
+
+- Richest data model: Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams and Substreams-powered Subgraphs?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See the [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
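As a sketch, this kind of composition is expressed through the Substreams manifest: a downstream module declares an upstream module as one of its inputs. The package URL, module names, and Protobuf type below are hypothetical and only illustrate the wiring:

```yaml
# Illustrative substreams.yaml fragment; package URL and module names are made up.
imports:
  dex_prices: https://example.com/alice/dex-prices-v0.1.0.spkg

modules:
  - name: map_token_volumes
    kind: map
    inputs:
      - map: dex_prices:map_pool_prices # reuse Alice's cached price module output
    output:
      type: proto:example.volumes.v1.Volumes
```

Because the upstream module's output is cached, Bob's volume module only pays for its own transformation work.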
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/fr/substreams/sps/introduction.mdx b/website/src/pages/fr/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..0454b6f4acee
--- /dev/null
+++ b/website/src/pages/fr/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or in Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/fr/substreams/sps/triggers.mdx b/website/src/pages/fr/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..ecd1253f24c7
--- /dev/null
+++ b/website/src/pages/fr/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use custom triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+import { log } from '@graphprotocol/graph-ts'
+import { Transaction } from '../generated/schema'
+
+// `assembly` refers to the generated Protobuf bindings; paths may differ per project.
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. Looping over the transactions
+3. Creating a new Subgraph entity for every transaction
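For readers who want to experiment with this control flow outside graph-node, the same decode-then-store pattern can be sketched in plain TypeScript. `Tx` and the `Map`-based store below are stand-ins for the generated Protobuf class and graph-node's entity store, not the real AssemblyScript APIs:

```typescript
// Stand-in for the generated Protobuf transaction type.
interface Tx {
  hash: string
  from: string
  to: string
}

// Mirrors the handler above: guard against empty input, then store one
// record per transaction, keyed by its hash (like `new Transaction(hash)`).
function handleTransactions(txs: Tx[], store: Map<string, Tx>): void {
  if (txs.length === 0) {
    console.log('No transactions found')
    return
  }
  for (const tx of txs) {
    store.set(tx.hash, tx) // analogous to entity.save()
  }
}
```

The real handler differs mainly in where the data comes from (decoded Substreams bytes) and where it goes (the graph-node store), but the per-transaction loop is the same shape.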
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Dev Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/fr/substreams/sps/tutorial.mdx b/website/src/pages/fr/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..71659989b6e8
--- /dev/null
+++ b/website/src/pages/fr/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutoriel : Configurer un Subgraph alimenté par Substreams sur Solana"
+sidebarTitle: Tutoriel
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial)
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+    use: solana:map_block # Select the modules available in your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # Pour program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+This command generates a `subgraph.yaml` file that imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
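Once the Subgraph is deployed, entities defined in this schema are exposed through its GraphQL API. As an illustration, a query for the first few transfers might look like the following (the plural collection field name follows graph-node's usual convention; adjust it to your deployment):

```graphql
{
  myTransfers(first: 5) {
    id
    amount
    source
    designation
    signers
  }
}
```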
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account id to Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/fr/supported-networks.mdx b/website/src/pages/fr/supported-networks.mdx
index c1b6ee3fd39c..d96365004739 100644
--- a/website/src/pages/fr/supported-networks.mdx
+++ b/website/src/pages/fr/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio repose sur la stabilité et la fiabilité des technologies sous-jacentes, comme les endpoints JSON-RPC, Firehose et Substreams.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/fr/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/fr/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/fr/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/fr/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g. transfers, mints & burns).
diff --git a/website/src/pages/fr/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/fr/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/fr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/fr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/fr/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/fr/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/fr/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/fr/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/fr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/fr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/fr/token-api/evm/get-pools-evm.mdx b/website/src/pages/fr/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/fr/token-api/evm/get-swaps-evm.mdx b/website/src/pages/fr/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/fr/token-api/evm/get-transfers-evm.mdx b/website/src/pages/fr/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/fr/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/fr/token-api/faq.mdx b/website/src/pages/fr/token-api/faq.mdx
index 55125891c079..af2366f17557 100644
--- a/website/src/pages/fr/token-api/faq.mdx
+++ b/website/src/pages/fr/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## Général
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
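As a sketch, the three endpoint templates above can be assembled like this (the base URL and example values are assumptions for illustration, not part of the FAQ):

```python
# Sketch: building Token API historical-data URLs from the endpoint
# templates above. BASE_URL and the placeholder values are assumptions.
BASE_URL = "https://token-api.thegraph.com"

def historical_balances_url(address: str) -> str:
    """Historical ERC-20 & native balances for a wallet address."""
    return f"{BASE_URL}/historical/balances/evm/{address}"

def ohlc_pool_url(pool: str) -> str:
    """Historical OHLC prices for a liquidity pool."""
    return f"{BASE_URL}/ohlc/pools/evm/{pool}"

def ohlc_contract_url(contract: str) -> str:
    """Historical OHLC prices for a token contract."""
    return f"{BASE_URL}/ohlc/prices/evm/{contract}"
```

Each URL is then queried with the usual `Authorization: Bearer` header described in the authentication section.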
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently unavailable on a given endpoint return a `bad_database_response` or `Endpoint is currently not supported for this network` error. Databases that are still in the process of ingesting data also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
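Since high request volumes may return HTTP 429, a client can back off and retry. A minimal sketch (not an official client; the callable stands in for whatever HTTP library you use):

```python
import time

def fetch_with_backoff(do_request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a request callable with exponential backoff on HTTP 429.

    `do_request` is any zero-argument callable returning (status_code, body);
    plug in your HTTP client of choice. Waits base_delay, 2*base_delay,
    4*base_delay, ... between attempts.
    """
    delay = base_delay
    status, body = do_request()
    for _ in range(max_retries):
        if status != 429:
            break
        time.sleep(delay)
        delay *= 2
        status, body = do_request()
    return status, body
```

The exact backoff schedule is a design choice; the point is simply to stop hammering the API once throttling kicks in.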
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
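The optional `network_id` parameter can be appended as a query string, for example (the endpoint path here is an assumption for illustration):

```python
from urllib.parse import urlencode

def with_network(base: str, network_id: str = "mainnet") -> str:
    """Append the optional network_id parameter; the API defaults to mainnet."""
    return f"{base}?{urlencode({'network_id': network_id})}"

# e.g. target Base instead of Ethereum mainnet
url = with_network("https://token-api.thegraph.com/balances/evm/0xabc", "base")
```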
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
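The pagination parameters above can be sketched as a small URL helper (the endpoint path is an assumption for illustration):

```python
from urllib.parse import urlencode

def page_url(base: str, page: int, limit: int = 50) -> str:
    """Build a paginated request URL; `page` is 1-indexed."""
    return f"{base}?{urlencode({'limit': limit, 'page': page})}"

# first two batches of 50 results
first = page_url("https://token-api.thegraph.com/transfers/evm", page=1)
second = page_url("https://token-api.thegraph.com/transfers/evm", page=2)
```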
### How do I fetch older transfer history?
diff --git a/website/src/pages/fr/token-api/monitoring/get-health.mdx b/website/src/pages/fr/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/fr/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/fr/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Provides the health status of the API.
diff --git a/website/src/pages/fr/token-api/monitoring/get-networks.mdx b/website/src/pages/fr/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..7f3b38ffd7c8 100644
--- a/website/src/pages/fr/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/fr/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Réseaux pris en charge
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Provides the supported networks of the API.
diff --git a/website/src/pages/fr/token-api/monitoring/get-version.mdx b/website/src/pages/fr/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..fa0040807854 100644
--- a/website/src/pages/fr/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/fr/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Provides the version of the API.
diff --git a/website/src/pages/fr/token-api/quick-start.mdx b/website/src/pages/fr/token-api/quick-start.mdx
index 4a38a878fd7c..81289c5353bd 100644
--- a/website/src/pages/fr/token-api/quick-start.mdx
+++ b/website/src/pages/fr/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Démarrage rapide
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prérequis
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key itself. A new JWT API token can be generated from each API key at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/hi/about.mdx b/website/src/pages/hi/about.mdx
index 98bf7a76374e..a5d65c25b7b4 100644
--- a/website/src/pages/hi/about.mdx
+++ b/website/src/pages/hi/about.mdx
@@ -1,67 +1,46 @@
---
-title: The Graph के बारे में
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## The Graph क्या है?
-The Graph एक शक्तिशाली विकेंद्रीकृत प्रोटोकॉल है जो ब्लॉकचेन डेटा को आसानी से क्वेरी और इंडेक्स करने में सक्षम बनाता है। यह ब्लॉकचेन डेटा को क्वेरी करने की जटिल प्रक्रिया को सरल बनाता है, जिससे डैप विकास तेज और आसान हो जाता है।
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## मूल बातें समझना
+Its data services include:
-जटिल स्मार्ट कॉन्ट्रैक्ट्स वाले प्रोजेक्ट्स, जैसे [Uniswap](https://uniswap.org/) और NFT इनिशिएटिव्स जैसे [Bored Ape Yacht Club](https://boredapeyachtclub.com/), डेटा को एथेरियम ब्लॉकचेन पर स्टोर करते हैं, जिससे ब्लॉकचेन से सीधे मूलभूत डेटा के अलावा किसी अन्य जानकारी को पढ़ना बहुत कठिन हो जाता है।
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### The Graph के बिना चुनौतियाँ
+### Why Blockchain Data is Difficult to Query
-उदाहरण के रूप में, Bored Ape Yacht Club के मामले में, आप [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) पर बुनियादी रीड ऑपरेशन कर सकते हैं। आप किसी विशेष एपी के मालिक को पढ़ सकते हैं, एपी के ID के आधार पर कंटेंट URI पढ़ सकते हैं, या कुल आपूर्ति को पढ़ सकते हैं।
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- यह संभव है क्योंकि ये पढ़ाई संचालन सीधे स्मार्ट कॉन्ट्रैक्ट में प्रोग्राम किए गए हैं। हालांकि, अधिक उन्नत, विशिष्ट और वास्तविक दुनिया की क्वेरीज़ और संचालन जैसे कि एग्रीगेशन, सर्च, रिलेशनशिप, और जटिल फ़िल्टरिंग, **संभव नहीं हैं**।
+The result is slow performance, complex infrastructure, and scalability issues.
-- उदाहरण के लिए, यदि आप किसी विशेष पते द्वारा स्वामित्व वाले Apes के बारे में पूछताछ करना चाहते हैं और किसी विशेष विशेषता के आधार पर अपनी खोज को परिष्कृत करना चाहते हैं, तो आप यह जानकारी सीधे कॉन्ट्रैक्ट के साथ बातचीत करके प्राप्त नहीं कर पाएंगे।
+## How The Graph Solves This
-- ज्यादा डेटा प्राप्त करने के लिए, आपको हर एक [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) इवेंट को प्रोसेस करना होगा, IPFS से Token ID और IPFS हैश का उपयोग करके मेटाडेटा पढ़ना होगा, और फिर उसे संक्षिप्त करना होगा।
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### यह एक समस्या क्यों है?
+Find the perfect data service for you:
-यह सरल सवालों का जवाब पाने में एक ब्राउज़र में चल रही एक विकेन्द्रीकृत एप्लिकेशन (dapp) को **घंटे या यहाँ तक कि दिन** लग सकते हैं।
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-ब्लॉकचेन की विशेषताएँ, जैसे अंतिमता, चेन पुनर्गठन, और अंकल ब्लॉक्स, प्रक्रिया में जटिलता जोड़ती हैं, जिससे ब्लॉकचेन डेटा से सटीक क्वेरी परिणाम प्राप्त करना समय लेने वाला और अवधारणात्मक रूप से चुनौतीपूर्ण हो जाता है।
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph एक समाधान प्रदान करता है
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-आज एक विकेंद्रीकृत प्रोटोकॉल है, जो [Graph Node](https://github.com/graphprotocol/graph-node) के ओपन सोर्स इम्प्लीमेंटेशन द्वारा समर्थित है, जो इस प्रक्रिया को सक्षम बनाता है।
+- [Start with Token API](/token-api/quick-start/)
-### The Graph कैसे काम करता है
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### विशिष्टताएँ
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-प्रवाह इन चरणों का पालन करता है:
-
-1. एक विकेंद्रीकृत एप्लिकेशन स्मार्ट अनुबंध पर लेनदेन के माध्यम से एथेरियम में डेटा जोड़ता है।
-2. लेन-देन संसाधित करते समय स्मार्ट अनुबंध एक या अधिक घटनाओं का उत्सर्जन करता है।
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. नोड के [GraphQL समापन बिंदु](https://graphql.org/learn/) का उपयोग करते हुए, विकेन्द्रीकृत एप्लिकेशन ब्लॉकचैन से अनुक्रमित डेटा के लिए ग्राफ़ नोड से पूछताछ करता है। ग्राफ़ नोड बदले में इस डेटा को प्राप्त करने के लिए, स्टोर की इंडेक्सिंग क्षमताओं का उपयोग करते हुए, अपने अंतर्निहित डेटा स्टोर के लिए ग्राफ़कॉल प्रश्नों का अनुवाद करता है। विकेंद्रीकृत एप्लिकेशन इस डेटा को एंड-यूजर्स के लिए एक समृद्ध यूआई में प्रदर्शित करता है, जिसका उपयोग वे एथेरियम पर नए लेनदेन जारी करने के लिए करते हैं। चक्र दोहराता है।
-
-## अगले कदम
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/hi/index.json b/website/src/pages/hi/index.json
index 006af907dc33..75135a881710 100644
--- a/website/src/pages/hi/index.json
+++ b/website/src/pages/hi/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph डॉक्स",
- "description": "अपनी वेब3 परियोजना को शुरू करें उन उपकरणों के साथ जो ब्लॉकचेन डेटा को निकालने, बदलने और लोड करने में सहायता करते हैं।",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "The Graph कैसे काम करता है",
"cta2": "अपना पहला Subgraph बनाएं"
},
@@ -19,10 +19,10 @@
"description": "ब्लॉकचेन डेटा प्राप्त करें और समानांतर निष्पादन के साथ उपयोग करें।",
"cta": "सबस्ट्रीम के साथ विकसित करें"
},
- "sps": {
- "title": "सबस्ट्रीम पावर्ड सबग्राफ",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "सबस्ट्रीम-संचालित सबग्राफ सेट करें"
+ "tokenApi": {
+ "title": "टोकन API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "ब्लॉकचेन डेटा को फ्लैट फ़ाइलों में निकालें ताकि सिंक समय और स्ट्रीमिंग क्षमताओं में सुधार किया जा सके।",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Firehose के साथ शुरुआत करें"
}
},
@@ -58,6 +58,7 @@
"networks": "नेटवर्क्स ",
"completeThisForm": "इस फ़ॉर्म को पूरा करें "
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "सबग्राफ",
"substreams": "सबस्ट्रीम",
"firehose": "Firehose",
- "tokenapi": "टोकन API"
+ "tokenApi": "टोकन API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "सबस्ट्रीम",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/hi/indexing/new-chain-integration.mdx b/website/src/pages/hi/indexing/new-chain-integration.mdx
index 1863ea5fa2bc..7628d084da3a 100644
--- a/website/src/pages/hi/indexing/new-chain-integration.mdx
+++ b/website/src/pages/hi/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ Graph Node को EVM चेन से डेटा इन्गेस्ट क
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(सीमित ट्रेसिंग और विकल्पतः Graph Node के लिए आवश्यक)*
+- `trace_filter` _(सीमित ट्रेसिंग और विकल्पतः Graph Node के लिए आवश्यक)_
### 2. Firehose एकीकरण
@@ -55,7 +55,7 @@ JSON-RPC और Firehose दोनों ही सबग्राफ के ल
## Graph Node Configuration
-ग्राफ-नोड को कॉन्फ़िगर करना उतना ही आसान है जितना कि अपने स्थानीय वातावरण को तैयार करना। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप स्थानीय रूप से एक सबग्राफ को तैनात करके एकीकरण का परीक्षण कर सकते हैं।
+ग्राफ-नोड को कॉन्फ़िगर करना उतना ही आसान है जितना कि अपने स्थानीय वातावरण को तैयार करना। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप स्थानीय रूप से एक सबग्राफ को तैनात करके एकीकरण का परीक्षण कर सकते हैं।
1. [Clone Graph Node](https://github.com/graphprotocol/graph-node)
@@ -63,7 +63,7 @@ JSON-RPC और Firehose दोनों ही सबग्राफ के ल
> कृपया पर्यावरण चर ethereum को खुद नाम में बदलें नहीं। यही रहना चाहिए, चाहे network का नाम भिन्न हो।
-3. एक IPFS node चलाएं या उसे The Graph द्वारा उपयोग किया जाने वाले node का उपयोग करें: https://api.thegraph.com/ipfs/
+3. Run an IPFS node, or use the one operated by The Graph: https://ipfs.thegraph.com
## सबस्ट्रीम-संचालित सबग्राफ की सेवा
diff --git a/website/src/pages/hi/indexing/overview.mdx b/website/src/pages/hi/indexing/overview.mdx
index 38a778b97854..ea95d6557970 100644
--- a/website/src/pages/hi/indexing/overview.mdx
+++ b/website/src/pages/hi/indexing/overview.mdx
@@ -60,7 +60,7 @@ query indexerAllocations {
Etherscan का उपयोग करके `getRewards()` कॉल करें:
- [ईथरस्कैन इंटरफेस पर रिवॉर्ड्स contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) पर जाएं।
-- `getRewards()` को कॉल करने के लिए:
+- `getRewards()` को कॉल करने के लिए:
- **9. getRewards** ड्रॉपडाउन का विस्तार करें।
- इनपुट में **allocationID** दर्ज करें।
- कृपया **Query** बटन पर क्लिक करें।
@@ -111,11 +111,11 @@ Indexers उन्नत तकनीकों को लागू करके
- **बड़ा **- वर्तमान में उपयोग किए जा रहे सभी सबग्राफ को इंडेक्स करने और संबंधित ट्रैफ़िक के लिए अनुरोधों को सर्व करने के लिए तैयार।
| सेटअप | Postgres (CPUs) | Postgres (मेमोरी in GBs) | Postgres (डिस्क in TBs) | VMs (CPUs) | VMs (मेमोरी in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| छोटा | 4 | 8 | 1 | 4 | 16 |
-| मानक | 8 | 30 | 1 | 12 | 48 |
-| मध्यम | 16 | 64 | 2 | 32 | 64 |
-| बड़ा | 72 | 468 | 3.5 | 48 | 184 |
+| ----- | :------------------: | :---------------------------: | :--------------------------: | :-------------: | :----------------------: |
+| छोटा | 4 | 8 | 1 | 4 | 16 |
+| मानक | 8 | 30 | 1 | 12 | 48 |
+| मध्यम | 16 | 64 | 2 | 32 | 64 |
+| बड़ा | 72 | 468 | 3.5 | 48 | 184 |
### कोई Indexer को कौन-कौन सी बुनियादी सुरक्षा सावधानियाँ बरतनी चाहिए?
@@ -131,7 +131,7 @@ Indexer के इंफ्रास्ट्रक्चर के केंद
- **डेटा एंडपॉइंट** - EVM-संगत नेटवर्क्स के लिए, Graph Node को एक ऐसे एंडपॉइंट से कनेक्ट करने की आवश्यकता होती है जो EVM-संगत JSON-RPC API को एक्सपोज़ करता हो। यह एक सिंगल क्लाइंट के रूप में हो सकता है या यह एक अधिक जटिल सेटअप हो सकता है जो मल्टीपल क्लाइंट्स के बीच लोड बैलेंस करता हो। यह जानना महत्वपूर्ण है कि कुछ सबग्राफ को विशेष क्लाइंट क्षमताओं की आवश्यकता हो सकती है, जैसे कि आर्काइव मोड और/या पैरिटी ट्रेसिंग API।
-- **IPFS node (संस्करण 5 से कम)** - सबग्राफ डिप्लॉयमेंट मेटाडेटा IPFS नेटवर्क पर स्टोर किया जाता है। The Graph Node मुख्य रूप से सबग्राफ डिप्लॉयमेंट के दौरान IPFS node तक पहुंचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक की गई फ़ाइलों को प्राप्त किया जा सके। नेटवर्क Indexers को अपना स्वयं का IPFS node होस्ट करने की आवश्यकता नहीं है, नेटवर्क के लिए एक IPFS node होस्ट किया गया है: https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer सेवा** - आवश्यक बाहरी संचार को नेटवर्क के साथ संभालती है। लागत मॉडल और इंडेक्सिंग स्थितियों को साझा करती है, गेटवे से आने वाले क्वेरी अनुरोधों को एक Graph Node तक पहुंचाती है, और गेटवे के साथ स्टेट चैनलों के माध्यम से क्वेरी भुगतान को प्रबंधित करती है।
@@ -147,26 +147,26 @@ Indexer के इंफ्रास्ट्रक्चर के केंद
#### Graph Node
-| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for सबग्राफ subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
+| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स |
+| ----- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ------------------ |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for सबग्राफ subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
#### Indexer सेवा
-| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (भुगतान किए गए सबग्राफ क्वेरीज़ के लिए) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
+| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable |
+| ----- | ---------------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (भुगतान किए गए सबग्राफ क्वेरीज़ के लिए) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
#### Indexer एजेंट
-| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable |
-| ----- | ------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | Indexer प्रबंधन API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable |
+| ----- | ----------------------- | ------ | -------------------------- | --------------------------------------- |
+| 8000 | Indexer प्रबंधन API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
### Google Cloud पर Terraform का उपयोग करके सर्वर अवसंरचना सेटअप करें
@@ -178,7 +178,7 @@ Indexer के इंफ्रास्ट्रक्चर के केंद
- Kubectl कमांड लाइन टूल
- Terraform
-#### Google Cloud प्रोजेक्ट बनाएं
+#### Google Cloud प्रोजेक्ट बनाएं
- Clone करें या [Indexer repository](https://github.com/graphprotocol/indexer) पर जाएं।
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Docker का उपयोग शुरू करना
@@ -525,7 +525,9 @@ graph indexer status
#### Indexer प्रबंधन Indexer CLI का उपयोग करके
-**Indexer Management API** के साथ इंटरैक्ट करने के लिए सुझाया गया टूल **Indexer CLI** है, जो कि **Graph CLI** का एक एक्सटेंशन है। Indexer agent को एक Indexer से इनपुट की आवश्यकता होती है ताकि वह Indexer की ओर से नेटवर्क के साथ स्वायत्त रूप से इंटरैक्ट कर सके। Indexer agent व्यवहार को परिभाषित करने के लिए **allocation management** मोड और **indexing rules** का उपयोग किया जाता है। Auto mode में, एक Indexer **indexing rules** का उपयोग करके यह तय कर सकता है कि वह किन को इंडेक्स और क्वेरी के लिए सर्व करेगा। इन नियमों को GraphQL API के माध्यम से प्रबंधित किया जाता है, जिसे agent द्वारा सर्व किया जाता है और यह Indexer Management API के रूप में जाना जाता है। Manual mode में, एक Indexer **actions queue** का उपयोग करके allocation actions बना सकता है और उन्हें निष्पादित करने से पहले स्पष्ट रूप से अनुमोदित कर सकता है। Oversight mode में, **indexing rules** का उपयोग **actions queue** को भरने के लिए किया जाता है और इन्हें निष्पादित करने से पहले भी स्पष्ट अनुमोदन की आवश्यकता होती है।
+**Indexer Management API** के साथ इंटरैक्ट करने के लिए सुझाया गया टूल **Indexer CLI** है, जो कि **Graph CLI** का एक एक्सटेंशन है। Indexer agent को एक Indexer से इनपुट की आवश्यकता होती है ताकि वह Indexer की ओर से नेटवर्क के साथ स्वायत्त रूप से इंटरैक्ट कर सके।
+Indexer agent व्यवहार को परिभाषित करने के लिए **allocation management** मोड और **indexing rules** का उपयोग किया जाता है। Auto mode में, एक Indexer **indexing rules** का उपयोग करके यह तय कर सकता है कि वह किन को इंडेक्स और क्वेरी के लिए सर्व करेगा। इन नियमों को GraphQL API के माध्यम से प्रबंधित किया जाता है, जिसे agent द्वारा सर्व किया जाता है और यह Indexer Management API के रूप में जाना जाता है।
+Manual mode में, एक Indexer **actions queue** का उपयोग करके allocation actions बना सकता है और उन्हें निष्पादित करने से पहले स्पष्ट रूप से अनुमोदित कर सकता है। Oversight mode में, **indexing rules** का उपयोग **actions queue** को भरने के लिए किया जाता है और इन्हें निष्पादित करने से पहले भी स्पष्ट अनुमोदन की आवश्यकता होती है।
#### उपयोग
@@ -708,42 +710,6 @@ Supported action types के लिए आवंटन प्रबंधन
कॉस्ट मॉडल बाज़ार और क्वेरी विशेषताओं के आधार पर क्वेरी के लिए डायनामिक मूल्य निर्धारण प्रदान करते हैं। Indexer Service प्रत्येक सबग्राफ के लिए गेटवे के साथ एक कॉस्ट मॉडल साझा करता है, जिसके लिए वे क्वेरी का जवाब देने का इरादा रखते हैं। बदले में, गेटवे इस कॉस्ट मॉडल का उपयोग प्रति क्वेरी Indexer चयन निर्णय लेने और चुने गए Indexers के साथ भुगतान पर बातचीत करने के लिए करते हैं।
-#### Agora
-
-Agora भाषा क्वेरी के लिए लागत मॉडल घोषित करने के लिए एक लचीला प्रारूप प्रदान करती है। एक Agora मूल्य मॉडल बयानों का एक क्रम होता है जो प्रत्येक शीर्ष-स्तरीय GraphQL क्वेरी के लिए क्रम में निष्पादित होता है। प्रत्येक शीर्ष-स्तरीय क्वेरी के लिए, पहला कथन जो उससे मेल खाता है, उस क्वेरी के लिए मूल्य निर्धारित करता है।
-
-एक कथन में एक predicate होता है, जिसका उपयोग GraphQL queries से मिलान करने के लिए किया जाता है, और एक cost expression होता है, जो मूल्यांकन किए जाने पर दशमलव GRT में एक लागत आउटपुट करता है। किसी क्वेरी में नामित आर्गुमेंट स्थिति के मानों को predicate में कैप्चर किया जा सकता है और expression में उपयोग किया जा सकता है। Globals भी सेट किए जा सकते हैं और expression में प्लेसहोल्डर्स के लिए प्रतिस्थापित किए जा सकते हैं।
-
-उदाहरण लागत मॉडल:
-
-```
-#यह कथन skip मान को प्राप्त करता है,
-#शर्त में एक बूलियन अभिव्यक्ति का उपयोग करता है ताकि skip का उपयोग करने वाले विशिष्ट क्वेरीज़ का मिलान किया जा सके,
-#और skip मान और SYSTEM_LOAD ग्लोबल के आधार पर लागत की गणना करने के लिए लागत अभिव्यक्ति का उपयोग करता है।
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-#यह डिफ़ॉल्ट किसी भी GraphQL अभिव्यक्ति से मेल खाएगा।
-#यह ग्लोबल का उपयोग करके लागत की गणना करने के लिए अभिव्यक्ति में प्रतिस्थापित करता है।
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-उपरोक्त मॉडल का उपयोग करके उदाहरण क्वेरी लागत:
-
-| Query | कीमत |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### लागत मॉडल लागू करना
-
-कास्ट मॉडल को Indexer CLI के माध्यम से लागू किया जाता है, जो उन्हें Indexer एजेंट के Indexer Management API को पास करता है ताकि उन्हें डेटाबेस में संग्रहीत किया जा सके। इसके बाद, Indexer Service उन्हें लेगी और जब भी गेटवे इसकी मांग करेंगे, तो उन्हें कास्ट मॉडल प्रदान करेगी।
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## नेटवर्क के साथ इंटरैक्ट करना
### प्रोटोकॉल में staking
@@ -758,7 +724,7 @@ Once an Indexer ने प्रोटोकॉल में GRT को स्
1. ओपन द [Remix app](https://remix.ethereum.org/) एक ब्राउज़र में
-2. `File Explorer` में **GraphToken.abi** नामक फ़ाइल बनाएं जिसमें [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json) हो।
+2. `File Explorer` में **GraphToken.abi** नामक फ़ाइल बनाएं जिसमें [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json) हो।
3. `GraphToken.abi` चयनित और संपादक में खुला होने पर, Remix इंटरफ़ेस में `Deploy and run transactions` अनुभाग पर स्विच करें।
diff --git a/website/src/pages/hi/indexing/tap.mdx b/website/src/pages/hi/indexing/tap.mdx
index 9aff98fe1eae..bed6a68c4a5d 100644
--- a/website/src/pages/hi/indexing/tap.mdx
+++ b/website/src/pages/hi/indexing/tap.mdx
@@ -51,13 +51,13 @@ GraphTally allows a sender to make multiple payments to a receiver, **Receipts**
| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |
-### गेटवे
+### गेटवे
-| घटक | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
-| ---------------- | --------------------------------------------- | --------------------------------------------- |
-| प्रेषक | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
-| हस्ताक्षरकर्ता | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
-| संकेन्द्रीयकर्ता | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+| घटक | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
+| ----------------- | --------------------------------------------- | --------------------------------------------- |
+| प्रेषक | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| हस्ताक्षरकर्ता | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| संकेन्द्रीयकर्ता | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
### आवश्यक शर्तें
@@ -117,7 +117,7 @@ operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy al
[database]
# Indexer घटकों के लिए उपयोग किए जाने वाले Postgres डेटाबेस का URL। वही डेटाबेस
-# जिसका उपयोग `indexer-agent` द्वारा किया जाता है। यह अपेक्षित है कि `indexer-agent`
+# जिसका उपयोग `indexer-agent` द्वारा किया जाता है। यह अपेक्षित है कि `indexer-agent`
#आवश्यक तालिकाएँ बनाएगा।
postgres_url = "postgres://postgres@postgres:5432/postgres"
@@ -186,7 +186,7 @@ max_amount_willing_to_lose_grt = 20
### Grafana डैशबोर्ड
-आप Grafana Dashboard (https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) डाउनलोड कर सकते हैं और इम्पोर्ट कर सकते हैं।
+आप [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) डाउनलोड और इम्पोर्ट कर सकते हैं।
### लॉन्चपैड
diff --git a/website/src/pages/hi/indexing/tooling/graph-node.mdx b/website/src/pages/hi/indexing/tooling/graph-node.mdx
index fad29c77e35e..4bc6a8223264 100644
--- a/website/src/pages/hi/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/hi/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ Graph Node (और पूरा indexer stack) को bare metal पर या
### आईपीएफएस नोड्स
-सबग्राफ तैनाती मेटाडेटा IPFS नेटवर्क पर संग्रहीत होता है। ग्राफ नोड मुख्य रूप से सबग्राफ तैनाती के दौरान IPFS नोड तक पहुँचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक किए गए फ़ाइलों को प्राप्त कर सके। नेटवर्क Indexer को अपने स्वयं के IPFS नोड की मेज़बानी करने की आवश्यकता नहीं है। नेटवर्क के लिए एक IPFS नोड यहाँ होस्ट किया गया है: https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### प्रोमेथियस मेट्रिक्स सर्वर
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Kubernetes के साथ शुरुआत करना
@@ -77,15 +77,20 @@ Kubernetes का एक पूर्ण उदाहरण कॉन्फ़
जब यह चल रहा होता है तो Graph Node following port को expose करता है:
-| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for सबग्राफ subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
-
-> **प्रमुख बात**: सार्वजनिक रूप से पोर्ट्स को एक्सपोज़ करने में सावधानी बरतें - \*\*प्रशासनिक पोर्ट्स को लॉक रखना चाहिए। इसमें ग्राफ नोड JSON-RPC एंडपॉइंट भी शामिल है।
+| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स |
+| ----- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ------------------ |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for सबग्राफ subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Advanced Graph Node configuration
@@ -225,7 +230,7 @@ provider = [ { label = "kovan", url = "http://..", features = [] } ]
### Managing Graph Node
-एक चालू Graph Node (या कई Graph Nodes!) को चलाने के बाद, अगली चुनौती उन Graph Nodes पर तैनात किए गए सबग्राफ को प्रबंधित करने की होती है। ग्राफ-नोड विभिन्न टूल्स प्रदान करता है जो सबग्राफ के प्रबंधन में मदद करते हैं।
+एक चालू Graph Node (या कई Graph Nodes!) को चलाने के बाद, अगली चुनौती उन Graph Nodes पर तैनात किए गए सबग्राफ को प्रबंधित करने की होती है। ग्राफ-नोड विभिन्न टूल्स प्रदान करता है जो सबग्राफ के प्रबंधन में मदद करते हैं।
#### लॉगिंग
@@ -245,7 +250,7 @@ Indexer रिपॉजिटरी एक [example Grafana configuration](https
The graphman कमांड आधिकारिक कंटेनरों में शामिल है, और आप अपने ग्राफ-नोड कंटेनर में docker exec कमांड का उपयोग करके इसे चला सकते हैं। इसके लिए एक `config.toml` फ़ाइल की आवश्यकता होती है।
-`graphman` कमांड्स का पूरा दस्तावेज़ ग्राफ नोड रिपॉजिटरी में उपलब्ध है। ग्राफ नोड `/docs` में [/docs/graphman.md](https://github.com/graphprotocol/ग्राफ-नोड/blob/master/docs/graphman.md) देखें।
+`graphman` कमांड्स का पूरा दस्तावेज़ ग्राफ नोड रिपॉजिटरी में उपलब्ध है। ग्राफ नोड `/docs` में [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) देखें।
### Subgraph के साथ कार्य करना
@@ -263,7 +268,7 @@ Indexing process के तीन अलग-अलग भाग हैं:
- उपयुक्त संचालकों के साथ घटनाओं को संसाधित करना (इसमें राज्य के लिए श्रृंखला को कॉल करना और स्टोर से डेटा प्राप्त करना शामिल हो सकता है)
- Resulting data को store पर लिखना
-ये चरण पाइपलाइन किए गए हैं (अर्थात वे समानांतर रूप से निष्पादित किए जा सकते हैं), लेकिन वे एक-दूसरे पर निर्भर हैं। जहाँ सबग्राफ को इंडेक्स करने में धीमापन होता है, वहाँ इसकी मूल वजह विशिष्ट सबग्राफ पर निर्भर करेगी।
+ये चरण पाइपलाइन किए गए हैं (अर्थात वे समानांतर रूप से निष्पादित किए जा सकते हैं), लेकिन वे एक-दूसरे पर निर्भर हैं। जहाँ सबग्राफ को इंडेक्स करने में धीमापन होता है, वहाँ इसकी मूल वजह विशिष्ट सबग्राफ पर निर्भर करेगी।
Common causes of indexing slowness:
@@ -330,7 +335,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr
अकाउंट-जैसी तालिकाओं के लिए, `ग्राफ-नोड` ऐसे queries जनरेट कर सकता है जो इस विवरण का लाभ उठाते हैं कि Postgres इतनी तेज़ दर पर डेटा स्टोर करते समय इसे कैसे प्रबंधित करता है। खासतौर पर, हाल के ब्लॉक्स के सभी संस्करण ऐसी तालिका के कुल स्टोरेज के एक छोटे से हिस्से में होते हैं।
-कमांड `graphman stats show प्रत्येक डिप्लॉयमेंट में मौजूद entities प्रकार/टेबल के लिए यह दिखाता है कि प्रत्येक टेबल में कितनी अलग-अलग entities और कितने entities वर्ज़न हैं। यह डेटा Postgres के आंतरिक अनुमानों पर आधारित होता है, और इसलिए यह अनिवार्य रूप से सटीक नहीं होता है और इसमें एक ऑर्डर ऑफ मैग्निट्यूड तक का अंतर हो सकता है। `entities` कॉलम में `-1` का मतलब है कि Postgres मानता है कि सभी पंक्तियां एक अलग entities को शामिल करती हैं।
+The command `graphman stats show` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions each table contains. That data is based on Postgres-internal estimates and is therefore necessarily imprecise; it can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes all rows contain a distinct entity.
सामान्यतः, वे तालिकाएँ जहाँ विशिष्ट entities की संख्या कुल पंक्तियों/entities संस्करणों की संख्या का 1% से कम हो, वे खाता-जैसा अनुकूलन के लिए अच्छे उम्मीदवार होती हैं। जब `graphman stats show` का आउटपुट यह दर्शाता है कि कोई तालिका इस optimization से लाभ उठा सकती है, तो `graphman stats show ` चलाने पर तालिका की पूरी गणना की जाती है - यह धीमा हो सकता है, लेकिन विशिष्ट entities और कुल entities संस्करणों के अनुपात का सटीक माप प्रदान करता है।
@@ -340,6 +345,4 @@ Uniswap जैसी Subgraphs के लिए, `pair` और `token` टेब
#### सबग्राफ हटाना
-> यह new functionality है, जो garph node 0.29.x में उपलब्ध होगी
-
At some point, एक Indexer किसी दिए गए Subgraph को हटाना चाह सकता है। यह आसानी से `graphman drop` के माध्यम से किया जा सकता है, जो एक deployment और उसके सभी indexed डेटा को हटा देता है। Deployment को या तो सबग्राफ नाम, एक IPFS हैश `Qm..`, या डेटाबेस namespace `sgdNNN` के रूप में निर्दिष्ट किया जा सकता है। आगे का दस्तावेज़ [यहाँ](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) उपलब्ध है।
diff --git a/website/src/pages/hi/resources/claude-mcp.mdx b/website/src/pages/hi/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..d9220f7d4196
--- /dev/null
+++ b/website/src/pages/hi/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## आवश्यक शर्तें
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
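As a quick sketch of the path table above (the OS-to-path mapping here is an illustration for scripting convenience, not part of the official Claude Desktop docs; `platform.system()` names are Python's), the config location can be resolved programmatically:

```python
import os
import platform

# Illustrative mapping of OS name -> Claude Desktop config path, mirroring the list above.
CONFIG_PATHS = {
    "Darwin": "~/Library/Application Support/Claude/claude_desktop_config.json",
    "Windows": "%APPDATA%\\Claude\\claude_desktop_config.json",
    "Linux": "~/.config/Claude/claude_desktop_config.json",
}

def config_path() -> str:
    """Return the expanded config path for the current OS."""
    raw = CONFIG_PATHS[platform.system()]
    # expandvars handles %APPDATA% on Windows; expanduser handles ~ elsewhere.
    return os.path.expandvars(os.path.expanduser(raw))
```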
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
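To avoid hand-editing JSON, the same structure can be generated from a small script (a sketch only; the server names, commands, and URLs are exactly the ones shown in the config above, and `ACCESS_TOKEN` / `GATEWAY_API_KEY` remain placeholders to be replaced in the next step):

```python
import json

# Mirror of the config block above; the two env values are placeholders.
config = {
    "mcpServers": {
        "token-api": {
            "command": "npx",
            "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
            "env": {"ACCESS_TOKEN": "ACCESS_TOKEN"},
        },
        "subgraph": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "--header",
                "Authorization:${AUTH_HEADER}",
                "https://subgraphs.mcp.thegraph.com/sse",
            ],
            "env": {"AUTH_HEADER": "Bearer GATEWAY_API_KEY"},
        },
    }
}

# Write this to claude_desktop_config.json at the path for your OS.
print(json.dumps(config, indent=2))
```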
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. For the Subgraph MCP, add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph`
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
diff --git a/website/src/pages/hi/resources/tokenomics.mdx b/website/src/pages/hi/resources/tokenomics.mdx
index ac420f323ab7..00ea0f5b3cef 100644
--- a/website/src/pages/hi/resources/tokenomics.mdx
+++ b/website/src/pages/hi/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network को शक्तिशाली टोकन
## Overview
-The Graph एक **decentralized protocol** है, जो **blockchain data** तक आसान पहुँच प्रदान करता है। यह **blockchain data** को उसी तरह **index** करता है, जैसे **Google** वेब को **index** करता है। अगर आपने किसी ऐसे **dapp** का उपयोग किया है जो किसी **Subgraph** से डेटा प्राप्त करता है, तो आपने संभवतः **The Graph** के साथ इंटरैक्ट किया है। आज, **Web3 ecosystem** में हजारों [लोकप्रिय dapps](https://thegraph.com/explorer) **The Graph** का उपयोग कर रहे हैं।
+The Graph एक **decentralized protocol** है, जो **blockchain data** तक आसान पहुँच प्रदान करता है। यह **blockchain data** को उसी तरह **index** करता है, जैसे **Google** वेब को **index** करता है। अगर आपने किसी ऐसे **dapp** का उपयोग किया है जो किसी **Subgraph** से डेटा प्राप्त करता है, तो आपने संभवतः **The Graph** के साथ इंटरैक्ट किया है। आज, **Web3 ecosystem** में हजारों [लोकप्रिय dapps](https://thegraph.com/explorer) **The Graph** का उपयोग कर रहे हैं।
## विशिष्टताएँ
@@ -36,7 +36,7 @@ The Graph ब्लॉकचेन डेटा को अधिक सुलभ
## Delegator(निष्क्रिय रूप से GRT कमाएं)
-**Indexers** को **Delegators** द्वारा **GRT** डेलिगेट किया जाता है, जिससे नेटवर्क पर Subgraphs में Indexer की **stake** बढ़ती है। इसके बदले में, **Delegators** को Indexer से मिलने वाले कुल **query fees** और **indexing rewards** का एक निश्चित प्रतिशत मिलता है। हर **Indexer** स्वतंत्र रूप से तय करता है कि वह **Delegators** को कितना रिवार्ड देगा, जिससे **Indexers** के बीच **Delegators** को आकर्षित करने की प्रतिस्पर्धा बनी रहती है। अधिकांश **Indexers** सालाना **9-12%** रिटर्न ऑफर करते हैं।
+**Indexers** को **Delegators** द्वारा **GRT** डेलिगेट किया जाता है, जिससे नेटवर्क पर Subgraphs में Indexer की **stake** बढ़ती है। इसके बदले में, **Delegators** को Indexer से मिलने वाले कुल **query fees** और **indexing rewards** का एक निश्चित प्रतिशत मिलता है। हर **Indexer** स्वतंत्र रूप से तय करता है कि वह **Delegators** को कितना रिवार्ड देगा, जिससे **Indexers** के बीच **Delegators** को आकर्षित करने की प्रतिस्पर्धा बनी रहती है। अधिकांश **Indexers** सालाना **9-12%** रिटर्न ऑफर करते हैं।
यदि कोई Delegator 15k GRT को किसी ऐसे Indexer को डेलिगेट करता है जो 10% की पेशकश कर रहा है, तो Delegator को वार्षिक रूप से ~1,500 GRT का इनाम प्राप्त होगा।
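The arithmetic in this example can be sketched as follows (an illustration only: it models the offered rate as simple interest and ignores the delegation tax, compounding, and variability in the Indexer's actual fees):

```python
def annual_delegation_reward(delegated_grt: float, offered_rate: float) -> float:
    """Sketch: yearly reward = delegated stake * offered rate."""
    return delegated_grt * offered_rate

# The example above: 15,000 GRT delegated to an Indexer offering 10%.
print(annual_delegation_reward(15_000, 0.10))  # 1500.0
```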
@@ -46,7 +46,7 @@ The Graph ब्लॉकचेन डेटा को अधिक सुलभ
## Curators (GRT कमाएं)
-**Curators** उच्च-गुणवत्ता वाले **Subgraphs** की पहचान करते हैं और उन्हें **"curate"** करते हैं (अर्थात, उन पर **GRT signal** करते हैं) ताकि **curation shares** कमा सकें। ये **curation shares** उस **Subgraph** द्वारा उत्पन्न सभी भविष्य की **query fees** का एक निश्चित प्रतिशत सुनिश्चित करते हैं। हालाँकि कोई भी स्वतंत्र नेटवर्क प्रतिभागी **Curator** बन सकता है, आमतौर पर **Subgraph developers** अपने स्वयं के **Subgraphs** के पहले **Curators** होते हैं, क्योंकि वे सुनिश्चित करना चाहते हैं कि उनका **Subgraph indexed** हो।
+**Curators** उच्च-गुणवत्ता वाले **Subgraphs** की पहचान करते हैं और उन्हें **"curate"** करते हैं (अर्थात, उन पर **GRT signal** करते हैं) ताकि **curation shares** कमा सकें। ये **curation shares** उस **Subgraph** द्वारा उत्पन्न सभी भविष्य की **query fees** का एक निश्चित प्रतिशत सुनिश्चित करते हैं। हालाँकि कोई भी स्वतंत्र नेटवर्क प्रतिभागी **Curator** बन सकता है, आमतौर पर **Subgraph developers** अपने स्वयं के **Subgraphs** के पहले **Curators** होते हैं, क्योंकि वे सुनिश्चित करना चाहते हैं कि उनका **Subgraph indexed** हो।
**Subgraph developers** को सलाह दी जाती है कि वे अपने **Subgraph** को कम से कम **3,000 GRT** के साथ **curate** करें। हालांकि, यह संख्या **network activity** और **community participation** के अनुसार बदल सकती है।
@@ -54,11 +54,11 @@ The Graph ब्लॉकचेन डेटा को अधिक सुलभ
## डेवलपर्स
-**Developers** **Subgraphs** बनाते हैं और उन्हें **query** करके **blockchain data** प्राप्त करते हैं। चूंकि **Subgraphs** **open source** होते हैं, **developers** मौजूदा **Subgraphs** को **query** करके अपने **dapps** में **blockchain data** लोड कर सकते हैं। **Developers** द्वारा किए गए **queries** के लिए **GRT** में भुगतान किया जाता है, जो नेटवर्क प्रतिभागियों के बीच वितरित किया जाता है।
+**Developers** **Subgraphs** बनाते हैं और उन्हें **query** करके **blockchain data** प्राप्त करते हैं। चूंकि **Subgraphs** **open source** होते हैं, **developers** मौजूदा **Subgraphs** को **query** करके अपने **dapps** में **blockchain data** लोड कर सकते हैं। **Developers** द्वारा किए गए **queries** के लिए **GRT** में भुगतान किया जाता है, जो नेटवर्क प्रतिभागियों के बीच वितरित किया जाता है।
### Creating a Subgraph
-**Developers** **[Subgraph create](/developing/creating-a-subgraph/)** करके **blockchain** पर डेटा **index** कर सकते हैं। **Subgraphs** यह निर्देश देते हैं कि **Indexers** को कौन सा डेटा **consumers** को उपलब्ध कराना चाहिए।
+**Developers** **[Subgraph create](/developing/creating-a-subgraph/)** करके **blockchain** पर डेटा **index** कर सकते हैं। **Subgraphs** यह निर्देश देते हैं कि **Indexers** को कौन सा डेटा **consumers** को उपलब्ध कराना चाहिए।
जब **developers** अपना **Subgraph** बना और टेस्ट कर लेते हैं, तो वे इसे **The Graph** के **decentralized network** पर **[publish](/subgraphs/developing/publishing/publishing-a-subgraph/)** कर सकते हैं।
@@ -92,9 +92,8 @@ Indexers दो तरीकों से GRT रिवार्ड्स कम
**प्रारंभिक टोकन आपूर्ति** 10 बिलियन **GRT** है, और **Indexers** को **Subgraphs** पर **stake allocate** करने के लिए प्रति वर्ष **3%** नई **GRT issuance** का लक्ष्य रखा गया है। इसका मतलब है कि हर साल **Indexers** के योगदान के लिए नए टोकन जारी किए जाएंगे, जिससे कुल **GRT आपूर्ति** 3% बढ़ेगी।
-The Graph में नए टोकन **issuance** को संतुलित करने के लिए कई **burning mechanisms** शामिल किए गए हैं। सालाना लगभग **1% GRT supply** विभिन्न नेटवर्क गतिविधियों के माध्यम से **burn** हो जाती है, और यह संख्या नेटवर्क की वृद्धि के साथ बढ़ रही है। ये **burning mechanisms** शामिल हैं: - **0.5% Delegation Tax**: जब कोई **Delegator** किसी **Indexer** को **GRT** डेलीगेट करता है।
-
-- **1% Curation Tax**: जब **Curators** किसी **Subgraph** पर **GRT signal** करते हैं।
+The Graph में नए टोकन **issuance** को संतुलित करने के लिए कई **burning mechanisms** शामिल किए गए हैं। सालाना लगभग **1% GRT supply** विभिन्न नेटवर्क गतिविधियों के माध्यम से **burn** हो जाती है, और यह संख्या नेटवर्क की वृद्धि के साथ बढ़ रही है। इन **burning mechanisms** में शामिल हैं:
+
+- **0.5% Delegation Tax**: जब कोई **Delegator** किसी **Indexer** को **GRT** डेलीगेट करता है।
+- **1% Curation Tax**: जब **Curators** किसी **Subgraph** पर **GRT signal** करते हैं।
- **1% Query Fees Burn**: जब **ब्लॉकचेन डेटा** के लिए **queries** की जाती हैं।
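The net effect of issuance and burning can be sketched with back-of-the-envelope arithmetic (a hypothetical illustration assuming the ~3% issuance and ~1% burn rates described above; actual rates vary with network activity):

```typescript
// Hypothetical sketch: net annual GRT supply change, assuming the ~3%
// issuance and ~1% burn rates described above. Not protocol code.
const ISSUANCE_RATE = 0.03; // new GRT issued per year
const BURN_RATE = 0.01; // approximate GRT burned per year

function supplyAfterOneYear(supply: number): number {
  // Net growth is issuance minus burn, applied to the current supply.
  return supply * (1 + ISSUANCE_RATE - BURN_RATE);
}
```

Starting from the 10 billion GRT initial supply, this sketch yields roughly 2% net annual growth.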

@@ -103,4 +102,4 @@ The Graph में नए टोकन **issuance** को संतुलि
## प्रोटोकॉल में सुधार करना
-The Graph Network निरंतर विकसित हो रहा है और प्रोटोकॉल की आर्थिक संरचना में सुधार किए जा रहे हैं ताकि सभी नेटवर्क प्रतिभागियों को सर्वोत्तम अनुभव मिल सके। The Graph Council प्रोटोकॉल परिवर्तनों की निगरानी करता है और समुदाय के सदस्यों को भाग लेने के लिए प्रोत्साहित किया जाता है। प्रोटोकॉल सुधारों में शामिल हों [The Graph Forum] (https://forum.thegraph.com/) में।
+The Graph Network निरंतर विकसित हो रहा है और प्रोटोकॉल की आर्थिक संरचना में सुधार किए जा रहे हैं ताकि सभी नेटवर्क प्रतिभागियों को सर्वोत्तम अनुभव मिल सके। The Graph Council प्रोटोकॉल परिवर्तनों की निगरानी करता है और समुदाय के सदस्यों को भाग लेने के लिए प्रोत्साहित किया जाता है। प्रोटोकॉल सुधारों में शामिल होने के लिए [The Graph Forum](https://forum.thegraph.com/) पर जाएँ।
diff --git a/website/src/pages/hi/subgraphs/_meta-titles.json b/website/src/pages/hi/subgraphs/_meta-titles.json
index 87cd473806ba..2e0336761d36 100644
--- a/website/src/pages/hi/subgraphs/_meta-titles.json
+++ b/website/src/pages/hi/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "queries",
"developing": "विकसित करना",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
index 22a80fd744e2..f6540dd317c2 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
@@ -8,11 +8,11 @@ title: उन्नत Subgraph विशेषताएँ
`specVersion` `0.0.4` से शुरू होकर, सबग्राफ सुविधाओं को स्पष्ट रूप से `विशेषता` अनुभाग में शीर्ष स्तर पर घोषित किया जाना चाहिए, जो उनके `camelCase` नाम का उपयोग करके किया जाता है, जैसा कि नीचे दी गई तालिका में सूचीबद्ध है:
-| विशेषता | नाम |
-| ------------------------------------------------- | -------------------- |
-| [गैर-घातक त्रुटियाँ](#non-fatal-errors) | `गैर-घातक त्रुटियाँ` |
-| [पूर्ण-पाठ खोज](#defining-fulltext-search-fields) | `पूर्ण-पाठ खोज` |
-| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
+| विशेषता | नाम |
+| ------------------------------------------------------ | -------------------- |
+| [गैर-घातक त्रुटियाँ](#non-fatal-errors) | `गैर-घातक त्रुटियाँ` |
+| [पूर्ण-पाठ खोज](#defining-fulltext-search-fields) | `पूर्ण-पाठ खोज` |
+| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
instance के लिए, यदि कोई सबग्राफ **Full-Text Search** और **Non-fatal Errors** सुविधाओं का उपयोग करता है, तो मैनिफेस्ट में `विशेषता` फ़ील्ड इस प्रकार होनी चाहिए:
@@ -97,7 +97,7 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
## गैर-घातक त्रुटियाँ
-indexing त्रुटियाँ, जो पहले से सिंक हो चुके सबग्राफ पर होती हैं, डिफ़ॉल्ट रूप से सबग्राफ को विफल कर देंगी और सिंकिंग रोक देंगी। वैकल्पिक रूप से, सबग्राफ को इस तरह कॉन्फ़िगर किया जा सकता है कि वे त्रुटियों की उपस्थिति में भी सिंकिंग जारी रखें, उन परिवर्तनों को अनदेखा करके जो उस handler द्वारा किए गए थे जिसने त्रुटि उत्पन्न की। यह सबग्राफ लेखकों को अपने सबग्राफ को सही करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध क्वेरीज़ दी जाती रहती हैं, हालांकि परिणाम उस बग के कारण असंगत हो सकते हैं जिसने त्रुटि उत्पन्न की थी। ध्यान दें कि कुछ त्रुटियाँ फिर भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि को निर्धारक (deterministic) रूप से ज्ञात होना चाहिए।
+indexing त्रुटियाँ, जो पहले से सिंक हो चुके सबग्राफ पर होती हैं, डिफ़ॉल्ट रूप से सबग्राफ को विफल कर देंगी और सिंकिंग रोक देंगी। वैकल्पिक रूप से, सबग्राफ को इस तरह कॉन्फ़िगर किया जा सकता है कि वे त्रुटियों की उपस्थिति में भी सिंकिंग जारी रखें, उन परिवर्तनों को अनदेखा करके जो उस handler द्वारा किए गए थे जिसने त्रुटि उत्पन्न की। यह सबग्राफ लेखकों को अपने सबग्राफ को सही करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध क्वेरीज़ दी जाती रहती हैं, हालांकि परिणाम उस बग के कारण असंगत हो सकते हैं जिसने त्रुटि उत्पन्न की थी। ध्यान दें कि कुछ त्रुटियाँ फिर भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि को निर्धारक (deterministic) रूप से ज्ञात होना चाहिए।
> **नोट:**ग्राफ नेटवर्क अभी तक गैर-घातक त्रुटियों का समर्थन नहीं करता है, और डेवलपर्स को स्टूडियो के माध्यम से उस कार्यक्षमता का उपयोग करके सबग्राफ को नेटवर्क पर परिनियोजित नहीं करना चाहिए।
@@ -343,7 +343,8 @@ export function handleTransfer(event: TransferEvent): void {
आप [ DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) का उपयोग कर सकते हैं जब आप File Data साधन बना रहे हों ताकि अतिरिक्त जानकारी पास की जा सके जो File Data साधन handler में उपलब्ध होगी।
-यदि आपके पास ऐसी entities हैं जो कई बार रिफ्रेश होती हैं, तो IPFS हैश और entity ID का उपयोग करके unique file-based entities बनाएं, और उन्हें chain-based entity में एक derived field का उपयोग करके संदर्भित करें। entities
+यदि आपके पास ऐसी entities हैं जो कई बार रिफ्रेश होती हैं, तो IPFS हैश और entity ID का उपयोग करके unique file-based entities बनाएं, और उन्हें chain-based entity में एक derived field का उपयोग करके संदर्भित करें।
> हम ऊपर दिए गए सुझाव को बेहतर बनाने के लिए काम कर रहे हैं, इसलिए क्वेरी केवल "नवीनतम" संस्करण लौटाती हैं
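The unique-ID recommendation above can be sketched in plain TypeScript (a hypothetical helper for illustration; in a real mapping the IPFS hash and entity ID would come from the data source context):

```typescript
// Hypothetical sketch: deriving a unique file-based entity ID from the
// IPFS hash and the chain-based entity's ID, as recommended above.
function fileEntityId(ipfsHash: string, entityId: string): string {
  return `${ipfsHash}-${entityId}`;
}
```

Because each refresh points at a new IPFS hash, every version gets its own entity, and the chain-based entity can reference them via a derived field.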
@@ -355,7 +356,7 @@ export function handleTransfer(event: TransferEvent): void {
#### उदाहरण
-[Crypto Coven सबग्राफ migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
+[Crypto Coven सबग्राफ migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
#### संदर्भ
@@ -529,13 +530,14 @@ calls:
```yaml
calls:
- ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+
```
### मौजूदा सबग्राफ पर ग्राफ्टिंग
> **नोट**: प्रारंभिक रूप से The Graph Network में अपग्रेड करते समय graft का उपयोग करने की अनुशंसा नहीं की जाती है। अधिक जानें [यहाँ](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network)।
-जब कोई सबग्राफ पहली बार डिप्लॉय किया जाता है, तो यह संबंधित चेन के जेनेसिस ब्लॉक (या प्रत्येक डेटा स्रोत के साथ परिभाषित `startBlock`) से इवेंट्स को indexing करना शुरू करता है। कुछ परिस्थितियों में, मौजूदा सबग्राफ से डेटा को पुन: उपयोग करना और किसी बाद के ब्लॉक से इंडेक्सिंग शुरू करना फायदेमंद होता है। इस indexing मोड को _Grafting_ कहा जाता है। उदाहरण के लिए, विकास के दौरान, यह मैपिंग में छोटे एरर्स को जल्दी से पार करने या किसी मौजूदा सबग्राफ को फिर से चालू करने के लिए उपयोगी होता है, यदि वह फेल हो गया हो।
+जब कोई सबग्राफ पहली बार डिप्लॉय किया जाता है, तो यह संबंधित चेन के जेनेसिस ब्लॉक (या प्रत्येक डेटा स्रोत के साथ परिभाषित `startBlock`) से इवेंट्स को indexing करना शुरू करता है। कुछ परिस्थितियों में, मौजूदा सबग्राफ से डेटा को पुन: उपयोग करना और किसी बाद के ब्लॉक से इंडेक्सिंग शुरू करना फायदेमंद होता है। इस indexing मोड को _Grafting_ कहा जाता है। उदाहरण के लिए, विकास के दौरान, यह मैपिंग में छोटे एरर्स को जल्दी से पार करने या किसी मौजूदा सबग्राफ को फिर से चालू करने के लिए उपयोगी होता है, यदि वह फेल हो गया हो।
एक सबग्राफ को एक बेस सबग्राफ पर graft किया जाता है जब `subgraph.yaml` में सबग्राफ manifest में शीर्ष स्तर पर एक `graft` ब्लॉक होता है।
@@ -546,7 +548,7 @@ graft:
block: 7345624 # Block number
```
-जब कोई सबग्राफ , जिसकी मैनिफेस्ट में `graft` ब्लॉक शामिल होता है, डिप्लॉय किया जाता है, तो ग्राफ-नोड दिए गए `block` तक base सबग्राफ के डेटा को कॉपी करेगा और फिर उस ब्लॉक से नए सबग्राफ को इंडेक्स करना जारी रखेगा। base सबग्राफ को लक्षित ग्राफ-नोड इंस्टेंस पर मौजूद होना चाहिए और कम से कम दिए गए ब्लॉक तक इंडेक्स किया जाना चाहिए। इस प्रतिबंध के कारण, ग्राफ्टिंग का उपयोग केवल डेवलपमेंट के दौरान या किसी आपात स्थिति में एक समान गैर-ग्राफ्टेड सबग्राफ को जल्दी से तैयार करने के लिए किया जाना चाहिए।
+जब कोई सबग्राफ , जिसकी मैनिफेस्ट में `graft` ब्लॉक शामिल होता है, डिप्लॉय किया जाता है, तो ग्राफ-नोड दिए गए `block` तक base सबग्राफ के डेटा को कॉपी करेगा और फिर उस ब्लॉक से नए सबग्राफ को इंडेक्स करना जारी रखेगा। base सबग्राफ को लक्षित ग्राफ-नोड इंस्टेंस पर मौजूद होना चाहिए और कम से कम दिए गए ब्लॉक तक इंडेक्स किया जाना चाहिए। इस प्रतिबंध के कारण, ग्राफ्टिंग का उपयोग केवल डेवलपमेंट के दौरान या किसी आपात स्थिति में एक समान गैर-ग्राफ्टेड सबग्राफ को जल्दी से तैयार करने के लिए किया जाना चाहिए।
ग्राफ्टिंग मूल डेटा के बजाय प्रतिलिपियाँ बनाता है, इसलिए यह शुरू से इंडेक्सिंग करने की तुलना में सबग्राफ को वांछित ब्लॉक तक पहुँचाने में कहीं अधिक तेज़ होता है, हालाँकि बहुत बड़े सबग्राफ के लिए प्रारंभिक डेटा कॉपी करने में अभी भी कई घंटे लग सकते हैं। जब तक ग्राफ्ट किया गया सबग्राफ प्रारंभिक रूप से स्थापित हो रहा होता है, तब तक The ग्राफ नोड उन entity प्रकारों के बारे में जानकारी लॉग करेगा जिन्हें पहले ही कॉपी किया जा चुका है।
diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx
index 1bed291fc89f..e1cb224c81ce 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ Learn what built-in APIs can be used when writing Subgraph mappings. There are t
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Version | Release notes |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
### Built-in Types
@@ -595,11 +595,7 @@ The `log` API includes the following functions:
The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
```typescript
-log.info('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [
- value.toString(),
- OtherValue.toString(),
- 'पहले से ही एक स्ट्रिंग',
-])
+log.info('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [value.toString(), OtherValue.toString(), 'पहले से ही एक स्ट्रिंग'])
```
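The placeholder substitution described above can be sketched in plain TypeScript (a hypothetical re-implementation for illustration only, not the actual host function):

```typescript
// Hypothetical sketch of how the `log` format string is expanded:
// each `{}` is replaced, left to right, by the next value in the array.
function formatLog(fmt: string, values: string[]): string {
  let i = 0;
  // Leave any extra placeholders untouched if the array runs out.
  return fmt.replace(/\{\}/g, () => (i < values.length ? values[i++] : "{}"));
}
```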
#### एक या अधिक मान लॉग करना
diff --git a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx
index 180a343470b1..4931e6b1fd34 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Version | Release notes |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| Version | Release notes |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx
index 64ec49930c33..f1f1aacab6ff 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -35,7 +35,7 @@ yarn add --dev matchstick-as
brew install postgresql
```
-यहां तक कि नवीनतम libpq.5.lib\_ का एक symlink बनाएं। आपको पहले यह dir बनाने की आवश्यकता हो सकती है: `/usr/local/opt/postgresql/lib/`
+इसके बाद, नवीनतम libpq.5.lib का एक symlink बनाएं। आपको पहले यह dir बनाने की आवश्यकता हो सकती है: `/usr/local/opt/postgresql/lib/`
```sh
ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib
diff --git a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx
index d10ef9160dc6..6a4efc49ef2e 100644
--- a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx
@@ -213,7 +213,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx
index eab335f08623..e4a38c427323 100644
--- a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -25,7 +25,7 @@ In Subgraph Studio ,आप निम
Deploy करने से पहले, आपको The Graph CLI इंस्टॉल करना होगा।
-आपको The Graph CLI का उपयोग करने के लिए Node.js(https://nodejs.org/) और आपकी पसंद का पैकेज मैनेजर (npm, yarn या pnpm) स्थापित होना चाहिए। सबसे हालिया (https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI संस्करण की जांच करें।
+The Graph CLI का उपयोग करने के लिए आपके पास [Node.js](https://nodejs.org/) और आपकी पसंद का पैकेज मैनेजर (npm, yarn या pnpm) स्थापित होना चाहिए। सबसे हालिया [CLI संस्करण](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) की जांच करें।
### इंस्टॉल करें 'yarn' के साथ
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ graph deploy
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## अपने Subgraph को प्रकाशित करें
diff --git a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 4de4472caf4c..ef1a8060175f 100644
--- a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/hi/subgraphs/explorer.mdx b/website/src/pages/hi/subgraphs/explorer.mdx
index 64a671781463..dd17553bf415 100644
--- a/website/src/pages/hi/subgraphs/explorer.mdx
+++ b/website/src/pages/hi/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) and take full advantage of its core features.
## Overview
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also follow the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## आवश्यक शर्तें
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- दूसरों द्वारा प्रकाशित subgraphs
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- प्लेग्राउंड में परीक्षण प्रश्न करें और सूचनापूर्ण निर्णय लेने के लिए नेटवर्क विवरण का उपयोग करें।
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, specific categories (such as DeFi, NFTs, and DAOs), and **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- प्लेग्राउंड में परीक्षण प्रश्न करें और सूचनापूर्ण निर्णय लेने के लिए नेटवर्क विवरण का उपयोग करें।
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- चार्ट, वर्तमान परिनियोजन आईडी और अन्य मेटाडेटा जैसे अधिक विवरण देखें
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- सबग्राफ आँकड़े (आवंटन, क्यूरेटर, आदि)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
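Once you have a Subgraph's query URL from its page, querying it from an app can be sketched as follows (the endpoint, API key, and query below are placeholders, not real values):

```typescript
// Hypothetical sketch of querying a Subgraph's GraphQL endpoint over HTTP.
// Builds the standard GraphQL-over-HTTP POST body.
function buildGraphQLBody(query: string, variables: Record<string, unknown> = {}): string {
  return JSON.stringify({ query, variables });
}

// The endpoint URL here would be the query URL shown on the Subgraph's page.
async function querySubgraph(endpoint: string, query: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildGraphQLBody(query),
  });
  return (await res.json()).data;
}
```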
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. इंडेक्सर्स
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**विशिष्टताएँ**
-- क्वेरी फ़ी कट - वह % जो क्वेरी फ़ी रिबेट्स का हिस्सा है जो Indexer, Delegators के साथ बाँटते समय रखता है।
-- प्रभावी पुरस्कार कट - वह इंडेक्सिंग पुरस्कार कट जो डेलीगेशन पूल पर लागू होता है। यदि यह नकारात्मक है, तो इसका मतलब है कि इंडेक्सर अपने पुरस्कारों का एक हिस्सा दे रहा है। यदि यह सकारात्मक है, तो इसका मतलब है कि Indexer अपने कुछ पुरस्कार रख रहा है।
-- कूलडाउन शेष - वह समय जो उपरोक्त डेलीगेशन पैरामीटर को बदलने के लिए Indexer को बचा है। कूलडाउन अवधि वे होती हैं जो Indexers अपने डेलीगेशन पैरामीटर को अपडेट करते समय सेट करते हैं।
-- यह है Indexer का जमा किया गया हिस्सेदारी, जिसे दुष्ट या गलत व्यवहार के लिए काटा जा सकता है।
-- प्रतिनिधि - 'Delegators' से स्टेक जो 'Indexers' द्वारा आवंटित किया जा सकता है, लेकिन इसे स्लैश नहीं किया जा सकता।
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- अवेलबल डेलीगेशन कैपेसिटी - वह मात्रा जो डेलीगेटेड स्टेक है, जो Indexers अभी भी प्राप्त कर सकते हैं इससे पहले कि वे ओवर-डेलीगेटेड हो जाएं।
-- अधिकतम प्रत्यायोजन क्षमता - प्रत्यायोजित हिस्सेदारी की अधिकतम राशि जिसे इंडेक्सर उत्पादक रूप से स्वीकार कर सकता है। आवंटन या पुरस्कार गणना के लिए एक अतिरिक्त प्रत्यायोजित हिस्सेदारी का उपयोग नहीं किया जा सकता है।
-- क्वेरी शुल्क - यह कुल शुल्क है जो अंतिम उपयोगकर्ताओं ने सभी समय में एक Indexer से क्वेरी के लिए भुगतान किया है।
-- इंडेक्सर रिवार्ड्स - यह इंडेक्सर और उनके प्रतिनिधियों द्वारा हर समय अर्जित किए गए कुल इंडेक्सर पुरस्कार हैं। इंडेक्सर पुरस्कार का भुगतान जीआरटी जारी करने के माध्यम से किया जाता है।
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
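The reward-cut parameters above determine how rewards are split between an Indexer and its Delegators. A minimal sketch, assuming the cut is expressed as the fraction the Indexer keeps (a simplified illustration, not the protocol's exact accounting):

```typescript
// Hypothetical sketch: splitting total indexing rewards using the
// reward-cut parameter (fraction of rewards the Indexer keeps).
function splitRewards(
  totalRewards: number,
  rewardCut: number,
): { indexer: number; delegators: number } {
  const indexer = totalRewards * rewardCut;
  return { indexer, delegators: totalRewards - indexer };
}
```

For example, with a 10% reward cut, 100 GRT in rewards would leave 90 GRT for the delegation pool under this sketch.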
Indexers दोनों क्वेरी फीस और इंडेक्सिंग पुरस्कार कमा सकते हैं। कार्यात्मक रूप से, यह तब होता है जब नेटवर्क प्रतिभागी GRT को एक Indexer को सौंपते हैं। इससे Indexers को उनके Indexer पैरामीटर के आधार पर क्वेरी फीस और पुरस्कार प्राप्त होते हैं।
@@ -86,9 +106,9 @@ Indexers दोनों क्वेरी फीस और इंडेक्
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. क्यूरेटर
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
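Signaling on a bonding curve means earlier Curators mint shares more cheaply than later ones, which is what rewards spotting a high-quality Subgraph early. As a purely illustrative sketch (a generic Bancor-style curve, not the protocol's actual pricing):

```python
def shares_for_deposit(reserve, total_shares, deposit, reserve_ratio=0.5):
    """Shares minted for a GRT deposit on a Bancor-style bonding curve.

    With reserve_ratio < 1, each additional GRT mints fewer shares than the
    last, so earlier Curators get more shares per GRT than later ones.
    """
    if total_shares == 0:
        return deposit ** reserve_ratio  # bootstrap the curve
    return total_shares * ((1.0 + deposit / reserve) ** reserve_ratio - 1.0)

# Depositing 300 GRT into a pool with 100 GRT reserve and 10 shares mints 10 shares;
# the same deposit made later (400 GRT reserve, 20 shares) mints fewer.
print(shares_for_deposit(100.0, 10.0, 300.0))  # → 10.0
print(shares_for_deposit(400.0, 20.0, 300.0) < 10.0)  # → True
```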
@@ -102,11 +122,11 @@ Curators analyze Subgraphs to identify which Subgraphs are of the highest qualit
- The amount of GRT deposited
- The number of shares a Curator owns
-
+
If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegators
+#### Delegator
Delegators play an important role in maintaining the security and decentralization of The Graph Network. They participate in the network by 'delegating' (i.e., 'staking') GRT tokens to one or more Indexers.
@@ -114,7 +134,7 @@ The Graph Network की सुरक्षा और विकेंद्र
- Delegators select Indexers based on a number of factors, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table, you can see the active Delegators in the community, as well as important metrics:
@@ -127,9 +147,9 @@ Delegators तालिका में आप समुदाय में स
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Overview
@@ -147,7 +167,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
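The mint-on-close accounting described above can be sketched in a few lines of Python. This is a simplification (it assumes each allocation accrues a fixed amount per epoch, ignoring how the protocol apportions total issuance), but it shows why per-epoch minted rewards vary even though issuance is fixed:

```python
def minted_per_epoch(accrual_per_epoch, close_epochs, num_epochs):
    """Rewards accrue while an allocation is open but are only minted at close.

    close_epochs lists the epoch at which each allocation (opened at epoch 0)
    is closed; the return value is the amount minted in each epoch.
    """
    minted = [0.0] * num_epochs
    for close in close_epochs:
        # Everything accrued since the allocation opened is minted at close.
        minted[close] += accrual_per_epoch * close
    return minted

# Three allocations opened at epoch 0, closed at epochs 1, 3, and 3:
print(minted_per_epoch(100.0, [1, 3, 3], 4))  # → [0.0, 100.0, 0.0, 600.0]
```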
-
+
#### Epochs
@@ -161,69 +181,77 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep
- Distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
- Finalized epochs are the epochs in which there are no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Your User Profile
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, no matter your role on the network. Your crypto wallet will act as your user profile, and with the user dashboard, you will be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Profile Overview
+#### Profile Overview
In this section, you can see the following:
-- Any current actions you've performed.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Subgraphs Tab
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Indexing Tab
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-This section will also include details about your net Indexer rewards and net query fees. You'll see the following metrics:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed
-- Total Query Fees - the total fees that users have paid for queries served by you over time
-- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT
-- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators
-- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators
-- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Delegators Tab
+
-Delegators are important to The Graph Network. They should use their knowledge to select an Indexer that will provide a healthy return on rewards.
+#### Delegators Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-In the first part of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-The Delegator metrics you'll see in this tab include:
+Top Section:
-- Total delegation rewards
-- Total unrealized rewards
-- Total realized rewards
+- View delegation and rewards-only charts
+- Track key metrics:
+  - Total delegation rewards
+ - Unrealized rewards
+ - Realized Rewards
-In the second part of the page, you can find the Delegators table. Here you can see the Indexers you've delegated towards, as well as their details (such as reward cuts, cooldowns, etc).
+Bottom Section:
-With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
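The delegation lifecycle shown in that status column (delegating → undelegating → withdrawable) can be sketched as a tiny state machine. The 28-epoch thawing period used here is only an illustrative default; check the current protocol parameters for the real value:

```python
from enum import Enum

class DelegationStatus(Enum):
    DELEGATING = "delegating"
    UNDELEGATING = "undelegating"   # tokens thawing, not yet withdrawable
    WITHDRAWABLE = "withdrawable"

def delegation_status(undelegated_at_epoch, current_epoch, thawing_epochs=28):
    """undelegated_at_epoch is None while the stake is still delegated."""
    if undelegated_at_epoch is None:
        return DelegationStatus.DELEGATING
    if current_epoch - undelegated_at_epoch < thawing_epochs:
        return DelegationStatus.UNDELEGATING
    return DelegationStatus.WITHDRAWABLE

print(delegation_status(0, 10).value)  # → undelegating
print(delegation_status(0, 30).value)  # → withdrawable
```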
-
+
-### Curating Tab
+#### Curating Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.
Within this tab, you'll find an overview of:
@@ -232,22 +260,22 @@ In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus
- Query rewards per Subgraph
- Date details were last updated
-
+
-### Your Profile Settings
+#### Your Profile Settings
Within your user profile, you will be able to manage your personal profile details (like setting up an ENS name). If you're an Indexer, you have even more access to settings at your fingertips. In your user profile, you will be able to set up your delegation parameters and operators.
- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set.
- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Additional Resources
+### Additional Resources
### Video Guides
diff --git a/website/src/pages/hi/subgraphs/fair-use-policy.mdx b/website/src/pages/hi/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..8a27a7ea2887
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
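As an illustration of the pruning suggestion above, a Subgraph manifest can opt into automatic pruning via `indexerHints` (a manifest feature of specVersion 1.0.0 and later; the surrounding fields of your own manifest will differ):

```yaml
# subgraph.yaml (excerpt) — retain only the recent history needed for queries
specVersion: 1.0.0
indexerHints:
  prune: auto # or `never`, or a number of blocks of history to retain
```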
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/hi/subgraphs/guides/enums.mdx b/website/src/pages/hi/subgraphs/guides/enums.mdx
index d44418fec528..b4883c3bce8b 100644
--- a/website/src/pages/hi/subgraphs/guides/enums.mdx
+++ b/website/src/pages/hi/subgraphs/guides/enums.mdx
@@ -68,7 +68,7 @@ Enums प्रकार सुरक्षा प्रदान करते
To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
```gql
-# Enum for the marketplaces that the CryptoCoven contract interacted with (likely a trade/mint)
+# Enum for the marketplaces that the CryptoCoven contract interacted with (likely a trade/mint)
enum Marketplace {
OpenSeaV1 # When a CryptoCoven NFT is traded on this marketplace
OpenSeaV2 # When a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
diff --git a/website/src/pages/hi/subgraphs/guides/near.mdx b/website/src/pages/hi/subgraphs/guides/near.mdx
index d8b019189a96..6d9fbb266667 100644
--- a/website/src/pages/hi/subgraphs/guides/near.mdx
+++ b/website/src/pages/hi/subgraphs/guides/near.mdx
@@ -71,7 +71,7 @@ dataSources:
```
- NEAR Subgraphs introduce a new kind of data source (`near`).
-- `network` should match a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`.
+- `network` should match a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`.
- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional prefixes and suffixes. At least one of prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is needed, the other field can be omitted.
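The prefix/suffix matching rule above can be sketched in Python. This is a reading of the documented `[app|good].*[morning.near|morning.testnet]` example (prefix AND suffix must both match when both lists are given), not graph-node's actual implementation:

```python
def matches_account(account, prefixes=None, suffixes=None):
    """Sketch of the documented account-matching rule for `source.accounts`.

    If only one of the two lists is given, the other condition is skipped,
    mirroring the docs' note that the unused field can be omitted.
    """
    prefix_ok = not prefixes or any(account.startswith(p) for p in prefixes)
    suffix_ok = not suffixes or any(account.endswith(s) for s in suffixes)
    return prefix_ok and suffix_ok

pre, suf = ["app", "good"], ["morning.near", "morning.testnet"]
print(matches_account("good-morning.near", pre, suf))  # → True
print(matches_account("evening.near", pre, suf))       # → False
```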
@@ -88,7 +88,8 @@ accounts:
NEAR data sources support two types of handlers:
- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
-- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
### Schema Definition
@@ -185,8 +186,8 @@ More information on सबग्राफ Studio पर सबग्राफ
Once your Subgraph has been created, you can deploy it using the `graph deploy` CLI command.
```sh
-$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx
index be71b8199574..3083a342ddc7 100644
--- a/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ Subgraph संयोजन एक शक्तिशाली विशेष
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs.**
+- Composed Subgraphs can only use **datasources from the same chain.**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
diff --git a/website/src/pages/hi/subgraphs/mcp/claude.mdx b/website/src/pages/hi/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..1c6a21c24d55
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get the aggregate query count over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
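A quick heuristic for routing an identifier to the right tool, based only on the shapes listed above (this is an illustrative helper, not part of the MCP server's API):

```python
def identifier_kind(identifier):
    """Classify a Graph identifier by its documented shape:
    deployment IDs are 0x-prefixed hex, IPFS manifest hashes start with Qm,
    and anything else is treated as a logical Subgraph ID."""
    if identifier.startswith("0x"):
        return "deployment-id"   # use execute_query_by_deployment_id
    if identifier.startswith("Qm"):
        return "ipfs-hash"       # use get_schema_by_ipfs_hash
    return "subgraph-id"         # use execute_query_by_subgraph_id

print(identifier_kind("QmTZ8ejXJxRo7vDBcNRPCrfFqW7VY4i6Ued6Ur9BFTuD8w"))  # → ipfs-hash
```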
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/hi/subgraphs/mcp/cline.mdx b/website/src/pages/hi/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..01bd398a6b89
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/hi/subgraphs/mcp/cursor.mdx b/website/src/pages/hi/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..de9f7ef82582
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
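As an illustration, the three identifier formats above differ enough by prefix that client code can tell them apart. The following is a rough heuristic sketch based on the example prefixes, not an official Graph API:

```javascript
// Rough heuristic: classify a Subgraph identifier by its prefix.
// Prefixes taken from the example identifiers above; this is an
// illustration, not an official classification API.
function identifierType(id) {
  if (id.startsWith('0x')) return 'Deployment ID' // specific, immutable deployment
  if (id.startsWith('Qm')) return 'IPFS Hash' // manifest of a deployment
  return 'Subgraph ID' // logical identifier
}

console.log(identifierType('0x4d7c')) // 'Deployment ID'
console.log(identifierType('QmTZ8e')) // 'IPFS Hash'
console.log(identifierType('5zvR82')) // 'Subgraph ID'
```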
diff --git a/website/src/pages/hi/subgraphs/querying/best-practices.mdx b/website/src/pages/hi/subgraphs/querying/best-practices.mdx
index aa4caefd2f3d..aff2142766ec 100644
--- a/website/src/pages/hi/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/hi/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Querying Best Practices
---
-The Graph ब्लॉकचेन से डेटा क्वेरी करने का एक विकेन्द्रीकृत तरीका प्रदान करता है। इसका डेटा एक GraphQL API के माध्यम से एक्सपोज़ किया जाता है, जिससे इसे GraphQL भाषा के साथ क्वेरी करना आसान हो जाता है।
-
-GraphQL भाषा के आवश्यक नियम और Best Practices सीखें ताकि आप अपने Subgraph को optimize कर सकें।
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ GraphQL भाषा के आवश्यक नियम और Best Practice
### GraphQL Query की संरचना
-REST API के विपरीत, एक रेखांकन API एक स्कीमा पर बनाया गया है जो परिभाषित करता है कि कौन से प्रश्न किए जा सकते हैं।
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-उदाहरण के लिए, `token` क्वेरी का उपयोग करके एक टोकन प्राप्त करने के लिए की गई क्वेरी इस प्रकार होगी:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-जो निम्नलिखित पूर्वानुमानित JSON प्रतिक्रिया लौटाएगा (जब उचित `$id` variable value\_ पास किया जाएगा):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ query GetToken($id: ID!) {
}
```
-GraphQL क्वेरीज़ GraphQL भाषा का उपयोग करती हैं, जो कि [एक स्पेसिफिकेशन](https://spec.graphql.org/) पर परिभाषित है।
-
उपरोक्त `GetToken` क्वेरी कई भाषाओं के भागों से बनी है (नीचे `[...]` प्लेसहोल्डर के साथ प्रतिस्थापित):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## GraphQL क्वेरी लिखने के नियम
+### GraphQL क्वेरी लिखने के नियम
-- प्रत्येक `queryName` को प्रत्येक ऑपरेशन में केवल एक बार ही उपयोग किया जाना चाहिए।
-- प्रत्येक `field` का चयन में केवल एक बार ही उपयोग किया जा सकता है (हम `token` के अंतर्गत id को दो बार क्वेरी नहीं कर सकते)।
-- कुछ field या क्वेरी (जैसे tokens) जटिल प्रकार के परिणाम लौटाते हैं, जिनके लिए उप-फ़ील्ड का चयन आवश्यक होता है। जब अपेक्षित हो तब चयन न देना (या जब अपेक्षित न हो - उदाहरण के लिए, id पर चयन देना) एक त्रुटि उत्पन्न करेगा। किसी फ़ील्ड के प्रकार को जानने के लिए, कृपया [Graph Explorer](/subgraphs/explorer/) देखें।
-- किसी तर्क को असाइन किया गया कोई भी चर उसके प्रकार से मेल खाना चाहिए।
-- चरों की दी गई सूची में, उनमें से प्रत्येक अद्वितीय होना चाहिए।
-- सभी परिभाषित चर का उपयोग किया जाना चाहिए।
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> ध्यान दें: इन नियमों का पालन न करने पर The Graph API से त्रुटि होगी।
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some fields or queries (like `tokens`) return complex types which require a selection of sub-fields. Not providing a selection when expected, or providing one when not expected (for example, on `id`), will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-पूरी नियमों की सूची और कोड उदाहरणों के लिए GraphQL Validations guide देखें: (https://thegraph.com/resources/migration-guides/graphql-validations-migration-guide/)
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
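To make the rules concrete, here is an annotated query that breaks several of them at once and would therefore be rejected by The Graph API (the entity and field names reuse the `GetToken` example above):

```graphql
query GetToken($id: ID!, $unused: Int) {
  token(id: $id) {
    id
    id # rule 2 violated: `id` is selected twice
    owner # `owner` is a scalar here, so no sub-selection is needed
  }
  # rule 5 violated: `$unused` is declared but never used
}
```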
-### एक ग्राफ़क्यूएल एपीआई के लिए एक प्रश्न भेजना
+### How to Send a Query to a GraphQL API
-GraphQL एक भाषा और प्रथाओं का सेट है जो HTTP के माध्यम से संचालित होता है।
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-इसका मतलब है कि आप एक GraphQL API को मानक `fetch` (स्थानीय रूप से या `@whatwg-node/fetch` या `isomorphic-fetch` के माध्यम से) का उपयोग करके क्वेरी कर सकते हैं।
-
-हालांकि, जैसा कि ["Querying from an Application"](/subgraphs/querying/from-an-application/) में उल्लेख किया गया है, यह अनुशंसित है कि `graph-client` का उपयोग किया जाए, जो निम्नलिखित अद्वितीय विशेषताओं का समर्थन करता है:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना
- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
-- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
+- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- पूरी तरह से टाइप किया गया परिणाम
-The Graph के साथ `graph-client` का उपयोग करके क्वेरी करने का तरीका:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL क्लाइंट विकल्पों को ["Querying from an Application"](/subgraphs/querying/from-an-application/) में कवर किया गया है।
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### हमेशा स्टैटिक क्वेश्चन लिखें
+### 1. Always Write Static Queries
-एक सामान्य (खराब) प्रथा है कि क्वेरी स्ट्रिंग्स को निम्नलिखित तरीके से गतिशील रूप से बनाया जाए:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -122,14 +118,16 @@ query GetToken {
`
```
-जबकि उपरोक्त स्निपेट एक मान्य GraphQL क्वेरी उत्पन्न करता है, **इसमें कई कमियाँ हैं:**
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g. linters or type generation tools).
-- यह संपूर्ण क्वेरी को समझना **और कठिन बना देता है।**
-- डेवलपर्स स्ट्रिंग **इंटरपोलेशन को सुरक्षित रूप से सैनिटाइज़ करने के लिए जिम्मेदार होते हैं**
-- रिक्वेस्ट पैरामीटर्स के रूप में वेरिएबल्स के मान न भेजने से **सर्वर-साइड पर संभावित कैशिंग को रोका जा सकता है**
-- यह **टूल्स को क्वेरी का स्टैटिक रूप से विश्लेषण करने से रोकता है** (उदाहरण: Linter या टाइप जेनरेशन टूल्स)
+Instead, it's recommended to **always write queries as static strings**.
-इसी कारण, यह अनुशंसा की जाती है कि हमेशा क्वेरीज़ को स्थिर स्ट्रिंग्स के रूप में लिखा जाए।
+Example of a static query string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -151,18 +149,21 @@ const result = await execute(query, {
})
```
-ऐसा करने से **कई लाभ** होते हैं:
+Static strings have several **key advantages**:
-- **आसानी से पढ़ने और बनाए रखने योग्य** क्वेरीज़
-- GraphQL **सर्वर वेरिएबल्स की स्वच्छता को संभालता है**
-- **वेरिएबल्स को सर्वर-स्तर पर कैश** किया जा सकता है
-- **क्वेरीज़ को उपकरणों द्वारा स्थिर रूप से विश्लेषण किया जा सकता है** (अधिक जानकारी निम्नलिखित अनुभागों में) -
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
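As a minimal sketch of the pattern: the query text is defined once as a static string, and values travel only through the `variables` object of the request body. The entity and field names reuse the `GetToken` example above; how the body is sent (e.g. via `fetch`) is left out:

```javascript
// Minimal sketch: a static query with variables passed separately.
// The query text never changes, so tools can analyze it statically
// and the server can sanitize and cache based on the variables alone.
const GET_TOKEN = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// Builds the JSON body of a GraphQL HTTP request.
function buildRequestBody(id) {
  return JSON.stringify({ query: GET_TOKEN, variables: { id } })
}

const parsed = JSON.parse(buildRequestBody('0x123'))
console.log(parsed.variables.id) // '0x123'
```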
-### स्टेटिक क्वेरीज़ में फ़ील्ड्स को शर्तानुसार कैसे शामिल करें
+### 2. Include Fields Conditionally in Static Queries
-आप `owner` फ़ील्ड को केवल एक विशेष शर्त पर शामिल करना चाह सकते हैं।
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-आप इसके लिए `@include(if:...)` निर्देश का उपयोग कर सकते हैं जैसे कि निम्नलिखित:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -185,15 +186,11 @@ const result = await execute(query, {
})
```
-> नोट: विपरीत निर्देश `@skip(if: ...)` है।
-
-### आप जो चाहते हैं वह मांगें
-
-GraphQL अपने “Ask for what you want” टैगलाइन के लिए प्रसिद्ध हुआ।
+### 3. Ask Only For What You Want
-इस कारण, GraphQL में सभी उपलब्ध फ़ील्ड्स को बिना उन्हें व्यक्तिगत रूप से सूचीबद्ध किए प्राप्त करने का कोई तरीका नहीं है।
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- GraphQL APIs query करते समय, हमेशा वो fields की query करने की सोचें जो वास्तव में use होंगे।
+- When querying GraphQL APIs, always aim to query only the fields that will actually be used.
- सुनिश्चित करें कि क्वेरीज़ केवल उतने ही एंटिटीज़ लाएँ जितनी आपको वास्तव में आवश्यकता है। डिफ़ॉल्ट रूप से, क्वेरीज़ एक संग्रह में 100 एंटिटीज़ लाएँगी, जो आमतौर पर उपयोग में लाई जाने वाली मात्रा से अधिक होती है, जैसे कि उपयोगकर्ता को प्रदर्शित करने के लिए। यह न केवल एक क्वेरी में शीर्ष-स्तरीय संग्रहों पर लागू होता है, बल्कि एंटिटीज़ के नेस्टेड संग्रहों पर भी अधिक लागू होता है।
उदाहरण के लिए, निम्नलिखित क्वेरी में:
@@ -213,11 +210,12 @@ query listTokens {
प्रतिक्रिया में प्रत्येक 100 टोकनों के लिए 100 लेन-देन(transaction) हो सकते हैं।
-यदि application को केवल 10 लेन-देन(transaction) की आवश्यकता है, तो क्वेरी को लेनदेन फ़ील्ड पर स्पष्ट रूप से first: 10 सेट करना चाहिए।
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
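A sketch of the corrected query, capping the nested collection (the field names are assumed from the `listTokens` example above):

```graphql
query listTokens {
  tokens(first: 100) {
    id
    transactions(first: 10) {
      id
    }
  }
}
```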
-### एक ही क्वेरी का उपयोग करके कई रिकॉर्ड्स का अनुरोध करें
+### 4. Use a Single Query to Request Multiple Records
-डिफ़ॉल्ट रूप से, Subgraphs में एक record के लिए singular entity होती है। कई records प्राप्त करने के लिए, plural entities और filter का उपयोग करें: `where: {id_in:[X,Y,Z]}` या `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. To retrieve multiple records, use plural entities and a filter:
+`where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
अप्रभावी क्वेरी करने का उदाहरण:
@@ -247,7 +245,7 @@ query ManyRecords {
}
```
-### एकल अनुरोध में कई क्वेरियों को संयोजित करें।
+### 5. Combine Multiple Queries in a Single Request
आपका application निम्नलिखित प्रकार के डेटा को क्वेरी करने की आवश्यकता हो सकती है: -
@@ -279,9 +277,9 @@ const [tokens, counters] = Promise.all(
)
```
-जबकि यह कार्यान्वयन पूरी तरह से मान्य है, यह GraphQL API के साथ दो राउंड ट्रिप की आवश्यकता होगी।
+While this implementation is valid, it will require two round trips to the GraphQL API.
-सौभाग्य से, एक ही GraphQL अनुरोध में कई क्वेरी भेजना भी मान्य है, जैसा कि नीचे दिया गया है:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -302,9 +300,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-यह तरीका कुल मिलाकर प्रदर्शन में सुधार करेगा क्योंकि यह नेटवर्क पर बिताया गया समय कम करेगा (API के लिए एक राउंड ट्रिप बचाता है) और एक अधिक संक्षिप्त कार्यान्वयन प्रदान करेगा।
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### लीवरेज ग्राफक्यूएल फ़्रैगमेंट
+### 6. Leverage GraphQL Fragments
GraphQL क्वेरी लिखने में सहायक एक सुविधा है GraphQL Fragment।
@@ -333,7 +331,7 @@ query {
- बड़ी क्वेरीज़ पढ़ने में कठिन होती हैं।
- जब ऐसे टूल्स का उपयोग किया जाता है जो क्वेरी के आधार पर TypeScript टाइप्स उत्पन्न करते हैं (इस पर अंतिम अनुभाग में और अधिक), newDelegate और oldDelegate दो अलग-अलग इनलाइन इंटरफेस के रूप में परिणत होंगे।
-एक पुनर्गठित संस्करण का प्रश्न निम्नलिखित होगा:
+An optimized version of the query would be the following:
```graphql
query {
@@ -357,15 +355,18 @@ fragment DelegateItem on Transcoder {
}
```
-GraphQL में fragment का उपयोग पढ़ने की सुविधा बढ़ाएगा (विशेष रूप से बड़े स्तर पर) और बेहतर TypeScript प्रकारों की पीढ़ी का परिणाम देगा।
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
जब टाइप्स जेनरेशन टूल का उपयोग किया जाता है, तो उपरोक्त क्वेरी एक सही 'DelegateItemFragment' टाइप उत्पन्न करेगी (अंतिम "Tools" अनुभाग देखें)।
-### ग्राफकॉल फ्रैगमेंट क्या करें और क्या न करें
+## GraphQL Fragment Guidelines
-### Fragment base must be a type
+### Do's and Don'ts for Fragments
-एक फ़्रैगमेंट गैर-लागू प्रकार पर आधारित नहीं हो सकता, संक्षेप में, **ऐसे प्रकार पर जिसमें फ़ील्ड नहीं होते हैं।**
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example (invalid, since `BigInt` is a scalar):
```graphql
fragment MyFragment on BigInt {
@@ -373,11 +374,8 @@ fragment MyFragment on BigInt {
}
```
-BigInt एक **स्केलर** (मूल "plain" type) है जिसे किसी फ़्रैगमेंट के आधार के रूप में उपयोग नहीं किया जा सकता।
-
-#### How to spread a Fragment
-
-फ्रैगमेंट विशिष्ट प्रकारों पर परिभाषित किए जाते हैं और उन्हें क्वेरी में उपयुक्त रूप से उपयोग किया जाना चाहिए।
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
उदाहरण:
@@ -400,20 +398,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` और `oldDelegate` प्रकार के `Transcoder` हैं।
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-यहाँ `Vote` प्रकार के एक खंड को फैलाना संभव नहीं है।
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Fragment को data की एक atomic business unit के रूप में define करें।
+---
-GraphQL `Fragments` को उनके उपयोग के आधार पर परिभाषित किया जाना चाहिए।
+### How to Define `Fragment` as an Atomic Business Unit of Data
-अधिकांश उपयोग मामलों के लिए, एक प्रकार पर एक फ़्रैगमेंट परिभाषित करना (दोहराए गए फ़ील्ड उपयोग या प्रकार निर्माण के मामले में) पर्याप्त होता है।
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
-यहाँ एक सामान्य नियम है फ्रैगमेंट्स का उपयोग करने के लिए:
+Here is a rule of thumb for using fragments:
-- जब समान प्रकार के फ़ील्ड किसी क्वेरी में दोहराए जाते हैं, तो उन्हें` Fragment` में समूहित करें।
-- जब समान लेकिन भिन्न फ़ील्ड्स को दोहराया जाता है, तो कई फ़्रैगमेंट्स बनाएं, उदाहरण के लिए:
+- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -436,35 +437,45 @@ fragment VoteWithPoll on Vote {
---
-## मूलभूत उपकरण
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### ग्राफक्यूएल वेब-आधारित खोजकर्ता
+### Setting up Workflow and IDE Tools
-क्वेरीज़ को अपने application में चलाकर उनका पुनरावर्तन करना कठिन हो सकता है। इसी कारण, अपनी क्वेरीज़ को अपने application में जोड़ने से पहले उनका परीक्षण करने के लिए बिना किसी संकोच के [Graph Explorer](https://thegraph.com/explorer) का उपयोग करें। Graph Explorer आपको एक पूर्व-कॉन्फ़िगर किया हुआ GraphQL प्लेग्राउंड प्रदान करेगा, जहाँ आप अपनी क्वेरीज़ का परीक्षण कर सकते हैं।
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-यदि आप अपनी क्वेरीज़ को डिबग/परखने के लिए एक अधिक लचीला तरीका ढूंढ रहे हैं, तो अन्य समान वेब-आधारित टूल उपलब्ध हैं जैसे [Altair](https://altairgraphql.dev/) और [GraphiQL](https://graphiql-online.com/graphiql)
+#### GraphQL ESLint
-### ग्राफक्यूएल लाइनिंग
+1. Install GraphQL ESLint
-उपरोक्त सर्वोत्तम प्रथाओं और वाक्य रचना नियमों का पालन करने के लिए, निम्नलिखित वर्कफ़्लो और IDE टूल्स का उपयोग करना अत्यधिक अनुशंसित है।
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) आपकी बिना किसी अतिरिक्त प्रयास के GraphQL सर्वोत्तम प्रथाओं का पालन करने में मदद करेगा।
+This will enforce essential rules such as:
-["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) कॉन्फ़िगरेशन सेटअप करने से आवश्यक नियम लागू होंगे जैसे:-
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: क्या कोई फ़ील्ड सही प्रकार पर उपयोग की गई है?
-- `@graphql-eslint/no-unused variables`: क्या दिया गया चर अनुपयोगी रहना चाहिए?
-- और अधिक!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
-यह आपको बिना प्लेग्राउंड पर क्वेरी का परीक्षण किए या उन्हें प्रोडक्शन में चलाए बिना ही त्रुटियों को पकड़ने की अनुमति देगा!
+#### Use IDE plugins
-### आईडीई प्लगइन्स
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode और GraphQL**
+1. VS Code
-[GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) आपके विकास वर्कफ़्लो में एक बेहतरीन जोड़ है जिससे आपको यह प्राप्त होता है:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- सिंटैक्स हाइलाइटिंग
- ऑटो-कंप्लीट सुझाव
@@ -472,11 +483,11 @@ fragment VoteWithPoll on Vote {
- निबंध
- फ्रैगमेंट्स और इनपुट टाइप्स के लिए परिभाषा पर जाएं।
-यदि आप graphql-eslint का उपयोग कर रहे हैं, तो [ESLint VSCode एक्सटेंशन](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) आपके कोड में त्रुटियों और चेतावनियों को इनलाइन सही तरीके से देखने के लिए आवश्यक है।
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij और GraphQL**
+2. WebStorm/IntelliJ and GraphQL
-[JS GraphQL प्लगइन](https://plugins.jetbrains.com/plugin/8097-graphql/) आपके GraphQL के साथ काम करने के अनुभव को काफी बेहतर बनाएगा, जिससे आपको निम्नलिखित सुविधाएँ मिलेंगी:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- सिंटैक्स हाइलाइटिंग
- ऑटो-कंप्लीट सुझाव
diff --git a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx
index 32d14acb5375..4ebf7cf278db 100644
--- a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx
@@ -11,7 +11,8 @@ sidebarTitle: App से Query करना
### सबग्राफ Studio Endpoint
-अपने Subgraph को Subgraph Studio पर deploy करने के बाद, आपको एक endpoint मिलेगा जो इस प्रकार दिखेगा: (https://api.thegraph.com/subgraphs/name/YOUR_SUBGRAPH_NAME)
+After deploying your Subgraph to Subgraph Studio, you will receive an endpoint that looks like this:
+(https://api.thegraph.com/subgraphs/name/YOUR_SUBGRAPH_NAME)
```
https://api.studio.thegraph.com/query///
@@ -37,7 +38,7 @@ The Graph अपना खुद का GraphQL क्लाइंट, graph-cli
- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना
- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
-- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
+- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- पूरी तरह से टाइप किया गया परिणाम
> नोट: `graph-client` अन्य लोकप्रिय GraphQL क्लाइंट जैसे Apollo और URQL के साथ एकीकृत है, जो React, Angular, Node.js और React Native जैसे परिवेशों के अनुकूल हैं। परिणामस्वरूप, `graph-client` का उपयोग करने से The Graph के साथ काम करने के लिए आपको एक उन्नत अनुभव मिलेगा।
@@ -247,7 +248,7 @@ client
})
```
-### URQL अवलोकन
+### URQL अवलोकन
[URQL](https://formidable.com/open-source/urql/) Node.js, React/Preact, Vue और Svelte वातावरण के भीतर उपलब्ध है, जिसमें कुछ अधिक उन्नत सुविधाएँ शामिल हैं: -
diff --git a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx
index a0e9da503a74..41e99af1e6c7 100644
--- a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: ग्राफक्यूएल एपीआई
---
-The Graph में उपयोग किए जाने वाले GraphQL Query API के बारे में जानें।
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## GraphQL क्या है?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) एक API के लिए क्वेरी भाषा है और मौजूदा डेटा के साथ उन क्वेरियों को निष्पादित करने के लिए एक रनटाइम है। The Graph, GraphQL का उपयोग करके Subgraphs से क्वेरी करता है।
+The Graph uses GraphQL to query Subgraphs.
-To समझने के लिए कि GraphQL बड़ी भूमिका कैसे निभाता है, [developing](/subgraphs/developing/introduction/) और [creating a Subgraph](/developing/creating-a-subgraph/) की समीक्षा करें।
+## Core Concepts
-## GraphQL के साथ क्वेरीज़
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-आपकी Subgraph schema में `Entities` नामक प्रकारों को परिभाषित किया जाता है। प्रत्येक `Entity` प्रकार के लिए, शीर्ष-स्तरीय Query प्रकार पर `entity` और `entities` फ़ील्ड जेनरेट की जाएंगी।
+## Query Structure
-> ध्यान दें: 'queries' को The Graph का उपयोग करते समय 'graphql' क्वेरी के शीर्ष पर शामिल करने की आवश्यकता नहीं है।
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### उदाहरण
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-एकल 'Token' एंटिटी के लिए क्वेरी करें जो आपके स्कीमा में परिभाषित है
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ To समझने के लिए कि GraphQL बड़ी भूमिक
}
```
-> नोट: जब किसी एकल entities के लिए क्वेरी की जा रही हो, तो 'id' फ़ील्ड आवश्यक है, और इसे एक स्ट्रिंग के रूप में लिखा जाना चाहिए।
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-सभी 'Token' entities को क्वेरी करें:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ To समझने के लिए कि GraphQL बड़ी भूमिक
}
```
-### Her translation means sorting out
+### Sorting Example
-जब आप एक संग्रह के लिए क्वेरी कर रहे हों, तो आप:
+Collection queries support the following sort parameters:
-- 'orderBy' पैरामीटर का उपयोग किसी विशिष्ट गुण द्वारा सॉर्ट करने के लिए करें।
-- 'orderDirection' का उपयोग सॉर्ट दिशा निर्दिष्ट करने के लिए करें, 'asc' के लिए आरोही या 'desc' के लिए अवरोही।
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### उदाहरण
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ To समझने के लिए कि GraphQL बड़ी भूमिक
}
```
-#### नेस्टेड इकाई छँटाई के लिए उदाहरण
-
-Graph Node ['v0.30.0'](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) के अनुसार, entities को nested entities के आधार पर क्रमबद्ध किया जा सकता है।
-
-निम्नलिखित उदाहरण में टोकन उनके मालिक के नाम के अनुसार क्रमबद्ध किए गए हैं:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ Graph Node ['v0.30.0'](https://github.com/graphprotocol/graph-node/releases/tag/
}
```
-> वर्तमान में, आप '@entity' और '@derivedFrom' फ़ील्ड्स पर एक-स्तरीय गहरे 'String' या 'ID' प्रकारों द्वारा क्रमबद्ध कर सकते हैं। अफसोस,[इंटरफेस द्वारा एक-स्तरीय गहरे entities पर क्रमबद्ध करना](https://github.com/graphprotocol/graph-node/pull/4058), ऐसे फ़ील्ड्स द्वारा क्रमबद्ध करना जो एरेज़ और नेस्टेड entities हैं, अभी तक समर्थित नहीं है।
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### पृष्ठ पर अंक लगाना
+### Pagination Example
-जब एक संग्रह के लिए क्वेरी की जाती है, तो यह सबसे अच्छा होता है:
+When querying a collection, it is best to:
- संग्रह की `शुरुआत` से पेजिनेट करने के लिए first पैरामीटर का उपयोग करें।
- डिफ़ॉल्ट सॉर्ट आदेश `ID` के अनुसार आरोही अल्फ़ान्यूमेरिक क्रम में होता है, **न** कि निर्माण समय के अनुसार।
- `skip` पैरामीटर का उपयोग entities को स्किप करने और पेजिनेट करने के लिए करें। instancesके लिए, first:100 पहले 100 entities दिखाता है और first:100, skip:100 अगले 100 entities दिखाता है।
- `skip` मानों का उपयोग queries में करने से बचें क्योंकि ये सामान्यतः खराब प्रदर्शन करते हैं। एक बड़ी संख्या में आइटम प्राप्त करने के लिए, पिछले उदाहरण में दिखाए गए अनुसार किसी गुण के आधार पर entities के माध्यम से पेज करना सबसे अच्छा होता है।
-#### उदाहरण जो `first` का उपयोग करता है
-
-पहले 10 टोकन पूछें:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Graph Node ['v0.30.0'](https://github.com/graphprotocol/graph-node/releases/tag/
}
```
-संग्रह के मध्य में स्थित entities के समूहों के लिए queries करने के लिए, `skip` पैरामीटर को `first` पैरामीटर के साथ उपयोग किया जा सकता है, ताकि संग्रह की शुरुआत से निर्धारित संख्या में entities को छोड़ दिया जा सके।
-
-#### `first` और `skip` का उपयोग करते हुए उदाहरण
-
-कलेक्शन की शुरुआत से 10 स्थानों के बाद 10 `Token` entities को queries करें।
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Graph Node ['v0.30.0'](https://github.com/graphprotocol/graph-node/releases/tag/
}
```
-#### उदाहरण 'first' और 'id_ge' का उपयोग करते हुए।
-
-यदि एक क्लाइंट को बड़ी संख्या में एंटिटीज़ पुनर्प्राप्त करने की आवश्यकता है, तो एट्रिब्यूट पर आधारित क्वेरी बनाना और उस एट्रिब्यूट द्वारा फ़िल्टर करना अधिक प्रभावशाली है। उदाहरण के लिए, एक क्लाइंट इस क्वेरी का उपयोग करके बड़ी संख्या में टोकन पुनर्प्राप्त कर सकता है:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
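In client code, cursor-based pagination is a simple loop: start with `lastID = ""` and, after each page, set `lastID` to the `id` of the last entity returned. A sketch with a mocked query function (the dataset and page size are illustrative; `executeQuery` stands in for a real GraphQL client call using `first` and `id_gt`):

```javascript
// Sketch of cursor-based pagination. PAGE is kept tiny for the demo;
// in practice you would fetch e.g. 1000 entities per page.
const PAGE = 3

// Mock dataset, ordered by id as a Subgraph would return it.
const allTokens = ['a1', 'b2', 'c3', 'd4', 'e5'].map((id) => ({ id }))

// Simulates: tokens(first: PAGE, where: { id_gt: $lastID })
function executeQuery(lastID) {
  return allTokens.filter((t) => t.id > lastID).slice(0, PAGE)
}

function fetchAll() {
  const results = []
  let lastID = '' // first request uses an empty cursor
  while (true) {
    const page = executeQuery(lastID)
    if (page.length === 0) break
    results.push(...page)
    lastID = page[page.length - 1].id // advance the cursor
  }
  return results
}

console.log(fetchAll().map((t) => t.id).join(',')) // 'a1,b2,c3,d4,e5'
```

Unlike increasing `skip` values, each request here filters by the indexed `id` attribute, so performance stays flat as you page deeper.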
-पहली बार, यह queries को `lastID = ""`, के साथ भेजेगा, और subsequent requests के लिए यह `lastID` को पिछले अनुरोध में आखिरी entity के `id` attribute के रूप में सेट करेगा। यह तरीका increasing 'skip' मानों का उपयोग करने की तुलना में काफी बेहतर प्रदर्शन करेगा।
-
### छनन
-- आप अपनी क्वेरी में विभिन्न गुणों को फ़िल्टर करने के लिए 'where' पैरामीटर का उपयोग कर सकते हैं।
-- आप 'where' पैरामीटर के भीतर कई मानों पर फ़िल्टर कर सकते हैं।
-
-#### उदाहरण 'where' का उपयोग करते हुए
+The `where` parameter filters entities based on specified conditions.
-'failed' परिणाम वाली क्वेरी चुनौतियाँ:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ query manyTokens($lastID: String) {
}
```
-आप मूल्य तुलना के लिए '\_gt' , '\_lte' जैसे प्रत्ययों का उपयोग कर सकते हैं।
-
-#### श्रेणी फ़िल्टरिंग के लिए उदाहरण
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ query manyTokens($lastID: String) {
}
```
-#### ब्लॉक फ़िल्टरिंग के लिए उदाहरण
-
-आप उन इकाइयों entities को भी फ़िल्टर कर सकते हैं जिन्हें किसी निर्दिष्ट ब्लॉक में या उसके बाद अपडेट किया गया था, '\_change_block(number_gte: Int)' के साथ।
-
-यह उपयोगी हो सकता है यदि आप केवल उन entities को लाना चाहते हैं जो बदल गई हैं, उदाहरण के लिए, पिछली बार जब आपने पोल किया था तब से। या वैकल्पिक रूप से, यह जांचने या डिबग करने के लिए उपयोगी हो सकता है कि आपकी Subgraph में entities कैसे बदल रही हैं (यदि इसे एक ब्लॉक फ़िल्टर के साथ जोड़ा जाए, तो आप केवल उन्हीं entities को अलग कर सकते हैं जो एक विशिष्ट ब्लॉक में बदली हैं)।
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ query manyTokens($lastID: String) {
}
```
-#### नेस्टेड इकाई फ़िल्टरिंग के लिए उदाहरण
-
-नेस्टेड इकाइयों के आधार पर फ़िल्टरिंग उन फ़ील्ड्स में संभव है जिनके अंत में '\_' प्रत्यय होता है।
-
-यह उपयोगी हो सकता है यदि आप केवल उन संस्थाओं को लाना चाहते हैं जिनके चाइल्ड-स्तरीय निकाय प्रदान की गई शर्तों को पूरा करते हैं।
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,13 +189,11 @@ query manyTokens($lastID: String) {
}
```
-#### लॉजिकल ऑपरेटर्स
-
-ग्राफ-नोड ['v0.30.0'](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) से, आप एक ही 'where' आर्गुमेंट में कई पैरामीटर्स को समूहित कर सकते हैं और 'and' या 'or' ऑपरेटर्स का उपयोग करके एक से अधिक मानदंडों के आधार पर परिणामों को फ़िल्टर कर सकते हैं।
+#### Logical Operators
-##### `AND` ऑपरेटर
+##### AND Operations Example
-निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome`` succeeded` है और जिनका `number` `100` या उससे अधिक है।
+The following example filters challenges with an `outcome` of `succeeded` and a `number` greater than or equal to `100`:
```graphql
{
@@ -220,27 +207,11 @@ query manyTokens($lastID: String) {
}
```
-> **सिंटैक्टिक शुगर**: आप उपरोक्त को queriesसरल बना सकते हैं `and` ऑपरेटर को हटाकर और उप-वाक्यांश को कॉमा से अलग करके पास करके।
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` ऑपरेटर।
-
-निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome` `succeeded` है या जिनका `number` `100` या उससे अधिक है।
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing sub-expressions separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ query manyTokens($lastID: String) {
}
```
-> **नोट**: queries बनाते समय, `or` ऑपरेटर के उपयोग से होने वाले प्रदर्शन प्रभावों पर विचार करना महत्वपूर्ण है। हालांकि `or` खोज परिणामों को व्यापक बनाने के लिए एक उपयोगी उपकरण हो सकता है, लेकिन इसके कुछ महत्वपूर्ण लागतें भी होती हैं। `or` के साथ मुख्य समस्या यह है कि यह queries को धीमा कर सकता है। इसका कारण यह है कि `or` के उपयोग से डेटाबेस को कई इंडेक्स स्कैन करने पड़ते हैं, जो एक समय-सापेक्ष प्रक्रिया हो सकती है। इन समस्याओं से बचने के लिए, यह अनुशंसा की जाती है कि डेवलपर्स or के बजाय and ऑपरेटर का उपयोग करें जब भी संभव हो। यह अधिक सटीक फ़िल्टरिंग की अनुमति देता है और तेज़, अधिक सटीक queries प्रदान कर सकता है।
-
-#### सभी फ़िल्टर
-
-पैरामीटर प्रत्यय की पूरी सूची:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> कुछ प्रत्यय केवल विशिष्ट प्रकारों के लिए समर्थित होते हैं। उदाहरण के लिए, `Boolean` केवल` _not, _in`, और `_not_`in का समर्थन करता है, लेकिन \_ केवल ऑब्जेक्ट और इंटरफेस प्रकारों के लिए उपलब्ध है।
-इसके अलावा, `where` आर्ग्यूमेंट के हिस्से के रूप में निम्नलिखित वैश्विक फ़िल्टर उपलब्ध हैं:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### समय-यात्रा क्वेरी
+### Time-travel Queries Example
-आप न केवल नवीनतम ब्लॉक के लिए, जो डिफ़ॉल्ट होता है, बल्कि अतीत के किसी भी मनमाने ब्लॉक के लिए भी अपनी entities की स्थिति को queries कर सकते हैं। जिस ब्लॉक पर queries होनी चाहिए, उसे या तो उसके ब्लॉक नंबर या उसके ब्लॉक हैश द्वारा निर्दिष्ट किया जा सकता है, इसके लिए queries के शीर्ष स्तर के फ़ील्ड्स में block आर्ग्यूमेंट शामिल किया जाता है।
+Queries support historical state retrieval using the `block` parameter:
-ऐसे queries का परिणाम समय के साथ नहीं बदलेगा, यानी किसी निश्चित पिछले ब्लॉक परqueries करने से हमेशा वही परिणाम मिलेगा, चाहे इसे कभी भी निष्पादित किया जाए। इसका एकमात्र अपवाद यह है कि यदि आप किसी ऐसे ब्लॉक पर queries करते हैं जो chain के हेड के बहुत करीब है, तो परिणाम बदल सकता है यदि वह ब्लॉक मुख्य chain पर **not** होता है और chain का पुनर्गठन हो जाता है। एक बार जब किसी ब्लॉक को अंतिम (final) माना जा सकता है, तो queries का परिणाम नहीं बदलेगा।
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### उदाहरण
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ _change_block(number_gte: Int)
}
```
-यह queries `Challenge` entities और उनके संबद्ध `Application` entities को लौटाएगी, जैसा कि वे ब्लॉक संख्या 8,000,000 के प्रोसेस होने के ठीक बाद मौजूद थे।
-
-#### उदाहरण
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ _change_block(number_gte: Int)
}
```
-यह queries `Challenge` entities और उनसे संबंधित `Application` entities को वापस करेगी, जैसा कि वे दिए गए हैश वाले ब्लॉक को प्रोसेस करने के तुरंत बाद मौजूद थीं।
-
-### पूर्ण पाठ खोज प्रश्न
+### Full-Text Search Example
-Fulltext search query fields एक अभिव्यक्तिपूर्ण टेक्स्ट खोज API प्रदान करते हैं जिसे Subgraph schema में जोड़ा जा सकता है और अनुकूलित किया जा सकता है। Fulltext search को अपने Subgraph में जोड़ने के लिए [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) देखें।
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-फ़ुलटेक्स्ट सर्च क्वेरीज़ में एक आवश्यक फ़ील्ड होता है, ' text ', जिसमें सर्च शब्द प्रदान किए जाते हैं। इस ' text ' सर्च फ़ील्ड में उपयोग करने के लिए कई विशेष फ़ुलटेक्स्ट ऑपरेटर उपलब्ध हैं।
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-पूर्ण पाठ खोज ऑपरेटर:
+Supported operators:
-| प्रतीक | ऑपरेटर | Description |
-| --- | --- | --- |
-| `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों को फ़िल्टर में संयोजित करने के लिए |
-| | | ' Or' | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी |
-| `<->` | ' द्वारा अनुसरण करें ' | दो शब्दों के बीच की दूरी निर्दिष्ट करें। |
-| `:*` | ' उपसर्ग ' | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) |
+| Operator  | Symbol | Description                                                     |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
+| Or        | `\|`   | Matches entities containing any of the provided terms           |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### उदाहरण
+#### Search Examples
-' or 'ऑपरेटर का उपयोग करके, यह क्वेरी उन ब्लॉग एंटिटीज़ को फ़िल्टर करेगी जिनके पूर्ण-पाठ (fulltext) फ़ील्ड में "anarchism" या "crumpet" में से किसी एक के विभिन्न रूप शामिल हैं।
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Fulltext search query fields एक अभिव्यक्तिपूर्
}
```
-' follow by ' ऑपरेटर पूर्ण-पाठ दस्तावेज़ों में विशिष्ट दूरी पर स्थित शब्दों को निर्दिष्ट करता है। निम्नलिखित क्वेरी उन सभी ब्लॉगों को लौटाएगी जिनमें "विकेंद्रीकृत" के विभिन्न रूप "philosophy" के बाद आते हैं।
+"Follow by" operator:
```graphql
{
@@ -370,7 +321,7 @@ Fulltext search query fields एक अभिव्यक्तिपूर्
}
```
-अधिक जटिल फिल्टर बनाने के लिए फुलटेक्स्ट ऑपरेटरों को मिलाएं। इस उदाहरण क्वेरी के अनुसरण के साथ एक बहाना खोज ऑपरेटर संयुक्त रूप से सभी ब्लॉग संस्थाओं को उन शब्दों से मिलाएगा जो "लू" से शुरू होते हैं और उसके बाद "संगीत"।
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Fulltext search query fields एक अभिव्यक्तिपूर्
}
```
-### मान्यकरण
+### Schema Definition
-Graph Node अपने द्वारा प्राप्त GraphQL क्वेरी की स्पेसिफिकेशन-आधारित(https://spec.graphql.org/October2021/#sec-Validation) वैलिडेशन करता है, जो graphql-tools-rs(https://github.com/dotansimha/graphql-tools-rs#validation-rules) पर आधारित है, जो graphql-js संदर्भ कार्यान्वयन(https://github.com/graphql/graphql-js/tree/main/src/validation) पर आधारित है। क्वेरी जो वैलिडेशन नियम में विफल होती हैं, वे एक मानक त्रुटि के साथ विफल होती हैं - अधिक जानने के लिए GraphQL स्पेसिफिकेशन(https://spec.graphql.org/October2021/#sec-Validation) पर जाएं।
-
-## योजना
-
-आपके डेटा स्रोतों का स्कीमा, अर्थात् उपलब्ध प्रश्न करने के लिए संस्थाओं की प्रकार, मान और उनके बीच के संबंध, GraphQL Interface Definition Language (IDL)(https://facebook.github.io/graphql/draft/#sec-Type-System) के माध्यम से परिभाषित किए गए हैं।
-
-GraphQL स्कीमाएँ आमतौर पर queries, subscriptions और mutations के लिए रूट टाइप्स को परिभाषित करती हैं। The Graph केवल queries को सपोर्ट करता है। आपके Subgraph के लिए रूट Query टाइप अपने आप उत्पन्न हो जाता है, जो कि आपके [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph) में शामिल GraphQL स्कीमा से आता है।
-
-> ध्यान दें: हमारा एपीआई म्यूटेशन को उजागर नहीं करता है क्योंकि डेवलपर्स से उम्मीद की जाती है कि वे अपने एप्लिकेशन से अंतर्निहित ब्लॉकचेन के खिलाफ सीधे लेन-देन(transaction) जारी करेंगे।
-
-### इकाइयां
+Entity types require:
-आपके स्कीमा में जिन भी GraphQL प्रकारों में @entity निर्देश होते हैं, उन्हें संस्थाएँ (entities) माना जाएगा और उनमें एक ID फ़ील्ड होना चाहिए।
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> नोट: वर्तमान में, आपकी स्कीमा में सभी प्रकारों में @entity निर्देश होना चाहिए। भविष्य में, हम उन प्रकारों को मूल्य वस्तुएं मानेंगे जिनमें @entity निर्देश नहीं होगा, लेकिन यह अभी तक समर्थित नहीं है।
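+A minimal entity definition satisfying these requirements might look like the following (the `Token` type and its fields are illustrative names, not required ones):
+
+```graphql
+type Token @entity {
+  id: ID!
+  owner: Bytes!
+  createdAtTimestamp: BigInt!
+}
+```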
+### Subgraph Metadata Example
-### सबग्राफ मेटाडेटा
+The `_Meta_` object provides Subgraph metadata:
-सभी Subgraph में एक स्वचालित रूप से उत्पन्न `_Meta_` ऑब्जेक्ट होता है, जो Subgraph मेटाडाटा तक पहुंच प्रदान करता है। इसे निम्नलिखित तरीके से क्वेरी किया जा सकता है:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ GraphQL स्कीमाएँ आमतौर पर queries, subscriptions
}
```
-यदि कोई ब्लॉक प्रदान किया जाता है, तो मेटाडेटा उस ब्लॉक के अनुसार होगा, यदि नहीं, तो नवीनतम इंडेक्स किया गया ब्लॉक उपयोग किया जाएगा। यदि प्रदान किया जाता है, तो ब्लॉक को Subgraph के प्रारंभिक ब्लॉक के बाद और सबसे हाल ही में इंडेक्स किए गए ब्लॉक के बराबर या उससे कम होना चाहिए।
-
-deployment एक विशिष्ट ID है, जो subgraph.yaml फ़ाइल के IPFS CID के अनुरूप है।
+Metadata fields:
+
+- `deployment`: IPFS CID of the subgraph.yaml
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator                  | Description                                                       | Example                                              |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on fields of a related (nested) entity                    | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt`                     | Greater than (`>`)                                                | `{ where: { price_gt: "100" } }`                     |
+| `_lt`                     | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                     |
+| `_gte`                    | Greater than or equal to (`>=`)                                   | `{ where: { price_gte: "100" } }`                    |
+| `_lte`                    | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                    |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
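+For instance, a string suffix and a numeric suffix can be combined under a single `where` argument (the `tokens` field and its attributes here are illustrative):
+
+```graphql
+{
+  tokens(where: { and: [{ name_starts_with: "Crypto" }, { price_gte: "100" }] }) {
+    id
+    name
+    price
+  }
+}
+```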
-block नवीनतम ब्लॉक के बारे में जानकारी प्रदान करता है (किसी भी ब्लॉक सीमाओं को ध्यान में रखते हुए जो कि \_meta में पास की जाती हैं):
-
-- हैश: ब्लॉक का हैश
-- नंबर: ब्लॉक नंबर
-- टाइमस्टैम्प: यदि उपलब्ध हो, तो ब्लॉक का टाइमस्टैम्प (यह वर्तमान में केवल EVM नेटवर्क को इंडेक्स करने वाले Subgraphs के लिए उपलब्ध है)
+### Validation
-`hasIndexingErrors` एक boolean है जो यह पहचानता है कि Subgraph को किसी पिछले block पर Indexing errors का सामना करना पड़ा था।
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries that fail a validation rule fail with a standard error; visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
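+For example, a query selecting a field that does not exist in the schema fails validation before execution (entity and field names here are illustrative):
+
+```graphql
+{
+  tokens {
+    id
+    nonexistentField # fails validation with a standard GraphQL error
+  }
+}
+```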
diff --git a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx
index 257bce21d38c..450086fa648e 100644
--- a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: API Keys को प्रबंधित करना
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Overview
-Subgraphs को query करने के लिए API keys आवश्यक होते हैं। ये यह सुनिश्चित करते हैं कि application services के बीच कनेक्शन वैध और अधिकृत हैं, साथ ही एंड यूज़र और डिवाइस की पहचान को प्रमाणित करते हैं।
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### API Keys बनाएं और प्रबंधित करें
+### Monitoring Usage
-Subgraph Studio पर जाएं: https://thegraph.com/studio/ और API Keys टैब पर क्लिक करें ताकि आप अपने विशेष Subgraphs के लिए API keys बना और प्रबंधित कर सकें।
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-"API keys" तालिका मौजूदा API keys को सूचीबद्ध करती है और आपको उन्हें प्रबंधित या हटाने की अनुमति देती है। प्रत्येक कुंजी के लिए, आप इसकी स्थिति, वर्तमान अवधि के लिए लागत, वर्तमान अवधि के लिए खर्च सीमा और कुल क्वेरी संख्या देख सकते हैं।
+### Restricting Domain Access
-आप दिए गए API key के दाईं ओर स्थित "तीन बिंदु" मेनू पर क्लिक करके:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: यह USD में दी गई API key के लिए एक optional monthly spending limit है। यह limit per billing period (calendar month) के लिए है।
+### Limiting Subgraph Access
-### API Keysविवरण
+1. In the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-Details page देखने के लिए आप individual API key पर click कर सकते हैं:
+## Additional Resources
-1. **अवलोकन** अनुभाग के अंतर्गत, आप:
- - अपना कुंजी नाम संपादित करें
- - एपीआई कुंजियों को पुन: उत्पन्न करें
- - आंकड़ों के साथ एपीआई कुंजी का वर्तमान उपयोग देखें:
- - प्रश्नों की संख्या
- - जीआरटी की राशि खर्च की गई
-2. नीचे **Security** अनुभाग में, आप अपनी पसंद के अनुसार सुरक्षा सेटिंग्स को सक्रिय कर सकते हैं। विशेष रूप से, आप:
- - अपनी API कुंजी का उपयोग करने के लिए प्राधिकृत डोमेन नाम देखें और प्रबंधित करें
- - अपने API key के साथ जिन Subgraphs को query किया जा सकता है, उन्हें असाइन करें।
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/hi/subgraphs/quick-start.mdx b/website/src/pages/hi/subgraphs/quick-start.mdx
index cbf3550a3170..d11eb2cc3d0e 100644
--- a/website/src/pages/hi/subgraphs/quick-start.mdx
+++ b/website/src/pages/hi/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-The Graph पर आसानी से एक [सबग्राफ](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) को बनाना, प्रकाशित करना और क्वेरी करना सीखें।
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## पूर्वावश्यकताएँ
- एक क्रिप्टो वॉलेट
-- एक स्मार्ट contract पता एक [supported network](/supported-networks/) पर।
-- [Node.js](https://nodejs.org/) इंस्टॉल किया गया
-- आपकी पसंद का एक पैकेज मैनेजर (`npm`, `yarn` या `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## सबग्राफ कैसे बनाएं
### 1. सबग्राफ Studio में एक सबग्राफ बनाएँ -
-[Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें।
-
-सबग्राफ Studio आपको Subgraphs बनाने, प्रबंधित करने, तैनात करने और प्रकाशित करने की सुविधा देता है, साथ ही API कुंजी बनाने और प्रबंधित करने की सुविधा भी प्रदान करता है।
-
-"Create a सबग्राफ" पर क्लिक करें। यह अनुशंसा की जाती है कि सबग्राफ का नाम टाइटल केस में रखा जाए: "सबग्राफ Name Chain Name"।
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. ग्राफ़ सीएलआई स्थापित करें
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### अपने सबग्राफ को प्रारंभ करें
+Verify install:
-> आप अपने विशिष्ट Subgraph के लिए कमांड Subgraph Studio के Subgraph पेज पर पा सकते हैं।
+```sh
+graph --version
+```
-`graph init` कमांड स्वचालित रूप से आपके contract की घटनाओं के आधार पर एक सबग्राफ का खाका तैयार करेगा।
+### अपने सबग्राफ को प्रारंभ करें
-निम्नलिखित कमांड एक मौजूदा contract से आपका सबग्राफ प्रारंभ करता है:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract, scaffolding it from the contract's events:
```sh
graph init
```
-यदि आपका contract उस ब्लॉकस्कैनर पर वेरीफाई किया गया है जहाँ यह डिप्लॉय किया गया है (जैसे [Etherscan](https://etherscan.io/)), तो ABI अपने आप CLI में क्रिएट हो जाएगा।
-
जब आप अपने सबग्राफ को प्रारंभ करते हैं, तो CLI आपसे निम्नलिखित जानकारी मांगेगा:
- **प्रोटोकॉल**: वह प्रोटोकॉल चुनें जिससे आपका सबग्राफ डेटा को indexing करेगा।
@@ -59,19 +65,17 @@ graph init
- \*\*Ethereum नेटवर्क (वैकल्पिक): आपको यह निर्दिष्ट करने की आवश्यकता हो सकती है कि आपका Subgraph किस EVM-संगत नेटवर्क से डेटा को इंडेक्स करेगा।
- **contract एड्रेस**: उस स्मार्ट contract एड्रेस को खोजें जिससे आप डेटा क्वेरी करना चाहते हैं।
- **ABI**: यदि ABI स्वतः नहीं भरा जाता है, तो आपको इसे JSON फ़ाइल के रूप में मैन्युअल रूप से इनपुट करना होगा।
-- **Start Block**: आपको स्टार्ट ब्लॉक इनपुट करना चाहिए ताकि ब्लॉकचेन डेटा की सबग्राफ indexing को ऑप्टिमाइज़ किया जा सके। स्टार्ट ब्लॉक को खोजने के लिए उस ब्लॉक को ढूंढें जहां आपका contract डिप्लॉय किया गया था।
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **contract का नाम**: अपने contract का नाम दर्ज करें।
- **contract इवेंट्स को entities के रूप में इंडेक्स करें**: इसे true पर सेट करने की सलाह दी जाती है, क्योंकि यह हर उत्सर्जित इवेंट के लिए स्वचालित रूप से आपके सबग्राफ में मैपिंग जोड़ देगा।
- **एक और contract जोड़ें** (वैकल्पिक): आप एक और contract जोड़ सकते हैं।
-इसका एक उदाहरण देखने के लिए निम्नलिखित स्क्रीनशॉट देखें कि जब आप अपना सबग्राफ इनिशियलाइज़ करते हैं तो क्या अपेक्षा करें:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### अपना सबग्राफ संपादित करें
-`init` कमांड पिछले चरण में एक प्रारंभिक सबग्राफ बनाता है जिसे आप अपने सबग्राफ को बनाने के लिए एक शुरुआती बिंदु के रूप में उपयोग कर सकते हैं।
-
सबग्राफ में परिवर्तन करते समय, आप मुख्य रूप से तीन फ़ाइलों के साथ काम करेंगे:
- मैनिफेस्ट (`subgraph.yaml`) - यह निर्धारित करता है कि आपका सबग्राफ किन डेटा स्रोतों को इंडेक्स करेगा।
@@ -82,9 +86,7 @@ graph init
### 5. अपना Subgraph डिप्लॉय करें
-> तैनाती करना प्रकाशन के समान नहीं है।
-
-जब आप किसी सबग्राफ को तैनात (deploy) करते हैं, तो आप इसे [सबग्राफ Studio](https://thegraph.com/studio/) पर अपलोड करते हैं, जहाँ आप इसका परीक्षण, स्टेजिंग और समीक्षा कर सकते हैं। तैनात किए गए सबग्राफ का Indexing [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/) द्वारा किया जाता है, जो Edge & Node द्वारा संचालित एक एकल Indexer है, न कि The Graph Network में मौजूद कई विकेंद्रीकृत Indexers द्वारा। एक तैनात (deployed) सबग्राफ का उपयोग निःशुल्क है, यह दर-सीमित (rate-limited) होता है, सार्वजनिक रूप से दृश्य (visible) नहीं होता, और इसे मुख्य रूप से विकास (development), स्टेजिंग और परीक्षण (testing) उद्देश्यों के लिए डिज़ाइन किया गया है।
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
एक बार जब आपका सबग्राफ लिखा जा चुका हो, तो निम्नलिखित कमांड चलाएँ:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI एक संस्करण लेबल के लिए पूछेगा। यह दृढ़ता से सिफारिश की जाती है कि [semantic versioning](https://semver.org/) का उपयोग करें, जैसे 0.0.1।
-
### 6. अपने सबग्राफ की समीक्षा करें
यदि आप अपना सबग्राफ प्रकाशित करने से पहले उसका परीक्षण करना चाहते हैं, तो आप [सबग्राफ Studio](https://thegraph.com/studio/) का उपयोग करके निम्नलिखित कर सकते हैं:
@@ -125,55 +125,13 @@ The CLI एक संस्करण लेबल के लिए पूछे
- It makes your Subgraph available to be indexed by decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes your rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The more GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index it, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing from Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, check out [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
-   - This action improves the quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the amount signaled.
-
-   - It is recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can publish your Subgraph in the same transaction in which you curate it by selecting this option:
diff --git a/website/src/pages/hi/subgraphs/upgrade-indexer.mdx b/website/src/pages/hi/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..af859932f444
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation on the Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/hi/substreams/_meta-titles.json b/website/src/pages/hi/substreams/_meta-titles.json
index 83856f5ffbb5..468ba823a2ab 100644
--- a/website/src/pages/hi/substreams/_meta-titles.json
+++ b/website/src/pages/hi/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "विकसित करना"
+ "developing": "विकसित करना",
+ "sps": "सबस्ट्रीम-संचालित सबग्राफ की सेवा"
}
diff --git a/website/src/pages/hi/substreams/developing/sinks.mdx b/website/src/pages/hi/substreams/developing/sinks.mdx
index 8978f7af1938..ddac3daea4f1 100644
--- a/website/src/pages/hi/substreams/developing/sinks.mdx
+++ b/website/src/pages/hi/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](./sps/introduction.mdx): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name       | Support | Maintainer    | Source Code                                                                                |
+| ---------- | ------- | ------------- | ------------------------------------------------------------------------------------------ |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name       | Support | Maintainer | Source Code                                                                                 |
+| ---------- | ------- | ---------- | ------------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/hi/substreams/quick-start.mdx b/website/src/pages/hi/substreams/quick-start.mdx
index c4a0d5be8e23..d775f2f6dc57 100644
--- a/website/src/pages/hi/substreams/quick-start.mdx
+++ b/website/src/pages/hi/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/hi/substreams/sps/faq.mdx b/website/src/pages/hi/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..1351b52c67a9
--- /dev/null
+++ b/website/src/pages/hi/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+It is a blockchain-agnostic, parallelized, and streaming-first engine, serving as a blockchain data transformation layer. Powered by [Firehose](https://firehose.streamingfast.io/), it enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When a Substreams-powered Subgraph is published, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from regular Subgraphs?
+
+Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which they happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits that Substreams and Subgraphs provide. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data which is not available as part of the JSON RPC.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using the Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be re-used.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
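+
+The module composition described above is expressed through `imports` in a Substreams manifest. As a rough sketch (the package name, URL, and module names below are hypothetical, for illustration only), a manifest building on a community price module might look like:
+
+```yaml
+specVersion: v0.1.0
+package:
+  name: price_oracle
+  version: v0.1.0
+
+imports: # reuse an existing community package as a building block
+  dex_prices: https://example.com/dex-prices-v0.1.0.spkg # hypothetical package
+
+modules:
+  - name: map_price_oracle
+    kind: map
+    inputs:
+      - map: dex_prices:map_pair_prices # consume the imported module's output
+    output:
+      type: proto:example.oracle.v1.Prices
+```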
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/hi/substreams/sps/introduction.mdx b/website/src/pages/hi/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..56ee02d1d54a
--- /dev/null
+++ b/website/src/pages/hi/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [`graph-node`](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or in Substreams. However, consider what aligns with your data needs: Substreams uses a parallelized model, whereas triggers are consumed linearly in the graph node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/hi/substreams/sps/triggers.mdx b/website/src/pages/hi/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..196694448b05
--- /dev/null
+++ b/website/src/pages/hi/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The Substreams data is decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object.
+2. Looping over the transactions.
+3. A new Subgraph entity is created for every transaction.
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the development container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/hi/substreams/sps/tutorial.mdx b/website/src/pages/hi/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..18d38dc06938
--- /dev/null
+++ b/website/src/pages/hi/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+   substreams init
+ ```
+
+2. Select the "Minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
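+
+Once the Subgraph is deployed and synced, entities like this can be queried through the generated GraphQL API. A hypothetical query for the first five transfers might look like:
+
+```graphql
+{
+  myTransfers(first: 5) {
+    id
+    amount
+    source
+    designation
+    signers
+  }
+}
+```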
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/hi/supported-networks.mdx b/website/src/pages/hi/supported-networks.mdx
index 9ddc02928dbf..5e80de66ca27 100644
--- a/website/src/pages/hi/supported-networks.mdx
+++ b/website/src/pages/hi/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio depends on the stability and reliability of the underlying technologies, such as JSON-RPC, Firehose, and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/hi/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/hi/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/hi/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/hi/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT Activities (ex: transfers, mints & burns).
diff --git a/website/src/pages/hi/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/hi/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/hi/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/hi/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/hi/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/hi/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/hi/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/hi/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/hi/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/hi/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/hi/token-api/evm/get-pools-evm.mdx b/website/src/pages/hi/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/hi/token-api/evm/get-swaps-evm.mdx b/website/src/pages/hi/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/hi/token-api/evm/get-transfers-evm.mdx b/website/src/pages/hi/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/hi/token-api/faq.mdx b/website/src/pages/hi/token-api/faq.mdx
index 5d8d28b2e970..339e40f286e6 100644
--- a/website/src/pages/hi/token-api/faq.mdx
+++ b/website/src/pages/hi/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## आम
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are temporarily unavailable on a given endpoint return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
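The `network_id` handling above can be sketched as a small URL builder. This is an illustrative sketch, not part of the official SDK: the base URL comes from this FAQ, but the `/balances/evm/{address}` path and parameter handling are assumptions for demonstration.

```python
from urllib.parse import urlencode

# Base URL taken from this FAQ; the endpoint path below is an assumed example.
BASE = "https://token-api.thegraph.com"

def balances_url(address, network_id=None, **params):
    """Build a Token API request URL. Omitting network_id lets the
    API default to Ethereum mainnet on the server side."""
    if network_id is not None:
        params["network_id"] = network_id
    query = "?" + urlencode(params) if params else ""
    return f"{BASE}/balances/evm/{address}{query}"

# Defaults to Ethereum mainnet; pass e.g. network_id="base" to target Base.
print(balances_url("0x1f98431c8ad98523631ae4a59f267346ea31f984"))
print(balances_url("0x1f98431c8ad98523631ae4a59f267346ea31f984", network_id="base"))
```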
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use the pagination parameters `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
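The pagination arithmetic can be sketched as follows; the two helpers are illustrative, assuming only the `limit` and `page` (1-indexed) query parameters described in this answer.

```python
def paged_params(limit=50, max_pages=3):
    """Yield query parameters for successive pages; page is 1-indexed,
    so with limit=50, page=2 covers items 51-100."""
    for page in range(1, max_pages + 1):
        yield {"limit": limit, "page": page}

def page_for_item(index, limit=50):
    """Return the 1-indexed page containing the given 1-indexed item."""
    return (index - 1) // limit + 1

# Pass each dict as query parameters on any list endpoint:
for params in paged_params(limit=50, max_pages=2):
    print(params)
```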
### How do I fetch older transfer history?
diff --git a/website/src/pages/hi/token-api/monitoring/get-health.mdx b/website/src/pages/hi/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/hi/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/hi/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Provides the health status of the API.
diff --git a/website/src/pages/hi/token-api/monitoring/get-networks.mdx b/website/src/pages/hi/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..66f3d9940efc 100644
--- a/website/src/pages/hi/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/hi/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Supported Networks
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Provides the supported networks of the API.
diff --git a/website/src/pages/hi/token-api/monitoring/get-version.mdx b/website/src/pages/hi/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..fa0040807854 100644
--- a/website/src/pages/hi/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/hi/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Provides the version of the API.
diff --git a/website/src/pages/hi/token-api/quick-start.mdx b/website/src/pages/hi/token-api/quick-start.mdx
index a381a3c8565c..44779ab04196 100644
--- a/website/src/pages/hi/token-api/quick-start.mdx
+++ b/website/src/pages/hi/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Quick Start
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## आवश्यक शर्तें
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key itself. You can generate a new JWT API token for each of your API keys at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum One     | Ingesting\* | Ingesting\*  | Ingesting\*    |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
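The failure modes described in this guide and the FAQ (expired or misused tokens, throttling, chains still ingesting) can be triaged with a small helper. This is a hedged sketch: the function name and messages are illustrative, but the status codes and the `bad_database_response` error string match the behavior documented here.

```python
def classify_response(status_code, body=""):
    """Map common Token API failure modes to a short diagnosis string."""
    if status_code == 401:
        # Use the JWT API token (not the API key) with a Bearer prefix.
        return "auth: check the 'Authorization: Bearer <token>' header and token expiry"
    if status_code == 429:
        return "throttled: reduce request volume and retry with backoff"
    if status_code == 500 and "bad_database_response" in str(body):
        # The network is not yet supported or still syncing on this endpoint.
        return "network unavailable or still ingesting on this endpoint"
    if status_code == 200:
        return "ok"
    return f"unexpected status {status_code}"

print(classify_response(500, {"error": "bad_database_response"}))
```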
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/it/about.mdx b/website/src/pages/it/about.mdx
index 62f0bf4d3c61..46674c3384d1 100644
--- a/website/src/pages/it/about.mdx
+++ b/website/src/pages/it/about.mdx
@@ -1,67 +1,46 @@
---
-title: Informazioni su The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## Che cos'è The Graph?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-Il flusso segue questi passi:
-
-1. Una dapp aggiunge dati a Ethereum attraverso una transazione su uno smart contract.
-2. Lo smart contract emette uno o più eventi durante l'elaborazione della transazione.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. La dapp effettua query del Graph Node per ottenere dati indicizzati dalla blockchain, utilizzando il [ GraphQL endpoint del nodo](https://graphql.org/learn/). Il Graph Node a sua volta traduce le query GraphQL in query per il suo archivio dati sottostante, al fine di recuperare questi dati, sfruttando le capacità di indicizzazione dell'archivio. La dapp visualizza questi dati in una ricca interfaccia utente per gli utenti finali, che li utilizzano per emettere nuove transazioni su Ethereum. Il ciclo si ripete.
-
-## I prossimi passi
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/it/index.json b/website/src/pages/it/index.json
index f243894b47b5..7743ea9ba09b 100644
--- a/website/src/pages/it/index.json
+++ b/website/src/pages/it/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/it/indexing/new-chain-integration.mdx b/website/src/pages/it/indexing/new-chain-integration.mdx
index 670e06c752c3..cf698522f7e5 100644
--- a/website/src/pages/it/indexing/new-chain-integration.mdx
+++ b/website/src/pages/it/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
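The `eth_getTransactionReceipt` requirement above refers to the JSON-RPC batch form: several calls sent as one JSON array in a single HTTP request. A minimal sketch of building such a batch (the transaction hash is a made-up placeholder):

```python
import json

def receipt_batch(tx_hashes):
    """Build a JSON-RPC 2.0 batch request calling eth_getTransactionReceipt
    once per transaction hash, with sequential request ids."""
    return [
        {"jsonrpc": "2.0", "id": i,
         "method": "eth_getTransactionReceipt", "params": [h]}
        for i, h in enumerate(tx_hashes, start=1)
    ]

# POST this array as the body of a single HTTP request to the RPC endpoint.
body = json.dumps(receipt_batch(["0x" + "ab" * 32]))  # placeholder hash
print(body)
```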
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/it/indexing/overview.mdx b/website/src/pages/it/indexing/overview.mdx
index 371d0f48cf9a..c35cbc2b8a4e 100644
--- a/website/src/pages/it/indexing/overview.mdx
+++ b/website/src/pages/it/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
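
As a quick health check against the table above, the indexing status API on port 8030 can be queried directly. This is a sketch assuming a node running locally with default ports; the field names follow the status schema published in the graph-node repository:

```shell
# Ask the indexing status API (port 8030) for the health of all deployments
curl -s http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph health synced } }"}'
```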
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/it/indexing/tooling/graph-node.mdx b/website/src/pages/it/indexing/tooling/graph-node.mdx
index 32015adbd8fd..8f079681a0af 100644
--- a/website/src/pages/it/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/it/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### Nodi IPFS
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Server di metriche Prometheus
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Come iniziare con Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
Quando è in funzione, Graph Node espone le seguenti porte:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
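
The lock-down advice above can be expressed as host firewall rules, for example with `ufw`. This is a hedged sketch: `10.0.0.0/8` stands in for your private network range, and the exact topology (reverse proxy, separate Postgres host) will vary per deployment:

```shell
# Public traffic terminates at a reverse proxy, not at Graph Node itself
sudo ufw allow 443/tcp

# Query ports (8000/8001) and indexer-service (7600) only from the private network
sudo ufw allow from 10.0.0.0/8 to any port 8000,8001,7600 proto tcp

# Admin JSON-RPC (8020), status API (8030), metrics (8040), Postgres (5432): private only
sudo ufw allow from 10.0.0.0/8 to any port 8020,8030,8040,5432 proto tcp

# Deny everything else inbound by default
sudo ufw default deny incoming
sudo ufw enable
```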
## Configurazione avanzata del Graph Node
@@ -330,7 +335,7 @@ Le tabelle di database che memorizzano le entità sembrano essere generalmente d
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> Si tratta di una nuova funzionalità, che sarà disponibile in Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/it/resources/claude-mcp.mdx b/website/src/pages/it/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/it/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
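
A malformed config file is easy to miss because Claude Desktop can fail silently. Before restarting, you can sanity-check the file with a small validator like the one below (an illustrative sketch, not part of any official tooling; the function name and checks are ours):

```python
import json

def validate_mcp_config(text: str) -> list:
    """Parse claude_desktop_config.json contents and list any problems found."""
    try:
        config = json.loads(text)
    except ValueError as e:
        return ["invalid JSON: %s" % e]
    problems = []
    servers = config.get("mcpServers", {})
    # Both server entries from the configuration above should be present
    for name in ("token-api", "subgraph"):
        if name not in servers:
            problems.append("missing server entry: " + name)
    # Catch placeholder API keys that were never replaced
    token_env = servers.get("token-api", {}).get("env", {})
    if token_env.get("ACCESS_TOKEN") == "ACCESS_TOKEN":
        problems.append("ACCESS_TOKEN placeholder not replaced")
    subgraph_env = servers.get("subgraph", {}).get("env", {})
    if subgraph_env.get("AUTH_HEADER") == "Bearer GATEWAY_API_KEY":
        problems.append("GATEWAY_API_KEY placeholder not replaced")
    return problems
```

Run it against the path for your OS listed above; an empty list means the file parses and both server entries are present.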
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
diff --git a/website/src/pages/it/subgraphs/_meta-titles.json b/website/src/pages/it/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/it/subgraphs/_meta-titles.json
+++ b/website/src/pages/it/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx
index 06fd431e7048..fb87d521d968 100644
--- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ La libreria `@graphprotocol/graph-ts` fornisce le seguenti API:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Versione | Note di rilascio |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum Aggiunto il campo `receipt` all'oggetto Ethereum Event |
-| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction Aggiunto `baseFeePerGas` all'oggetto Ethereum Block |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction |
+| Versione | Note di rilascio |
+| :------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum Aggiunto il campo `receipt` all'oggetto Ethereum Event |
+| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction Aggiunto `baseFeePerGas` all'oggetto Ethereum Block |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction |
### Tipi integrati
@@ -770,44 +770,44 @@ Quando il tipo di un valore è certo, può essere convertito in un [tipo incorpo
### Riferimento alle conversioni di tipo
-| Fonte(i) | Destinazione | Funzione di conversione |
-| -------------------- | -------------------- | --------------------------- |
-| Address | Bytes | none |
-| Address | String | s.toHexString() |
-| BigDecimal | String | s.toString() |
-| BigInt | BigDecimal | s.toBigDecimal() |
-| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() |
-| BigInt | String (unicode) | s.toString() |
-| BigInt | i32 | s.toI32() |
-| Boolean | Boolean | none |
-| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
-| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
-| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() |
-| Bytes | String (unicode) | s.toString() |
-| Bytes | String (base58) | s.toBase58() |
-| Bytes | i32 | s.toI32() |
-| Bytes | u32 | s.toU32() |
-| Bytes | JSON | json.fromBytes(s) |
-| int8 | i32 | none |
-| int32 | i32 | none |
-| int32 | BigInt | BigInt.fromI32(s) |
-| uint24 | i32 | none |
-| int64 - int256 | BigInt | none |
-| uint32 - uint256 | BigInt | none |
-| JSON | boolean | s.toBool() |
-| JSON | i64 | s.toI64() |
-| JSON | u64 | s.toU64() |
-| JSON | f64 | s.toF64() |
-| JSON | BigInt | s.toBigInt() |
-| JSON | string | s.toString() |
-| JSON | Array | s.toArray() |
-| JSON | Object | s.toObject() |
-| String | Address | Address.fromString(s) |
-| Bytes | Address | Address.fromBytes(s) |
-| String | BigInt | BigInt.fromString(s) |
-| String | BigDecimal | BigDecimal.fromString(s) |
-| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
-| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
+| Fonte(i) | Destinazione | Funzione di conversione |
+| -------------------- | --------------------- | -------------------------------- |
+| Address | Bytes | none |
+| Address | String | s.toHexString() |
+| BigDecimal | String | s.toString() |
+| BigInt | BigDecimal | s.toBigDecimal() |
+| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() |
+| BigInt | String (unicode) | s.toString() |
+| BigInt | i32 | s.toI32() |
+| Boolean | Boolean | none |
+| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
+| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
+| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() |
+| Bytes | String (unicode) | s.toString() |
+| Bytes | String (base58) | s.toBase58() |
+| Bytes | i32 | s.toI32() |
+| Bytes | u32 | s.toU32() |
+| Bytes | JSON | json.fromBytes(s) |
+| int8 | i32 | none |
+| int32 | i32 | none |
+| int32 | BigInt | BigInt.fromI32(s) |
+| uint24 | i32 | none |
+| int64 - int256 | BigInt | none |
+| uint32 - uint256 | BigInt | none |
+| JSON | boolean | s.toBool() |
+| JSON | i64 | s.toI64() |
+| JSON | u64 | s.toU64() |
+| JSON | f64 | s.toF64() |
+| JSON | BigInt | s.toBigInt() |
+| JSON | string | s.toString() |
+| JSON | Array | s.toArray() |
+| JSON | Object | s.toObject() |
+| String | Address | Address.fromString(s) |
+| Bytes | Address | Address.fromBytes(s) |
+| String | BigInt | BigInt.fromString(s) |
+| String | BigDecimal | BigDecimal.fromString(s) |
+| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
+| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
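
The conversions in the table compose naturally in mapping code. As an illustration (AssemblyScript as used in graph-ts mappings; the values are arbitrary, and this only exercises functions listed in the table above):

```typescript
import { Address, BigInt, ByteArray, Bytes } from '@graphprotocol/graph-ts'

// String -> Address, then Address -> hex String
let owner = Address.fromString('0x0000000000000000000000000000000000000001')
let ownerHex = owner.toHexString()

// String -> BigInt, then BigInt -> BigDecimal for fixed-point arithmetic
let raw = BigInt.fromString('1000000000000000000')
let asDecimal = raw.toBigDecimal()

// Hex String -> ByteArray -> Bytes
let data = Bytes.fromByteArray(ByteArray.fromHexString('0xdeadbeef'))
```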
### Metadati della Data Source
diff --git a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx
index 49090d6b963f..5b0ac052a82d 100644
--- a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Versione | Note di rilascio |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| Versione | Note di rilascio |
+| :------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx
index f8b9f74c6479..14056f78d173 100644
--- a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 3a07d7d50b24..f1e69207ee72 100644
--- a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 1672a6619d13..985a5bd7d9d7 100644
--- a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/it/subgraphs/explorer.mdx b/website/src/pages/it/subgraphs/explorer.mdx
index 5db7212c1fb0..c249b659cdb3 100644
--- a/website/src/pages/it/subgraphs/explorer.mdx
+++ b/website/src/pages/it/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) and take full advantage of its core features.
## Panoramica
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, specific categories (such as DeFi, NFTs, and DAOs), and **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- Visualizza ulteriori dettagli, come grafici, ID di distribuzione corrente e altri metadati
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- Statistiche del subgraph (allocazione, Curator, ecc.)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexer
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity - l'importo massimo di stake delegato che l'Indexer può accettare in modo produttivo. Uno stake delegato in eccesso non può essere utilizzato per l'allocazione o per il calcolo dei premi.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Indexer Rewards - è il totale delle ricompense dell'Indexer guadagnate dall'Indexer e dai suoi Delegator in tutto il tempo. Le ricompense degli Indexer vengono pagate tramite l'emissione di GRT.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total Indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
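+
To make the cut parameters above concrete, here is a minimal sketch (not the protocol's actual contract code) of how a reward cut, expressed in parts-per-million as in the staking contracts, splits indexing rewards between an Indexer and its delegation pool:

```python
PPM = 1_000_000  # cut parameters are stored in parts-per-million

def split_indexing_rewards(total_rewards: float, reward_cut_ppm: int) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a given reward cut."""
    indexer_share = total_rewards * reward_cut_ppm / PPM
    return indexer_share, total_rewards - indexer_share

# e.g. a 90% reward cut on 1,000 GRT of indexing rewards
indexer_share, delegator_share = split_indexing_rewards(1_000.0, 900_000)
```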
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Curator
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
@@ -102,11 +122,11 @@ In the The Curator table listed below you can see:
- The number of GRT deposited
- The number of shares a Curator owns
-
+
If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegator
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Overview
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### Epochs
@@ -161,69 +181,77 @@ Nella sezione Epoche è possibile analizzare su base epocale metriche come:
- The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Il profilo utente
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Panoramica del profilo
+#### Profile Overview
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, and delegated stake
-
+
-### Scheda di subgraph
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Scheda di indicizzazione
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed anything yet, you'll see links to stake to index Subgraphs and to browse Subgraphs on Graph Explorer.
-Questa sezione include anche i dettagli sui compensi netti degli Indexer e sulle tariffe nette di query. Verranno visualizzate le seguenti metriche:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake (stake delegato)- lo stake dei Delegator che può essere assegnata dall'utente, ma che non può essere tagliata
-- Total Query Fees (totale delle tariffe di query)- il totale delle tariffe che gli utenti hanno pagato per le query servite da voi nel tempo
-- Indexer Rewards (ricompense dell'Indexer) - l'importo totale delle ricompense dell'Indexer ricevuti, in GRT
-- Fee Cut (percentuale delle tariffe) - la percentuale dei rimborsi delle tariffe di query che si trattiene quando si divide con i Delegator
-- Rewards Cut (percentuale delle ricompense) - la percentuale delle ricompense dell'Indexer che verrà trattenuta quando si divide con i Delegator
-- Owned (di proprietà)- la quota depositata, che potrebbe essere tagliata in caso di comportamento dannoso o scorretto
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Scheda di delege
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-Nella prima metà della pagina è possibile vedere il grafico delle deleghe e quello dei sole ricompense. A sinistra, si possono vedere i KPI che riflettono le metriche delle delega attuali.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-Le metriche del Delegator visualizzate in questa scheda includono:
+Top Section:
-- Ricompense di delega totali
-- Ricompense totali non realizzate
-- Ricomepsne totali realizzate
+- View delegation and rewards-only charts
+- Track key metrics:
+  - Total delegation rewards
+  - Unrealized rewards
+  - Realized rewards
-Nella seconda metà della pagina si trova la tabella dei Delegator. Qui si possono vedere gli Indexer verso i quali si è delegata la delega, nonché i loro dettagli (come le percentuali delle ricompense, il cooldown, ecc.).
+Bottom Section:
-Con i pulsanti sul lato destro della tabella è possibile gestire la delega - delegare di più, non delegare o ritirare la delega dopo il periodo di scongelamento.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation: delegate more, undelegate, or withdraw it after the thawing period.
-Tenete presente che questo grafico è scorrevole orizzontalmente, quindi se scorrete fino a destra, potete anche vedere lo stato della vostra delegazione (delegante, non delegante, revocabile).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Scheda di curation
+#### Curating Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.
Within this tab, you’ll find an overview of:
@@ -232,22 +260,22 @@ All'interno di questa scheda è presente una panoramica di:
- Query rewards per Subgraph
- Up-to-date details
-
+
-### Impostazioni del profilo
+#### Profile Settings
Within your user profile, you can manage your personal profile details (like setting up an ENS name). If you're an Indexer, you have even more settings at your fingertips. In your user profile, you'll be able to set up your delegation parameters and operators.
- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set.
- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Additional Resources
+### Additional Resources
#### Video Guide
diff --git a/website/src/pages/it/subgraphs/fair-use-policy.mdx b/website/src/pages/it/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..822fe1758eff
--- /dev/null
+++ b/website/src/pages/it/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that show:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
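+
As a sketch of the pruning option mentioned above: a Subgraph manifest (`subgraph.yaml`) can opt into automatic history pruning via `indexerHints`, assuming specVersion 1.0.0 or later. This is an illustrative fragment, not a complete manifest.

```yaml
# subgraph.yaml (fragment) — keeps only the history graph-node needs,
# which reduces storage for Subgraphs that don't use time-travel queries
specVersion: 1.0.0
indexerHints:
  prune: auto
```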
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/it/subgraphs/guides/near.mdx b/website/src/pages/it/subgraphs/guides/near.mdx
index baa5bcc79157..f3d88823e92e 100644
--- a/website/src/pages/it/subgraphs/guides/near.mdx
+++ b/website/src/pages/it/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx
index 51d882cda5e9..08112a03ca4d 100644
--- a/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly.
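+
For example, the immutable-entities requirement above means a source Subgraph's schema should declare its entities with `immutable: true`. The following is a minimal illustrative fragment, not taken from a specific project:

```graphql
# schema.graphql in a source Subgraph: only immutable entities
# can be composed on top of
type Transfer @entity(immutable: true) {
  id: Bytes!
  from: Bytes!
  to: Bytes!
  value: BigInt!
}
```
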
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
diff --git a/website/src/pages/it/subgraphs/mcp/claude.mdx b/website/src/pages/it/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/it/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get the aggregate query count over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
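+
A small heuristic sketch of routing an identifier to the right tool family, based purely on the prefixes listed above (this is an illustration, not an official API):

```python
def classify_identifier(ident: str) -> str:
    """Route an identifier to a tool family based on its prefix."""
    if ident.startswith("0x"):
        return "deployment-id"   # use execute_query_by_deployment_id
    if ident.startswith("Qm"):
        return "ipfs-hash"       # use get_schema_by_ipfs_hash
    return "subgraph-id"         # use execute_query_by_subgraph_id
```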
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/it/subgraphs/mcp/cline.mdx b/website/src/pages/it/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/it/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
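+
+For example, with a hypothetical key `abcd1234` (real keys are longer), the `env` block would read:
+
+```json
+"env": {
+  "AUTH_HEADER": "Bearer abcd1234"
+}
+```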
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/it/subgraphs/mcp/cursor.mdx b/website/src/pages/it/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/it/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/it/subgraphs/querying/best-practices.mdx b/website/src/pages/it/subgraphs/querying/best-practices.mdx
index d4bb8b226105..387fe858e2d1 100644
--- a/website/src/pages/it/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/it/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Querying Best Practices
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-A differenza del REST API, un GraphQL API si basa su uno schema che definisce le query che possono essere eseguite.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Qualsiasi variabile assegnata a un argomento deve corrispondere al suo tipo.
-- In un determinato elenco di variabili, ciascuna di esse deve essere unica.
-- Tutte le variabili definite devono essere utilizzate.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, for example on `id`) will raise an error. To know a field's type, refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
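+
+As an illustration of rule 2, the following intentionally invalid query selects `id` twice under `token` and will be rejected:
+
+```graphql
+query GetToken($id: ID!) {
+  token(id: $id) {
+    id
+    id # Error: `id` is already selected in this selection set
+  }
+}
+```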
-### Invio di una query a un API GraphQL
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
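+
+As a minimal sketch (the bracketed URL segments are placeholders; real query URLs come from Subgraph Studio or Graph Explorer), a plain `fetch` call looks like:
+
+```tsx
+async function main() {
+  const response = await fetch('https://gateway.thegraph.com/api/[api-key]/subgraphs/id/[subgraph-id]', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    // GraphQL over HTTP: the query string and its variables travel as JSON
+    body: JSON.stringify({
+      query: 'query GetToken($id: ID!) { token(id: $id) { id } }',
+      variables: { id: '1' },
+    }),
+  })
+  const { data, errors } = await response.json()
+  console.log(data, errors)
+}
+
+main()
+```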
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed results
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### Scrivere sempre query statiche
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g., linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of static string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- Quando si interrogano le GraphQL API, si deve sempre pensare di effettuare query di solo i campi che verranno effettivamente utilizzati.
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
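+
+For example, a capped version of the query above (sketched with the same field names) would be:
+
+```graphql
+query listTokens {
+  tokens(first: 10) {
+    id
+    transactions(first: 10) {
+      id
+    }
+  }
+}
+```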
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Sfruttare i frammenti GraphQL
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### I frammenti GraphQL da fare e da non fare
+## GraphQL Fragment Guidelines
-### La base del frammento deve essere un tipo
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### Come diffondere un frammento
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Example:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Definire il frammento come unità aziendale atomica di dati
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### Esploratori web GraphQL
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### Linting di GraphQL
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- e altro ancora!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
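+
+A typical setup (sketched from the graphql-eslint docs; adjust the file globs to your project) in `.eslintrc.json` looks like:
+
+```json
+{
+  "overrides": [
+    {
+      "files": ["*.graphql"],
+      "parser": "@graphql-eslint/eslint-plugin",
+      "plugins": ["@graphql-eslint"],
+      "extends": "plugin:@graphql-eslint/operations-recommended"
+    }
+  ]
+}
+```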
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### Plugin IDE
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ and GraphQL
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/it/subgraphs/querying/graphql-api.mdx b/website/src/pages/it/subgraphs/querying/graphql-api.mdx
index 6449bb254449..9e8a3dc6e6bf 100644
--- a/website/src/pages/it/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/it/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: API GraphQL
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
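+
+For example, a minimal (illustrative) entity definition in a Subgraph schema looks like:
+
+```graphql
+type Token @entity {
+  id: ID! # required primary identifier
+  name: String
+}
+```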
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Esempi
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Ordinamento
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Esempio
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Esempio di ordinamento di entità annidate
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Impaginazione
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-Eseguire query di primi 10 token:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Eseguire query di primi 10 token:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
### Filtering
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Esempio di filtraggio dell'intervallo
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Esempio di filtraggio dei blocchi
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Esempio di filtraggio di entità annidate
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-Questo può essere utile se si vuole recuperare solo le entità il cui livello di figlio soddisfa le condizioni fornite.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ Questo può essere utile se si vuole recuperare solo le entità il cui livello d
}
```
-#### Operatori logici
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing a comma-separated sub-expression instead.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### Tutti i filtri
-
-Elenco completo dei suffissi dei parametri:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
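This filter is used inside the `where` argument to fetch only entities that changed in or after a given block — for example, since the last time you polled. A sketch, with `exampleEntities` and `updatedAt` as illustrative entity and field names:

```graphql
{
  exampleEntities(where: { _change_block: { number_gte: 14486056 } }) {
    id
    updatedAt
  }
}
```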
-### Query Time-travel
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell whether a given block hash is on the main chain at all, or whether a query result by block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### Esempio
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### Esempio
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Query di ricerca fulltext
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Operatori di ricerca fulltext:
+Full-text search fields use the required `text` parameter with the following operators:
-| Simbolo | Operatore | Descrizione |
-| --- | --- | --- |
-| `&` | `And` | Per combinare più termini di ricerca in un filtro per le entità che includono tutti i termini forniti |
-| | | `Or` | Le query con più termini di ricerca separati dall'operatore Or restituiranno tutte le entità con una corrispondenza tra i termini forniti |
-| `<->` | `Follow by` | Specifica la distanza tra due parole. |
-| `:*` | `Prefix` | Utilizzare il termine di ricerca del prefisso per trovare le parole il cui prefisso corrisponde (sono richiesti 2 caratteri.) |
+| Operator  | Symbol  | Description                                                     |
+| --------- | ------- | --------------------------------------------------------------- |
+| And       | `&`     | Matches entities containing all of the provided terms           |
+| Or        | `\|`    | Returns all entities matching any of the provided terms         |
+| Follow by | `<->`   | Matches terms within a specified distance of each other         |
+| Prefix    | `:*`    | Matches word prefixes (minimum 2 characters)                    |
-#### Esempi
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Combinare gli operatori fulltext per creare filtri più complessi. Con un operatore di ricerca pretext combinato con un follow by questa query di esempio corrisponderà a tutte le entità del blog con parole che iniziano con "lou" seguite da "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Combinare gli operatori fulltext per creare filtri più complessi. Con un operat
}
```
-### Validazione
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## Schema
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entità
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
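A minimal schema satisfying these requirements might look like this (the entity and field names are illustrative):

```graphql
type Token @entity {
  id: ID! # required unique identifier
  owner: Bytes!
  tokenURI: String
}
```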
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Metadati del Subgraph
+The `_Meta_` object provides subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
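Omitting the `block` argument returns metadata as of the latest indexed block. A sketch querying the fields listed above (`timestamp` is only available for Subgraphs indexing EVM networks):

```graphql
{
  _meta {
    block {
      number
      hash
      timestamp
    }
    deployment
    hasIndexingErrors
  }
}
```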
+
+> Note: When writing queries, consider the performance impact of the `or` operator. While `or` can be useful for broadening search results, it forces the database to scan multiple indexes, which can slow queries significantly. Whenever possible, use `and` operators instead for more precise filtering and faster queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator                  | Description                                                       | Example                                              |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on the fields of a nested entity                          | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt`                     | Greater than (`>`)                                                | `{ where: { price_gt: "100" } }`                     |
+| `_lt`                     | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                     |
+| `_gte`                    | Greater than or equal to (`>=`)                                   | `{ where: { price_gte: "100" } }`                    |
+| `_lte`                    | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                    |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
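For instance, the numeric and string operators above can be combined under `and` (the `tokens` entity and its fields are illustrative):

```graphql
{
  tokens(where: { and: [{ price_gte: "100" }, { name_contains_nocase: "token" }] }) {
    id
    name
    price
  }
}
```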
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: l'hash del blocco
-- numero: il numero del blocco
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx
index fc4ebe1f3daf..9e1b5a568fd3 100644
--- a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Panoramica
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - Modificare il nome della chiave
- - Rigenerare le chiavi API
- - Visualizza l'utilizzo attuale della chiave API con le statistiche:
- - Numero di query
- - Importo di GRT speso
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - Visualizzare e gestire i nomi di dominio autorizzati a utilizzare la chiave API
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/it/subgraphs/quick-start.mdx b/website/src/pages/it/subgraphs/quick-start.mdx
index a803ac8695fa..b5c4a0fdc09e 100644
--- a/website/src/pages/it/subgraphs/quick-start.mdx
+++ b/website/src/pages/it/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: Input the block in which the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/it/subgraphs/upgrade-indexer.mdx b/website/src/pages/it/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..8fb72e077a03
--- /dev/null
+++ b/website/src/pages/it/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Panoramica
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+The Upgrade Indexer was originally designed as transitional support: its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Today, it supports newly deployed Subgraphs before Chain Integration Process (CIP) indexing rewards are activated.
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate their Subgraphs so that independent Indexers serve them long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/it/substreams/_meta-titles.json b/website/src/pages/it/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/it/substreams/_meta-titles.json
+++ b/website/src/pages/it/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/it/substreams/developing/sinks.mdx b/website/src/pages/it/substreams/developing/sinks.mdx
index 5b96274b08b7..6bd1b0f60fa0 100644
--- a/website/src/pages/it/substreams/developing/sinks.mdx
+++ b/website/src/pages/it/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/it/substreams/quick-start.mdx b/website/src/pages/it/substreams/quick-start.mdx
index a4c82cb797d9..0f1eb0783b99 100644
--- a/website/src/pages/it/substreams/quick-start.mdx
+++ b/website/src/pages/it/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/it/substreams/sps/faq.mdx b/website/src/pages/it/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..250c466d5929
--- /dev/null
+++ b/website/src/pages/it/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
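For instance, a published Substreams-powered Subgraph is consumed over plain GraphQL-over-HTTP, exactly like any other Subgraph. A minimal sketch (the gateway URL, API key, and `transfers` entity below are placeholders, not a real deployment):

```ts
// Querying a published Subgraph; URL and entity name are illustrative placeholders.
const SUBGRAPH_URL =
  'https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>'

const query = `{
  transfers(first: 5) {
    id
    from
    to
  }
}`

// GraphQL over HTTP is a POST with a JSON body of the form { "query": "..." }.
function buildRequestBody(q: string): string {
  return JSON.stringify({ query: q })
}

async function querySubgraph(): Promise<unknown> {
  const res = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildRequestBody(query),
  })
  return (await res.json()).data
}
```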
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources that specify onchain events and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source that references a Substreams package, which is processed by Graph Node. Compared to conventional Subgraphs, Substreams have access to more granular onchain data and can also benefit from massively parallelized processing, which can mean much faster indexing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Send your data wherever you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data that is not available over JSON-RPC.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtime: Designed from the ground up for high availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Includes balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
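The cursor behavior above can be sketched as follows. This is a hypothetical illustration, not the real Firehose client API: the `BlockMessage` shape and function names are invented, but the pattern — persist the cursor only after a block is fully processed, so a restart resumes exactly where it left off — is the key idea:

```ts
import * as fs from 'node:fs'

// Invented message shape for illustration; real Firehose responses differ.
interface BlockMessage {
  cursor: string
  blockNumber: number
}

const CURSOR_FILE = './cursor.txt'

function loadCursor(): string | null {
  return fs.existsSync(CURSOR_FILE) ? fs.readFileSync(CURSOR_FILE, 'utf8') : null
}

function saveCursor(cursor: string): void {
  fs.writeFileSync(CURSOR_FILE, cursor)
}

function handleMessage(msg: BlockMessage): void {
  // ...process the block's data...
  saveCursor(msg.cursor) // persist only after successful processing
}
```

On startup, pass `loadCursor()` back to the stream so the server replays from that exact point, including across forks.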
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for tokens of interest to him, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request packages all of these individuals' modules and links them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
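Conceptually, this stacking is ordinary function composition: each module is a pure transformation whose output another module consumes. A toy sketch (real Substreams modules are Rust functions over Protobuf messages; all names here are invented for illustration):

```ts
type Price = { token: string; usd: number }

// "Alice's" DEX price module: maps raw swap events to per-token prices (stubbed).
function dexPrices(swaps: Price[]): Price[] {
  return swaps
}

// "Bob's" aggregator: consumes Alice's output for the tokens he cares about.
function totalVolume(prices: Price[], tokens: string[]): number {
  return prices.filter((p) => tokens.includes(p.token)).reduce((s, p) => s + p.usd, 0)
}

// "Lisa's" oracle: averages quotes from several independent price modules.
function priceOracle(sources: Price[][], token: string): number {
  const quotes = sources.flat().filter((p) => p.token === token).map((p) => p.usd)
  return quotes.reduce((s, q) => s + q, 0) / quotes.length
}
```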
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/it/substreams/sps/introduction.mdx b/website/src/pages/it/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..0e5be69aa0c3
--- /dev/null
+++ b/website/src/pages/it/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduzione
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Panoramica
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or in Substreams. However, consider what aligns with your data needs: Substreams has a parallelized model, while triggers are consumed linearly in Graph Node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/it/substreams/sps/triggers.mdx b/website/src/pages/it/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..711dcaa6423a
--- /dev/null
+++ b/website/src/pages/it/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers to enable full use of GraphQL.
+
+## Panoramica
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to make full use of the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is then used like any other AssemblyScript object
+2. Looping over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/it/substreams/sps/tutorial.mdx b/website/src/pages/it/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..06a271e30ff1
--- /dev/null
+++ b/website/src/pages/it/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Iniziare
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest, which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
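Once deployed, graph-node auto-generates a GraphQL API for this schema: each `@entity` type is queryable through a camelCased plural field. An illustrative query for the entity above (the `orderBy` argument and result set are examples, not outputs from a real deployment):

```ts
// Illustrative query string for the MyTransfer entity defined in schema.graphql.
// graph-node exposes it via the auto-generated "myTransfers" collection field.
const myTransfersQuery = `{
  myTransfers(first: 10, orderBy: id) {
    id
    amount
    source
    designation
    signers
  }
}`
```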
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/it/supported-networks.mdx b/website/src/pages/it/supported-networks.mdx
index ef2c28393033..9592cfabc0ad 100644
--- a/website/src/pages/it/supported-networks.mdx
+++ b/website/src/pages/it/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/it/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/it/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/it/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/it/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT Activities (e.g. transfers, mints & burns).
diff --git a/website/src/pages/it/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/it/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/it/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/it/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/it/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/it/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/it/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/it/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/it/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/it/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/it/token-api/evm/get-pools-evm.mdx b/website/src/pages/it/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/it/token-api/evm/get-swaps-evm.mdx b/website/src/pages/it/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/it/token-api/evm/get-transfers-evm.mdx b/website/src/pages/it/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/it/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/it/token-api/faq.mdx b/website/src/pages/it/token-api/faq.mdx
index 6178aee33e86..3bf60c0cda8f 100644
--- a/website/src/pages/it/token-api/faq.mdx
+++ b/website/src/pages/it/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
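As a rough sketch of how these documented endpoint paths compose (the helper function names and the example address are illustrative, not part of the API):

```python
# Sketch: building request URLs for the historical-balance and OHLC
# endpoints listed above, against the documented token-api.thegraph.com host.
BASE = "https://token-api.thegraph.com"

def historical_balances_url(address: str) -> str:
    # /historical/balances/evm/{address}
    return f"{BASE}/historical/balances/evm/{address}"

def ohlc_pool_url(pool: str) -> str:
    # /ohlc/pools/evm/{pool}
    return f"{BASE}/ohlc/pools/evm/{pool}"

def ohlc_contract_url(contract: str) -> str:
    # /ohlc/prices/evm/{contract}
    return f"{BASE}/ohlc/prices/evm/{contract}"

print(historical_balances_url("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"))
```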
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
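A minimal sketch of attaching the header correctly (the token value is a placeholder, and the `/networks` path is the monitoring endpoint mentioned elsewhere in these docs). The common mistakes above — a missing "Bearer " prefix, or sending the API key instead of the JWT — both produce an invalid `Authorization` header:

```python
import urllib.request

# Build a request whose Authorization header carries the JWT with the
# required "Bearer " prefix. "<YOUR_JWT>" is a placeholder, not a real token.
def authed_request(url: str, jwt: str) -> urllib.request.Request:
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {jwt}"})

req = authed_request("https://token-api.thegraph.com/networks", "<YOUR_JWT>")
print(req.get_header("Authorization"))  # Bearer <YOUR_JWT>
```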
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint will return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
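One way to cope with the throttling described above is to back off and retry when a call reports HTTP 429. This is an illustrative helper, not part of any Token API client: `fetch` stands in for any function returning a `(status, body)` pair.

```python
import time

# Retry a callable with exponential backoff whenever it reports HTTP 429.
# `fetch` is a placeholder for your own request function.
def with_backoff(fetch, retries=3, base_delay=1.0):
    for attempt in range(retries + 1):
        status, body = fetch()
        if status != 429:
            return status, body
        if attempt < retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```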
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
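The paging arithmetic above can be sketched as follows (the helper names are illustrative; only the `limit` and `page` parameters come from the API):

```python
from urllib.parse import urlencode

# Build the documented pagination query string, and compute which
# item range a given (limit, page) pair covers (page is 1-indexed).
def page_query(limit: int, page: int) -> str:
    return urlencode({"limit": limit, "page": page})

def item_range(limit: int, page: int) -> tuple:
    first = (page - 1) * limit + 1
    return first, first + limit - 1

print(page_query(50, 2))   # limit=50&page=2
print(item_range(50, 2))   # (51, 100)
```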
### How do I fetch older transfer history?
diff --git a/website/src/pages/it/token-api/monitoring/get-health.mdx b/website/src/pages/it/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/it/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/it/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Provides the health status of the API.
diff --git a/website/src/pages/it/token-api/monitoring/get-networks.mdx b/website/src/pages/it/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..f4b65492ed15 100644
--- a/website/src/pages/it/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/it/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Supported Networks
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Provides the supported networks of the API.
diff --git a/website/src/pages/it/token-api/monitoring/get-version.mdx b/website/src/pages/it/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..8766594e2dd8 100644
--- a/website/src/pages/it/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/it/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Provides the version of the API.
diff --git a/website/src/pages/it/token-api/quick-start.mdx b/website/src/pages/it/token-api/quick-start.mdx
index 4653c3d41ac6..5b3d052d9ec5 100644
--- a/website/src/pages/it/token-api/quick-start.mdx
+++ b/website/src/pages/it/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Quick Start
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API Token, not the API key. Each API key can generate a new JWT API Token at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum One     | Ingesting\* | Ingesting\*  |  Ingesting\*   |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/ja/about.mdx b/website/src/pages/ja/about.mdx
index b4462cd3c1c8..7487094da17a 100644
--- a/website/src/pages/ja/about.mdx
+++ b/website/src/pages/ja/about.mdx
@@ -1,67 +1,46 @@
---
-title: The Graphについて
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## とは「ザ・グラフ」
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-フローは以下のステップに従います。
-
-1. Dapp は、スマート コントラクトのトランザクションを通じて Ethereum にデータを追加します。
-2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. Dapp は、ノードの [GraphQL エンドポイント](https://graphql.org/learn/) を使用して、ブロックチェーンからインデックス付けされたデータをグラフ ノードに照会します。グラフ ノードは、ストアのインデックス作成機能を利用して、このデータを取得するために、GraphQL クエリを基盤となるデータ ストアのクエリに変換します。 dapp は、このデータをエンドユーザー向けの豊富な UI に表示し、エンドユーザーはそれを使用して Ethereum で新しいトランザクションを発行します。サイクルが繰り返されます。
-
-## 次のステップ
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/ja/index.json b/website/src/pages/ja/index.json
index 2034192e0089..180785dbdec9 100644
--- a/website/src/pages/ja/index.json
+++ b/website/src/pages/ja/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graphのドキュメント",
- "description": "ブロックチェーンデータを抽出、変換、読み込み可能なツールを用いて、あなたのWeb3プロジェクトを開始しましょう。",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "The Graphの仕組み",
"cta2": "最初のサブグラフを作る"
},
@@ -19,10 +19,10 @@
"description": "並列実行でブロックチェーンのデータを取得し、使用できます。",
"cta": "サブストリームを使用する"
},
- "sps": {
- "title": "サブストリームを用いたサブグラフ",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "サブストリームを用いたサブグラフの設定を行う"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "グラフノード",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "ブロックチェーンデータをフラットファイルに抽出し、時間動機機能とストリーミング機能を向上させます。",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Firehoseを使う"
}
},
@@ -58,6 +58,7 @@
"networks": "ネットワーク",
"completeThisForm": "フォームを記入する"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "サブグラフ",
"substreams": "サブストリーム",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "サブストリーム",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "グラフエクスプローラ",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/ja/indexing/new-chain-integration.mdx b/website/src/pages/ja/indexing/new-chain-integration.mdx
index f6fa2b643fc3..c0fe3f26c7b2 100644
--- a/website/src/pages/ja/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ja/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/ja/indexing/overview.mdx b/website/src/pages/ja/indexing/overview.mdx
index 25b94c36ca88..da40b4b0353f 100644
--- a/website/src/pages/ja/indexing/overview.mdx
+++ b/website/src/pages/ja/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### グラフノード
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/ja/indexing/tooling/graph-node.mdx b/website/src/pages/ja/indexing/tooling/graph-node.mdx
index dfbb9aeea657..2778a2e090b2 100644
--- a/website/src/pages/ja/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/ja/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### IPFSノード
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Prometheus メトリクスサーバー
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Kubernetesを始めるにあたって
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
グラフノードは起動時に以下のポートを公開します。
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## グラフノードの高度な設定
@@ -330,7 +335,7 @@ Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and s
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions the table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities is less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
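The 1% rule of thumb above can be expressed as a small check. A minimal TypeScript sketch (the function name is illustrative, not part of graphman; the inputs are the numbers reported by `graphman stats show`):

```typescript
// Rule of thumb from the docs: a table is a good candidate for the
// account-like optimization when its distinct entities are less than
// 1% of its total entity versions (rows).
function isAccountLike(distinctEntities: number, entityVersions: number): boolean {
  // A `-1` entity count means Postgres believes every row is a
  // distinct entity, so the optimization would not help.
  if (distinctEntities < 0) return false;
  return distinctEntities < 0.01 * entityVersions;
}
```

For example, a table with 500 distinct entities across 1,000,000 entity versions qualifies, while one with 5,000 entities across 100,000 versions (5%) does not.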
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> これは新しい機能で、Graph Node 0.29.xで利用可能になる予定です。
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ja/resources/claude-mcp.mdx b/website/src/pages/ja/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/ja/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
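Before restarting Claude Desktop, it can save a round trip to check that the file at least parses and has the expected shape. A minimal TypeScript sketch (this helper is illustrative only, not part of Claude Desktop or any official tooling):

```typescript
// Structural sanity check for claude_desktop_config.json: the file must
// be valid JSON, contain an `mcpServers` object, and each server entry
// must declare a `command` string and an `args` array.
function validateMcpConfig(raw: string): string[] {
  const errors: string[] = [];
  let config: any;
  try {
    config = JSON.parse(raw);
  } catch {
    return ["config is not valid JSON"];
  }
  const servers = config.mcpServers;
  if (!servers || typeof servers !== "object") {
    return ["missing `mcpServers` object"];
  }
  for (const [name, server] of Object.entries(servers) as [string, any][]) {
    if (typeof server.command !== "string") errors.push(`${name}: missing command`);
    if (!Array.isArray(server.args)) errors.push(`${name}: args must be an array`);
  }
  return errors;
}
```

An empty result means the structure looks right; any strings returned point at the offending server entry.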
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
diff --git a/website/src/pages/ja/subgraphs/_meta-titles.json b/website/src/pages/ja/subgraphs/_meta-titles.json
index 5c6121aa7d88..cedc24a8e1b5 100644
--- a/website/src/pages/ja/subgraphs/_meta-titles.json
+++ b/website/src/pages/ja/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "クエリ",
"developing": "開発",
"guides": "How-to Guides",
- "best-practices": "ベストプラクティス"
+ "best-practices": "ベストプラクティス",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx
index 94df906daad7..b9e5cace8281 100644
--- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ Since language mappings are written in AssemblyScript, it is useful to review th
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| バージョン | リリースノート |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加 Ethereum Event オブジェクトに `receipt` フィールドを追加。 |
-| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加 Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 |
+| バージョン | リリースノート |
+| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加 Ethereum Event オブジェクトに `receipt` フィールドを追加。 |
+| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加 Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 |
| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 |
+| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 |
### 組み込み型
@@ -286,7 +286,7 @@ The store API facilitates the retrieval of entities that were created or updated
- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
-let id = event.transaction.hash // または ID が構築される方法
+let id = event.transaction.hash // または ID が構築される方法
let transfer = Transfer.loadInBlock(id)
if (transfer == null) {
  transfer = new Transfer(id)
@@ -770,44 +770,44 @@ if (value.kind == JSONValueKind.BOOL) {
### タイプ 変換参照
-| Source(s) | Destination | Conversion function |
-| -------------------- | -------------------- | ---------------------------- |
-| Address | Bytes | none |
-| Address | String | s.toHexString() |
-| BigDecimal | String | s.toString() |
-| BigInt | BigDecimal | s.toBigDecimal() |
-| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() |
-| BigInt | String (unicode) | s.toString() |
-| BigInt | i32 | s.toI32() |
-| Boolean | Boolean | none |
-| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
-| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
-| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() |
-| Bytes | String (unicode) | s.toString() |
-| Bytes | String (base58) | s.toBase58() |
-| Bytes | i32 | s.toI32() |
-| Bytes | u32 | s.toU32() |
-| Bytes | JSON | json.fromBytes(s) |
-| int8 | i32 | none |
-| int32 | i32 | none |
-| int32 | BigInt | Bigint.fromI32(s) |
-| uint24 | i32 | none |
-| int64 - int256 | BigInt | none |
-| uint32 - uint256 | BigInt | none |
-| JSON | boolean | s.toBool() |
-| JSON | i64 | s.toI64() |
-| JSON | u64 | s.toU64() |
-| JSON | f64 | s.toF64() |
-| JSON | BigInt | s.toBigInt() |
-| JSON | string | s.toString() |
-| JSON | Array | s.toArray() |
-| JSON | Object | s.toObject() |
-| String | Address | Address.fromString(s) |
-| Bytes | Address | Address.fromString(s) |
-| String | BigInt | BigDecimal.fromString(s) |
-| String | BigDecimal | BigDecimal.fromString(s) |
-| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
-| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
+| Source(s) | Destination | Conversion function |
+| -------------------- | -------------------- | -------------------------------- |
+| Address | Bytes | none |
+| Address | String | s.toHexString() |
+| BigDecimal | String | s.toString() |
+| BigInt | BigDecimal | s.toBigDecimal() |
+| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() |
+| BigInt | String (unicode) | s.toString() |
+| BigInt | i32 | s.toI32() |
+| Boolean | Boolean | none |
+| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
+| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
+| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() |
+| Bytes | String (unicode) | s.toString() |
+| Bytes | String (base58) | s.toBase58() |
+| Bytes | i32 | s.toI32() |
+| Bytes | u32 | s.toU32() |
+| Bytes | JSON | json.fromBytes(s) |
+| int8 | i32 | none |
+| int32 | i32 | none |
+| int32                | BigInt               | BigInt.fromI32(s)                |
+| uint24 | i32 | none |
+| int64 - int256 | BigInt | none |
+| uint32 - uint256 | BigInt | none |
+| JSON | boolean | s.toBool() |
+| JSON | i64 | s.toI64() |
+| JSON | u64 | s.toU64() |
+| JSON | f64 | s.toF64() |
+| JSON | BigInt | s.toBigInt() |
+| JSON | string | s.toString() |
+| JSON | Array | s.toArray() |
+| JSON | Object | s.toObject() |
+| String | Address | Address.fromString(s) |
+| Bytes                | Address              | Address.fromBytes(s)             |
+| String               | BigInt               | BigInt.fromString(s)             |
+| String | BigDecimal | BigDecimal.fromString(s) |
+| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
+| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
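For the hexadecimal rows above, the conversions behave like this plain-TypeScript sketch (these helpers are stand-ins for illustration only; in mappings you would use the graph-ts `Bytes`/`ByteArray` methods from the table):

```typescript
// Bytes -> "0x..."-prefixed lowercase hex string, analogous to
// the `s.toHexString()` conversion in the table above.
function bytesToHexString(bytes: Uint8Array): string {
  return "0x" + Array.from(bytes)
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Hex string (with or without "0x" prefix) -> bytes, analogous to
// the `ByteArray.fromHexString(s)` conversion in the table above.
function bytesFromHexString(hex: string): Uint8Array {
  const s = hex.startsWith("0x") ? hex.slice(2) : hex;
  const out = new Uint8Array(s.length / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = parseInt(s.slice(i * 2, i * 2 + 2), 16);
  }
  return out;
}
```

The two functions round-trip: converting bytes to hex and back yields the original byte values.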
### データソースのメタデータ
diff --git a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx
index 3fd648b44813..3c40e48ef42d 100644
--- a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| バージョン | リリースノート |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
+| バージョン | リリースノート |
+| :---: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx
index a43e7a32c7b8..271a81b74cfa 100644
--- a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 4e8503e208e4..3b4a1ecf9ba7 100644
--- a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index c26672ec6b84..4efa2628552b 100644
--- a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/ja/subgraphs/explorer.mdx b/website/src/pages/ja/subgraphs/explorer.mdx
index 0357d63fda7e..303ac4e7f221 100644
--- a/website/src/pages/ja/subgraphs/explorer.mdx
+++ b/website/src/pages/ja/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: グラフエクスプローラ
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) to take full advantage of its core features.
## 概要
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network, specific categories (such as DeFi, NFTs, and DAOs), and **most queried, most curated, recently created, and recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- チャート、現在のデプロイメント ID、その他のメタデータなどの詳細情報の表示
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- サブグラフの統計情報(割り当て数、キュレーターなど)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. インデクサー(Indexers)
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity - インデクサーが生産的に受け入れることができる委任されたステークの最大量。超過した委任されたステークは、割り当てや報酬の計算には使用できません。
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Indexer Rewards - インデクサーとそのデリゲーターが過去に獲得したインデクサー報酬の総額。 インデクサー報酬は GRT の発行によって支払われます
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
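+
+The fee and reward cuts above determine how value is split between an Indexer and its Delegators. A minimal sketch with hypothetical values (variable names are assumptions, not protocol APIs — the protocol's actual accounting also factors in allocation size and the delegation pool):
+
+```typescript
+// Illustrative split of query fee rebates between an Indexer and its Delegators.
+const totalQueryFees = 1_000 // GRT in rebates for served queries
+const queryFeeCut = 0.1 // Indexer keeps 10% when splitting with Delegators
+
+const indexerShare = totalQueryFees * queryFeeCut // 100 GRT
+const delegatorShare = totalQueryFees - indexerShare // 900 GRT, split pro rata
+```
+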
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. キュレーター
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
@@ -102,11 +122,11 @@ In the The Curator table listed below you can see:
- デポジットされた GRT の数
- キュレーターが所有するシェア数
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. デリゲーター
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### 概要
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### エポック
@@ -161,69 +181,77 @@ A few key details to note:
- 分配エポックとは、そのエポックの状態チャンネルが確定し、インデクサーがクエリフィーのリベートを請求できるようになるエポックのこと
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## ユーザープロファイル
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### プロフィールの概要
+#### Profile Overview
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### サブグラフタブ
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### インデックスタブ
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake - あなたが割り当てることができるデリゲーターからのステークですが、スラッシュされることはできません
-- Total Query Fees - 提供したクエリに対してユーザーが支払った料金の合計額
-- Indexer Rewards - 受け取ったインデクサー報酬の総額(GRT)
-- Fee Cut - デリゲーターとの分配時に保持するクエリフィーリベートの割合
-- Rewards Cut - デリゲーターとの分配時に保有するインデクサー報酬の割合
-- Owned - 預けているステークであり、悪質な行為や不正行為があった場合にスラッシュされる可能性がある
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### デリゲーションタブ
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-ページの前半には、自分のデリゲーションチャートと報酬のみのチャートが表示されています。 左側には、現在のデリゲーションメトリクスを反映した KPI が表示されています。
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-このタブに表示される委任者の指標には、次のものがあります。
+Top Section:
-- デリゲーション報酬の合計
-- 未実現報酬の合計
-- 実現報酬の合計
+- View delegation and rewards-only charts
+- Track key metrics:
+  - Total delegation rewards
+  - Unrealized rewards
+  - Realized rewards
-ページの後半には、デリゲーションテーブルがあります。 ここには、あなたがデリゲートしたインデクサーとその詳細(報酬のカットやクールダウンなど)が表示されています。
+Bottom Section:
-テーブルの右側にあるボタンで、デリゲートを管理することができます。追加でデリゲートする、デリゲートを解除する、解凍期間後にデリゲートを取り消すなどの操作が可能です。
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation: delegate more, undelegate, or withdraw it after the thawing period.
-このグラフは水平方向にスクロールできるので、右端までスクロールすると、委任のステータス (委任中、委任解除中、取り消し可能) も表示されることに注意してください。
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### キュレーションタブ
+#### Curation Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
このタブでは、以下の概要を見ることができます:
@@ -232,22 +260,22 @@ In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus
- Query rewards per Subgraph
- 日付詳細に更新済み
-
+
-### プロフィールの設定
+#### Profile Settings
ユーザープロフィールでは、個人的なプロフィールの詳細(ENS ネームの設定など)を管理することができます。 インデクサーの方は、さらに多くの設定が可能です。 ユーザープロファイルでは、デリゲーションパラメーターとオペレーターを設定することができます。
- オペレーターは、インデクサーに代わって、割り当ての開始や終了など、プロトコル上の限定的なアクションを行います。 オペレーターは通常、ステーキングウォレットとは別の他の Ethereum アドレスで、インデクサーが個人的に設定できるネットワークへのゲート付きアクセス権を持っています。
- 「Delegation parameters」では、自分とデリゲーターの間で GRT の分配をコントロールすることができます。
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## その他のリソース
+### Additional Resources
### Video Guide
diff --git a/website/src/pages/ja/subgraphs/fair-use-policy.mdx b/website/src/pages/ja/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..733b54ee4e67
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines the storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged, or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/ja/subgraphs/guides/near.mdx b/website/src/pages/ja/subgraphs/guides/near.mdx
index 9e3738689919..8eb27cab1c50 100644
--- a/website/src/pages/ja/subgraphs/guides/near.mdx
+++ b/website/src/pages/ja/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx
index 62b2d8eb4657..f781231fe623 100644
--- a/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
diff --git a/website/src/pages/ja/subgraphs/mcp/claude.mdx b/website/src/pages/ja/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get the aggregate query count over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
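+
+The three identifier formats above can be told apart by prefix alone. A small helper sketch (the heuristic is an assumption based on the examples above, not an official API):
+
+```typescript
+// Distinguish Subgraph identifier types by their prefix.
+function classifyId(id: string): "deployment-id" | "ipfs-hash" | "subgraph-id" {
+  if (id.startsWith("0x")) return "deployment-id" // e.g. 0x4d7c...
+  if (id.startsWith("Qm")) return "ipfs-hash" // e.g. QmTZ8e...
+  return "subgraph-id" // e.g. 5zvR82...
+}
+```
+
+For example, `classifyId("QmTZ8e...")` returns `"ipfs-hash"`, which maps to the `get_schema_by_ipfs_hash` tool.
+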
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/ja/subgraphs/mcp/cline.mdx b/website/src/pages/ja/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/ja/subgraphs/mcp/cursor.mdx b/website/src/pages/ja/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
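+
+Any of these identifier types can be referenced directly in a chat prompt. For example, reusing the truncated placeholder IDs above (illustrative only, not real deployments):
+
```
Get the schema for deployment 0x4d7c... and show me which entities it defines
```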
diff --git a/website/src/pages/ja/subgraphs/querying/best-practices.mdx b/website/src/pages/ja/subgraphs/querying/best-practices.mdx
index bd25c5d2fea6..9285942ffbda 100644
--- a/website/src/pages/ja/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ja/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: クエリのベストプラクティス
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-REST APIとは異なり、GraphQL APIは実行可能なクエリを定義するSchemaをベースに構築されています。
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- 引数に代入される変数は、その型と一致しなければなりません。
-- 与えられた変数のリストにおいて、各変数は一意でなければなりません。
-- 定義された変数はすべて使用する必要があります。
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, such as on `id`) will raise an error. To know a field type, refer to [Graph Explorer](/subgraphs/explorer/).
+4. Variables assigned to arguments must match their types.
+5. Variables must be uniquely defined and used.
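+
+As an illustrative sketch (the `token` entity and its fields are assumptions, not from any specific schema), the following query satisfies these rules: one operation name, no repeated fields, a unique and used variable, and a sub-field selection on the complex `token` type:
+
```graphql
query GetTokenOwner($id: ID!) {
  token(id: $id) {
    id    # scalar field: no sub-selection allowed
    owner # each field appears only once in the selection
  }
}
```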
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### GraphQL APIへのクエリの送信
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
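+
+As a minimal sketch of the plain-`fetch` approach (the endpoint URL is a placeholder and the helper below is illustrative, not part of any library):
+
```typescript
// A static GraphQL query; `token` and its fields are assumed for illustration.
const GET_TOKEN = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// Build a standard GraphQL-over-HTTP POST request body for use with fetch.
function buildGraphQLRequest(query: string, variables: Record<string, unknown>) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // GraphQL servers expect a JSON body with `query` and `variables` keys.
    body: JSON.stringify({ query, variables }),
  }
}

// Usage (assumes a reachable GraphQL endpoint):
// const res = await fetch('https://example.com/subgraphs/example', buildGraphQLRequest(GET_TOKEN, { id: '1' }))
// const { data, errors } = await res.json()
```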
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- 完全なタイプ付け結果
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## ベストプラクティス
-### 常に静的なクエリを記述
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g., linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of static string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
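+
+For instance, a field can be excluded with `@skip` (the `skipOwner` variable name is an illustrative placeholder):
+
```graphql
query GetToken($id: ID!, $skipOwner: Boolean!) {
  token(id: $id) {
    id
    owner @skip(if: $skipOwner) # omitted from the response when $skipOwner is true
  }
}
```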
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- GraphQL APIをクエリする際には、実際に使用するフィールドのみをクエリするように常に考えてください。
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -302,9 +299,9 @@ query GetTokensandCounters {
`
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### GraphQLフラグメントの活用
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -333,7 +330,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -357,15 +354,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### GraphQLフラグメントの注意点
+## GraphQL Fragment Guidelines
-### フラグメントベースは型である必要があります
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -373,11 +373,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### フラグメントを拡散する方法
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
例:
@@ -400,20 +397,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### フラグメントをデータのアトミックなビジネス単位として定義する
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -436,35 +436,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### GraphQL ウェブベースのエクスプローラ
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- ともっと
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
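+
+As a sketch of one possible `.eslintrc.json` setup (check the GraphQL ESLint documentation for the exact configuration matching your ESLint version):
+
```json
{
  "overrides": [
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "plugins": ["@graphql-eslint"],
      "extends": ["plugin:@graphql-eslint/operations-recommended"]
    }
  ]
}
```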
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### IDE plugins
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -472,11 +482,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx
index c1700fb5e9da..e356e2efe616 100644
--- a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL API
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
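+
+A minimal illustration of an entity definition in a Subgraph schema (field names other than `id` are placeholders; actual fields depend on your schema):
+
```graphql
type Token @entity {
  id: ID!      # required primary identifier
  owner: Bytes # example field
}
```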
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### 例
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### 並べ替え
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### 例
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### ネストされたエンティティの並べ替えの例
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### ページネーション
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-最初の 10 個のトークンを照会します。
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ When querying a collection, it's best to:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
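+
+The cursor protocol for the query above can be sketched as follows: the first request is sent with `lastID = ""`, and each subsequent request sets `lastID` to the `id` of the last entity returned. The `execute` client function below is hypothetical, and the `id_gt` filter is an assumption for illustration:
+
```typescript
// Hypothetical client: `execute(query, variables)` resolves to { data: { tokens: { id: string }[] } }.
type Token = { id: string }
type ExecuteFn = (query: string, variables: { lastID: string }) => Promise<{ data: { tokens: Token[] } }>

const MANY_TOKENS = /* GraphQL */ `
  query manyTokens($lastID: String) {
    tokens(first: 1000, where: { id_gt: $lastID }) {
      id
    }
  }
`

async function fetchAllTokens(execute: ExecuteFn): Promise<Token[]> {
  const all: Token[] = []
  let lastID = '' // the first request starts before every id
  for (;;) {
    const { data } = await execute(MANY_TOKENS, { lastID })
    if (data.tokens.length === 0) break // no more pages
    all.push(...data.tokens)
    // advance the cursor to the last id of this page
    lastID = data.tokens[data.tokens.length - 1].id
  }
  return all
}
```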
-
### フィルタリング
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### 範囲フィルタリングの例
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### ブロックフィルタリングの例
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### ネストされたエンティティ フィルタリングの例
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-これは、子レベルのエンティティが指定された条件を満たすエンティティのみをフェッチする場合に役立ちます。
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ Filtering on the basis of nested entities is possible in the fields with the `_`
}
```
-#### 論理演算子
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing sub-expressions separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### すべてのフィルター
-
-パラメータのサフィックスの全リスト:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### タイムトラベル クエリ
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### 例
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### 例
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### 全文検索クエリ
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-全文検索演算子:
+Full-text search fields use the required `text` parameter with the following operators:
-| シンボル | オペレーター | 説明書き |
-| --- | --- | --- |
-| `&` | `And` | 複数の検索語を組み合わせて、指定したすべての検索語を含むエンティティをフィルタリングします。 |
-| | | `Or` | 複数の検索語をオペレーターで区切って検索すると、指定した語のいずれかにマッチするすべてのエンティティが返されます。 |
-| `<->` | `Follow by` | 2 つの単語の間の距離を指定します。 |
-| `:*` | `Prefix` | プレフィックス検索語を使って、プレフィックスが一致する単語を検索します(2 文字必要) |
+| Operator  | Symbol | Description                                                      |
+| --------- | ------ | ---------------------------------------------------------------- |
+| And       | `&`    | Matches entities containing all of the provided terms            |
+| Or        | `\|`   | Returns all entities matching any of the provided terms          |
+| Follow by | `<->`  | Matches terms within a specified distance of each other          |
+| Prefix    | `:*`   | Matches word prefixes (minimum 2 characters)                     |
-#### 例
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-全文演算子を組み合わせて、より複雑なフィルターを作成します。口実検索演算子を follow by このサンプル クエリと組み合わせて使用すると、"lou" で始まり、その後に "music" が続く単語を持つすべてのブログ エンティティが一致します。
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-### 認証
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## スキーマ
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### エンティティ
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
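+
+A minimal entity definition illustrating these requirements (the `Token` type and its fields are hypothetical):
+
+```graphql
+type Token @entity {
+  id: ID!
+  owner: Bytes!
+  createdAt: BigInt!
+}
+```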
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### サブグラフ メタデータ
+The auto-generated `_Meta_` object provides access to Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, consider the performance impact of the `or` operator. While `or` is useful for broadening search results, it requires the database to scan multiple indexes, which can slow queries down. Where possible, prefer `and` conditions: they filter more precisely and typically execute faster.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator | Description | Example |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on fields of a related entity (nested entity filter)      | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt`                     | Greater than (`>`)                                                | `{ where: { price_gt: "100" } }`                      |
+| `_lt`                     | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                      |
+| `_gte`                    | Greater than or equal to (`>=`)                                   | `{ where: { price_gte: "100" } }`                     |
+| `_lte`                    | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                     |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
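+
+For instance, suffix operators can be nested inside `and`/`or` blocks (a sketch reusing the hypothetical fields from the table above; the `tokens` entity is illustrative):
+
+```graphql
+{
+  tokens(
+    where: { and: [{ name_starts_with: "Crypto" }, { or: [{ price_gt: "100" }, { category_in: ["Art"] }] }] }
+  ) {
+    id
+  }
+}
+```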
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: ブロックのハッシュ
-- number: ブロック番号
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx
index 5e0531142b22..df2ec7ee4812 100644
--- a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## 概要
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
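+
+Once created, a key is passed in the query URL. A hedged sketch using the gateway endpoint format shown elsewhere in these docs (`[api-key]` is a placeholder, and the Subgraph ID is illustrative):
+
+```sh
+curl -X POST \
+  -H "Content-Type: application/json" \
+  -d '{"query": "{ _meta { block { number } } }"}' \
+  "https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"
+```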
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open the API key's Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## その他のリソース
-1. Under the **Overview** section, you can:
- - キー名の編集
- - API キーの再生成
- - API キーの現在の使用状況を統計で表示:
- - クエリの数
- - 使用した GRT の量
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - API キーの使用を許可されたドメイン名の表示と管理
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 4bf98ccc0c6f..a674c835a953 100644
--- a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Deployment ID を使用するエンドポイントの例:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
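+
+When querying by Subgraph ID, you can guard against sync lag by checking `_meta` first (a sketch; these fields are described in the GraphQL API metadata section):
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+    hasIndexingErrors
+  }
+}
+```
+
+If the reported block number lags the chain head, the latest version may still be syncing.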
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/ja/subgraphs/quick-start.mdx b/website/src/pages/ja/subgraphs/quick-start.mdx
index df410ba8ec9b..e900192305b4 100644
--- a/website/src/pages/ja/subgraphs/quick-start.mdx
+++ b/website/src/pages/ja/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: クイックスタート
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- クリプトウォレット
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Graph CLI をインストールする
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/ja/subgraphs/upgrade-indexer.mdx b/website/src/pages/ja/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..c0ac0d0e0666
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## 概要
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+It was originally designed as transitional support, facilitating the migration of Subgraphs from the hosted service to the decentralized network. Today, it supports newly deployed Subgraphs before Chain Integration Process (CIP) indexing rewards are activated.
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/ja/substreams/_meta-titles.json b/website/src/pages/ja/substreams/_meta-titles.json
index 1c58294c4bfc..17029448ac79 100644
--- a/website/src/pages/ja/substreams/_meta-titles.json
+++ b/website/src/pages/ja/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "開発"
+ "developing": "開発",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/ja/substreams/developing/sinks.mdx b/website/src/pages/ja/substreams/developing/sinks.mdx
index 56936182c3aa..6c06a4e86cbb 100644
--- a/website/src/pages/ja/substreams/developing/sinks.mdx
+++ b/website/src/pages/ja/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| 名称 | サポート | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| 名称 | サポート | Maintainer | Source Code |
+| ---------- | ---- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| 名称 | サポート | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| 名称 | サポート | Maintainer | Source Code |
+| ---------- | ---- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/ja/substreams/quick-start.mdx b/website/src/pages/ja/substreams/quick-start.mdx
index 6bbe99168657..48b581eedec3 100644
--- a/website/src/pages/ja/substreams/quick-start.mdx
+++ b/website/src/pages/ja/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ja/substreams/sps/faq.mdx b/website/src/pages/ja/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..c038b396b268
--- /dev/null
+++ b/website/src/pages/ja/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What is Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
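The difference between the two models can be sketched with plain functions — a hypothetical illustration only, not graph-node internals:

```typescript
// Hypothetical sketch: sequential vs. parallelized processing models.
type Block = { number: number; events: string[] }

// Conventional Subgraph model: handlers see every event in onchain order.
function processSequentially(blocks: Block[]): string[] {
  const out: string[] = []
  for (const b of blocks) for (const e of b.events) out.push(e)
  return out
}

// Parallelized model: split the block range into segments that can be
// processed independently (e.g. on separate workers) and merged afterwards.
function splitRange(start: number, end: number, segments: number): Array<[number, number]> {
  const size = Math.ceil((end - start + 1) / segments)
  const ranges: Array<[number, number]> = []
  for (let s = start; s <= end; s += size) {
    ranges.push([s, Math.min(s + size - 1, end)])
  }
  return ranges
}
```

Each segment's partial results can be cached and merged in order, which is where the speedup over strictly sequential handlers comes from.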
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of using Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like Lego blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data which is not available as part of the JSON RPC
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, which enables cached modules to be re-used.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
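The Alice/Bob/Lisa flow above is essentially function composition. A hypothetical sketch, with plain TypeScript functions standing in for Substreams modules (this is not the real module API; all names are illustrative):

```typescript
// Hypothetical stand-ins for Substreams modules; names are illustrative only.
type Swap = { token: string; price: number; volume: number }

// "Alice's" price module: extract per-token prices from raw swap data.
const priceModule = (swaps: Swap[]) => swaps.map((s) => ({ token: s.token, price: s.price }))

// "Bob's" volume module builds on the same raw stream: aggregate volume per token.
const volumeModule = (swaps: Swap[]) =>
  swaps.reduce((acc, s) => {
    acc[s.token] = (acc[s.token] ?? 0) + s.volume
    return acc
  }, {} as Record<string, number>)

// A single "request" links the modules together into one refined stream.
function composedStream(swaps: Swap[]) {
  return { prices: priceModule(swaps), volumes: volumeModule(swaps) }
}
```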
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+This integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/ja/substreams/sps/introduction.mdx b/website/src/pages/ja/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..71fabdd0416c
--- /dev/null
+++ b/website/src/pages/ja/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ja/substreams/sps/triggers.mdx b/website/src/pages/ja/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..9ddb07c5477c
--- /dev/null
+++ b/website/src/pages/ja/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers to enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. Looping over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ja/substreams/sps/tutorial.mdx b/website/src/pages/ja/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..46c4c8305676
--- /dev/null
+++ b/website/src/pages/ja/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial)
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/ja/supported-networks.mdx b/website/src/pages/ja/supported-networks.mdx
index 4e138e5575cc..c9dc22ed741b 100644
--- a/website/src/pages/ja/supported-networks.mdx
+++ b/website/src/pages/ja/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ja/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/ja/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/ja/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/ja/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g. transfers, mints & burns).
diff --git a/website/src/pages/ja/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/ja/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/ja/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/ja/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/ja/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/ja/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/ja/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/ja/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/ja/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/ja/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ja/token-api/evm/get-pools-evm.mdx b/website/src/pages/ja/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/ja/token-api/evm/get-swaps-evm.mdx b/website/src/pages/ja/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/ja/token-api/evm/get-transfers-evm.mdx b/website/src/pages/ja/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/ja/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/ja/token-api/faq.mdx b/website/src/pages/ja/token-api/faq.mdx
index 6178aee33e86..3bf60c0cda8f 100644
--- a/website/src/pages/ja/token-api/faq.mdx
+++ b/website/src/pages/ja/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
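As a rough sketch, the endpoint paths above can be assembled into request URLs like this (the base URL and paths come from this FAQ; any query parameters you pass are illustrative and depend on the specific endpoint):

```python
# Sketch: build request URLs for the historical Token API endpoints.
from urllib.parse import urlencode

BASE = "https://token-api.thegraph.com"

def historical_balances_url(address, **params):
    """URL for /historical/balances/evm/{address}."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE}/historical/balances/evm/{address}{query}"

def ohlc_pool_url(pool, **params):
    """URL for /ohlc/pools/evm/{pool}."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE}/ohlc/pools/evm/{pool}{query}"
```

Send the resulting URL with your usual HTTP client, including the `Authorization: Bearer` header described below.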
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
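A small helper can rule out the most common header mistakes (this is a sketch, not part of any official SDK):

```python
def auth_header(api_token):
    # Common failure modes: passing the API key instead of the token,
    # double-prefixing "Bearer", or leaving stray whitespace. This
    # builds the header exactly as `Authorization: Bearer <token>`.
    token = api_token.strip()
    assert not token.lower().startswith("bearer"), "pass the raw token only"
    return {"Authorization": f"Bearer {token}"}
```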
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are temporarily unavailable on a given endpoint return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data will also produce this response.
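If you want to detect this case in client code, one minimal sketch (matching on the error text above; the exact payload shape may vary) is:

```python
def is_still_ingesting(error_message):
    # Treat the known error strings as the signal that a network's
    # database has not finished syncing on this endpoint yet.
    markers = (
        "bad_database_response",
        "Endpoint is currently not supported for this network",
    )
    return any(m in error_message for m in markers)
```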
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
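A simple way to stay under the throttling limits is to retry on HTTP 429 with exponential backoff. A minimal sketch (the `fetch` callable is a placeholder for your own request function returning `(status, body)`):

```python
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    # Retry whenever the response is HTTP 429 (Too Many Requests),
    # doubling the wait between attempts.
    status, body = fetch()
    for attempt in range(max_retries):
        if status != 429:
            break
        time.sleep(base_delay * (2 ** attempt))
        status, body = fetch()
    return status, body
```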
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
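For example, appending the optional `network_id` parameter can be sketched as follows (the `/balances/evm/{address}` path here is illustrative; use the actual endpoint you are querying):

```python
from urllib.parse import urlencode

def with_network(path, network_id=None):
    # network_id is optional; without it the API defaults to Ethereum
    # mainnet. Accepted IDs include "mainnet", "bsc", "base",
    # "arbitrum-one", "optimism", "matic", and "unichain".
    base = f"https://token-api.thegraph.com{path}"
    if network_id:
        return f"{base}?{urlencode({'network_id': network_id})}"
    return base
```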
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
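The pagination loop can be sketched like this (the `fetch_page` callable stands in for your own request function returning one page of items):

```python
def fetch_all(fetch_page, limit=50, max_pages=20):
    # `page` is 1-indexed; a batch shorter than `limit` signals the
    # last page. `max_pages` guards against unbounded loops.
    items = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(limit=limit, page=page)
        items.extend(batch)
        if len(batch) < limit:
            break
    return items
```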
### How do I fetch older transfer history?
diff --git a/website/src/pages/ja/token-api/monitoring/get-health.mdx b/website/src/pages/ja/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/ja/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/ja/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Get health status of the API
diff --git a/website/src/pages/ja/token-api/monitoring/get-networks.mdx b/website/src/pages/ja/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..4e77c80a0125 100644
--- a/website/src/pages/ja/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/ja/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: サポートされているネットワーク
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Get supported networks of the API
diff --git a/website/src/pages/ja/token-api/monitoring/get-version.mdx b/website/src/pages/ja/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..d0609db887f2 100644
--- a/website/src/pages/ja/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/ja/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: バージョン
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Get the version of the API
diff --git a/website/src/pages/ja/token-api/quick-start.mdx b/website/src/pages/ja/token-api/quick-start.mdx
index 0b64515243cb..f7707d1e33ef 100644
--- a/website/src/pages/ja/token-api/quick-start.mdx
+++ b/website/src/pages/ja/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: クイックスタート
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API Token, not the API key itself. You can generate a new JWT API Token from each API key at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/ko/about.mdx b/website/src/pages/ko/about.mdx
index 833b097673d2..7fda868aab9d 100644
--- a/website/src/pages/ko/about.mdx
+++ b/website/src/pages/ko/about.mdx
@@ -1,67 +1,46 @@
---
title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## What is The Graph?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-The flow follows these steps:
-
-1. A dapp adds data to Ethereum through a transaction on a smart contract.
-2. The smart contract emits one or more events while processing the transaction.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
-
-## Next Steps
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/ko/index.json b/website/src/pages/ko/index.json
index 95bf30d1752a..73ec88c04d73 100644
--- a/website/src/pages/ko/index.json
+++ b/website/src/pages/ko/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/ko/indexing/new-chain-integration.mdx b/website/src/pages/ko/indexing/new-chain-integration.mdx
index 670e06c752c3..cf698522f7e5 100644
--- a/website/src/pages/ko/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ko/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/ko/indexing/overview.mdx b/website/src/pages/ko/indexing/overview.mdx
index 4a980db27f12..767ec67a8691 100644
--- a/website/src/pages/ko/indexing/overview.mdx
+++ b/website/src/pages/ko/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/ko/indexing/tooling/graph-node.mdx b/website/src/pages/ko/indexing/tooling/graph-node.mdx
index edde8a157fd3..56cea09618e3 100644
--- a/website/src/pages/ko/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/ko/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### IPFS Nodes
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Prometheus metrics server
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Getting started with Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
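One possible hardening sketch, assuming a Linux host with `ufw` (adapt the allowed subnet and rules to your own firewall or private-network setup):

```sh
# Keep admin and internal ports private; expose nothing directly.
ufw default deny incoming
ufw allow from 10.0.0.0/8 to any port 8000 proto tcp  # GraphQL HTTP, internal network only
ufw deny 8020/tcp  # JSON-RPC admin endpoint
ufw deny 8030/tcp  # indexing status API
ufw deny 5432/tcp  # PostgreSQL
```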
## Advanced Graph Node configuration
@@ -330,7 +335,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions each table contains. That data is based on Postgres-internal estimates and is therefore necessarily imprecise; it can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
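
To illustrate (the deployment and table identifiers below are hypothetical placeholders, and the exact argument form may differ between `graph-node` versions):

```
# Estimated distinct-entity vs. entity-version counts for every table
graphman stats show sgd21

# Precise, but slow, full count for a single table
graphman stats show sgd21 pair
```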
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> This is new functionality, which will be available in Graph Node 0.29.x
-
At some point an Indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
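
For example (the identifiers below are placeholders), any of the three identifier forms can be used:

```
graphman drop my-org/my-subgraph   # by Subgraph name
graphman drop Qm...                # by IPFS deployment hash
graphman drop sgd42                # by database namespace
```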
diff --git a/website/src/pages/ko/resources/claude-mcp.mdx b/website/src/pages/ko/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/ko/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. Add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph` for Subgraph MCP
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+#### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+#### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
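+
+For step 3, the `token-api` entry from the configuration above would look something like this with verbose logging enabled (only the `args` array changes):
+
+```json
+"token-api": {
+  "command": "npx",
+  "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"],
+  "env": {
+    "ACCESS_TOKEN": "ACCESS_TOKEN"
+  }
+}
+```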
diff --git a/website/src/pages/ko/subgraphs/_meta-titles.json b/website/src/pages/ko/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/ko/subgraphs/_meta-titles.json
+++ b/website/src/pages/ko/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx
index 5be2530c4d6b..2e256ae18190 100644
--- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Version | Release notes |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
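+
+For reference, the mapping API version is declared per data source in the Subgraph manifest. A minimal sketch of the relevant part of `subgraph.yaml` (the data source name and file paths are hypothetical):
+
+```yaml
+dataSources:
+  - kind: ethereum
+    name: ExampleContract
+    mapping:
+      kind: ethereum/events
+      language: wasm/assemblyscript
+      apiVersion: 0.0.9 # the mapping API version from the table above
+      file: ./src/mapping.ts
+```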
### Built-in Types
diff --git a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx
index 180a343470b1..4931e6b1fd34 100644
--- a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| Version | Release notes |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| Version | Release notes |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
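+
+These versions correspond to the manifest's `specVersion`. As a sketch, it is declared at the top of `subgraph.yaml`:
+
+```yaml
+# Top of subgraph.yaml; choose a version whose features you need, per the table above
+specVersion: 1.2.0
+schema:
+  file: ./schema.graphql
+```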
diff --git a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx
index 3b2b1bbc70ae..5c8016b18c91 100644
--- a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 77d10212c770..d44f9b375203 100644
--- a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 2bc0ec5f514c..e3e3a7e3d455 100644
--- a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/ko/subgraphs/explorer.mdx b/website/src/pages/ko/subgraphs/explorer.mdx
index 499fcede88d3..ddb6ad1b39b6 100644
--- a/website/src/pages/ko/subgraphs/explorer.mdx
+++ b/website/src/pages/ko/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) to take full advantage of its core features.
## Overview
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs, filter them by indexed network or category (such as DeFi, NFTs, and DAOs), and sort by **most queried, most curated, recently created, or recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- View more details such as charts, current deployment ID, and other metadata
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- Subgraph stats (allocations, Curators, etc)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
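+
+As a quick first query in the playground, the built-in `_meta` field reports a Subgraph's indexing progress:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+    hasIndexingErrors
+  }
+}
+```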
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexers
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or reward calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Curators
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
In the Curator table listed below, you can see:
- The number of GRT that was deposited
- The number of shares a Curator owns
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegators
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Overview
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### Epochs
@@ -161,69 +181,77 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as:
- The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Your User Profile
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Profile Overview
+#### Profile Overview
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Subgraphs Tab
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Indexing Tab
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed yet, you will see links to stake to index Subgraphs and to browse Subgraphs on Graph Explorer.
-This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed
-- Total Query Fees - the total fees that users have paid for queries served by you over time
-- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT
-- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators
-- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators
-- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
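As a rough illustration of how a cut parameter splits earnings (hypothetical numbers, not protocol code — the real split is computed by the protocol contracts), a 10% fee cut works out as:

```typescript
// Hypothetical illustration of a Fee Cut splitting query fee rebates
// between an Indexer and its Delegators. The numbers are made up.
const rebates = 1000 // GRT collected as query fee rebates
const feeCut = 0.1 // Indexer keeps 10% of rebates

const indexerShare = rebates * feeCut // GRT kept by the Indexer
const delegatorShare = rebates - indexerShare // GRT to the delegation pool

console.log(indexerShare, delegatorShare)
```

The same arithmetic applies to the Rewards Cut, applied to indexing rewards instead of query fee rebates.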
-### Delegating Tab
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-The Delegator metrics you’ll see here in this tab include:
+**Top Section:**
-- Total delegation rewards
-- Total unrealized rewards
-- Total realized rewards
+- View delegation and rewards-only charts
+- Track key metrics:
+ - Total delegation rewards
+ - Unrealized rewards
+ - Realized rewards
-In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc).
+**Bottom Section:**
-With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Curating Tab
+#### Curating Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.
Within this tab, you’ll find an overview of:
@@ -232,22 +260,22 @@ Within this tab, you’ll find an overview of:
- Query rewards per Subgraph
- Updated at date details
-
+
-### Your Profile Settings
+#### Your Profile Settings
Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators.
- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set
- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Additional Resources
+### Additional Resources
### Video Guide
diff --git a/website/src/pages/ko/subgraphs/fair-use-policy.mdx b/website/src/pages/ko/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..8a27a7ea2887
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
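One way to stay under the storage limits is the pruning option mentioned above, which is set via indexer hints in the Subgraph manifest. A minimal sketch (assuming specVersion 1.0.0 or later — see the linked pruning guide for details):

```yaml
# subgraph.yaml (excerpt) — prune historical entity versions that are
# no longer needed for time-travel queries.
specVersion: 1.0.0
indexerHints:
  prune: auto # or `never`, or a number of blocks of history to retain
```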
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g. an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/ko/subgraphs/guides/near.mdx b/website/src/pages/ko/subgraphs/guides/near.mdx
index e78a69eb7fa2..34adc65f9175 100644
--- a/website/src/pages/ko/subgraphs/guides/near.mdx
+++ b/website/src/pages/ko/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx
index 09f1939c1fde..1d6ab12958b4 100644
--- a/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs.**
+- Composed Subgraphs can only use **datasources from the same chain.**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot apply further aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
diff --git a/website/src/pages/ko/subgraphs/mcp/claude.mdx b/website/src/pages/ko/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get aggregate query counts over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
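The identifier type also determines which gateway endpoint a query should hit. The sketch below encodes that routing; the URL shapes follow The Graph gateway's documented patterns, but treat the exact paths as assumptions and confirm them in your Subgraph Studio dashboard:

```typescript
// Sketch: pick a gateway query URL based on the identifier type above.
// `apiKey` stands in for your Gateway API key from Subgraph Studio.
function gatewayUrl(apiKey: string, id: string): string {
  const base = `https://gateway.thegraph.com/api/${apiKey}`
  if (id.startsWith('0x') || id.startsWith('Qm')) {
    // Deployment ID or IPFS hash: a specific, immutable deployment
    return `${base}/deployments/id/${id}`
  }
  // Anything else is treated as a Subgraph ID: routes to the latest deployment
  return `${base}/subgraphs/id/${id}`
}

console.log(gatewayUrl('API_KEY', '5zvR82...'))
```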
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/ko/subgraphs/mcp/cline.mdx b/website/src/pages/ko/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/ko/subgraphs/mcp/cursor.mdx b/website/src/pages/ko/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/ko/subgraphs/querying/best-practices.mdx b/website/src/pages/ko/subgraphs/querying/best-practices.mdx
index ab02b27cbc03..372507f29bb9 100644
--- a/website/src/pages/ko/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ko/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Querying Best Practices
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-Unlike REST API, a GraphQL API is built upon a Schema that defines which queries can be performed.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Any variable assigned to an argument must match its type.
-- In a given list of variables, each of them must be unique.
-- All defined variables must be used.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+   - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, for example on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### Sending a query to a GraphQL API
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
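A minimal `fetch` sketch, following the GraphQL-over-HTTP convention of POSTing a JSON body with `query` and `variables` (the endpoint below is a placeholder — substitute your own query URL):

```typescript
// Placeholder endpoint: replace with your gateway or Studio query URL.
const endpoint = 'https://example.com/subgraphs/id/YOUR_SUBGRAPH_ID'

const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// GraphQL over HTTP: the query string and its variables travel as JSON.
const body = JSON.stringify({ query, variables: { id: '1' } })

async function run() {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data
}
```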
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### Always write static queries
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g., linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of a static query:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using the `owner` field with the `@include(if: ...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Leverage GraphQL Fragments
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### GraphQL Fragment do's and don'ts
+## GraphQL Fragment Guidelines
-### Fragment base must be a type
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example of an invalid fragment base:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### How to spread a Fragment
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
Example:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Define Fragment as an atomic business unit of data
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use-cases, defining one fragment per type (in the case of repeated fields usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### GraphQL web-based explorers
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- and more!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
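As a sketch, a legacy `.eslintrc`-style configuration enabling the `operations-recommended` rule set on `.graphql` operation files could look like this (package and config names follow the graphql-eslint getting-started guide; verify them against your installed version):

```javascript
// Legacy .eslintrc-style config (shown as a plain object) that runs
// graphql-eslint's "operations-recommended" rules on .graphql files.
const config = {
  overrides: [
    {
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      extends: ['plugin:@graphql-eslint/operations-recommended'],
    },
  ],
}
// In .eslintrc.js this object would be exported with `module.exports = config`.
```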
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+#### Use IDE plugins
-### IDE plugins
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/IntelliJ and GraphQL
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx
index e10201771989..0da4d012df07 100644
--- a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL API
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Examples
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Sorting
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### Example
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Example for nested entity sorting
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Pagination
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-Query the first 10 tokens:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Query the first 10 tokens:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
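The query above is driven by a `lastID` cursor: send the first request with `lastID = ""`, then set `lastID` to the `id` of the last entity in each page until a short page is returned. A minimal sketch of that loop, using a mocked `execute` function over an in-memory token list (the mock and all names here are illustrative, not The Graph's client API):

```typescript
// Cursor-based pagination sketch over a mocked dataset.
type Token = { id: string }

// 25 tokens with lexicographically sortable ids: 0x000 .. 0x024.
const allTokens: Token[] = Array.from({ length: 25 }, (_, i) => ({
  id: `0x${String(i).padStart(3, '0')}`,
}))

// Mocked client: emulates `tokens(first: $n, where: { id_gt: $lastID })`
// with results sorted by `id` ascending (the default order).
async function execute(vars: { first: number; lastID: string }): Promise<Token[]> {
  return allTokens
    .filter((t) => t.id > vars.lastID)
    .sort((a, b) => (a.id < b.id ? -1 : 1))
    .slice(0, vars.first)
}

// Page through the collection, advancing the cursor after each page.
async function fetchAllTokens(pageSize = 10): Promise<Token[]> {
  const results: Token[] = []
  let lastID = ''
  for (;;) {
    const page = await execute({ first: pageSize, lastID })
    results.push(...page)
    if (page.length < pageSize) break // short page: collection exhausted
    lastID = page[page.length - 1].id
  }
  return results
}
```

Because each request filters by `id_gt`, the database can use the `id` index directly, which scales far better than increasing `skip` values.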
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
### Filtering
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Example for range filtering
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Example for block filtering
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Example for nested entity filtering
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-This can be useful if you are looking to fetch only entities whose child-level entities meet the provided conditions.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ This can be useful if you are looking to fetch only entities whose child-level e
}
```
-#### Logical operators
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing sub-expressions separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### All Filters
-
-Full list of parameter suffixes:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### Time-travel queries
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### Example
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### Example
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Fulltext Search Queries
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Fulltext search operators:
+Full-text search fields use the required `text` parameter with the following operators:
-| Symbol | Operator | Description |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Operator | Symbol | Description |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
| Or        | `\|`   | Matches entities containing any of the provided terms           |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Examples
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Combine fulltext operators to make more complex filters. With a pretext search o
}
```
-### Validation
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## Schema
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entities
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Subgraph Metadata
+The `_Meta_` object provides Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
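+
+Putting these together, a sketch of a `_meta` query pinned to an (arbitrary) block number:
+
+```graphql
+{
+  _meta(block: { number: 123987 }) {
+    deployment
+    hasIndexingErrors
+    block {
+      hash
+      number
+      timestamp
+    }
+  }
+}
+```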
+
+> Note: When writing queries, consider the performance impact of the `or` operator. While `or` is useful for broadening search results, it forces the database to scan multiple indexes, which can slow queries significantly. Where possible, use `and` instead for more precise filtering and faster queries.
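+
+For example, a narrow `and` filter like the following avoids the multi-index scans an `or` filter can trigger (entity and field names are hypothetical):
+
+```graphql
+{
+  tokens(where: { and: [{ price_gt: "100" }, { category_in: ["Art"] }] }) {
+    id
+  }
+}
+```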
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator | Description | Example |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                      | Filters on fields of a related entity (nested filtering)          | `{ where: { owner_: { name: "Alice" } } }`           |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt`                    | Less than (`<`)                                                   | `{ where: { price_lt: "100" } }`                     |
+| `_gte`                   | Greater than or equal to (>=)                                     | `{ where: { price_gte: "100" } }`                    |
+| `_lte`                   | Less than or equal to (`<=`)                                      | `{ where: { price_lte: "100" } }`                    |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
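+
+As a combined sketch, a nested entity filter (`_`) paired with a case-insensitive string operator (entity and field names are hypothetical):
+
+```graphql
+{
+  tokens(where: { owner_: { name_starts_with_nocase: "alice" } }) {
+    id
+  }
+}
+```
+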
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx
index aed3d10422e1..0a16fca09f15 100644
--- a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API Keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Overview
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
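+
+Once created, an API key is embedded in the gateway query URL, for example (placeholder values in brackets):
+
+```
+https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/[subgraph-id]
+```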
+
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: You can enable a period spending limit
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open your API key's Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - Edit your key name
- - Regenerate API keys
- - View the current usage of the API key with stats:
- - Number of queries
- - Amount of GRT spent
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - View and manage the domain names authorized to use your API key
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
diff --git a/website/src/pages/ko/subgraphs/quick-start.mdx b/website/src/pages/ko/subgraphs/quick-start.mdx
index a803ac8695fa..b5c4a0fdc09e 100644
--- a/website/src/pages/ko/subgraphs/quick-start.mdx
+++ b/website/src/pages/ko/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract and indexes events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/ko/subgraphs/upgrade-indexer.mdx b/website/src/pages/ko/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..8d6c874bec12
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs on a chain until that chain's indexing rewards are activated through the Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation in The Graph Network does not dilute rewards for other Indexers.
+- Doesn't support Time Travel Queries (TTQ). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/ko/substreams/_meta-titles.json b/website/src/pages/ko/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/ko/substreams/_meta-titles.json
+++ b/website/src/pages/ko/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/ko/substreams/developing/sinks.mdx b/website/src/pages/ko/substreams/developing/sinks.mdx
index 48c246201e8f..8719fa13ea81 100644
--- a/website/src/pages/ko/substreams/developing/sinks.mdx
+++ b/website/src/pages/ko/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/ko/substreams/quick-start.mdx b/website/src/pages/ko/substreams/quick-start.mdx
index b5eec572b00a..acd3029d1376 100644
--- a/website/src/pages/ko/substreams/quick-start.mdx
+++ b/website/src/pages/ko/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ko/substreams/sps/faq.mdx b/website/src/pages/ko/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..250c466d5929
--- /dev/null
+++ b/website/src/pages/ko/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which they happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source which references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Additional data: Access to onchain data that is not available as part of the JSON-RPC.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Includes balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be re-used.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request packages all of these individuals' modules and links them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/ko/substreams/sps/introduction.mdx b/website/src/pages/ko/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..92d8618165dd
--- /dev/null
+++ b/website/src/pages/ko/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or in Substreams. However, consider what aligns with your data needs: Substreams has a parallelized model, while triggers are consumed linearly in Graph Node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ko/substreams/sps/triggers.mdx b/website/src/pages/ko/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..66687aa21889
--- /dev/null
+++ b/website/src/pages/ko/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object can be used like any other AssemblyScript object
+2. Loop over the transactions
+3. Create a new Subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/ko/substreams/sps/tutorial.mdx b/website/src/pages/ko/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..e20a22ba4b1c
--- /dev/null
+++ b/website/src/pages/ko/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest, which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/ko/supported-networks.mdx b/website/src/pages/ko/supported-networks.mdx
index ef2c28393033..9592cfabc0ad 100644
--- a/website/src/pages/ko/supported-networks.mdx
+++ b/website/src/pages/ko/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ko/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/ko/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/ko/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/ko/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g., transfers, mints & burns).
diff --git a/website/src/pages/ko/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/ko/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/ko/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/ko/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/ko/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/ko/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/ko/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/ko/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/ko/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/ko/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ko/token-api/evm/get-pools-evm.mdx b/website/src/pages/ko/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/ko/token-api/evm/get-swaps-evm.mdx b/website/src/pages/ko/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/ko/token-api/evm/get-transfers-evm.mdx b/website/src/pages/ko/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/ko/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/ko/token-api/faq.mdx b/website/src/pages/ko/token-api/faq.mdx
index 6178aee33e86..3bf60c0cda8f 100644
--- a/website/src/pages/ko/token-api/faq.mdx
+++ b/website/src/pages/ko/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
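As a minimal sketch, the header construction can be expressed as follows. The token value below is a placeholder, not a real JWT; use the token generated for your key on The Graph Market.

```ts
// Sketch: building authenticated request headers for the Token API.
// JWT is a hypothetical placeholder value.
const JWT = 'eyJhbGci...'

function buildHeaders(token: string): Record<string, string> {
  // The "Bearer " prefix is required; omitting it is a common cause of 401s.
  return {
    Authorization: `Bearer ${token}`,
    Accept: 'application/json',
  }
}

const headers = buildHeaders(JWT)
console.log(headers.Authorization.startsWith('Bearer ')) // true
```

Pass the resulting object as the `headers` option of your HTTP client (e.g., `fetch`).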
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are currently or temporarily unavailable on a given endpoint will return a `bad_database_response` / `Endpoint is currently not supported for this network` error. Databases that are still in the process of ingestion will also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
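As a sketch, appending the `network_id` parameter to a request URL looks like this. The base URL and path are illustrative; check the Token API reference for the exact route you need.

```ts
// Sketch: targeting a specific chain with the optional `network_id` query parameter.
// The path below is an assumed example, not a definitive route.
function buildBalancesUrl(address: string, networkId?: string): string {
  const base = `https://token-api.thegraph.com/balances/evm/${address}`
  // Without `network_id`, the API defaults to Ethereum mainnet.
  return networkId ? `${base}?network_id=${networkId}` : base
}

console.log(buildBalancesUrl('0x123', 'arbitrum-one'))
```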
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
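The pagination arithmetic above can be sketched as a small helper that maps a 1-indexed item position to its page for a given `limit`:

```ts
// Sketch: which `page` holds a given 1-indexed item for a given `limit`.
function pageFor(itemIndex: number, limit: number): number {
  // e.g., with limit=50, items 51-100 live on page 2.
  return Math.ceil(itemIndex / limit)
}

console.log(pageFor(51, 50)) // 2
console.log(pageFor(100, 50)) // 2
console.log(pageFor(101, 50)) // 3
```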
### How do I fetch older transfer history?
diff --git a/website/src/pages/ko/token-api/monitoring/get-health.mdx b/website/src/pages/ko/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/ko/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/ko/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Get health status of the API
diff --git a/website/src/pages/ko/token-api/monitoring/get-networks.mdx b/website/src/pages/ko/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..f4b65492ed15 100644
--- a/website/src/pages/ko/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/ko/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Supported Networks
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Get supported networks of the API
diff --git a/website/src/pages/ko/token-api/monitoring/get-version.mdx b/website/src/pages/ko/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..fa0040807854 100644
--- a/website/src/pages/ko/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/ko/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Get the version of the API
diff --git a/website/src/pages/ko/token-api/quick-start.mdx b/website/src/pages/ko/token-api/quick-start.mdx
index 4653c3d41ac6..5b3d052d9ec5 100644
--- a/website/src/pages/ko/token-api/quick-start.mdx
+++ b/website/src/pages/ko/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: Quick Start
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API token, not the API key. You can generate a new JWT API token from each API key at any time.

## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
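Until a chain is fully synced, callers may want to treat those errors as transient. A sketch of that check (the error-code string comes from the note above; the `{"error": ...}` response shape and helper name are assumptions):

```typescript
// Hypothetical helper: decides whether a Token API error body reports the
// transient bad_database_response condition described above, which can be
// retried once the chain finishes ingesting.
function isTransientIngestError(body: string): boolean {
  try {
    const parsed = JSON.parse(body);
    // Assumed response shape: { "error": "..." }
    return typeof parsed.error === "string" && parsed.error.includes("bad_database_response");
  } catch {
    // Fall back to a plain substring check for non-JSON bodies.
    return body.includes("bad_database_response");
  }
}
```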
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
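A sketch of that debugging pattern (the helper name and message format are illustrative, not part of the API):

```typescript
// Hypothetical helper: formats a failed Token API response so the full
// error context (status plus body) is visible when debugging.
function formatApiError(status: number, statusText: string, body: string): string {
  return `Token API request failed: ${status} ${statusText} - ${body}`;
}

// With fetch (URL and JWT are placeholders):
//   const res = await fetch(url, { headers: { Authorization: `Bearer ${jwt}` } });
//   if (!res.ok) console.error(formatApiError(res.status, res.statusText, await res.text()));

// A rate-limited request would surface like:
const message = formatApiError(429, "Too Many Requests", '{"error": "rate limited"}');
```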
diff --git a/website/src/pages/mr/about.mdx b/website/src/pages/mr/about.mdx
index 9597ecb03bb2..ef64d8bada05 100644
--- a/website/src/pages/mr/about.mdx
+++ b/website/src/pages/mr/about.mdx
@@ -1,67 +1,46 @@
---
-title: ग्राफ बद्दल
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## द ग्राफ म्हणजे काय?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-प्रवाह या चरणांचे अनुसरण करतो:
-
-1. A dapp स्मार्ट करारावरील व्यवहाराद्वारे इथरियममध्ये डेटा जोडते.
-2. व्यवहारावर प्रक्रिया करताना स्मार्ट करार एक किंवा अधिक इव्हेंट सोडतो.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. नोडचा [GraphQL एंडपॉइंट](https://graphql.org/learn/) वापरून ब्लॉकचेन वरून अनुक्रमित केलेल्या डेटासाठी dapp ग्राफ नोडची क्वेरी करते. ग्राफ नोड यामधून, स्टोअरच्या इंडेक्सिंग क्षमतांचा वापर करून, हा डेटा मिळविण्यासाठी त्याच्या अंतर्निहित डेटा स्टोअरच्या क्वेरींमध्ये GraphQL क्वेरीचे भाषांतर करतो. dapp हा डेटा अंतिम वापरकर्त्यांसाठी समृद्ध UI मध्ये प्रदर्शित करते, जो ते Ethereum वर नवीन व्यवहार जारी करण्यासाठी वापरतात. चक्राची पुनरावृत्ती होते.
-
-## पुढील पायऱ्या
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/mr/index.json b/website/src/pages/mr/index.json
index add2f95c68b0..15235dc0f40c 100644
--- a/website/src/pages/mr/index.json
+++ b/website/src/pages/mr/index.json
@@ -2,7 +2,7 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "आलेख नोड",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "फायरहोस",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "सबग्राफ",
"substreams": "उपप्रवाह",
"firehose": "फायरहोस",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "उपप्रवाह",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/mr/indexing/new-chain-integration.mdx b/website/src/pages/mr/indexing/new-chain-integration.mdx
index 670e06c752c3..cf698522f7e5 100644
--- a/website/src/pages/mr/indexing/new-chain-integration.mdx
+++ b/website/src/pages/mr/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/mr/indexing/overview.mdx b/website/src/pages/mr/indexing/overview.mdx
index 9d78f7612f01..57417928daa5 100644
--- a/website/src/pages/mr/indexing/overview.mdx
+++ b/website/src/pages/mr/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### आलेख नोड
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/mr/indexing/tooling/graph-node.mdx b/website/src/pages/mr/indexing/tooling/graph-node.mdx
index 687f1ea42338..460b59a35703 100644
--- a/website/src/pages/mr/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/mr/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### आयपीएफएस नोड्स
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### प्रोमिथियस मेट्रिक्स सर्व्हर
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Kubernetes सह प्रारंभ करणे
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## प्रगत ग्राफ नोड कॉन्फिगरेशन
@@ -330,7 +335,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> This is new functionality, which will be available in Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/mr/resources/claude-mcp.mdx b/website/src/pages/mr/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/mr/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. For the Subgraph MCP, add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph`
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+#### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
diff --git a/website/src/pages/mr/subgraphs/_meta-titles.json b/website/src/pages/mr/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/mr/subgraphs/_meta-titles.json
+++ b/website/src/pages/mr/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx
index c84987c66e17..c4c2e4f17471 100644
--- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| आवृत्ती | रिलीझ नोट्स |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| आवृत्ती | रिलीझ नोट्स |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
### अंगभूत प्रकार
diff --git a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx
index daed9ec13c64..8b40bdfde4fc 100644
--- a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -22,14 +22,14 @@ Start the process and build a Subgraph that matches your needs:
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
-| आवृत्ती | रिलीझ नोट्स |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+| आवृत्ती | रिलीझ नोट्स |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx
index 3e34f743a6c0..6fc6fe500de3 100644
--- a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx
@@ -212,7 +212,7 @@ Every Subgraph affected with this policy has an option to bring the version in q
If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph: `https://indexer.upgrade.thegraph.com/status`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
diff --git a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 2319974d45ed..e07a0f3d1531 100644
--- a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -88,6 +88,8 @@ graph auth
Once you are ready, you can deploy your Subgraph to Subgraph Studio.
> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
+>
+> **Note**: Each account is limited to 3 deployed (unpublished) Subgraphs. If you reach this limit, you must archive or publish existing Subgraphs before deploying new ones.
Use the following CLI command to deploy your Subgraph:
@@ -104,6 +106,8 @@ After running this command, the CLI will ask for a version label.
After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+> **Note**: The development query URL is limited to 3,000 queries per day.
+
Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
diff --git a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index 78b641e5ae0a..5c0177c85d87 100644
--- a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -53,7 +53,7 @@ USAGE
FLAGS
-h, --help Show CLI help.
- -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+ -i, --ipfs= [default: https://ipfs.thegraph.com/api/v0] Upload build results to an IPFS node.
--ipfs-hash= IPFS hash of the subgraph manifest to deploy.
--protocol-network= [default: arbitrum-one] The network to use for the subgraph deployment.
diff --git a/website/src/pages/mr/subgraphs/explorer.mdx b/website/src/pages/mr/subgraphs/explorer.mdx
index afcc80c29f35..d236e46a096b 100644
--- a/website/src/pages/mr/subgraphs/explorer.mdx
+++ b/website/src/pages/mr/subgraphs/explorer.mdx
@@ -2,83 +2,103 @@
title: Graph Explorer
---
-Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Use [Graph Explorer](https://thegraph.com/explorer) to take full advantage of its core features.
## Overview
-Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+This guide explains how to use [Graph Explorer](https://thegraph.com/explorer) to quickly discover and interact with Subgraphs on The Graph Network, delegate GRT, view participant metrics, and analyze network performance.
-## Inside Explorer
+> When you visit Graph Explorer, you can also access the link to [explore Substreams](https://substreams.dev/).
-The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+## Prerequisites
-### Subgraphs Page
+- To perform actions, you need a wallet (e.g., MetaMask) connected to [Graph Explorer](https://thegraph.com/explorer).
+ > Make sure your wallet is connected to the correct network (e.g., Arbitrum). Features and data shown are network-specific.
+- GRT tokens if you plan to delegate or curate.
+- Basic knowledge of [Subgraphs](https://thegraph.com/docs/en/subgraphs/developing/subgraphs/).
-After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+## Navigating Graph Explorer
-- Your own finished Subgraphs
-- Subgraphs published by others
-- The exact Subgraph you want (based on the date created, signal amount, or name).
+### Step 1. Explore Subgraphs
-
+> For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
-When you click into a Subgraph, you will be able to do the following:
+Go to the Subgraphs page in [Graph Explorer](https://thegraph.com/explorer).
-- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
+- If you've deployed and published your Subgraph in Subgraph Studio, you can view it here.
+- Search all published Subgraphs and filter them by indexed network or category (such as DeFi, NFTs, and DAOs), and sort by **most queried, most curated, recently created, or recently updated**.
+
+
+
+To find Subgraphs indexing a specific contract, enter the contract address into the search bar.
+
+- For example, you can enter the L2GNS contract on Arbitrum (`0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec`), which returns all Subgraphs indexing that contract:
+
+
- - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+> Looking for indexing contracts? Check out [this Subgraph](https://thegraph.com/explorer/subgraphs/FMTUN6d7sY2bLnAmNEPJTqiU3iuQht6ZXurpBh71wbWR?view=About&chain=arbitrum-one) which indexes contract addresses listed in its manifest. It shows all current deployments indexing those contracts on Arbitrum One, along with the signal allocated to each.
-
+You can click into any Subgraph to:
+
+- Test queries in the playground and leverage network details to make informed decisions.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of its importance and quality.
+ > This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it'll eventually surface on the network to serve queries.
+
+
On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on Subgraphs
-- चार्ट, वर्तमान उपयोजन आयडी आणि इतर मेटाडेटा यासारखे अधिक तपशील पहा
-- Switch versions to explore past iterations of the Subgraph
- Query Subgraphs via GraphQL
+- View Subgraph ID, current deployment ID, Query URL, and other metadata
+- Signal/unsignal on Subgraphs
- Test Subgraphs in the playground
- View the Indexers that are indexing on a certain Subgraph
- View Subgraph stats (allocations, Curators, etc.)
-- View the entity who published the Subgraph
+- View query fees and charts
+- Change versions to explore past iterations of the Subgraph
+- View entity types
+- View Subgraph activity
-
+
-### Delegate Page
+### Step 2. Delegate GRT
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Go to the [Delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one) page to learn how to delegate, get GRT, and choose an Indexer.
-On this page, you can see the following:
+Here, you can:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Compare Indexers by most query fees earned and highest estimated APR.
+- Use the built-in ROI calculator or search by Indexer name or address.
+- Click **"Delegate"** next to an Indexer to stake your GRT.
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
+### Step 3. Monitor Participants in the Network
-### Participants Page
+Go to the [Participants](https://thegraph.com/explorer/participants?chain=arbitrum-one) page to view:
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+- Indexers: stakes, allocations, rewards, and delegation parameters
+- Curators: signal amounts, Subgraph shares, and activity history
+- Delegators: current and historical delegations, rewards, and Indexer metrics
-#### 1. Indexers
+#### Indexers
-
+
Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- कमाल डेलिगेशन क्षमता - इंडेक्सर उत्पादकपणे स्वीकारू शकणारी जास्तीत जास्त डेलिगेटेड स्टेक. वाटप किंवा बक्षिसे गणनेसाठी जास्तीचा वाटप केला जाऊ शकत नाही.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- इंडेक्सर रिवॉर्ड्स - हे इंडेक्सर आणि त्यांच्या प्रतिनिधींनी सर्वकाळात मिळवलेले एकूण इंडेक्सर रिवॉर्ड्स आहेत. इंडेक्सर रिवॉर्ड्स GRT जारी करून दिले जातात.
+- Query Fee Cut: The % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut: The indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining: The time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned: This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
+- Delegated: Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
+- Allocated: Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
+- Available Delegation Capacity: The amount of delegated stake the Indexers can still receive before they become over-delegated.
+- Max Delegation Capacity: The maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
+- Query Fees: This is the total fees that end users have paid for queries from an Indexer over all time.
+- Indexer Rewards: This is the total Indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
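As a rough illustration of how the Effective Reward Cut relates to the nominal cut and the stake composition, consider this simplified model. It is an assumption made for illustration, not the exact on-chain formula:

```typescript
// Simplified model (illustrative, not the exact protocol formula) of the
// "Effective Reward Cut" shown in Explorer. Rewards accrue to the whole
// pool; Delegators receive (1 - rewardCut) of them, spread over the
// delegated stake. Comparing their per-GRT yield to the pool-wide per-GRT
// yield gives: 0 when the Indexer keeps exactly its pro-rata share,
// negative when the Indexer is giving rewards away, positive when it
// keeps more than its share.
function effectiveRewardCut(
  rewardCut: number,     // nominal fraction of indexing rewards the Indexer keeps
  ownedStake: number,    // Indexer's own stake (GRT)
  delegatedStake: number // Delegators' stake (GRT)
): number {
  const totalStake = ownedStake + delegatedStake;
  return 1 - ((1 - rewardCut) * totalStake) / delegatedStake;
}

// An Indexer holding 10% of the pool's stake but keeping a 20% cut
// effectively takes about 11% out of the delegation pool's rewards:
console.log(effectiveRewardCut(0.2, 100_000, 900_000).toFixed(3)); // ≈ 0.111
```

In this model, setting the nominal cut equal to the Indexer's own share of total stake (here, 10%) yields an effective cut of zero, which matches the "neutral" reading described above.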
Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
@@ -86,9 +106,9 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe
To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
-
+
-#### 2. Curators
+#### Curators
Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
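To build intuition for why signaling early on a bonding curve matters, here is a toy sketch. The square-root curve and the `sharesForDeposit` helper are illustrative assumptions only, not The Graph's actual curve or parameters:

```typescript
// Toy bonding curve: shares outstanding grow as sqrt(reserve), so shares
// minted per GRT deposited shrink as the reserve grows. The shares a new
// deposit mints are the difference in the curve's value before and after.
function sharesForDeposit(reserveBefore: number, deposit: number): number {
  return Math.sqrt(reserveBefore + deposit) - Math.sqrt(reserveBefore);
}

const early = sharesForDeposit(0, 10_000);     // first 10k GRT into the curve
const late = sharesForDeposit(90_000, 10_000); // same 10k GRT deposited after 90k

console.log(early > late); // true: earlier signal mints more shares per GRT
```

Under this toy curve, the first 10k GRT mints 100 shares while the same deposit at a 90k reserve mints roughly 16, which is the mechanism that rewards Curators for identifying high-quality Subgraphs early.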
@@ -102,11 +122,11 @@ In the The Curator table listed below you can see:
- The number of GRT that was deposited
- The number of shares a Curator owns
-
+
If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegators
+#### Delegators
Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
@@ -114,7 +134,7 @@ Delegators play a key role in maintaining the security and decentralization of T
- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
-
+
In the Delegators table you can see the active Delegators in the community and important metrics:
@@ -127,9 +147,9 @@ In the Delegators table you can see the active Delegators in the community and i
If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page
+### Step 4. Analyze Network Performance
-On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+On the [Network](https://thegraph.com/explorer/network?chain=arbitrum-one) page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
#### Overview
@@ -147,7 +167,7 @@ A few key details to note:
- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
-
+
#### Epochs
@@ -161,69 +181,77 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as:
- Distributing epochs are epochs in which state channels for the epochs are being settled, and Indexers can claim their query fee rebates.
- The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
-
+
+
+## Access and Manage Your User Profile
+
+### Step 1. Access Your Profile
-## Your User Profile
+- Click your wallet address in the top right corner
+- Your wallet acts as your user profile
+- In your profile dashboard, you can view and interact with several useful tabs
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+### Step 2. Explore the Tabs
-### Profile Overview
+#### Profile Overview
In this section, you can view the following:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Your activity
+- Your profile information: total query fees, total shares value, owned stake, stake delegating
-
+
-### Subgraphs Tab
+#### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published Subgraphs.
+The Subgraphs tab displays all your published Subgraphs.
-> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Subgraphs deployed with the CLI for testing purposes will not show up here. Subgraphs will only show up when they are published to the decentralized network.
-
+
-### Indexing Tab
+#### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+> If you haven't indexed, you will see links to stake to index Subgraphs and browse Subgraphs on Graph Explorer.
-या विभागात तुमच्या निव्वळ इंडेक्सर रिवॉर्ड्स आणि नेट क्वेरी फीबद्दल तपशील देखील समाविष्ट असतील. तुम्हाला खालील मेट्रिक्स दिसतील:
+The Indexing tab displays a table where you can review active and historical allocations to Subgraphs.
-- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed
-- एकूण क्वेरी शुल्क - वापरकर्त्यांनी वेळोवेळी तुमच्याद्वारे दिलेल्या क्वेरींसाठी भरलेले एकूण शुल्क
-- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT
-- फी कट - तुम्ही डेलिगेटर्ससह विभक्त झाल्यावर तुम्ही ठेवू शकणार्या क्वेरी फी सवलतींचा %
-- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators
-- मालकीचा - तुमचा जमा केलेला हिस्सा, जो दुर्भावनापूर्ण किंवा चुकीच्या वर्तनासाठी कमी केला जाऊ शकतो
+Track your Indexer performance with visual charts and key metrics, including:
-
+- Delegated Stake: Stake from Delegators that can be allocated by you but cannot be slashed.
+- Total Query Fees: Cumulative fees from served queries.
+- Indexer Rewards (in GRT): Total rewards earned.
+- Fee Cut & Rewards Cut: The % of query fee rebates and Indexer rewards you'll keep when you split with Delegators.
+- Owned Stake: Your deposited stake, which could be slashed for malicious or incorrect behavior.
-### Delegating Tab
+
-Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards.
+#### Delegating Tab
-In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+> To learn more about the benefits of delegating, check out [delegating](/resources/roles/delegating/delegating/).
-पृष्ठाच्या पूर्वार्धात, तुम्ही तुमच्या डेलिगेशन चार्ट तसेच रिवॉर्ड-ओन्ली चार्ट पाहू शकता. डावीकडे, तुम्ही KPI पाहू शकता जे तुमचे वर्तमान प्रतिनिधीत्व मेट्रिक्स दर्शवतात.
+The Delegators tab displays your active and historical delegations, along with the metrics for the Indexers you've delegated to.
-या टॅबमध्ये तुम्हाला येथे दिसणार्या डेलिगेटर मेट्रिक्समध्ये हे समाविष्ट आहे:
+Top Section:
-- एकूण प्रतिनिधीत्व बक्षिसे
-- Total unrealized rewards
-- Total realized rewards
+- View delegation and rewards-only charts
+- Track key metrics:
+ - Total delegation rewards
+ - Unrealized rewards
+ - Realized rewards
-पृष्ठाच्या दुसऱ्या सहामाहीत, आपल्याकडे प्रतिनिधी टेबल आहे. येथे तुम्ही ज्या निर्देशांकांना तुम्ही नियुक्त केले आहे, तसेच त्यांचे तपशील (जसे की रिवॉर्ड कट, कूलडाउन इ.) पाहू शकता.
+Bottom Section:
-टेबलच्या उजव्या बाजूला असलेल्या बटणांच्या सहाय्याने, तुम्ही तुमचे प्रतिनिधीमंडळ व्यवस्थापित करू शकता - वितळण्याच्या कालावधीनंतर तुमचे प्रतिनिधीमंडळ अधिक प्रतिनिधी, अस्वीकृत किंवा मागे घेऊ शकता.
+- Explore a table of your Indexer delegations, including reward cuts, cooldowns, and more.
+- Use the buttons on the right side of the table to manage your delegation - delegate more, undelegate, or withdraw it after the thawing period.
-लक्षात ठेवा की हा चार्ट क्षैतिजरित्या स्क्रोल करण्यायोग्य आहे, म्हणून तुम्ही उजवीकडे स्क्रोल केल्यास, तुम्ही तुमच्या प्रतिनिधीची स्थिती देखील पाहू शकता (प्रतिनिधी, अस्वीकृत, मागे घेण्यायोग्य).
+> This table is horizontally scrollable, so scroll right to see delegation status: delegating, undelegating, or withdrawable.
-
+
-### Curating Tab
+#### Curating Tab
-In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+The Curation tab displays all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy and should be indexed.
In this tab, you'll get an overview of:
@@ -232,22 +260,22 @@ In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus
- Query rewards per Subgraph
- Updated at date details
-
+
-### Your Profile Settings
+#### Your Profile Settings
In your user profile, you can manage your personal profile details (such as setting an ENS name). If you're an Indexer, you have access to even more settings: you'll be able to set your delegation parameters and operators.
- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can set individually
- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
-
+
As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.

-## Additional Resources
+### Additional Resources
### Video Guide
diff --git a/website/src/pages/mr/subgraphs/fair-use-policy.mdx b/website/src/pages/mr/subgraphs/fair-use-policy.mdx
new file mode 100644
index 000000000000..0d69d65abd69
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/fair-use-policy.mdx
@@ -0,0 +1,51 @@
+---
+title: Fair Use Policy
+---
+
+> Effective Date: May 15, 2025
+
+## Overview
+
+This policy outlines storage limits for Subgraphs that rely solely on [Edge & Node's Upgrade Indexer](/subgraphs/upgrade-indexer/). It is designed to ensure fair and optimized use of queries across the community.
+
+To maintain performance and reliability across its infrastructure, Edge & Node is updating its Upgrade Indexer Subgraph storage policy. Free usage tiers remain available, but users who exceed specified limits will need to upgrade to a paid plan. Storage allocations and thresholds vary by feature.
+
+### 1. Scope
+
+This policy applies to all individual users, teams, chains, and dapps using Edge & Node's Upgrade Indexer in Subgraph Studio for storage and queries.
+
+### 2. Fair Use Storage Limits
+
+**Free Storage: Up to 10 GB**
+
+Beyond that, pricing is variable and adjusts based on usage patterns, network conditions, infrastructure requirements, and specific use cases.
+
+Reach out to Edge & Node at [info@edgeandnode.com](mailto:info@edgeandnode.com) to discuss options that meet your technical needs.
+
+You can monitor your usage via [Subgraph Studio](https://thegraph.com/studio/).
+
+### 3. Fair Use Limits
+
+To preserve the stability of Edge & Node's Subgraph Studio and the reliability of The Graph Network, the Edge & Node Support Team will monitor storage usage and take corresponding action on Subgraphs that have:
+
+- Abnormally high or sustained bandwidth or storage usage beyond posted limits
+- Circumvention of storage thresholds (e.g., use of multiple free-tier accounts)
+
+The Edge & Node Support Team reserves the right to revise storage limits or impose temporary constraints for operational integrity.
+
+If you exceed your included storage:
+
+- Try [pruning Subgraph data](/subgraphs/best-practices/pruning/) to remove unused entities and help stay within storage limits
+- [Add signal to the Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to encourage other Indexers on the network to serve it
+- You will receive multiple notifications and email alerts
+- A grace period of 14 days will be provided to upgrade or reduce storage
+
+Edge & Node's team is committed to helping users avoid unnecessary interruptions and will continue to support all web3 builders.
+
+### 4. Subgraph Data Retention
+
+Subgraphs inactive for over 14 days or Subgraphs that exceed free-tier storage limits will be subject to automatic data archival or deletion. Edge & Node's team will notify you before any such actions are taken.
+
+### 5. Support
+
+If you believe your usage has been incorrectly flagged or you have a unique use case (e.g., an approved special request pending a new Subgraph upgrade plan), reach out to the Edge & Node team at [info@edgeandnode.com](mailto:info@edgeandnode.com).
diff --git a/website/src/pages/mr/subgraphs/guides/near.mdx b/website/src/pages/mr/subgraphs/guides/near.mdx
index 4a183fca2e16..9f60b0c9dfe6 100644
--- a/website/src/pages/mr/subgraphs/guides/near.mdx
+++ b/website/src/pages/mr/subgraphs/guides/near.mdx
@@ -186,7 +186,7 @@ Once your Subgraph has been created, you can deploy your Subgraph by using the `
```sh
$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph deploy --node --ipfs https://ipfs.thegraph.com # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the Subgraph is being deployed.
diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx
index 52da13032a9c..5ffaa92e22e8 100644
--- a/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx
+++ b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx
@@ -39,20 +39,20 @@ While the source Subgraph is a standard Subgraph, the dependent Subgraph uses th
### Source Subgraphs
-- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs).
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
-- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
-- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
-- Source Subgraphs cannot use grafting on top of existing entities
-- Aggregated entities can be used in composition, but entities that are composed from them cannot performed additional aggregations directly
+- Immutable entities only: All Subgraphs must have [immutable entities](/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed.
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of.
+- Source Subgraphs cannot use grafting on top of existing entities.
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly.
### Composed Subgraphs
-- You can only compose up to a **maximum of 5 source Subgraphs**
-- Composed Subgraphs can only use **datasources from the same chain**
-- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
-- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
-- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+- You can only compose up to a **maximum of 5 source Subgraphs**.
+- Composed Subgraphs can only use **datasources from the same chain**.
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly.
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph).
Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
diff --git a/website/src/pages/mr/subgraphs/mcp/claude.mdx b/website/src/pages/mr/subgraphs/mcp/claude.mdx
new file mode 100644
index 000000000000..8b61438d2ab7
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/mcp/claude.mdx
@@ -0,0 +1,180 @@
+---
+title: Claude Desktop
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Claude to interact directly with Subgraphs on The Graph Network. This integration allows you to find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries—all through natural language conversations with Claude.
+
+## What You Can Do
+
+The Subgraph MCP integration enables you to:
+
+- Access the GraphQL schema for any Subgraph on The Graph Network
+- Execute GraphQL queries against any Subgraph deployment
+- Find top Subgraph deployments for a given keyword or contract address
+- Get 30-day query volume for Subgraph deployments
+- Ask natural language questions about Subgraph data without writing GraphQL queries manually
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+
+## Installation Options
+
+### Option 1: Using npx (Recommended)
+
+#### Configuration Steps using npx
+
+#### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+#### 2. Add Configuration
+
+Paste the following settings into your config file:
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+#### 3. Add Your Gateway API Key
+
+Replace `GATEWAY_API_KEY` with your API key from [Subgraph Studio](https://thegraph.com/studio/).
+
+#### 4. Save and Restart
+
+Once you've entered your Gateway API key into your settings, save the file and restart Claude Desktop.
+
+### Option 2: Building from Source
+
+#### Requirements
+
+- Rust (latest stable version recommended: 1.75+)
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+ Follow the on-screen instructions. For other platforms, see the [official Rust installation guide](https://www.rust-lang.org/tools/install).
+
+#### Installation Steps
+
+1. **Clone and Build the Repository**
+
+ ```bash
+ git clone git@github.com:graphops/subgraph-mcp.git
+ cd subgraph-mcp
+ cargo build --release
+ ```
+
+2. **Find the Command Path**
+
+ After building, the executable will be located at `target/release/subgraph-mcp` inside your project directory.
+
+ - Navigate to your `subgraph-mcp` directory in terminal
+ - Run `pwd` to get the full path
+ - Combine the output with `/target/release/subgraph-mcp`
+
+3. **Configure Claude Desktop**
+
+ Open your `claude_desktop_config.json` file as described above and add:
+
+ ```json
+ {
+ "mcpServers": {
+ "subgraph": {
+ "command": "/path/to/your/subgraph-mcp/target/release/subgraph-mcp",
+ "env": {
+ "GATEWAY_API_KEY": "your-api-key-here"
+ }
+ }
+ }
+ }
+ ```
+
+ Replace `/path/to/your/subgraph-mcp/target/release/subgraph-mcp` with the actual path to the compiled binary.
+
+## Using The Graph Resource in Claude
+
+After configuring Claude Desktop:
+
+1. Restart Claude Desktop
+2. Start a new conversation
+3. Click on the context menu (top right)
+4. Add "Subgraph Server Instructions" as a resource by adding `graphql://subgraph` to your chat context
+
+> **Important**: Claude Desktop may not automatically utilize the Subgraph MCP. You must manually add the "Subgraph Server Instructions" resource to your chat context for each conversation where you want to use it.
+
+## Troubleshooting
+
+To enable logs for the MCP when using the npx option, add the `--verbose true` option to your args array.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID/IPFS hash**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Search Subgraphs by keyword**: Find Subgraphs by keyword in their display names, ordered by signal
+- **Get deployment 30-day query counts**: Get aggregate query counts over the last 30 days for multiple Subgraph deployments
+- **Get top Subgraph deployments for a contract**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain, ordered by query fees
+
+## Key Identifier Types
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph. Use `execute_query_by_subgraph_id` or `get_schema_by_subgraph_id`.
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment. Use `execute_query_by_deployment_id` or `get_schema_by_deployment_id`.
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific, immutable deployment. Use `execute_query_by_deployment_id` (the gateway treats it like a deployment ID for querying) or `get_schema_by_ipfs_hash`.
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Claude will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+```
+Find the top subgraphs for contract 0x1f98431c8ad98523631ae4a59f267346ea31f984 on arbitrum-one
+```
diff --git a/website/src/pages/mr/subgraphs/mcp/cline.mdx b/website/src/pages/mr/subgraphs/mcp/cline.mdx
new file mode 100644
index 000000000000..156221d9a127
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/mcp/cline.mdx
@@ -0,0 +1,99 @@
+---
+title: Cline
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cline to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cline.
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+## Using The Graph Resource in Cline
+
+After configuring Cline:
+
+1. Restart Cline
+2. Start a new conversation
+3. Enable the Subgraph MCP from the context menu
+4. Add "Subgraph Server Instructions" as a resource to your chat context
+
+## Available Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cline will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/mr/subgraphs/mcp/cursor.mdx b/website/src/pages/mr/subgraphs/mcp/cursor.mdx
new file mode 100644
index 000000000000..298f43ece048
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/mcp/cursor.mdx
@@ -0,0 +1,94 @@
+---
+title: Cursor
+---
+
+The Subgraph [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server enables Cursor to interact directly with Subgraphs on The Graph Network. This integration allows you to explore Subgraph schemas, execute GraphQL queries, and find relevant Subgraphs for specific contracts—all through natural language conversations with Cursor.
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed (latest version)
+- A Gateway API key from [Subgraph Studio](https://thegraph.com/studio/)
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+### 2. Add Configuration
+
+```json
+{
+ "mcpServers": {
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Key
+
+Replace `GATEWAY_API_KEY` with your API key from Subgraph Studio.
+
+### 4. Restart Cursor
+
+Restart Cursor, and start a new chat.
+
+## Available Subgraph Tools and Usage
+
+The Subgraph MCP provides several tools for interacting with Subgraphs:
+
+### Schema Retrieval Tools
+
+- **Get schema by deployment ID**: Access the GraphQL schema using a deployment ID (0x...)
+- **Get schema by Subgraph ID**: Access the schema for the current deployment of a Subgraph (5zvR82...)
+- **Get schema by IPFS hash**: Access the schema using a Subgraph's IPFS manifest hash (Qm...)
+
+### Query Execution Tools
+
+- **Execute query by deployment ID**: Run GraphQL queries against specific, immutable deployments
+- **Execute query by Subgraph ID**: Run GraphQL queries against the latest version of a Subgraph
+
+### Discovery Tools
+
+- **Get top Subgraph deployments**: Find the top 3 Subgraph deployments indexing a specific contract on a particular chain
+
+## Benefits of Natural Language Queries
+
+One of the most powerful features of the Subgraph MCP integration is the ability to ask questions in natural language. Cursor will:
+
+1. Understand your goal (lookup, find Subgraphs, query, get schema)
+2. Find relevant deployments if needed
+3. Fetch and interpret the Subgraph schema
+4. Convert your question into an appropriate GraphQL query
+5. Execute the query and present the results in a readable format
+
+### Example Natural Language Queries
+
+```
+What are the pairs with maximum volume on deployment 0xde0a7b5368f846f7d863d9f64949b688ad9818243151d488b4c6b206145b9ea3?
+```
+
+```
+Which tokens have the highest market cap in this Subgraph?
+```
+
+```
+Show me the most recent 5 swaps for the USDC/ETH pair
+```
+
+## Key Identifier Types
+
+When working with Subgraphs, you'll encounter different types of identifiers:
+
+- **Subgraph ID** (e.g., `5zvR82...`): Logical identifier for a Subgraph
+- **Deployment ID** (e.g., `0x4d7c...`): Identifier for a specific, immutable deployment
+- **IPFS Hash** (e.g., `QmTZ8e...`): Identifier for the manifest of a specific deployment
diff --git a/website/src/pages/mr/subgraphs/querying/best-practices.mdx b/website/src/pages/mr/subgraphs/querying/best-practices.mdx
index db52212384b1..f1b78701982c 100644
--- a/website/src/pages/mr/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/mr/subgraphs/querying/best-practices.mdx
@@ -2,9 +2,7 @@
title: Querying Best Practices
---
-The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-
-Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
+Use The Graph's GraphQL API to query [Subgraph](/subgraphs/developing/subgraphs/) data efficiently. This guide outlines essential GraphQL rules, guides, and best practices to help you write optimized, reliable queries.
---
@@ -12,9 +10,11 @@ Learn the essential GraphQL language rules and best practices to optimize your S
### The Anatomy of a GraphQL Query
-REST API च्या विपरीत, GraphQL API एका स्कीमावर तयार केले जाते जे कोणत्या क्वेरी पूर्ण केल्या जाऊ शकतात हे परिभाषित करते.
+> GraphQL queries use the GraphQL language, which is defined in the [GraphQL specification](https://spec.graphql.org/).
+
+Unlike REST APIs, GraphQL APIs are built on a schema-driven design that defines which queries can be performed.
-For example, a query to get a token using the `token` query will look as follows:
+Here's a typical query to fetch a `token`:
```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+which will return a predictable JSON response (when passing the proper `$id` variable value):
```json
{
@@ -36,8 +36,6 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
-
The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
```graphql
@@ -50,33 +48,31 @@ query [operationName]([variableName]: [variableType]) {
}
```
-## Rules for Writing GraphQL Queries
+### Rules for Writing GraphQL Queries
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Any variable assigned to an argument must match its type.
-- In a given list of variables, each of them must be unique.
-- All defined variables must be used.
+> Important: Failing to follow these rules will result in an error from The Graph API.
-> Note: Failing to follow these rules will result in an error from The Graph API.
+1. Each `queryName` must only be used once per operation.
+2. Each `field` must be used only once in a selection (you cannot query `id` twice under `token`).
+3. Complex types require a selection of sub-fields.
+ - For example, some `field`s or queries (like `tokens`) return complex types that require a selection of sub-fields. Not providing a selection when expected (or providing one when not expected, for example on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+4. Any variable assigned to an argument must match its type.
+5. Variables must be uniquely defined and used.
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+**For a complete list of rules with code examples, check out the [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/)**.
-### Sending a query to a GraphQL API
+### How to Send a Query to a GraphQL API
-GraphQL is a language and set of conventions that transport over HTTP.
+[GraphQL is a query language](https://graphql.org/learn/) and a set of conventions for APIs, typically used over HTTP to request and send data between clients and servers. This means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
-
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+However, as recommended in [Querying from an Application](/subgraphs/querying/from-an-application/), it's best to use `graph-client`, which supports the following unique features:
- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result
-Here's how to query The Graph with `graph-client`:
+Example query using `graph-client`:
```tsx
import { execute } from '../.graphclient'
@@ -100,15 +96,15 @@ async function main() {
main()
```
-More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/).
+For more alternatives, see ["Querying from an Application"](/subgraphs/querying/from-an-application/).
---
## Best Practices
-### Always write static queries
+### 1. Always Write Static Queries
-A common (bad) practice is to dynamically build query strings as follows:
+A common bad practice is to dynamically build a query string as follows:
```tsx
const id = params.id
@@ -124,14 +120,16 @@ query GetToken {
// Execute query...
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+While the example above produces a valid GraphQL query, it comes with several issues:
+
+- The full query is harder to understand.
+- Developers are responsible for safely sanitizing the string interpolation.
+- Not sending the values of the variables as part of the request can block server-side caching.
+- It prevents tools from statically analyzing the query (e.g., linters or type generation tools).
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+Instead, it's recommended to **always write queries as static strings**.
-For this reason, it is recommended to always write queries as static strings:
+Example of static string:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -153,18 +151,21 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+Static strings have several **key advantages**:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- Queries are easier to read, manage, and debug.
+- Variable sanitization is handled by the GraphQL server.
+- Variables can be cached at the server level.
+- Queries can be statically analyzed by tools (see [GraphQL Essential Tools](/subgraphs/querying/best-practices/#graphql-essential-tools-guides)).
-### How to include fields conditionally in static queries
+### 2. Include Fields Conditionally in Static Queries
-You might want to include the `owner` field only on a particular condition.
+Including fields in static queries only for a particular condition improves performance and keeps responses lightweight by fetching only the necessary data when it's relevant.
-For this, you can leverage the `@include(if:...)` directive as follows:
+- The `@include(if:...)` directive tells the query to **include** a specific field only if the given condition is true.
+- The `@skip(if: ...)` directive tells the query to **exclude** a specific field if the given condition is true.
+
+Example using `owner` field with `@include(if:...)` directive:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -187,15 +188,11 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
-
-### Ask for what you want
-
-GraphQL became famous for its "Ask for what you want" tagline.
+### 3. Ask Only For What You Want
-For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+GraphQL is known for its "Ask for what you want" tagline, which is why it requires explicitly listing each field you want. There's no built-in way to fetch all available fields automatically.
-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
+- When querying GraphQL APIs, always think of querying only the fields that will actually be used.
- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
For example, in the following query:
@@ -215,9 +212,9 @@ query listTokens {
The response could contain 100 transactions for each of the 100 tokens.
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+If the application only needs 10 transactions, the query should explicitly set **`first: 10`** on the transactions field.
-### Use a single query to request multiple records
+### 4. Use a Single Query to Request Multiple Records
By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
@@ -249,7 +246,7 @@ query ManyRecords {
}
```
-### Combine multiple queries in a single request
+### 5. Combine Multiple Queries in a Single Request
Your application might require querying multiple types of data as follows:
@@ -281,9 +278,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+While this implementation is valid, it will require two round trips with the GraphQL API.
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+It's best to send multiple queries in the same GraphQL request as follows:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -304,9 +301,9 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+Sending multiple queries in the same GraphQL request **improves the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and provides a **more concise implementation**.
-### Leverage GraphQL Fragments
+### 6. Leverage GraphQL Fragments
A helpful feature to write GraphQL queries is GraphQL Fragment.
@@ -335,7 +332,7 @@ Such repeated fields (`id`, `active`, `status`) bring many issues:
- More extensive queries become harder to read.
- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
-A refactored version of the query would be the following:
+An optimized version of the query would be the following:
```graphql
query {
@@ -359,15 +356,18 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Using a GraphQL `fragment` improves readability (especially at scale) and results in better TypeScript types generation.
When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
-### GraphQL Fragment do's and don'ts
+## GraphQL Fragment Guidelines
-### Fragment base must be a type
+### Do's and Don'ts for Fragments
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+1. Fragments cannot be based on non-applicable types (types without fields).
+2. `BigInt` cannot be used as a fragment's base because it's a **scalar** (native "plain" type).
+
+Example:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,8 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
-
-#### How to spread a Fragment
-
-Fragments are defined on specific types and should be used accordingly in queries.
+3. Fragments belong to specific types and must be used with those same types in queries.
+4. Spread only fragments matching the correct type.
उदाहरण:
@@ -402,20 +399,23 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+- `newDelegate` and `oldDelegate` are of type `Transcoder`. It's not possible to spread a fragment of type `Vote` here.
-It is not possible to spread a fragment of type `Vote` here.
+5. Fragments must be defined based on their specific usage.
+6. Define fragments as an atomic business unit of data.
-#### Define Fragment as an atomic business unit of data
+---
-GraphQL `Fragment`s must be defined based on their usage.
+### How to Define `Fragment` as an Atomic Business Unit of Data
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+> For most use cases, defining one fragment per type (for repeated field usage or type generation) is enough.
Here is a rule of thumb for using fragments:
- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- When similar but different fields are repeated, create multiple fragments.
+
+Example:
```graphql
# base fragment (mostly used in listing)
@@ -438,35 +438,45 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## GraphQL Essential Tools Guides
+
+### Test Queries with Graph Explorer
+
+Before integrating GraphQL queries into your dapp, it's best to test them. Instead of running them directly in your app, use a web-based playground.
+
+Start with [Graph Explorer](https://thegraph.com/explorer), a preconfigured GraphQL playground built specifically for Subgraphs. You can experiment with queries and see the structure of the data returned without writing any frontend code.
+
+If you want alternatives to debug/test your queries, check out other similar web-based tools:
+
+- [GraphiQL](https://graphiql-online.com/graphiql)
+- [Altair](https://altairgraphql.dev/)
-### GraphQL web-based explorers
+### Setting up Workflow and IDE Tools
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+In order to keep up with querying best practices and syntax rules, use the following workflow and IDE tools.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+#### GraphQL ESLint
-### GraphQL Linting
+1. Install GraphQL ESLint
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Use [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) to enforce best practices and syntax rules with zero effort.
-**GraphQL ESLint**
+2. Use the ["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+This will enforce essential rules such as:
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+- `@graphql-eslint/fields-on-correct-type`: Ensures fields match the proper type.
+- `@graphql-eslint/no-unused-variables`: Flags unused variables in your queries.
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- and more!
+Result: You'll **catch errors without even testing queries** on the playground or running them in production!
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
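+
+A minimal legacy `.eslintrc` sketch wiring this up (assuming your operations live in `.graphql` files; adapt the glob to your project layout):
+
+```json
+{
+  "overrides": [
+    {
+      "files": ["*.graphql"],
+      "parser": "@graphql-eslint/eslint-plugin",
+      "extends": "plugin:@graphql-eslint/operations-recommended"
+    }
+  ]
+}
+```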
+#### Use IDE plugins
-### IDE plugins
+GraphQL plugins streamline your workflow by offering real-time feedback while you code. They highlight mistakes, suggest completions, and help you explore your schema faster.
-**VSCode and GraphQL**
+1. VS Code
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Install the [GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) to unlock:
- Syntax highlighting
- Autocomplete suggestions
@@ -474,11 +484,11 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- Snippets
- Go to definition for fragments and input types
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+If you are using `graphql-eslint`, use the [ESLint VS Code extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) to visualize errors and warnings inlined in your code correctly.
-**WebStorm/Intellij and GraphQL**
+2. WebStorm/Intellij and GraphQL
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Install the [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/). It significantly improves the experience of working with GraphQL by providing:
- Syntax highlighting
- Autocomplete suggestions
diff --git a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
index 049248616399..ef5bbee43d9f 100644
--- a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,37 @@
title: GraphQL API
---
-Learn about the GraphQL Query API used in The Graph.
+Explore the GraphQL Query API for interacting with Subgraphs on The Graph Network.
-## What is GraphQL?
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with existing data.
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
+The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
+## Core Concepts
-## Queries with GraphQL
+### Entities
+
+- **What they are**: Persistent data objects defined with `@entity` in your schema
+- **Key requirement**: Must contain `id: ID!` as primary identifier
+- **Usage**: Foundation for all query operations
+
+### Schema
+
+- **Purpose**: Blueprint defining the data structure and relationships using GraphQL [IDL](https://facebook.github.io/graphql/draft/#sec-Type-System)
+- **Key characteristics**:
+ - Auto-generates query endpoints
+ - Read-only operations (no mutations)
+ - Defines entity interfaces and derived fields
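+
+A minimal schema sketch illustrating these points (entity names are illustrative):
+
+```graphql
+type Token @entity {
+  id: ID! # required primary identifier
+  name: String!
+  owner: Account!
+}
+
+type Account @entity {
+  id: ID!
+  tokens: [Token!]! @derivedFrom(field: "owner")
+}
+```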
-In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+## Query Structure
-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+GraphQL queries in The Graph target entities defined in the Subgraph schema. Each `Entity` type generates corresponding `entity` and `entities` fields on the root `Query` type.
-### Examples
+> Note: The `query` keyword is not required at the top level of GraphQL queries.
-Query for a single `Token` entity defined in your schema:
+### Single Entity Queries Example
+
+Query for a single `Token` entity:
```graphql
{
@@ -29,9 +43,11 @@ Query for a single `Token` entity defined in your schema:
}
```
-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note: Single entity queries require the `id` parameter as a string.
+
+### Collection Queries Example
-Query all `Token` entities:
+Query format for all `Token` entities:
```graphql
{
@@ -42,14 +58,14 @@ Query all `Token` entities:
}
```
-### Sorting
+### Sorting Example
-When querying a collection, you may:
+Collection queries support the following sort parameters:
-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- `orderBy`: Specifies the attribute for sorting
+- `orderDirection`: Accepts `asc` (ascending) or `desc` (descending)
-#### उदाहरण
+#### Standard Sorting Example
```graphql
{
@@ -60,11 +76,7 @@ When querying a collection, you may:
}
```
-#### Example for nested entity sorting
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
-
-The following example shows tokens sorted by the name of their owner:
+#### Nested Entity Sorting Example
```graphql
{
@@ -77,20 +89,18 @@ The following example shows tokens sorted by the name of their owner:
}
```
-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Note: Nested sorting supports one-level-deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
-### Pagination
+### Pagination Example
-When querying a collection, it's best to:
+When querying a collection, it is best to:
- Use the `first` parameter to paginate from the beginning of the collection.
- The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
-#### Example using `first`
-
-Query the first 10 tokens:
+#### Standard Pagination Example
```graphql
{
@@ -101,11 +111,7 @@ Query the first 10 tokens:
}
```
-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
-
-#### Example using `first` and `skip`
-
-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+#### Offset Pagination Example
```graphql
{
@@ -116,9 +122,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```
-#### Example using `first` and `id_ge`
-
-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+#### Cursor-based Pagination Example
+
+Send the first request with `lastID = ""`, then set `lastID` to the `id` of the last entity returned by the previous request. This performs significantly better than using increasing `skip` values.
```graphql
query manyTokens($lastID: String) {
@@ -129,16 +133,11 @@ query manyTokens($lastID: String) {
}
```
-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
-
### Filtering
-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
-
-#### Example using `where`
+The `where` parameter filters entities based on specified conditions.
-Query challenges with `failed` outcome:
+#### Basic Filtering Example
```graphql
{
@@ -152,9 +151,7 @@ Query challenges with `failed` outcome:
}
```
-You can use suffixes like `_gt`, `_lte` for value comparison:
-
-#### Example for range filtering
+#### Numeric Comparison Example
```graphql
{
@@ -166,11 +163,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```
-#### Example for block filtering
-
-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+#### Block-based Filtering Example
```graphql
{
@@ -182,11 +175,7 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```
-#### Example for nested entity filtering
-
-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
-
-This can be useful if you are looking to fetch only entities whose child-level entities meet the provided conditions.
+#### Nested Entity Filtering Example
```graphql
{
@@ -200,11 +189,9 @@ This can be useful if you are looking to fetch only entities whose child-level e
}
```
-#### Logical operators
-
-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+#### Logical Operators
-##### `AND` Operator
+##### AND Operations Example
The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
@@ -220,27 +207,11 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```
-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
-> ```graphql
-> {
-> challenges(where: { number_gte: 100, outcome: "succeeded" }) {
-> challenger
-> outcome
-> application {
-> id
-> }
-> }
-> }
-> ```
-
-##### `OR` Operator
-
-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+**Syntactic sugar:** You can simplify the above query by removing the `and` operator and passing a sub-expression separated by commas.
```graphql
{
- challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenges(where: { number_gte: 100, outcome: "succeeded" }) {
challenger
outcome
application {
@@ -250,52 +221,36 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```
-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
-
-#### All Filters
-
-Full list of parameter suffixes:
+##### OR Operations Example
+```graphql
+{
+ challenges(where: { or: [{ number_gte: 100 }, { outcome: "succeeded" }] }) {
+ challenger
+ outcome
+ application {
+ id
+ }
+ }
+}
```
-_
-_not
-_gt
-_lt
-_gte
-_lte
-_in
-_not_in
-_contains
-_contains_nocase
-_not_contains
-_not_contains_nocase
-_starts_with
-_starts_with_nocase
-_ends_with
-_ends_with_nocase
-_not_starts_with
-_not_starts_with_nocase
-_not_ends_with
-_not_ends_with_nocase
-```
-
-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
-In addition, the following global filters are available as part of `where` argument:
+Global filter parameter:
```graphql
_change_block(number_gte: Int)
```
-### Time-travel queries
+### Time-travel Queries Example
-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Queries support historical state retrieval using the `block` parameter:
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+- `number`: Integer block number
+- `hash`: String block hash
> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
-#### उदाहरण
+#### Block Number Query Example
```graphql
{
@@ -309,9 +264,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
-
-#### उदाहरण
+#### Block Hash Query Example
```graphql
{
@@ -325,28 +278,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```
-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-
-### Fulltext Search Queries
+### Full-Text Search Example
-Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
+Full-text search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Full-text Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add full-text search to your Subgraph.
-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Full-text search queries have one required field, `text`, for supplying search terms. Several special full-text operators are available to be used in this `text` search field.
-Fulltext search operators:
+Full-text search fields use the required `text` parameter with the following operators:
-| Symbol | Operator | वर्णन |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Operator | Symbol | Description |
+| --------- | ------ | --------------------------------------------------------------- |
+| And | `&` | Matches entities containing all terms |
+| Or        | `\|`   | Matches entities containing any of the provided terms           |
+| Follow by | `<->` | Matches terms with specified distance |
+| Prefix | `:*` | Matches word prefixes (minimum 2 characters) |
-#### Examples
+#### Search Examples
-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+OR operator:
-```graphql
+```graphql
{
blogSearch(text: "anarchism | crumpets") {
id
@@ -357,7 +308,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```
-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Follow by operator:
```graphql
{
@@ -370,7 +321,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```
-Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music".
+Combined operators:
```graphql
{
@@ -383,29 +334,19 @@ Combine fulltext operators to make more complex filters. With a pretext search o
}
```
-### Validation
+### Schema Definition
-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
-
-## Schema
-
-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-
-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
-
-### Entities
+Entity types require:
-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+- GraphQL Interface Definition Language (IDL) format
+- `@entity` directive
+- `ID` field
-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+### Subgraph Metadata Example
-### Subgraph Metadata
+The `_Meta_` object provides Subgraph metadata:
-All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
-
-```graphQL
+```graphql
{
_meta(block: { number: 123987 }) {
block {
@@ -419,14 +360,49 @@ All Subgraphs have an auto-generated `_Meta_` object, which provides access to S
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
-
-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+Metadata fields:
+
+- `deployment`: IPFS CID of the `subgraph.yaml` file
+- `block`: Latest block information
+- `hasIndexingErrors`: Boolean indicating past indexing errors
+
+> Note: When writing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use `and` operators instead of `or` whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+
+### GraphQL Filter Operators Reference
+
+This table explains each filter operator available in The Graph's GraphQL API. These operators are used as suffixes to field names when filtering data using the `where` parameter.
+
+| Operator | Description | Example |
+| ------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
+| `_`                       | Filters on the fields of a nested entity                          | `{ where: { owner_: { name: "Alice" } } }`            |
+| `_not` | Negates the specified condition | `{ where: { active_not: true } }` |
+| `_gt` | Greater than (>) | `{ where: { price_gt: "100" } }` |
+| `_lt` | Less than (`\<`) | `{ where: { price_lt: "100" } }` |
+| `_gte` | Greater than or equal to (>=) | `{ where: { price_gte: "100" } }` |
+| `_lte` | Less than or equal to (`\<=`) | `{ where: { price_lte: "100" } }` |
+| `_in` | Value is in the specified array | `{ where: { category_in: ["Art", "Music"] } }` |
+| `_not_in` | Value is not in the specified array | `{ where: { category_not_in: ["Art", "Music"] } }` |
+| `_contains` | Field contains the specified string (case-sensitive) | `{ where: { name_contains: "token" } }` |
+| `_contains_nocase` | Field contains the specified string (case-insensitive) | `{ where: { name_contains_nocase: "token" } }` |
+| `_not_contains` | Field does not contain the specified string (case-sensitive) | `{ where: { name_not_contains: "test" } }` |
+| `_not_contains_nocase` | Field does not contain the specified string (case-insensitive) | `{ where: { name_not_contains_nocase: "test" } }` |
+| `_starts_with` | Field starts with the specified string (case-sensitive) | `{ where: { name_starts_with: "Crypto" } }` |
+| `_starts_with_nocase` | Field starts with the specified string (case-insensitive) | `{ where: { name_starts_with_nocase: "crypto" } }` |
+| `_ends_with` | Field ends with the specified string (case-sensitive) | `{ where: { name_ends_with: "Token" } }` |
+| `_ends_with_nocase` | Field ends with the specified string (case-insensitive) | `{ where: { name_ends_with_nocase: "token" } }` |
+| `_not_starts_with` | Field does not start with the specified string (case-sensitive) | `{ where: { name_not_starts_with: "Test" } }` |
+| `_not_starts_with_nocase` | Field does not start with the specified string (case-insensitive) | `{ where: { name_not_starts_with_nocase: "test" } }` |
+| `_not_ends_with` | Field does not end with the specified string (case-sensitive) | `{ where: { name_not_ends_with: "Test" } }` |
+| `_not_ends_with_nocase` | Field does not end with the specified string (case-insensitive) | `{ where: { name_not_ends_with_nocase: "test" } }` |
+
+#### Notes
+
+- Type support varies by operator. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.
+- The `_` operator is only available for object and interface types.
+- String comparison operators are especially useful for text fields.
+- Numeric comparison operators work with both number and string-encoded number fields.
+- Use these operators in combination with logical operators (`and`, `or`) for complex filtering.
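+
+For instance, a combined filter might look like this (entity and field names are illustrative):
+
+```graphql
+{
+  tokens(where: { and: [{ name_contains_nocase: "token" }, { price_gte: "100" }] }) {
+    id
+    name
+  }
+}
+```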
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
-
-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
+### Validation
-`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
+Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
diff --git a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx
index 0cd0d779e8bb..e1db4b8064c3 100644
--- a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,86 @@
---
-title: Managing API keys
+title: How to Manage API keys
---
+This guide shows you how to create, manage, and secure API keys for your [Subgraphs](/subgraphs/developing/subgraphs/).
+
## Overview
-API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are required to query Subgraphs. They authenticate users and devices, authorize access to specific endpoints, enforce rate limits, and enable usage tracking across The Graph.
+
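+For context, the key is used by embedding it in the gateway URL when you query. A minimal sketch of how a request is assembled — the API key and Subgraph ID below are placeholders, not real values:
+
+```python
+import json
+
+API_KEY = "your-api-key"  # placeholder -- create one in Subgraph Studio
+SUBGRAPH_ID = "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"  # example ID
+
+# The gateway authenticates requests via the API key in the URL path.
+endpoint = (
+    "https://gateway-arbitrum.network.thegraph.com/api/"
+    f"{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"
+)
+
+# GraphQL queries are sent as a JSON POST body.
+payload = json.dumps({"query": "{ _meta { block { number } } }"})
+
+print(endpoint)
+print(payload)
+```
+
+From here you would POST `payload` to `endpoint` with any HTTP client.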
+## Prerequisites
+
+- A [Subgraph Studio](https://thegraph.com/studio/) account
+
+## Create a New API Key
+
+1. Navigate to [Subgraph Studio](https://thegraph.com/studio/)
+2. Click the **API Keys** tab in the navigation menu
+3. Click the **Create API Key** button
+
+A new window will pop up:
+
+4. Enter a name for your API key
+5. Optional: enable a spending limit for the billing period
+6. Click **Create API Key**
+
+
+
+## Manage API Keys
+
+The “API keys” table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+### How to Set Spending Limits
+
+1. Find your API key in the API keys table
+2. Click the "three dots" icon next to the key
+3. Select "Manage spending limit"
+4. Enter your desired monthly limit in USD
+5. Click **Save**
+
+> Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+
+### How to Rename an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Rename API key"
+3. Enter the new name
+4. Click **Save**
+
+### How to Regenerate an API Key
+
+1. Click the "three dots" icon next to the key
+2. Select "Regenerate API key"
+3. Confirm the action in the pop-up dialog
+
+> Warning: Regenerating an API key will invalidate the previous key immediately. Update your applications with the new key to prevent service interruption.
+
+## API Key Details
-### Create and Manage API Keys
+### Monitoring Usage
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
+1. Click on your API key to view the Details page
+2. Check the **Overview** section for:
+ - Total number of queries
+ - GRT spent
+ - Current usage statistics
-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+### Restricting Domain Access
-You can click the "three dots" menu to the right of a given API key to:
+1. Click on your API key to open the Details page
+2. Navigate to the **Security** section
+3. Click "Add Domain"
+4. Enter the authorized domain name
+5. Click **Save**
-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+### Limiting Subgraph Access
-### API Key Details
+1. Open the API key Details page
+2. Navigate to the **Security** section
+3. Click "Assign Subgraphs"
+4. Select the Subgraphs you want to authorize
+5. Click **Save**
-You can click on an individual API key to view the Details page:
+## Additional Resources
-1. Under the **Overview** section, you can:
- - Edit your key name
- - Regenerate API keys
- - View the current usage of the API key with stats:
- - Number of queries
- - Amount of GRT spent
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- - View and manage the domain names authorized to use your API key
- - Assign Subgraphs that can be queried with your API key
+[Deploying Using Subgraph Studio](/subgraphs/developing/deploying/using-subgraph-studio/)
diff --git a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 17258dd13ea1..c48a3021233a 100644
--- a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,15 +2,19 @@
title: Subgraph ID vs Deployment ID
---
+Managing and accessing Subgraphs relies on two distinct identification systems: Subgraph IDs and Deployment IDs.
+
A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
When querying a Subgraph, either ID can be used, though the Deployment ID is generally recommended because it pins a specific version of the Subgraph.
-Here are some key differences between the two IDs: 
+Both identifiers are accessible in [Subgraph Studio](https://thegraph.com/studio/):
+
+
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://ipfs.thegraph.com/ipfs/QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
Queries made with a Deployment ID target one specific version of the Subgraph. This gives you full control over the version being queried, resulting in a more robust setup, but it also means the query code must be updated manually every time a new version of the Subgraph is published.
@@ -18,6 +22,12 @@ Example endpoint that uses Deployment ID:
`https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`
+Using Deployment IDs for queries offers precise version control but comes with specific implications:
+
+- Advantages: Complete control over which version you're querying, ensuring consistent results
+- Challenges: Requires manual updates to query code when new Subgraph versions are published
+- Use case: Ideal for production environments where stability and predictability are crucial
+
## Subgraph ID
The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
@@ -25,3 +35,20 @@ The Subgraph ID is a unique identifier for a Subgraph. It remains constant acros
Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+
+Using Subgraph IDs comes with important considerations:
+
+- Benefits: Automatically queries the latest version, reducing maintenance overhead
+- Limitations: May encounter version synchronization delays or breaking schema changes
+- Use case: Better suited for development environments or when staying current is more important than version stability
+
+## Deployment ID vs Subgraph ID
+
+Here are the key differences between the two IDs:
+
+| Consideration | Deployment ID | Subgraph ID |
+| ----------------------- | --------------------- | --------------- |
+| Version Pinning | Specific version | Always latest |
+| Maintenance Effort | High (manual updates) | Low (automatic) |
+| Environment Suitability | Production | Development |
+| Sync Status Awareness | Not required | Critical |
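+In code, the choice shows up only in the endpoint path. A hedged sketch of the two endpoint shapes documented above (the API key is a placeholder):
+
+```python
+GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"
+API_KEY = "your-api-key"  # placeholder
+
+def deployment_endpoint(deployment_id: str) -> str:
+    """Pin queries to one immutable Subgraph version (production)."""
+    return f"{GATEWAY}/{API_KEY}/deployments/id/{deployment_id}"
+
+def subgraph_endpoint(subgraph_id: str) -> str:
+    """Always resolve to the latest published version (development)."""
+    return f"{GATEWAY}/{API_KEY}/subgraphs/id/{subgraph_id}"
+
+print(deployment_endpoint("QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB"))
+print(subgraph_endpoint("FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"))
+```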
diff --git a/website/src/pages/mr/subgraphs/quick-start.mdx b/website/src/pages/mr/subgraphs/quick-start.mdx
index b14954bc11a4..83e790887315 100644
--- a/website/src/pages/mr/subgraphs/quick-start.mdx
+++ b/website/src/pages/mr/subgraphs/quick-start.mdx
@@ -2,24 +2,28 @@
title: Quick Start
---
-Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Create, deploy, and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph Network.
+
+By the end, you'll have:
+
+- Initialized a Subgraph from a smart contract
+- Deployed it to Subgraph Studio for testing
+- Published to The Graph Network for decentralized indexing
## Prerequisites
- A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A deployed smart contract on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) & a package manager of your choice (`npm`, `yarn` or `pnpm`)
## How to Build a Subgraph
### 1. Create a Subgraph in Subgraph Studio
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-
-Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-
-Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+1. Go to [Subgraph Studio](https://thegraph.com/studio/)
+2. Connect your wallet
+3. Click "Create a Subgraph"
+4. Name it in Title Case: "Subgraph Name Chain Name"
### 2. Install the Graph CLI
@@ -37,20 +41,22 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your Subgraph
+Verify the installation:
-> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+```sh
+graph --version
+```
-The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
+### 3. Initialize your Subgraph
-The following command initializes your Subgraph from an existing contract:
+> You can find commands for your specific Subgraph in [Subgraph Studio](https://thegraph.com/studio/).
+
+The following command initializes your Subgraph from an existing contract, scaffolding mappings for its events:
```sh
graph init
```
-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-
When you initialize your Subgraph, the CLI will ask you for the following information:
- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
@@ -59,19 +65,17 @@ When you initialize your Subgraph, the CLI will ask you for the following inform
- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block where the contract was deployed to optimize Subgraph indexing of blockchain data.
- **Contract Name**: Input the name of your contract.
- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-See the following screenshot for an example for what to expect when initializing your Subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-
When making changes to the Subgraph, you will mainly work with three files:
- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
@@ -82,9 +86,7 @@ For a detailed breakdown on how to write your Subgraph, check out [Creating a Su
### 5. Deploy your Subgraph
-> Remember, deploying is not the same as publishing.
-
-When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
Once your Subgraph is written, run the following commands:
@@ -107,8 +109,6 @@ graph deploy
```
````
-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-
### 6. Review your Subgraph
If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
@@ -125,55 +125,13 @@ When your Subgraph is ready for a production environment, you can publish it to
- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-
-> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
-
-#### Publishing with Subgraph Studio
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to add curation signal.
-To publish your Subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard and select your network.

-Select the network to which you would like to publish your Subgraph.
-
-#### Publishing from the CLI
-
-As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
-
-Open the `graph-cli`.
-
-Use the following commands:
-
-````
-```sh
-graph codegen && graph build
-```
-
-Then,
-
-```sh
-graph publish
-```
-````
-
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
-
-
-
-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-
-#### Adding signal to your Subgraph
-
-1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
-
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
-
-2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
-
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
-
-To learn more about curation, read [Curating](/resources/roles/curating/).
+> It is recommended that you curate your own Subgraph with at least 3,000 GRT to incentivize indexing.
To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:
diff --git a/website/src/pages/mr/subgraphs/upgrade-indexer.mdx b/website/src/pages/mr/subgraphs/upgrade-indexer.mdx
new file mode 100644
index 000000000000..a4fdc7af31a1
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/upgrade-indexer.mdx
@@ -0,0 +1,25 @@
+---
+title: Edge & Node Upgrade Indexer
+sidebarTitle: Upgrade Indexer
+---
+
+## Overview
+
+The Upgrade Indexer is a specialized Indexer operated by Edge & Node. It supports newly integrated chains within The Graph ecosystem and ensures new Subgraphs are immediately available for querying, eliminating potential downtime.
+
+Originally designed as transitional support, its primary purpose was to facilitate the migration of Subgraphs from the hosted service to the decentralized network. Currently, it supports newly deployed Subgraphs before indexing rewards are activated through the full Chain Integration Process (CIP).
+
+### What it does
+
+- Provides immediate query support for all newly deployed Subgraphs.
+- Functions as the sole supporting Indexer for each chain until indexing rewards are activated.
+
+### What it does **not** do
+
+- Does not permanently index Subgraphs. Subgraph owners should curate Subgraphs to use independent Indexers long term.
+- Does not compete for rewards. The Upgrade Indexer's participation on The Graph Network does not dilute rewards for other Indexers.
+- Does not support Time Travel Queries (TTQs). All Subgraphs on the Upgrade Indexer are auto-pruned. If TTQs are needed on a Subgraph, [curation signal can be added](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) to attract Indexers that will support this feature.
+
+### Conclusion
+
+The Edge & Node Upgrade Indexer is foundational in supporting chain integrations and mitigating data latency risks. It plays a critical role in scaling The Graph's decentralized infrastructure by ensuring immediate query support and fostering community-driven indexing.
diff --git a/website/src/pages/mr/substreams/_meta-titles.json b/website/src/pages/mr/substreams/_meta-titles.json
index 6262ad528c3a..b8799cc89251 100644
--- a/website/src/pages/mr/substreams/_meta-titles.json
+++ b/website/src/pages/mr/substreams/_meta-titles.json
@@ -1,3 +1,4 @@
{
- "developing": "Developing"
+ "developing": "Developing",
+ "sps": "Substreams-powered Subgraphs"
}
diff --git a/website/src/pages/mr/substreams/developing/sinks.mdx b/website/src/pages/mr/substreams/developing/sinks.mdx
index 873e20981407..703063e67c52 100644
--- a/website/src/pages/mr/substreams/developing/sinks.mdx
+++ b/website/src/pages/mr/substreams/developing/sinks.mdx
@@ -8,14 +8,13 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database or a file.
## Sinks
> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +25,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C       | Community  | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/mr/substreams/quick-start.mdx b/website/src/pages/mr/substreams/quick-start.mdx
index a2f7e3e938bd..3b9ae824dd9b 100644
--- a/website/src/pages/mr/substreams/quick-start.mdx
+++ b/website/src/pages/mr/substreams/quick-start.mdx
@@ -31,6 +31,7 @@ If you can't find a Substreams package that meets your specific needs, you can d
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
diff --git a/website/src/pages/mr/substreams/sps/faq.mdx b/website/src/pages/mr/substreams/sps/faq.mdx
new file mode 100644
index 000000000000..250c466d5929
--- /dev/null
+++ b/website/src/pages/mr/substreams/sps/faq.mdx
@@ -0,0 +1,96 @@
+---
+title: Substreams-Powered Subgraphs FAQ
+sidebarTitle: FAQ
+---
+
+## What are Substreams?
+
+Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+
+Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+
+Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+
+## What are Substreams-powered Subgraphs?
+
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
+
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
+
+## How are Substreams-powered Subgraphs different from Subgraphs?
+
+Subgraphs are made up of data sources that specify onchain events and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, in the order in which they happen onchain.
+
+By contrast, Substreams-powered Subgraphs have a single data source that references a Substreams package, which is processed by Graph Node. Substreams have access to more granular onchain data than conventional Subgraphs and can also benefit from massively parallelized processing, which can mean much faster indexing times.
+
+## What are the benefits of using Substreams-powered Subgraphs?
+
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+
+## What are the benefits of Substreams?
+
+There are many benefits to using Substreams, including:
+
+- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data.
+
+- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
+
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
+
+- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
+
+- Access to additional data that is not available over JSON-RPC.
+
+- All the benefits of the Firehose.
+
+## What is the Firehose?
+
+Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams.
+
+Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose.
+
+## What are the benefits of the Firehose?
+
+There are many benefits to using Firehose, including:
+
+- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first.
+
+- Prevents downtimes: Designed from the ground up for High Availability.
+
+- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition.
+
+- Richest data model: Includes balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more.
+
+- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.
+
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
+
+The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
+
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
+## What is the role of Rust modules in Substreams?
+
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
+
+See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+
+## What makes Substreams composable?
+
+When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused.
+
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request packages all of these individual modules and links them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
+
+## How can you build and deploy a Substreams-powered Subgraph?
+
+After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+
+You can visit [this GitHub repository](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
+
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
+
+The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/mr/substreams/sps/introduction.mdx b/website/src/pages/mr/substreams/sps/introduction.mdx
new file mode 100644
index 000000000000..d22d998dee0d
--- /dev/null
+++ b/website/src/pages/mr/substreams/sps/introduction.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction to Substreams-Powered Subgraphs
+sidebarTitle: Introduction
+---
+
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+
+## Overview
+
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+
+### Specifics
+
+There are two methods of enabling this technology:
+
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
+
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
+
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+
+### Additional Resources
+
+Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+
+- [Solana](/substreams/developing/solana/transactions/)
+- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
+- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
+- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
+- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/mr/substreams/sps/triggers.mdx b/website/src/pages/mr/substreams/sps/triggers.mdx
new file mode 100644
index 000000000000..df877d792fad
--- /dev/null
+++ b/website/src/pages/mr/substreams/sps/triggers.mdx
@@ -0,0 +1,47 @@
+---
+title: Substreams Triggers
+---
+
+Use Custom Triggers and enable the full use of GraphQL.
+
+## Overview
+
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
+
+### Defining `handleTransactions`
+
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+ let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+ if (transactions.length == 0) {
+ log.info('No transactions found', [])
+ return
+ }
+
+ for (let i = 0; i < transactions.length; i++) {
+ // 2.
+ let transaction = transactions[i]
+
+ let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from
+ entity.to = transaction.to
+ entity.save()
+ }
+}
+```
+
+Here's what you're seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
+2. A loop iterates over each transaction
+3. A new Subgraph entity is created and saved for every transaction
+
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
+
+### Additional Resources
+
+To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/mr/substreams/sps/tutorial.mdx b/website/src/pages/mr/substreams/sps/tutorial.mdx
new file mode 100644
index 000000000000..3e89f8c8804d
--- /dev/null
+++ b/website/src/pages/mr/substreams/sps/tutorial.mdx
@@ -0,0 +1,155 @@
+---
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
+sidebarTitle: Tutorial
+---
+
+This tutorial walks you through setting up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
+
+## Get Started
+
+For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial).
+
+### Prerequisites
+
+Before starting, make sure to:
+
+- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container.
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs.
+
+### Step 1: Initialize Your Project
+
+1. Open your Dev Container and run the following command to initialize your project:
+
+ ```bash
+ substreams init
+ ```
+
+2. Select the "minimal" project option.
+
+3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+ name: my_project_sol
+ version: v0.1.0
+
+imports: # Pass your spkg of interest
+ solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+ - name: map_spl_transfers
+ use: solana:map_block # Select corresponding modules available within your spkg
+ initialBlock: 260000082
+
+ - name: map_transactions_by_programid
+ use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+ # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+ map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+### Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+This generates a `subgraph.yaml` manifest, which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+ - kind: substreams
+ name: my_project_sol
+ network: solana-mainnet-beta
+ source:
+ package:
+ moduleName: map_spl_transfers # Module defined in the substreams.yaml
+ file: ./my-project-sol-v0.1.0.spkg
+ mapping:
+ apiVersion: 0.0.9
+ kind: substreams/graph-entities
+ file: ./src/mappings.ts
+ handler: handleTriggers
+```
+
+### Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
+
+Here is an example:
+
+```graphql
+type MyTransfer @entity {
+ id: ID!
+ amount: String!
+ source: String!
+ designation: String!
+ signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+### Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+ const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+ for (let i = 0; i < input.data.length; i++) {
+ const event = input.data[i]
+
+ if (event.transfer != null) {
+ let entity_id: string = `${event.txnId}-${i}`
+ const entity = new MyTransfer(entity_id)
+ entity.amount = event.transfer!.instruction!.amount.toString()
+ entity.source = event.transfer!.accounts!.source
+ entity.designation = event.transfer!.accounts!.destination
+
+ if (event.transfer!.accounts!.signer!.single != null) {
+ entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+ } else if (event.transfer!.accounts!.signer!.multisig != null) {
+ entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+ }
+ entity.save()
+ }
+ }
+}
+```
+
+### Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
+
+### Conclusion
+
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+
+### Video Tutorial
+
+
+
+### Additional Resources
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/mr/supported-networks.mdx b/website/src/pages/mr/supported-networks.mdx
index ef2c28393033..9592cfabc0ad 100644
--- a/website/src/pages/mr/supported-networks.mdx
+++ b/website/src/pages/mr/supported-networks.mdx
@@ -4,17 +4,17 @@ hideTableOfContents: true
hideContentHeader: true
---
-import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, NetworksTable } from '@/supportedNetworks'
import { Heading } from '@/components'
import { useI18n } from '@/i18n'
export const getStaticProps = getSupportedNetworksStaticProps
-
+
{useI18n().t('index.supportedNetworks.title')}
-
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
diff --git a/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx
index 3386fd078059..68385ffc4272 100644
--- a/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx
+++ b/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx
@@ -1,9 +1,9 @@
---
-title: Token Balances by Wallet Address
+title: Balances by Address
template:
type: openApi
apiId: tokenApi
operationId: getBalancesEvmByAddress
---
-The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+Provides latest ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/mr/token-api/evm/get-historical-balances-evm-by-address.mdx b/website/src/pages/mr/token-api/evm/get-historical-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..d96ed1b81fa2
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-historical-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Historical Balances
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHistoricalBalancesEvmByAddress
+---
+
+Provides historical ERC-20 & Native balances by wallet address.
diff --git a/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx
index 0bb79e41ed54..01a52bbf7ad2 100644
--- a/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx
+++ b/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders by Contract Address
+title: Token Holders
template:
type: openApi
apiId: tokenApi
operationId: getHoldersEvmByContract
---
-The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
+Provides ERC-20 token holder balances by contract address.
diff --git a/website/src/pages/mr/token-api/evm/get-nft-activities-evm.mdx b/website/src/pages/mr/token-api/evm/get-nft-activities-evm.mdx
new file mode 100644
index 000000000000..f76eb35f653a
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-activities-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Activities
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftActivitiesEvm
+---
+
+Provides NFT activities (e.g. transfers, mints & burns).
diff --git a/website/src/pages/mr/token-api/evm/get-nft-collections-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-nft-collections-evm-by-contract.mdx
new file mode 100644
index 000000000000..c8e9bfb64219
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-collections-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Collection
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftCollectionsEvmByContract
+---
+
+Provides single NFT collection metadata, total supply, owners & total transfers.
diff --git a/website/src/pages/mr/token-api/evm/get-nft-holders-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-nft-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..091d01a197f4
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Holders
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftHoldersEvmByContract
+---
+
+Provides NFT holders per contract.
diff --git a/website/src/pages/mr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx b/website/src/pages/mr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
new file mode 100644
index 000000000000..cf9ff1c6e1b8
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-items-evm-contract-by-contract-token_id-by-token_id.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Items
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftItemsEvmContractByContractToken_idByToken_id
+---
+
+Provides single NFT token metadata, ownership & traits.
diff --git a/website/src/pages/mr/token-api/evm/get-nft-ownerships-evm-by-address.mdx b/website/src/pages/mr/token-api/evm/get-nft-ownerships-evm-by-address.mdx
new file mode 100644
index 000000000000..4c33526eceb7
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-ownerships-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Ownerships
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftOwnershipsEvmByAddress
+---
+
+Provides NFT ownerships for an account.
diff --git a/website/src/pages/mr/token-api/evm/get-nft-sales-evm.mdx b/website/src/pages/mr/token-api/evm/get-nft-sales-evm.mdx
new file mode 100644
index 000000000000..f2d78bea4052
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-nft-sales-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: NFT Sales
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNftSalesEvm
+---
+
+Provides latest NFT marketplace sales.
diff --git a/website/src/pages/mr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx b/website/src/pages/mr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
new file mode 100644
index 000000000000..d5bc5357eadf
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-ohlc-pools-evm-by-pool.mdx
@@ -0,0 +1,9 @@
+---
+title: OHLCV by Pool
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPoolsEvmByPool
+---
+
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
index d1558ddd6e78..ff8f590b0433 100644
--- a/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
+++ b/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token OHLCV prices by Contract Address
+title: OHLCV by Contract
template:
type: openApi
apiId: tokenApi
operationId: getOhlcPricesEvmByContract
---
-The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+Provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/mr/token-api/evm/get-pools-evm.mdx b/website/src/pages/mr/token-api/evm/get-pools-evm.mdx
new file mode 100644
index 000000000000..db32376f5a17
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-pools-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Liquidity Pools
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getPoolsEvm
+---
+
+Provides Uniswap V2 & V3 liquidity pool metadata.
diff --git a/website/src/pages/mr/token-api/evm/get-swaps-evm.mdx b/website/src/pages/mr/token-api/evm/get-swaps-evm.mdx
new file mode 100644
index 000000000000..0a7697f38c8b
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-swaps-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Swap Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getSwapsEvm
+---
+
+Provides Uniswap V2 & V3 swap events.
diff --git a/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx
index b6fab8011fc2..aed206c15272 100644
--- a/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx
+++ b/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -1,9 +1,9 @@
---
-title: Token Holders and Supply by Contract Address
+title: Token Metadata
template:
type: openApi
apiId: tokenApi
operationId: getTokensEvmByContract
---
-The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
+Provides ERC-20 token contract metadata.
diff --git a/website/src/pages/mr/token-api/evm/get-transfers-evm.mdx b/website/src/pages/mr/token-api/evm/get-transfers-evm.mdx
new file mode 100644
index 000000000000..d8e73c90a03c
--- /dev/null
+++ b/website/src/pages/mr/token-api/evm/get-transfers-evm.mdx
@@ -0,0 +1,9 @@
+---
+title: Transfer Events
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvm
+---
+
+Provides ERC-20 & Native transfer events.
diff --git a/website/src/pages/mr/token-api/faq.mdx b/website/src/pages/mr/token-api/faq.mdx
index d7683aa77768..2caacf015c98 100644
--- a/website/src/pages/mr/token-api/faq.mdx
+++ b/website/src/pages/mr/token-api/faq.mdx
@@ -6,21 +6,37 @@ Get fast answers to easily integrate and scale with The Graph's high-performance
## General
-### What blockchains does the Token API support?
+### Which blockchains are supported by the Token API?
-Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+Currently, the Token API supports Ethereum, BNB Smart Chain (BSC), Polygon, Optimism, Base, Unichain, and Arbitrum One.
-### Why isn't my API key from The Graph Market working?
+### Does the Token API support NFTs?
-Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+Yes, The Graph Token API currently supports ERC-721 and ERC-1155 NFT token standards, with support for additional NFT standards planned. Endpoints are offered for ownership, collection stats, metadata, sales, holders, and transfer activity.
+
+### Do NFTs include off-chain data?
+
+NFT endpoints currently only include on-chain data. To get off-chain data, use the IPFS or HTTP links included in the NFT item response.
+
+### How do I authenticate requests to the Token API, and why doesn't my API key from The Graph Market work?
+
+Authentication is managed via API tokens obtained through [The Graph Market](https://thegraph.market/). If you're experiencing issues, make sure you're using the API Token generated from the API key, not the API key itself. An API token can be found on The Graph Market dashboard next to each API key. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
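+
+A typical authenticated request looks like the following (the endpoint path is illustrative; substitute your own token):
+
+```
+GET https://token-api.thegraph.com/balances/evm/{address}
+Authorization: Bearer <your-api-token>
+```
+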
### How current is the data provided by the API relative to the blockchain?
The API provides data up to the latest finalized block.
-### How do I authenticate requests to the Token API?
+### How do I retrieve token prices?
+
+By default, token prices are returned with token-related responses, including token balances, token transfers, token metadata, and token holders. Historical prices are available with the Open-High-Low-Close (OHLC) endpoints.
+
+### Does the Token API support historical token data?
-Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+The Token API supports historical token balances with the `/historical/balances/evm/{address}` endpoint. You can query historical price data by pool at `/ohlc/pools/evm/{pool}` and by contract at `/ohlc/prices/evm/{contract}`.
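+
+For example, a historical balance lookup combines the endpoint above with an optional `network_id` parameter (the address and parameters shown are illustrative):
+
+```
+GET https://token-api.thegraph.com/historical/balances/evm/0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045?network_id=mainnet
+Authorization: Bearer <your-api-token>
+```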
+
+### What exchanges does the Token API use for token prices?
+
+The Token API currently tracks prices on Uniswap v2 and Uniswap v3, with plans to support additional exchanges in the future.
### Does the Token API provide a client SDK?
@@ -34,9 +50,9 @@ Yes, more blockchains will be supported in the future. Please share feedback on
Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
-### Are there plans to support additional use cases such as NFTs?
+### Are there plans to support additional use cases?
-The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
## MCP / LLM / AI Topics
@@ -60,17 +76,25 @@ You can find the code for the MCP client in [The Graph's repo](https://github.co
Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
-### Are there rate limits or usage costs?\*\*
+### Why am I getting 500 errors?
+
+Networks that are temporarily unavailable on a given endpoint return a `bad_database_response` error with the message `Endpoint is currently not supported for this network`. Databases that are still ingesting data also produce this response.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. There are no specific rate limits, but reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What do I do if I notice data inconsistencies in the data returned by the Token API?
-During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+If you notice data inconsistencies, please report the issue on our [Discord](https://discord.gg/graphprotocol). Identifying edge cases can help make sure all data is accurate and up-to-date.
-### What networks are supported, and how do I specify them?
+### How do I specify a network?
-You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of the exact network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`, `unichain`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
### Why do I only see 10 results? How can I get more data?
-Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` and `page` (1-indexed) to return more results. For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
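+
+For instance, the second batch of 50 transfers could be fetched like this (the endpoint path is illustrative):
+
+```
+GET https://token-api.thegraph.com/transfers/evm?limit=50&page=2
+Authorization: Bearer <your-api-token>
+```
+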
### How do I fetch older transfer history?
diff --git a/website/src/pages/mr/token-api/monitoring/get-health.mdx b/website/src/pages/mr/token-api/monitoring/get-health.mdx
index 57a827b3343b..09f7b954dbf3 100644
--- a/website/src/pages/mr/token-api/monitoring/get-health.mdx
+++ b/website/src/pages/mr/token-api/monitoring/get-health.mdx
@@ -1,7 +1,9 @@
---
-title: Get health status of the API
+title: Health Status
template:
type: openApi
apiId: tokenApi
operationId: getHealth
---
+
+Provides the health status of the API.
diff --git a/website/src/pages/mr/token-api/monitoring/get-networks.mdx b/website/src/pages/mr/token-api/monitoring/get-networks.mdx
index 0ea3c485ddb9..f4b65492ed15 100644
--- a/website/src/pages/mr/token-api/monitoring/get-networks.mdx
+++ b/website/src/pages/mr/token-api/monitoring/get-networks.mdx
@@ -1,7 +1,9 @@
---
-title: Get supported networks of the API
+title: Supported Networks
template:
type: openApi
apiId: tokenApi
operationId: getNetworks
---
+
+Provides the list of networks supported by the API.
diff --git a/website/src/pages/mr/token-api/monitoring/get-version.mdx b/website/src/pages/mr/token-api/monitoring/get-version.mdx
index 0be6b7e92d04..b0c7594e8b5e 100644
--- a/website/src/pages/mr/token-api/monitoring/get-version.mdx
+++ b/website/src/pages/mr/token-api/monitoring/get-version.mdx
@@ -1,7 +1,9 @@
---
-title: Get the version of the API
+title: Version
template:
type: openApi
apiId: tokenApi
operationId: getVersion
---
+
+Provides the current version of the API.
diff --git a/website/src/pages/mr/token-api/quick-start.mdx b/website/src/pages/mr/token-api/quick-start.mdx
index 427bd0f2a59b..3fe3ee67c0f7 100644
--- a/website/src/pages/mr/token-api/quick-start.mdx
+++ b/website/src/pages/mr/token-api/quick-start.mdx
@@ -9,15 +9,15 @@ sidebarTitle: क्विक स्टार्ट
The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
-The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+The Token API provides access to onchain NFT and fungible token data, including live and historical balances, holders, prices, market data, token metadata, and token transfers. This API also uses the Model Context Protocol (MCP) to allow AI tools such as Claude to enrich raw blockchain data with contextual insights.
## Prerequisites
-Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+Before you begin, get a JWT API token by signing up on [The Graph Market](https://thegraph.market/). Make sure to use the JWT API Token, not the API key. Each API key can generate a new JWT API Token at any time.
## Authentication
-All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+All API endpoints are authenticated using a JWT API token inserted in the header as `Authorization: Bearer `.
```json
{
@@ -64,6 +64,20 @@ Make sure to replace `` with the JWT Token generated from your API key.
> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+## Chain and Feature Support
+
+| Network | evm-tokens | evm-uniswaps | evm-nft-tokens |
+| ---------------- | :---------: | :----------: | :------------: |
+| Ethereum Mainnet | ✅ | ✅ | ✅ |
+| BSC | ✅\* | ✅ | ✅ |
+| Base | ✅ | ✅ | ✅ |
+| Unichain | ✅ | ✅ | ✅ |
+| Arbitrum-One | Ingesting\* | Ingesting\* | Ingesting\* |
+| Optimism | ✅ | ✅ | ✅ |
+| Polygon | ✅ | ✅ | ✅ |
+
+\*Some chains are still in the process of syncing. You may encounter `bad_database_response` errors or incorrect response values until data is fully synced.
+
## Troubleshooting
If the API call fails, try printing out the full response object for additional error details. For example:
diff --git a/website/src/pages/nl/about.mdx b/website/src/pages/nl/about.mdx
index 7fde3b3d507d..7fda868aab9d 100644
--- a/website/src/pages/nl/about.mdx
+++ b/website/src/pages/nl/about.mdx
@@ -1,67 +1,46 @@
---
-title: Over The Graph
+title: About The Graph
+description: This page summarizes the core concepts and basics of The Graph Network.
---
## What is The Graph?
-The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+The Graph is a decentralized protocol for indexing and querying blockchain data across [90+ networks](/supported-networks/).
-## Understanding the Basics
+Its data services include:
-Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+- [Subgraphs](/subgraphs/developing/subgraphs/): Open APIs to query blockchain data that can be created or queried by anyone.
+- [Substreams](/substreams/introduction/): High-performance data streams for real-time blockchain processing, built with modular components.
+- [Token API Beta](/token-api/quick-start/): Instant access to standardized token data requiring zero setup.
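As a sketch of what querying a Subgraph looks like in practice, the request below builds (but does not send) a standard GraphQL POST body. The gateway URL pattern, `<API_KEY>`/`<SUBGRAPH_ID>` placeholders, and the `tokens` entity are illustrative assumptions, not a real deployment:

```javascript
// Build a GraphQL request for a hypothetical Subgraph endpoint.
// Endpoint and entity fields below are placeholders for illustration.
function buildSubgraphRequest(endpoint, query, variables = {}) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, variables }),
    },
  };
}

const request = buildSubgraphRequest(
  "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>",
  "{ tokens(first: 5) { id symbol } }"
);
console.log(request.options.body);
```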
-### Challenges Without The Graph
+### Why Blockchain Data is Difficult to Query
-In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+Reading data from blockchains requires processing smart contract events, parsing metadata from IPFS, and manually aggregating data.
-- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**.
+The result is slow performance, complex infrastructure, and scalability issues.
-- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+## How The Graph Solves This
-- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it.
+The Graph uses a combination of cutting-edge research, core dev expertise, and independent Indexers to make blockchain data accessible for developers.
-### Why is this a problem?
+Find the perfect data service for you:
-It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
+### 1. Custom Real-Time Data Streams
-Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
+**Use Case:** High-frequency trading, live analytics.
-Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data.
+- [Build Substreams](/substreams/introduction/)
+- [Browse Community Substreams](https://substreams.dev/)
-## The Graph Provides a Solution
+### 2. Instant Token Data
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
+**Use Case:** Wallet balances, liquidity pools, transfer events.
-Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
+- [Start with Token API](/token-api/quick-start/)
-### How The Graph Functions
+### 3. Flexible Historical Queries
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+**Use Case:** Dapp frontends, custom analytics.
-#### Specifics
-
-- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-
-- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-
-- When creating a Subgraph, you need to write a Subgraph manifest.
-
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-
-The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
-
-
-
-The flow follows these steps:
-
-1. A dapp adds data to Ethereum through a transaction on a smart contract.
-2. The smart contract emits one or more events while processing the transaction.
-3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
-4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
-5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
-
-## Next Steps
-
-The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-
-Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
+- [Explore Subgraphs](https://thegraph.com/explorer)
+- [Build Your Subgraph](/subgraphs/quick-start)
diff --git a/website/src/pages/nl/index.json b/website/src/pages/nl/index.json
index 200a19192e1c..faa064889c50 100644
--- a/website/src/pages/nl/index.json
+++ b/website/src/pages/nl/index.json
@@ -2,7 +2,7 @@
"title": "Start",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
+ "description": "The Graph is a blockchain data solution that powers applications, analytics, and AI on 90+ chains. The Graph's core products include the Token API for web3 apps, Subgraphs for indexing smart contracts, and Substreams for real-time and historical data streaming.",
"cta1": "How The Graph works",
"cta2": "Build your first subgraph"
},
@@ -19,10 +19,10 @@
"description": "Fetch and consume blockchain data with parallel execution.",
"cta": "Develop with Substreams"
},
- "sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "tokenApi": {
+ "title": "Token API",
+ "description": "Query token data and leverage native MCP support.",
+ "cta": "Develop with Token API"
},
"graphNode": {
"title": "Graph Node",
@@ -31,7 +31,7 @@
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
+ "description": "Extract blockchain data into flat files to speed up sync times.",
"cta": "Get started with Firehose"
}
},
@@ -58,6 +58,7 @@
"networks": "networks",
"completeThisForm": "complete this form"
},
+ "seeAllNetworks": "See all {0} networks",
"emptySearch": {
"title": "No networks found",
"description": "No networks match your search for \"{0}\"",
@@ -70,7 +71,7 @@
"subgraphs": "Subgraphs",
"substreams": "Substreams",
"firehose": "Firehose",
- "tokenapi": "Token API"
+ "tokenApi": "Token API"
}
},
"networkGuides": {
@@ -79,10 +80,22 @@
"title": "Subgraph quick start",
"description": "Kickstart your journey into subgraph development."
},
- "substreams": {
- "title": "Substreams",
+ "substreamsQuickStart": {
+ "title": "Substreams quick start",
"description": "Stream high-speed data for real-time indexing."
},
+ "tokenApi": {
+ "title": "The Graph's Token API",
+ "description": "Query token data and leverage native MCP support."
+ },
+ "graphExplorer": {
+ "title": "Graph Explorer",
+ "description": "Find and query existing blockchain data."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
"timeseries": {
"title": "Timeseries & Aggregations",
"description": "Learn to track metrics like daily volumes or user growth."
@@ -109,12 +122,16 @@
"title": "Substreams.dev",
"description": "Access tutorials, templates, and documentation to build custom data modules."
},
+ "customSubstreamsSinks": {
+ "title": "Custom Substreams Sinks",
+ "description": "Leverage existing Substreams sinks to access data."
+ },
"substreamsStarter": {
"title": "Substreams starter",
"description": "Leverage this boilerplate to create your first Substreams module."
},
"substreamsRepo": {
- "title": "Substreams repo",
+ "title": "Substreams GitHub repository",
"description": "Study, contribute to, or customize the core Substreams framework."
}
}
diff --git a/website/src/pages/nl/indexing/new-chain-integration.mdx b/website/src/pages/nl/indexing/new-chain-integration.mdx
index 670e06c752c3..cf698522f7e5 100644
--- a/website/src/pages/nl/indexing/new-chain-integration.mdx
+++ b/website/src/pages/nl/indexing/new-chain-integration.mdx
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
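The batch form of the receipt call can be sketched as follows. This illustrates the JSON-RPC 2.0 batch shape that such a request takes — not Graph Node's exact internal request — and the transaction hashes are placeholders:

```javascript
// Build a JSON-RPC 2.0 batch body for `eth_getTransactionReceipt`.
// A real setup would POST this array to the chain's JSON-RPC endpoint.
function buildReceiptBatch(txHashes) {
  return txHashes.map((hash, i) => ({
    jsonrpc: "2.0",
    id: i + 1,
    method: "eth_getTransactionReceipt",
    params: [hash],
  }));
}

const batch = buildReceiptBatch(["0xaaa...", "0xbbb..."]);
console.log(JSON.stringify(batch, null, 2));
```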
### 2. Firehose Integration
@@ -63,7 +63,7 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
> Do not change the env var name itself. It must remain `ethereum` even if the network name is different.
-3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/
+3. Run an IPFS node or use the one used by The Graph: https://ipfs.thegraph.com
## Substreams-powered Subgraphs
diff --git a/website/src/pages/nl/indexing/overview.mdx b/website/src/pages/nl/indexing/overview.mdx
index 89c13c8ab279..e056d7f467c4 100644
--- a/website/src/pages/nl/indexing/overview.mdx
+++ b/website/src/pages/nl/indexing/overview.mdx
@@ -110,12 +110,12 @@ Indexers may differentiate themselves by applying advanced techniques for making
- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -131,7 +131,7 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -331,7 +331,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
#### Getting started using Docker
@@ -708,42 +708,6 @@ Note that supported action types for allocation management have different input
Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
-#### Agora
-
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
-
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
-
-Example cost model:
-
-```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
-query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
-default => 0.1 * $SYSTEM_LOAD;
-```
-
-Example query costing using the above model:
-
-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-
-#### Applying the cost model
-
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
-
-```sh
-indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
-indexer cost set model my_model.agora
-```
-
## Interacting with the network
### Stake in the protocol
diff --git a/website/src/pages/nl/indexing/tooling/graph-node.mdx b/website/src/pages/nl/indexing/tooling/graph-node.mdx
index edde8a157fd3..56cea09618e3 100644
--- a/website/src/pages/nl/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/nl/indexing/tooling/graph-node.mdx
@@ -26,7 +26,7 @@ While some Subgraphs may just require a full node, some may have indexing featur
### IPFS Nodes
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.thegraph.com.
### Prometheus metrics server
@@ -66,7 +66,7 @@ createdb graph-node
cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
- --ipfs https://ipfs.network.thegraph.com
+ --ipfs https://ipfs.thegraph.com
```
### Getting started with Kubernetes
@@ -77,15 +77,20 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
-
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **WARNING: Never expose Graph Node's administrative ports to the public**.
+>
+> - Exposing Graph Node's internal ports can lead to a full system compromise.
+> - These ports must remain **private**: JSON-RPC Admin endpoint, Indexing Status API, and PostgreSQL.
+> - Do not expose 8000 (GraphQL HTTP) and 8001 (GraphQL WebSocket) directly to the internet. Even though these are used for GraphQL queries, they should ideally be proxied through `indexer-agent` and served behind a production-grade proxy.
+> - Lock everything else down with firewalls or private networks.
## Advanced Graph Node configuration
@@ -330,7 +335,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr
For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
-The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
@@ -340,6 +345,4 @@ For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates f
#### Removing Subgraphs
-> This is new functionality, which will be available in Graph Node 0.29.x
-
At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/nl/resources/claude-mcp.mdx b/website/src/pages/nl/resources/claude-mcp.mdx
new file mode 100644
index 000000000000..5b55bbcbe0a4
--- /dev/null
+++ b/website/src/pages/nl/resources/claude-mcp.mdx
@@ -0,0 +1,122 @@
+---
+title: Claude MCP
+---
+
+This guide walks you through configuring Claude Desktop to use The Graph ecosystem's [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) resources: Token API and Subgraph. These integrations allow you to interact with blockchain data through natural language conversations with Claude.
+
+## What You Can Do
+
+With these integrations, you can:
+
+- **Token API**: Access token and wallet information across multiple blockchains
+- **Subgraph**: Find relevant Subgraphs for specific keywords and contracts, explore Subgraph schemas, and execute GraphQL queries
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org/en/download/) installed and available in your path
+- [Claude Desktop](https://claude.ai/download) installed
+- API keys:
+ - Token API key from [The Graph Market](https://thegraph.market/)
+ - Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+## Configuration Steps
+
+### 1. Open Configuration File
+
+Navigate to your `claude_desktop_config.json` file:
+
+> **Claude Desktop** > **Settings** > **Developer** > **Edit Config**
+
+Paths by operating system:
+
+- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+### 2. Add Configuration
+
+Replace the contents of the existing config file with:
+
+```json
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": "ACCESS_TOKEN"
+ }
+ },
+ "subgraph": {
+ "command": "npx",
+ "args": ["mcp-remote", "--header", "Authorization:${AUTH_HEADER}", "https://subgraphs.mcp.thegraph.com/sse"],
+ "env": {
+ "AUTH_HEADER": "Bearer GATEWAY_API_KEY"
+ }
+ }
+ }
+}
+```
+
+### 3. Add Your API Keys
+
+Replace:
+
+- `ACCESS_TOKEN` with your Token API key from [The Graph Market](https://thegraph.market/)
+- `GATEWAY_API_KEY` with your Gateway API key from [Subgraph Studio](https://thegraph.com/studio/apikeys/)
+
+### 4. Save and Restart
+
+- Save the configuration file
+- Restart Claude Desktop
+
+### 5. Add The Graph Resources in Claude
+
+After configuration:
+
+1. Start a new conversation in Claude Desktop
+2. Click on the context menu (top right)
+3. For the Subgraph MCP, add "Subgraph Server Instructions" as a resource by entering `graphql://subgraph`
+
+> **Important**: You must manually add The Graph resources to your chat context for each conversation where you want to use them.
+
+### 6. Run Queries
+
+Here are some example queries you can try after setting up the resources:
+
+#### Subgraph Queries
+
+```
+What are the top pools in Uniswap?
+```
+
+```
+Who are the top Delegators of The Graph Protocol?
+```
+
+```
+Please make a bar chart for the number of active loans in Compound for the last 7 days
+```
+
+#### Token API Queries
+
+```
+Show me the current price of ETH
+```
+
+```
+What are the top tokens by market cap on Ethereum?
+```
+
+```
+Analyze this wallet address: 0x...
+```
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. **Verify Node.js Installation**: Ensure Node.js is correctly installed by running `node -v` in your terminal
+2. **Check API Keys**: Verify that your API keys are correctly entered in the configuration file
+3. **Enable Verbose Logging**: Add `--verbose true` to the args array in your configuration to see detailed logs
+4. **Restart Claude Desktop**: After making changes to the configuration, always restart Claude Desktop
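+For instance, enabling verbose logging on the `token-api` server entry from step 2 might look like the fragment below (a sketch mirroring the configuration above; the flag and its value are passed as two separate array items):
+
+```json
+{
+  "token-api": {
+    "command": "npx",
+    "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"],
+    "env": {
+      "ACCESS_TOKEN": "ACCESS_TOKEN"
+    }
+  }
+}
+```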
diff --git a/website/src/pages/nl/subgraphs/_meta-titles.json b/website/src/pages/nl/subgraphs/_meta-titles.json
index 3fd405eed29a..f095d374344f 100644
--- a/website/src/pages/nl/subgraphs/_meta-titles.json
+++ b/website/src/pages/nl/subgraphs/_meta-titles.json
@@ -2,5 +2,6 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Best Practices",
+ "mcp": "MCP"
}
diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5f964d3cbb78..edc1d88dc6cf 100644
--- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.1
+
+### Patch Changes
+
+- [#2006](https://github.com/graphprotocol/graph-tooling/pull/2006) [`3fb730b`](https://github.com/graphprotocol/graph-tooling/commit/3fb730bdaf331f48519e1d9fdea91d2a68f29fc9) Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - fix global variables in wasm
+
## 0.38.0
### Minor Changes
diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx
index 5be2530c4d6b..2e256ae18190 100644
--- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx
@@ -29,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Version | Release notes |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types