@@ -36,14 +36,14 @@ For **GCS**, before filling **GCS Endpoint**, you need to first grant the GCS bu
1. In the TiDB Cloud console, record the **Service Account ID**, which will be used to grant TiDB Cloud access to your GCS bucket.
- 
+ 
2. In the Google Cloud console, create an IAM role for your GCS bucket.
1. Sign in to the [Google Cloud console](https://console.cloud.google.com/).
2. Go to the [Roles](https://console.cloud.google.com/iam-admin/roles) page, and then click **Create role**.
- 
+ 
3. Enter a name, description, ID, and role launch stage for the role. The role name cannot be changed after the role is created.
4. Click **Add permissions**. Add the following permissions to the role, and then click **Add**.
@@ -55,13 +55,13 @@ For **GCS**, before filling **GCS Endpoint**, you need to first grant the GCS bu
- storage.objects.list
- storage.objects.update
- 
+ 
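The console steps above can also be sketched with the `gcloud` CLI. This is a hypothetical sketch, not the documented path: the role ID, title, and project ID are placeholders, and the permission list is assumed to match the one shown in the console step (verify it against your docs version).

```shell
# Hypothetical sketch: create the custom role with gcloud instead of the
# console. Role ID, title, and project ID are placeholders.
gcloud iam roles create changefeedSinkRole \
  --project=<your-gcp-project-id> \
  --title="TiDB Cloud Changefeed Sink" \
  --description="Lets TiDB Cloud write changefeed data to a GCS bucket" \
  --permissions=storage.buckets.get,storage.objects.create,storage.objects.delete,storage.objects.get,storage.objects.list,storage.objects.update \
  --stage=GA
```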
3. Go to the [Bucket](https://console.cloud.google.com/storage/browser) page, and choose a GCS bucket you want TiDB Cloud to access. Note that the GCS bucket must be in the same region as your TiDB cluster.
4. On the **Bucket details** page, click the **Permissions** tab, and then click **Grant access**.
- 
+ 
5. Fill in the following information to grant access to your bucket, and then click **Save**.
@@ -76,11 +76,11 @@ For **GCS**, before filling **GCS Endpoint**, you need to first grant the GCS bu
- To get a bucket's gsutil URI, click the copy button and add `gs://` as a prefix. For example, if the bucket name is `test-sink-gcs`, the URI would be `gs://test-sink-gcs/`.
- 
+ 
- To get a folder's gsutil URI, open the folder, click the copy button, and add `gs://` as a prefix. For example, if the bucket name is `test-sink-gcs` and the folder name is `changefeed-xxx`, the URI would be `gs://test-sink-gcs/changefeed-xxx/`.
- 
+ 
7. In the TiDB Cloud console, go to the Changefeed's **Configure Destination** page, and fill in the **bucket gsutil URI** field.
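As a quick sanity check, the URI format described above can be assembled in a shell. The bucket and folder names reuse the examples from the text; the commented-out `gsutil ls` call assumes you have `gsutil` installed and access to the bucket.

```shell
# Build gsutil URIs from the example bucket and folder names in the text.
BUCKET="test-sink-gcs"
FOLDER="changefeed-xxx"

BUCKET_URI="gs://${BUCKET}/"
FOLDER_URI="gs://${BUCKET}/${FOLDER}/"

echo "${BUCKET_URI}"   # gs://test-sink-gcs/
echo "${FOLDER_URI}"   # gs://test-sink-gcs/changefeed-xxx/

# To confirm the URI resolves (requires gsutil and bucket access):
# gsutil ls "${BUCKET_URI}"
```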
@@ -96,7 +96,7 @@ Click **Next** to establish the connection from the TiDB Cloud Dedicated cluster
1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/ticdc-filter#changefeed-log-filters).
- 
+ 
- **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules in the box on the right. You can add up to 100 filter rules.
- **Tables with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
@@ -143,7 +143,7 @@ Click **Next** to establish the connection from the TiDB Cloud Dedicated cluster
    - **Flush Interval**: set to 60 seconds by default, adjustable within a range of 2 seconds to 10 minutes.
- **File Size**: set to 64 MB by default, adjustable within a range of 1 MB to 512 MB.
- 
+ 
> **Note:**
>
diff --git a/tidb-cloud/config-s3-and-gcs-access.md b/tidb-cloud/config-s3-and-gcs-access.md
index 38b1e3489545a..d0cf133da2010 100644
--- a/tidb-cloud/config-s3-and-gcs-access.md
+++ b/tidb-cloud/config-s3-and-gcs-access.md
@@ -44,11 +44,11 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows:
1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).
2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
- 
+ 
3. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), click **Policies** in the navigation pane on the left, and then click **Create Policy**.
- 
+ 
4. On the **Create policy** page, click the **JSON** tab.
5. Copy the following access policy template and paste it to the policy text field.
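The policy template itself is not reproduced in this diff. As a rough sketch only (not the exact template — the console page shows the authoritative version), a read-only import policy typically needs object reads on the bucket contents plus listing and location on the bucket itself, where `<your-bucket-arn>` is the ARN copied in the earlier step:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "<your-bucket-arn>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "<your-bucket-arn>"
    }
  ]
}
```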
@@ -111,7 +111,7 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows:
1. In the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), click **Roles** in the navigation pane on the left, and then click **Create role**.
- 
+ 
2. To create a role, fill in the following information:
@@ -123,7 +123,7 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows:
4. Under **Role details**, set a name for the role, and then click **Create role** in the lower-right corner. After the role is created, the list of roles is displayed.
5. In the list of roles, click the name of the role that you just created to go to its summary page, and then copy the role ARN.
- 
+ 
4. In the TiDB Cloud console, go to the **Data Import** page where you get the TiDB Cloud account ID and external ID, and then paste the role ARN to the **Role ARN** field.
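Equivalently — as a hypothetical CLI sketch, not the documented console path — the role in this section can be created with the AWS CLI. The account ID, external ID, role name, and policy ARN below are placeholders for the values shown on the **Data Import** page and in the previous step.

```shell
# Build the trust policy that lets the TiDB Cloud account assume the role
# only when it presents the matching external ID. Both values below are
# placeholders for what the Data Import page shows.
TIDB_CLOUD_ACCOUNT_ID="123456789012"
TIDB_CLOUD_EXTERNAL_ID="example-external-id"

cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${TIDB_CLOUD_ACCOUNT_ID}:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "${TIDB_CLOUD_EXTERNAL_ID}" } }
    }
  ]
}
EOF

# Then create the role and attach the policy from the previous section
# (not run here; requires AWS credentials):
#   aws iam create-role --role-name tidb-cloud-import \
#     --assume-role-policy-document file://trust-policy.json
#   aws iam attach-role-policy --role-name tidb-cloud-import \
#     --policy-arn <your-policy-arn>
```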
@@ -179,7 +179,7 @@ To allow TiDB Cloud to access the source data in your GCS bucket, you need to co
1. Sign in to the [Google Cloud console](https://console.cloud.google.com/).
2. Go to the [Roles](https://console.cloud.google.com/iam-admin/roles) page, and then click **CREATE ROLE**.
- 
+ 
3. Enter a name, description, ID, and role launch stage for the role. The role name cannot be changed after the role is created.
4. Click **ADD PERMISSIONS**.
@@ -191,13 +191,13 @@ To allow TiDB Cloud to access the source data in your GCS bucket, you need to co
You can copy a permission name to the **Enter property name or value** field as a filter query, and choose the name in the filter result. To add the three permissions, you can use **OR** between the permission names.
- 
+ 
3. Go to the [Bucket](https://console.cloud.google.com/storage/browser) page, and click the name of the GCS bucket you want TiDB Cloud to access.
4. On the **Bucket details** page, click the **PERMISSIONS** tab, and then click **GRANT ACCESS**.
- 
+ 
5. Fill in the following information to grant access to your bucket, and then click **SAVE**.
@@ -212,12 +212,12 @@ To allow TiDB Cloud to access the source data in your GCS bucket, you need to co
If you want to copy a file's gsutil URI, select the file, click **Open object overflow menu**, and then click **Copy gsutil URI**.
- 
+ 
If you want to use a folder's gsutil URI, open the folder, and then click the copy button following the folder name to copy the folder name. After that, you need to add `gs://` to the beginning and `/` to the end of the name to get a correct URI of the folder.
For example, if the folder name is `tidb-cloud-source-data`, you need to use `gs://tidb-cloud-source-data/` as the URI.
- 
+ 
7. In the TiDB Cloud console, go to the **Data Import** page where you get the Google Cloud Service Account ID, and then paste the GCS bucket gsutil URI to the **Bucket gsutil URI** field. For example, paste `gs://tidb-cloud-source-data/`.
diff --git a/tidb-cloud/csv-config-for-import-data.md b/tidb-cloud/csv-config-for-import-data.md
index fa6c7f556937c..febff8e218c36 100644
--- a/tidb-cloud/csv-config-for-import-data.md
+++ b/tidb-cloud/csv-config-for-import-data.md
@@ -9,7 +9,7 @@ This document introduces CSV configurations for the Import Data service on TiDB
The following is the CSV Configuration window when you use the Import Data service on TiDB Cloud to import CSV files. For more information, see [Import CSV Files from Amazon S3 or GCS into TiDB Cloud](/tidb-cloud/import-csv-files.md).
-
+
## Separator
diff --git a/tidb-cloud/data-service-integrations.md b/tidb-cloud/data-service-integrations.md
index 32b6ddc0f72a7..06d114c1eeaaf 100644
--- a/tidb-cloud/data-service-integrations.md
+++ b/tidb-cloud/data-service-integrations.md
@@ -19,7 +19,7 @@ To integrate your Data App with GPTs, perform the following steps:
2. In the left pane, locate your target Data App, click the name of your target Data App, and then click the **Integrations** tab.
3. In the **Integrate with GPTs** area, click **Get Configuration**.
- 
+ 
4. In the displayed dialog box, you can see the following fields:
@@ -29,7 +29,7 @@ To integrate your Data App with GPTs, perform the following steps:
c. **API Key Encoded**: copy the base64 encoded string equivalent to the API key you have provided.
- 
+ 
5. Use the copied API Specification URL and the encoded API key in your GPT configuration.
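For reference, a base64-encoded key can be reproduced locally. This sketch assumes the API key is the literal string you provided (the key below is made up); the console's encoded field should match the output of the same encoding.

```shell
# Base64-encode an API key string. printf avoids the trailing newline that
# echo would otherwise include in the encoded output.
API_KEY="example-public-key:example-private-key"
ENCODED=$(printf '%s' "$API_KEY" | base64)
echo "$ENCODED"
```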
diff --git a/tidb-cloud/dev-guide-bi-looker-studio.md b/tidb-cloud/dev-guide-bi-looker-studio.md
index e1b3c8b357b1a..a15e9952f7c13 100644
--- a/tidb-cloud/dev-guide-bi-looker-studio.md
+++ b/tidb-cloud/dev-guide-bi-looker-studio.md
@@ -78,7 +78,7 @@ If you encounter any issues during import, you can cancel this import task as fo
- **Password**: enter the `PASSWORD` parameter from the TiDB Cloud Serverless connection dialog.
- **Enable SSL**: select this option, and then click the upload icon to the right of **MySQL SSL Client Configuration Files** to upload the CA file downloaded from [Step 2](#step-2-get-the-connection-information-for-your-cluster).
- 
+ 
4. Click **AUTHENTICATE**.
@@ -90,7 +90,7 @@ Now, you can use the TiDB cluster as a data source and create a simple chart wit
1. In the right pane, click **CUSTOM QUERY**.
- 
+ 
2. Copy the following code to the **Enter Custom Query** area, and then click **Add** in the lower-right corner.
@@ -124,7 +124,7 @@ Now, you can use the TiDB cluster as a data source and create a simple chart wit
 Then, you can see a combo chart similar to the following:
-
+
## Next steps
diff --git a/tidb-cloud/integrate-tidbcloud-with-airbyte.md b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
index a600ba1186694..962fd142b1cfd 100644
--- a/tidb-cloud/integrate-tidbcloud-with-airbyte.md
+++ b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
@@ -66,7 +66,7 @@ Conveniently, the steps are the same for setting TiDB as the source and the dest
 4. Click **Set up source** or **Set up destination** to complete creating the connector. The following screenshot shows the configuration of TiDB as the source.
-
+
You can use any combination of sources and destinations, such as TiDB to Snowflake, and CSV files to TiDB.
@@ -92,13 +92,13 @@ The following steps use TiDB as both a source and a destination. Other connector
> - In Incremental mode, Airbyte only reads records added to the source since the last sync job. The first sync using Incremental mode is equivalent to Full Refresh mode.
 > - In Full Refresh mode, Airbyte reads all records in the source and replicates them to the destination in every sync task. You can set the sync mode individually for every table (named **Namespace** in Airbyte).
- 
+ 
7. Set **Normalization & Transformation** to **Normalized tabular data** to use the default normalization mode, or you can set the dbt file for your job. For more information about normalization, refer to [Transformations and Normalization](https://docs.airbyte.com/operator-guides/transformation-and-normalization/transformations-with-dbt).
8. Click **Set up connection**.
9. Once the connection is established, click **ENABLED** to activate the synchronization task. You can also click **Sync now** to sync immediately.
-
+
## Limitations
diff --git a/tidb-cloud/integrate-tidbcloud-with-n8n.md b/tidb-cloud/integrate-tidbcloud-with-n8n.md
index eb4a615bb2070..1741c65fae4d8 100644
--- a/tidb-cloud/integrate-tidbcloud-with-n8n.md
+++ b/tidb-cloud/integrate-tidbcloud-with-n8n.md
@@ -74,7 +74,7 @@ This example usage workflow would use the following nodes:
The final workflow should look like the following image.
-
+
### (Optional) Create a TiDB Cloud Serverless cluster
diff --git a/tidb-cloud/integrate-tidbcloud-with-vercel.md b/tidb-cloud/integrate-tidbcloud-with-vercel.md
index f576c97d633e5..d580fc28ba314 100644
--- a/tidb-cloud/integrate-tidbcloud-with-vercel.md
+++ b/tidb-cloud/integrate-tidbcloud-with-vercel.md
@@ -95,7 +95,7 @@ The detailed steps are as follows:
7. Choose whether to enable **Branching** to create new branches for preview environments.
8. Click **Add Integration and Return to Vercel**.
-
+
6. Get back to your Vercel dashboard, go to your Vercel project, click **Settings** > **Environment Variables**, and check whether the environment variables for your target TiDB cluster have been automatically added.
@@ -139,7 +139,7 @@ The detailed steps are as follows:
4. Select your target TiDB Data App.
6. Click **Add Integration and Return to Vercel**.
-
+
6. Get back to your Vercel dashboard, go to your Vercel project, click **Settings** > **Environment Variables**, and check whether the environment variables for your target Data App have been automatically added.
@@ -163,7 +163,7 @@ If you have installed [TiDB Cloud Vercel integration](https://vercel.com/integra
3. Click **Configure**.
4. Click **Add Link** or **Remove** to add or remove connections.
- 
+ 
When you remove a connection, the environment variables set by the integration workflow are removed from the Vercel project, too. However, this action does not affect the data of the TiDB Cloud Serverless cluster.
@@ -192,15 +192,15 @@ After you push changes to the Git repository, Vercel will trigger a preview depl
2. Add some changes and push the changes to the remote repository.
3. Vercel will trigger a preview deployment for the new branch.
- 
+ 
1. During the deployment, TiDB Cloud integration will automatically create a TiDB Cloud Serverless branch with the same name as the Git branch. If the TiDB Cloud Serverless branch already exists, TiDB Cloud integration will skip this step.
- 
+ 
2. After the TiDB Cloud Serverless branch is ready, TiDB Cloud integration will set environment variables in the preview deployment for the Vercel project.
- 
+ 
3. TiDB Cloud integration will also register a blocking check to wait for the TiDB Cloud Serverless branch to be ready. You can rerun the check manually.
4. After the check is passed, you can visit the preview deployment to see the changes.
@@ -224,7 +224,7 @@ After you push changes to the Git repository, Vercel will trigger a preview depl
2. Go to your Vercel dashboard > Vercel project > **Settings** > **Environment Variables**, and then [declare each environment variable value](https://vercel.com/docs/concepts/projects/environment-variables#declare-an-environment-variable) according to the connection information of your TiDB cluster.
- 
+ 
Here we use a Prisma application as an example. The following is a datasource setting in the Prisma schema file for a TiDB Cloud Serverless cluster:
@@ -249,7 +249,7 @@ You can get the information of ``, ``, ``, ``, a
2. Go to your Vercel dashboard > Vercel project > **Settings** > **Environment Variables**, and then [declare each environment variable value](https://vercel.com/docs/concepts/projects/environment-variables#declare-an-environment-variable) according to the connection information of your Data App.
- 
+ 
In Vercel, you can declare the environment variables as follows.
diff --git a/tidb-cloud/integrate-tidbcloud-with-zapier.md b/tidb-cloud/integrate-tidbcloud-with-zapier.md
index b31a1b17d0e94..8b6166a52c30a 100644
--- a/tidb-cloud/integrate-tidbcloud-with-zapier.md
+++ b/tidb-cloud/integrate-tidbcloud-with-zapier.md
@@ -65,7 +65,7 @@ In the editor page, you can see the trigger and action. Click the trigger to set
2. On the login page, fill in your public key and private key. To get the TiDB Cloud API key, follow the instructions in [TiDB Cloud API documentation](https://docs.pingcap.com/tidbcloud/api/v1beta#section/Authentication/API-Key-Management).
3. Click **Continue**.
- 
+ 
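Before wiring the keys into Zapier, you can check that the pair works at all. This is a hedged sketch assuming the v1beta Digest authentication described in the linked API documentation; replace both placeholders with your own keys.

```shell
# List projects with the TiDB Cloud API to verify the key pair.
curl --digest \
  --user '<public-key>:<private-key>' \
  'https://api.tidbcloud.com/api/v1beta/projects'
```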
3. Set up action
@@ -73,19 +73,19 @@ In the editor page, you can see the trigger and action. Click the trigger to set
1. From the drop-down list, choose the project name and cluster name. The connection information of your cluster will be displayed automatically.
- 
+ 
2. Enter your password.
3. From the drop-down list, choose the database.
- 
+ 
Zapier queries the databases from TiDB Cloud using the password you entered. If no database is found in your cluster, re-enter your password and refresh the page.
4. In **The table you want to search** box, fill in `github_global_event`. If the table does not exist, the template uses the following DDL to create the table. Click **Continue**.
- 
+ 
4. Test action
@@ -101,7 +101,7 @@ In the editor page, you can see the trigger and action. Click the trigger to set
Select the account you have chosen when you set up the `Find Table in TiDB Cloud` action. Click **Continue**.
- 
+ 
3. Set up action
@@ -109,11 +109,11 @@ In the editor page, you can see the trigger and action. Click the trigger to set
2. In the **Table Name**, choose the **github_global_event** table from the drop-down list. The columns of the table are displayed.
- 
+ 
3. In the **Columns** box, choose the corresponding data from the trigger. Fill in all the columns, and click **Continue**.
- 
+ 
4. Test action
@@ -133,7 +133,7 @@ In the editor page, you can see the trigger and action. Click the trigger to set
Click **Publish** to publish your zap. You can see the zap running on the [home page](https://zapier.com/app/zaps).
-
+
Now, this zap will automatically record all the global events from your GitHub account into TiDB Cloud.
@@ -231,6 +231,6 @@ Make sure that your custom query executes in less than 30 seconds. Otherwise, yo
2. In the `set up action` step, tick the `Create TiDB Cloud Table if it doesn’t exist yet?` box to enable `find and create`.
- 
+ 
This workflow creates a table if it does not exist yet. Note that the table will be created directly if you test your action.
diff --git a/tidb-cloud/migrate-from-mysql-using-aws-dms.md b/tidb-cloud/migrate-from-mysql-using-aws-dms.md
index 4a7fc78e0ddd7..b341f4b1814ee 100644
--- a/tidb-cloud/migrate-from-mysql-using-aws-dms.md
+++ b/tidb-cloud/migrate-from-mysql-using-aws-dms.md
@@ -37,7 +37,7 @@ Before you start the migration, make sure you have read the following:
2. Click **Create replication instance**.
- 
+ 
3. Fill in an instance name, ARN, and description.
@@ -60,19 +60,19 @@ Before you start the migration, make sure you have read the following:
1. In the [AWS DMS console](https://console.aws.amazon.com/dms/v2/home), click the replication instance that you just created. Copy the public and private network IP addresses as shown in the following screenshot.
- 
+ 
2. Configure the security group rules for Amazon RDS. In this example, add the public and private IP addresses of the AWS DMS instance to the security group.
- 
+ 
3. Click **Create endpoint** to create the source database endpoint.
- 
+ 
4. In this example, click **Select RDS DB instance** and then select the source RDS instance. If the source database is a self-hosted MySQL, you can skip this step and fill in the information in the following steps.
- 
+ 
5. Configure the following information:
- **Endpoint identifier**: create a label for the source endpoint to help you identify it in the subsequent task configuration.
@@ -83,19 +83,19 @@ Before you start the migration, make sure you have read the following:
- Fill in the source database **Port**, **Username**, and **Password**.
- **Secure Socket Layer (SSL) mode**: you can enable SSL mode as needed.
- 
+ 
6. Use default values for **Endpoint settings**, **KMS key**, and **Tags**. In the **Test endpoint connection (optional)** section, it is recommended to select the same VPC as the source database to simplify the network configuration. Select the corresponding replication instance, and then click **Run test**. The status needs to be **successful**.
7. Click **Create endpoint**.
- 
+ 
## Step 3. Create the target database endpoint
1. In the [AWS DMS console](https://console.aws.amazon.com/dms/v2/home), click the replication instance that you just created. Copy the public and private network IP addresses as shown in the following screenshot.
- 
+ 
2. In the TiDB Cloud console, go to the [**Clusters**](https://tidbcloud.com/console/clusters) page, click the name of your target cluster, and then click **Connect** in the upper-right corner to get the TiDB Cloud database connection information.
@@ -113,7 +113,7 @@ Before you start the migration, make sure you have read the following:
- **Descriptive Amazon Resource Name (ARN) - optional**: create a friendly name for the default DMS ARN.
- **Target engine**: select **MySQL**.
- 
+ 
8. In the [AWS DMS console](https://console.aws.amazon.com/dms/v2/home), click **Create endpoint** to create the target database endpoint, and then configure the following information:
- **Server name**: fill in the hostname of your TiDB cluster, which is the `-h` information you have recorded.
@@ -123,23 +123,23 @@ Before you start the migration, make sure you have read the following:
- **Secure Socket Layer (SSL) mode**: select **Verify-ca**.
- Click **Add new CA certificate** to import the CA file downloaded from the TiDB Cloud console in the previous steps.
- 
+ 
9. Import the CA file.
- 
+ 
10. Use the default values for **Endpoint settings**, **KMS key**, and **Tags**. In the **Test endpoint connection (optional)** section, select the same VPC as the source database. Select the corresponding replication instance, and then click **Run test**. The status needs to be **successful**.
11. Click **Create endpoint**.
- 
+ 
## Step 4. Create a database migration task
1. In the AWS DMS console, go to the [Data migration tasks](https://console.aws.amazon.com/dms/v2/home#tasks) page. Switch to your region. Then click **Create task** in the upper-right corner of the window.
- 
+ 
2. Configure the following information:
- **Task identifier**: fill in a name for the task. It is recommended to use a name that is easy to remember.
@@ -149,7 +149,7 @@ Before you start the migration, make sure you have read the following:
- **Target database endpoint**: select the target database endpoint that you just created.
- **Migration type**: select a migration type as needed. In this example, select **Migrate existing data and replicate ongoing changes**.
- 
+ 
3. Configure the following information:
- **Editing mode**: select **Wizard**.
@@ -161,23 +161,23 @@ Before you start the migration, make sure you have read the following:
- **Turn on validation**: select it according to your needs.
- **Task logs**: select **Turn on CloudWatch logs** for troubleshooting in future. Use the default settings for the related configurations.
- 
+ 
4. In the **Table mappings** section, specify the database to be migrated.
     The schema name is the database name in the Amazon RDS instance. The default value of **Source name** is `%`, which means that all databases in Amazon RDS will be migrated to TiDB. This would cause system databases such as `mysql` and `sys` in Amazon RDS to be migrated to the TiDB cluster and result in task failure. Therefore, it is recommended to fill in a specific database name or filter out all system databases. For example, according to the settings in the following screenshot, only the database named `franktest` and all the tables in that database will be migrated.
- 
+ 
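In DMS, the wizard settings above correspond to a selection rule in the table-mapping JSON. A sketch matching the `franktest` example follows; the rule ID and name are arbitrary.

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-franktest",
      "object-locator": {
        "schema-name": "franktest",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```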
5. Click **Create task** in the lower-right corner.
6. Go back to the [Data migration tasks](https://console.aws.amazon.com/dms/v2/home#tasks) page. Switch to your region. You can see the status and progress of the task.
- 
+ 
If you encounter any issues or failures during the migration, you can check the log information in [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) to troubleshoot the issues.
-
+
## See also
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index ca58ca4a5e8a0..94815091b6f41 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -252,8 +252,8 @@ For detailed instructions about incremental data migration, see [Migrate Only In
2. Click **Next**.
diff --git a/tidb-cloud/migrate-from-op-tidb.md b/tidb-cloud/migrate-from-op-tidb.md
index 7bd0929305f2c..3c5e746d18fc6 100644
--- a/tidb-cloud/migrate-from-op-tidb.md
+++ b/tidb-cloud/migrate-from-op-tidb.md
@@ -145,9 +145,9 @@ Create an access key in the AWS console. See [Create an access key](https://docs
3. To create an access key, click **Create access key**. Then choose **Download .csv file** to save the access key ID and secret access key to a CSV file on your computer. Store the file in a secure location. You will not have access to the secret access key again after this dialog box closes. After you download the CSV file, choose **Close**. When you create an access key, the key pair is active by default, and you can use the pair right away.
- 
+ 
- 
+ 
#### Step 3. Export data from the upstream TiDB cluster to Amazon S3 using Dumpling
@@ -164,11 +164,11 @@ Do the following to export data from the upstream TiDB cluster to Amazon S3 usin
The following screenshot shows how to get the S3 bucket URI information:
- 
+ 
The following screenshot shows how to get the region information:
- 
+ 
3. Run Dumpling to export data to the Amazon S3 bucket.
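A sketch of such a Dumpling invocation; the host, credentials, bucket, region, and the thread/row/file-size tuning are all placeholders to adapt.

```shell
# Export all data as SQL files directly to the S3 bucket.
tiup dumpling \
  -h <upstream-tidb-host> -P 4000 -u root -p '<password>' \
  --filetype sql \
  -t 8 -r 200000 -F 256MiB \
  -o "s3://<your-bucket>/<your-prefix>" \
  --s3.region "<your-s3-region>"
```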
@@ -205,7 +205,7 @@ After you export data from the TiDB Self-Managed cluster to Amazon S3, you need
The following screenshot shows how to get the Account ID and External ID:
- 
+ 
2. Configure access permissions for Amazon S3. Usually you need the following read-only permissions:
@@ -277,7 +277,7 @@ To replicate incremental data, do the following:
1. Get the start time of the incremental data migration. For example, you can get it from the metadata file of the full data migration.
- 
+ 
2. Grant TiCDC to connect to TiDB Cloud. In the [TiDB Cloud console](https://tidbcloud.com/console/clusters), locate the cluster, and then go to the **Networking** page. Click **Add IP Address** > **Use IP addresses**. Fill in the public IP address of the TiCDC component in the **IP Address** field, and click **Confirm** to save it. Now TiCDC can access TiDB Cloud. For more information, see [Configure an IP Access List](/tidb-cloud/configure-ip-access-list.md).
@@ -334,7 +334,7 @@ To replicate incremental data, do the following:
tiup cdc cli changefeed list --pd=http://172.16.6.122:2379
```
- 
+ 
- Verify the replication. Write a new record to the upstream cluster, and then check whether the record is replicated to the downstream TiDB Cloud cluster.
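A minimal sketch of that verification using the `mysql` client; the hosts, credentials, and the `test.migration_check` table are hypothetical.

```shell
# Write one row upstream, then read it back from the TiDB Cloud cluster.
mysql -h <upstream-tidb-host> -P 4000 -u root -p'<password>' \
  -e "INSERT INTO test.migration_check (id, note) VALUES (1, 'replicated?');"

mysql -h <tidb-cloud-host> -P 4000 -u <tidb-cloud-user> -p'<password>' \
  --ssl-mode=VERIFY_IDENTITY \
  -e "SELECT * FROM test.migration_check WHERE id = 1;"
```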
diff --git a/tidb-cloud/migrate-from-oracle-using-aws-dms.md b/tidb-cloud/migrate-from-oracle-using-aws-dms.md
index 56518143e9753..10226f8447a58 100644
--- a/tidb-cloud/migrate-from-oracle-using-aws-dms.md
+++ b/tidb-cloud/migrate-from-oracle-using-aws-dms.md
@@ -29,7 +29,7 @@ At a high level, follow the following steps:
The following diagram illustrates the high-level architecture.
-
+
## Prerequisites
@@ -48,7 +48,7 @@ Log in to the [AWS console](https://console.aws.amazon.com/vpc/home#vpcs:) and c
For instructions about how to create a VPC, see [Creating a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#Create-VPC).
-
+
## Step 2. Create an Oracle DB instance
@@ -56,7 +56,7 @@ Create an Oracle DB instance in the VPC you just created, and remember the passw
For instructions about how to create an Oracle DB instance, see [Creating an Oracle DB instance and connecting to a database on an Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.Oracle.html).
-
+
## Step 3. Prepare the table data in Oracle
@@ -67,7 +67,7 @@ Using the following scripts to create and populate 10000 rows of data in the git
After you finish executing the SQL script, check the data in Oracle. The following example uses [DBeaver](https://dbeaver.io/) to query the data:
-
+
## Step 4. Create a TiDB Cloud Serverless cluster
@@ -87,7 +87,7 @@ After you finish executing the SQL script, check the data in Oracle. The followi
2. Create an AWS DMS replication instance with `dms.t3.large` in the VPC.
- 
+ 
> **Note:**
>
@@ -101,11 +101,11 @@ After you finish executing the SQL script, check the data in Oracle. The followi
The following screenshot shows the configurations of the source endpoint.
- 
+ 
The following screenshot shows the configurations of the target endpoint.
- 
+ 
> **Note:**
>
@@ -123,25 +123,25 @@ For more information, see [Migrating your source schema to your target database
1. In the AWS DMS console, go to the [Data migration tasks](https://console.aws.amazon.com/dms/v2/home#tasks) page. Switch to your region. Then click **Create task** in the upper right corner of the window.
- 
+ 
2. Create a database migration task and specify the **Selection rules**:
- 
+ 
- 
+ 
3. Create the task, start it, and then wait for the task to finish.
4. Click the **Table statistics** to check the table. The schema name is `ADMIN`.
- 
+ 
## Step 9. Check data in the downstream TiDB cluster
Connect to the [TiDB Cloud Serverless cluster](https://tidbcloud.com/console/clusters/create-cluster) and check the `admin.github_event` table data. As shown in the following screenshot, DMS successfully migrated table `github_events` and 10000 rows of data.
-
+
## Summary
@@ -149,7 +149,7 @@ With AWS DMS, you can successfully migrate data from any upstream AWS RDS databa
If you encounter any issues or failures during the migration, you can check the log information in [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) to troubleshoot the issues.
-
+
## See also
diff --git a/tidb-cloud/recovery-group-failover.md b/tidb-cloud/recovery-group-failover.md
index 38d84ed7246e6..ba51ae3bec178 100644
--- a/tidb-cloud/recovery-group-failover.md
+++ b/tidb-cloud/recovery-group-failover.md
@@ -15,7 +15,7 @@ When the regional outage is resolved, the ability to reverse the replication fro
Before performing a failover, a recovery group should have been created and be successfully replicating to the secondary cluster. For more information, see [Get Started with Recovery Groups](/tidb-cloud/recovery-group-get-started.md).
-
+
## Failover databases using a recovery group
@@ -37,7 +37,7 @@ In the event of a disaster, you can use the recovery group to failover databases
6. Confirm that you understand the potentially disruptive nature of a failover by typing **Failover** into the confirmation entry and clicking **I understand, failover group** to begin the failover.
- 
+ 
## Reprotect databases using a recovery group
@@ -45,7 +45,7 @@ After a failover completes, the replica databases on the secondary cluster are n
If the original primary cluster that was affected by the disaster can be brought online again, you can re-establish replication from the recovery region back to the original region using the **Reprotect** action.
-
+
1. In the [TiDB Cloud console](https://tidbcloud.com/), click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**.
@@ -66,4 +66,4 @@ If the original primary cluster that was affected by the disaster can be brought
5. Confirm the reprotect operation by clicking **Reprotect** to begin the reprotect operation.
- 
+ 
diff --git a/tidb-cloud/recovery-group-overview.md b/tidb-cloud/recovery-group-overview.md
index ebcf5a0a2c0c2..83ac35ca40b59 100644
--- a/tidb-cloud/recovery-group-overview.md
+++ b/tidb-cloud/recovery-group-overview.md
@@ -11,7 +11,7 @@ A TiDB Cloud recovery group allows you to replicate your databases between TiDB
A recovery group consists of a set of replicated databases that can be failed over together between two TiDB Cloud Dedicated clusters. Each recovery group is assigned a primary cluster, and databases on this primary cluster are associated with the group and are then replicated to the secondary cluster.
-
+
- Recovery Group: a group of databases that are replicated between two clusters
- Primary Cluster: the cluster where the database is actively written by the application
diff --git a/tidb-cloud/serverless-external-storage.md b/tidb-cloud/serverless-external-storage.md
index a17a130528016..cf77052e66687 100644
--- a/tidb-cloud/serverless-external-storage.md
+++ b/tidb-cloud/serverless-external-storage.md
@@ -53,7 +53,7 @@ It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/
5. After the CloudFormation stack is executed, you can click the **Outputs** tab and find the Role ARN value in the **Value** column.
- 
+ 
If you have any trouble creating a role ARN with AWS CloudFormation, you can take the following steps to create one manually:
@@ -68,11 +68,11 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
- 
+ 
3. Open the [IAM console](https://console.aws.amazon.com/iam/), click **Policies** in the left navigation pane, and then click **Create Policy**.
- 
+ 
4. On the **Create policy** page, click the **JSON** tab.
@@ -138,7 +138,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
1. In the [IAM console](https://console.aws.amazon.com/iam/), click **Roles** in the left navigation pane, and then click **Create role**.
- 
+ 
2. To create a role, fill in the following information:
@@ -152,7 +152,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
5. In the list of roles, click the name of the role that you just created to go to its summary page, and then you can get the role ARN.
- 
+ 
diff --git a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md
index 2ae40e9d64ff9..e4003df3f54f3 100644
--- a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md
+++ b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md
@@ -18,7 +18,7 @@ Powered by Google Cloud Private Service Connect, the endpoint connection is secu
The architecture of the private endpoint is as follows:
-
+
For more detailed definitions of the private endpoint and endpoint service, see the following Google Cloud documents:
diff --git a/tidb-cloud/set-up-private-endpoint-connections-serverless.md b/tidb-cloud/set-up-private-endpoint-connections-serverless.md
index a2dddbcc98b2d..a2ce31e782389 100644
--- a/tidb-cloud/set-up-private-endpoint-connections-serverless.md
+++ b/tidb-cloud/set-up-private-endpoint-connections-serverless.md
@@ -18,7 +18,7 @@ Powered by AWS PrivateLink, the endpoint connection is secure and private, and d
The architecture of the private endpoint is as follows:
-
+
For more detailed definitions of the private endpoint and endpoint service, see the following AWS documents:
@@ -61,7 +61,7 @@ To use the AWS Management Console to create a VPC interface endpoint, perform th
The **Create endpoint** page is displayed.
- 
+ 
3. Select **Other endpoint services**.
4. Enter the service name that you found in [step 1](#step-1-choose-a-tidb-cluster).
@@ -119,7 +119,7 @@ After you have created the interface endpoint, go back to the TiDB Cloud console
You might need to properly set the security group for your VPC endpoint in the AWS Management Console. Go to **VPC** > **Endpoints**. Right-click your VPC endpoint and select **Manage security groups**. Then choose a proper security group within your VPC that allows inbound access from your EC2 instances on port 4000 or a customer-defined port.
-
+
### I cannot enable private DNS. An error is reported indicating that the `enableDnsSupport` and `enableDnsHostnames` VPC attributes are not enabled
diff --git a/tidb-cloud/set-up-private-endpoint-connections.md b/tidb-cloud/set-up-private-endpoint-connections.md
index 9d16744dc1eb0..0b2c164657315 100644
--- a/tidb-cloud/set-up-private-endpoint-connections.md
+++ b/tidb-cloud/set-up-private-endpoint-connections.md
@@ -18,7 +18,7 @@ Powered by AWS PrivateLink, the endpoint connection is secure and private, and d
The architecture of the private endpoint is as follows:
-
+
For more detailed definitions of the private endpoint and endpoint service, see the following AWS documents:
@@ -85,7 +85,7 @@ To use the AWS Management Console to create a VPC interface endpoint, perform th
The **Create endpoint** page is displayed.
- 
+ 
3. Select **Other endpoint services**.
4. Enter the service name `${your_endpoint_service_name}` from the generated command (`--service-name ${your_endpoint_service_name}`).
@@ -141,7 +141,7 @@ To enable private DNS in your AWS Management Console:
3. Select the **Enable for this endpoint** check box.
4. Click **Save changes**.
- 
+ 
@@ -207,7 +207,7 @@ The possible statuses of a private endpoint service are explained as follows:
You might need to properly set the security group for your VPC endpoint in the AWS Management Console. Go to **VPC** > **Endpoints**. Right-click your VPC endpoint and select **Manage security groups**. Then choose a proper security group within your VPC that allows inbound access from your EC2 instances on port 4000 or a customer-defined port.
-
+
### I cannot enable private DNS. An error is reported indicating that the `enableDnsSupport` and `enableDnsHostnames` VPC attributes are not enabled
diff --git a/tidb-cloud/set-up-vpc-peering-connections.md b/tidb-cloud/set-up-vpc-peering-connections.md
index 2bdc751469c46..4ed7d1becf483 100644
--- a/tidb-cloud/set-up-vpc-peering-connections.md
+++ b/tidb-cloud/set-up-vpc-peering-connections.md
@@ -32,7 +32,7 @@ You can set the CIDR when creating the first TiDB Cloud Dedicated cluster. If yo
3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, click the **Project CIDR** tab, and then select **AWS** or **Google Cloud** according to your cloud provider.
4. In the upper-right corner, click **Create CIDR**. Specify the region and CIDR value in the **Create AWS CIDR** or **Create Google Cloud CIDR** dialog, and then click **Confirm**.
- 
+ 
> **Note:**
>
@@ -51,7 +51,7 @@ You can set the CIDR when creating the first TiDB Cloud Dedicated cluster. If yo
The CIDR is inactive by default. To activate the CIDR, you need to create a cluster in the target region. When the region CIDR is active, you can create VPC Peering for the region.
- 
+ 
## Set up VPC peering on AWS
@@ -79,7 +79,7 @@ You can add VPC peering requests on either the project-level **Network Access**
You can get such information from your VPC details page of the [AWS Management Console](https://console.aws.amazon.com/). TiDB Cloud supports creating VPC peerings between VPCs in the same region or from two different regions.
- 
+ 
5. Click **Create** to send the VPC peering request, and then view the VPC peering information on the **VPC Peering** > **AWS** tab. The status of the newly created VPC peering is **System Checking**.
@@ -109,7 +109,7 @@ You can add VPC peering requests on either the project-level **Network Access**
You can get such information from your VPC details page of the [AWS Management Console](https://console.aws.amazon.com/). TiDB Cloud supports creating VPC peerings between VPCs in the same region or from two different regions.
- 
+ 
4. Click **Create** to send the VPC peering request, and then view the VPC peering information on the **Networking** > **AWS VPC Peering** section. The status of the newly created VPC peering is **System Checking**.
@@ -208,13 +208,13 @@ You can also use the AWS dashboard to configure the VPC peering connection.
1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and click **Services** on the top menu bar. Enter `VPC` in the search box and go to the VPC service page.
- 
+ 
2. From the left navigation bar, open the **Peering Connections** page. On the **Create Peering Connection** tab, you can see a peering connection in the **Pending Acceptance** status.
3. Confirm that the requester owner and the requester VPC match **TiDB Cloud AWS Account ID** and **TiDB Cloud VPC ID** on the **VPC Peering Details** page of the [TiDB Cloud console](https://tidbcloud.com). Right-click the peering connection and select **Accept Request** to accept the request in the **Accept VPC peering connection request** dialog.
- 
+ 
2. Add a route to the TiDB Cloud VPC for each of your VPC subnet route tables.
@@ -222,11 +222,11 @@ You can also use the AWS dashboard to configure the VPC peering connection.
2. Search all the route tables that belong to your application VPC.
- 
+ 
3. Right-click each route table and select **Edit routes**. On the edit page, add a route whose destination is the TiDB Cloud CIDR (which you can find on the **VPC Peering** configuration page in the TiDB Cloud console), and fill in your peering connection ID in the **Target** column.
- 
+ 
3. Make sure you have enabled private DNS hosted zone support for your VPC.
diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md
index 5ab94cd92e871..434ac291a2e4f 100644
--- a/tidb-cloud/tidb-cloud-billing-dm.md
+++ b/tidb-cloud/tidb-cloud-billing-dm.md
@@ -41,15 +41,15 @@ Note that if you are using AWS PrivateLink or VPC peering connections, and if th
- If the source database and the TiDB node are not in the same region, cross-region traffic charges are incurred when the Data Migration job collects data from the source database.
- 
+ 
- If the source database and the TiDB node are in the same region but in different AZs, cross-AZ traffic charges are incurred when the Data Migration job collects data from the source database.
- 
+ 
- If the Data Migration job and the TiDB node are not in the same AZ, cross-AZ traffic charges are incurred when the Data Migration job writes data to the target TiDB node. In addition, if the Data Migration job and the TiDB node are not in the same AZ (or region) with the source database, cross-AZ (or cross-region) traffic charges are incurred when the Data Migration job collects data from the source database.
- 
+ 
The cross-region and cross-AZ traffic prices are the same as those for TiDB Cloud. For more information, see [TiDB Cloud Pricing Details](https://www.pingcap.com/tidb-dedicated-pricing-details/).
diff --git a/tidb-cloud/tidb-cloud-connect-aws-dms.md b/tidb-cloud/tidb-cloud-connect-aws-dms.md
index 6ad1a127fd72c..249e2e9dff6c3 100644
--- a/tidb-cloud/tidb-cloud-connect-aws-dms.md
+++ b/tidb-cloud/tidb-cloud-connect-aws-dms.md
@@ -64,7 +64,7 @@ For TiDB Cloud Dedicated, your clients can connect to clusters via public endpoi
1. In the AWS DMS console, go to the [**Replication instances**](https://console.aws.amazon.com/dms/v2/home#replicationInstances) page and switch to the corresponding region. It is recommended to use the same region for AWS DMS as for your TiDB Cloud cluster.
- 
+ 
2. Click **Create replication instance**.
@@ -84,7 +84,7 @@ For TiDB Cloud Dedicated, your clients can connect to clusters via public endpoi
- **Replication subnet group**: select a subnet group for your replication instance.
- **Public accessible**: set it based on your network configuration.
- 
+ 
7. Configure the **Advanced settings**, **Maintenance**, and **Tags** sections if needed, and then click **Create replication instance** to finish the instance creation.
@@ -98,7 +98,7 @@ For connectivity, the steps for using TiDB Cloud clusters as a source or as a ta
1. In the AWS DMS console, go to the [**Endpoints**](https://console.aws.amazon.com/dms/v2/home#endpointList) page and switch to the corresponding region.
- 
+ 
2. Click **Create endpoint** to create the target database endpoint.
@@ -133,7 +133,7 @@ For connectivity, the steps for using TiDB Cloud clusters as a source or as a ta
- 
+ 
6. If you want to create the endpoint as a **Target endpoint**, expand the **Endpoint settings** section, select the **Use endpoint connection attributes** checkbox, and then set **Extra connection attributes** to `Initstmt=SET FOREIGN_KEY_CHECKS=0;`.
diff --git a/tidb-cloud/tidb-cloud-intro.md b/tidb-cloud/tidb-cloud-intro.md
index 292a5c7d0a5ca..2fb0675118702 100644
--- a/tidb-cloud/tidb-cloud-intro.md
+++ b/tidb-cloud/tidb-cloud-intro.md
@@ -8,7 +8,7 @@ category: intro
[TiDB Cloud](https://www.pingcap.com/tidb-cloud/) is a fully-managed Database-as-a-Service (DBaaS) that brings [TiDB](https://docs.pingcap.com/tidb/stable/overview), an open-source Hybrid Transactional and Analytical Processing (HTAP) database, to your cloud. TiDB Cloud offers an easy way to deploy and manage databases to let you focus on your applications, not the complexities of the databases. You can create TiDB Cloud clusters to quickly build mission-critical applications on Google Cloud and Amazon Web Services (AWS).
-
+
## Why TiDB Cloud
@@ -76,7 +76,7 @@ For feature comparison between TiDB Cloud Serverless and TiDB Cloud Dedicated, s
## Architecture
-
+
- TiDB VPC (Virtual Private Cloud)
diff --git a/tidb-cloud/tidb-cloud-poc.md b/tidb-cloud/tidb-cloud-poc.md
index 33e9a9bffbeae..91e339595be62 100644
--- a/tidb-cloud/tidb-cloud-poc.md
+++ b/tidb-cloud/tidb-cloud-poc.md
@@ -214,7 +214,7 @@ Once your application for the PoC is approved, you will receive credits in your
To check the credits left for your PoC, go to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your target project, as shown in the following screenshot.
-
+
Alternatively, you can also click