diff --git a/README.md b/README.md
index 19ce405a..0a7d3674 100644
--- a/README.md
+++ b/README.md
@@ -167,7 +167,6 @@ Full list of options in `config.json`:
| schema_mapping | Object | | Useful if you want to load multiple streams from one tap to multiple Snowflake schemas. If the tap sends the `stream_id` in `-` format then this option overrides the `default_target_schema` value. Note that using `schema_mapping` you can override the `default_target_schema_select_permission` value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables. **Note**: This is an experimental feature; it is recommended to use it via PipelineWise YAML files, which generate the object mapping in the right JSON format. For further info check a [PipelineWise YAML Example]. |
| disable_table_cache | Boolean | | (Default: False) By default the connector caches the available table structures in Snowflake at startup, so it doesn't need to run additional queries while ingesting data to check whether the target tables need to be altered. The `disable_table_cache` option turns this caching off: you will always see the most recent table structures, at the cost of extra query runtime. |
| client_side_encryption_master_key | String | | (Default: None) When this is defined, Client-Side Encryption is enabled. The data in S3 will be encrypted and no third parties, including Amazon AWS and any ISPs, can see the data in the clear. The Snowflake COPY command will decrypt the data once it's in Snowflake. The master key must be 256 bits long and must be encoded as a base64 string. |
-| client_side_encryption_stage_object | String | | (Default: None) Required when `client_side_encryption_master_key` is defined. The name of the encrypted stage object in Snowflake that created separately and using the same encryption master key. |
| add_metadata_columns | Boolean | | (Default: False) Metadata columns add extra row-level information about data ingestion (i.e. when the row was read from the source, when it was inserted or deleted in Snowflake, etc.). Metadata columns are created automatically by adding extra columns to the tables with the column prefix `_SDC_`. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the `_SDC_DELETED_AT` metadata column. Without the `add_metadata_columns` option, rows deleted by singer taps will not be recognisable in Snowflake. |
| hard_delete | Boolean | | (Default: False) When the `hard_delete` option is true, DELETE SQL commands will be run in Snowflake to delete rows from tables. This is achieved by continuously checking the `_SDC_DELETED_AT` metadata column sent by the singer tap. Because deleting rows requires metadata columns, the `hard_delete` option automatically enables the `add_metadata_columns` option as well. |
| data_flattening_max_level | Integer | | (Default: 0) Object type RECORD items from taps can be loaded into VARIANT columns as JSON (default) or the schema can be flattened by creating columns automatically. When the value is 0 (default), flattening is turned off. |
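
To make the options above concrete, here is a minimal sketch of a `config.json` fragment combining them. All schema, stream, and role names are invented for illustration, and the exact sub-keys of `schema_mapping` are an assumption (as the table notes, this experimental object is best generated from PipelineWise YAML files rather than written by hand):

```json
{
  "default_target_schema": "analytics",
  "schema_mapping": {
    "my_tap_schema": {
      "target_schema": "analytics_raw",
      "target_schema_select_permissions": ["grp_analyst"]
    }
  },
  "disable_table_cache": false,
  "client_side_encryption_master_key": "<base64-encoded-256-bit-key>",
  "add_metadata_columns": true,
  "hard_delete": true,
  "data_flattening_max_level": 0
}
```

Note that `hard_delete: true` would enable metadata columns even if `add_metadata_columns` were left false, per the table above.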
@@ -199,7 +198,6 @@ Full list of options in `config.json`:
export TARGET_SNOWFLAKE_FILE_FORMAT_CSV=
export TARGET_SNOWFLAKE_FILE_FORMAT_PARQUET=
export CLIENT_SIDE_ENCRYPTION_MASTER_KEY=
- export CLIENT_SIDE_ENCRYPTION_STAGE_OBJECT=
```
2. Install Python test dependencies in a virtual env and run the unit and integration tests
diff --git a/tests/integration/.env.sample b/tests/integration/.env.sample
index bb97ed1f..98c55b45 100644
--- a/tests/integration/.env.sample
+++ b/tests/integration/.env.sample
@@ -13,4 +13,3 @@ TARGET_SNOWFLAKE_STAGE=
TARGET_SNOWFLAKE_FILE_FORMAT_CSV=
TARGET_SNOWFLAKE_FILE_FORMAT_PARQUET=
CLIENT_SIDE_ENCRYPTION_MASTER_KEY=
-CLIENT_SIDE_ENCRYPTION_STAGE_OBJECT=