diff --git a/.gitbook/assets/acceptDeposit (1) (2).png b/.gitbook/assets/acceptDeposit (1) (2).png
new file mode 100644
index 00000000..8b94214d
Binary files /dev/null and b/.gitbook/assets/acceptDeposit (1) (2).png differ
diff --git a/.gitbook/assets/cloudmosSelectTemplate (1).png b/.gitbook/assets/cloudmosSelectTemplate (1).png
index 8a1711d8..ddd89fd0 100644
Binary files a/.gitbook/assets/cloudmosSelectTemplate (1).png and b/.gitbook/assets/cloudmosSelectTemplate (1).png differ
diff --git a/.gitbook/assets/cloudmosSelectTemplate.png b/.gitbook/assets/cloudmosSelectTemplate.png
index ddd89fd0..8a1711d8 100644
Binary files a/.gitbook/assets/cloudmosSelectTemplate.png and b/.gitbook/assets/cloudmosSelectTemplate.png differ
diff --git a/.gitbook/assets/image (1).png b/.gitbook/assets/image (1).png
index dea397f5..27251274 100644
Binary files a/.gitbook/assets/image (1).png and b/.gitbook/assets/image (1).png differ
diff --git a/.gitbook/assets/image (10).png b/.gitbook/assets/image (10).png
index 3940e6ad..dea397f5 100644
Binary files a/.gitbook/assets/image (10).png and b/.gitbook/assets/image (10).png differ
diff --git a/.gitbook/assets/image (11).png b/.gitbook/assets/image (11).png
index 642090c4..0e2c34d6 100644
Binary files a/.gitbook/assets/image (11).png and b/.gitbook/assets/image (11).png differ
diff --git a/.gitbook/assets/image (12).png b/.gitbook/assets/image (12).png
index 3c662497..27251274 100644
Binary files a/.gitbook/assets/image (12).png and b/.gitbook/assets/image (12).png differ
diff --git a/.gitbook/assets/image (13).png b/.gitbook/assets/image (13).png
index d1501379..1169a261 100644
Binary files a/.gitbook/assets/image (13).png and b/.gitbook/assets/image (13).png differ
diff --git a/.gitbook/assets/image (14).png b/.gitbook/assets/image (14).png
index 27251274..f22d9027 100644
Binary files a/.gitbook/assets/image (14).png and b/.gitbook/assets/image (14).png differ
diff --git a/.gitbook/assets/image (16).png b/.gitbook/assets/image (16).png
index 27251274..642090c4 100644
Binary files a/.gitbook/assets/image (16).png and b/.gitbook/assets/image (16).png differ
diff --git a/.gitbook/assets/image (17).png b/.gitbook/assets/image (17).png
index 1169a261..7baba6c6 100644
Binary files a/.gitbook/assets/image (17).png and b/.gitbook/assets/image (17).png differ
diff --git a/.gitbook/assets/image (2).png b/.gitbook/assets/image (2).png
index 642090c4..27251274 100644
Binary files a/.gitbook/assets/image (2).png and b/.gitbook/assets/image (2).png differ
diff --git a/.gitbook/assets/image (3).png b/.gitbook/assets/image (3).png
index 0e2c34d6..642090c4 100644
Binary files a/.gitbook/assets/image (3).png and b/.gitbook/assets/image (3).png differ
diff --git a/.gitbook/assets/image (5).png b/.gitbook/assets/image (5).png
index 27251274..d426b5e2 100644
Binary files a/.gitbook/assets/image (5).png and b/.gitbook/assets/image (5).png differ
diff --git a/.gitbook/assets/image (7).png b/.gitbook/assets/image (7).png
index 7baba6c6..d1501379 100644
Binary files a/.gitbook/assets/image (7).png and b/.gitbook/assets/image (7).png differ
diff --git a/.gitbook/assets/image (8).png b/.gitbook/assets/image (8).png
index 642090c4..3940e6ad 100644
Binary files a/.gitbook/assets/image (8).png and b/.gitbook/assets/image (8).png differ
diff --git a/.gitbook/assets/image (9).png b/.gitbook/assets/image (9).png
index f22d9027..3c662497 100644
Binary files a/.gitbook/assets/image (9).png and b/.gitbook/assets/image (9).png differ
diff --git a/.gitbook/assets/image.png b/.gitbook/assets/image.png
index d426b5e2..642090c4 100644
Binary files a/.gitbook/assets/image.png and b/.gitbook/assets/image.png differ
diff --git a/.gitbook/assets/phoneNumberEntry (2).png b/.gitbook/assets/phoneNumberEntry (2).png
index 0001a914..46e8c194 100644
Binary files a/.gitbook/assets/phoneNumberEntry (2).png and b/.gitbook/assets/phoneNumberEntry (2).png differ
diff --git a/.gitbook/assets/phoneNumberEntry (3).png b/.gitbook/assets/phoneNumberEntry (3).png
index 46e8c194..0001a914 100644
Binary files a/.gitbook/assets/phoneNumberEntry (3).png and b/.gitbook/assets/phoneNumberEntry (3).png differ
diff --git a/SUMMARY.md b/SUMMARY.md
index d0b96a62..cc955790 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -120,12 +120,12 @@
* [STEP 6 - Verify Current Provider Settings](providers/build-a-cloud-provider/akash-provider-checkup/step-6-verify-current-provider-settings.md)
* [STEP 7 - Additional Verifications](providers/build-a-cloud-provider/akash-provider-checkup/step-6-additional-verifications.md)
* [Contact Technical Support](providers/build-a-cloud-provider/akash-provider-checkup/contact-technical-support.md)
- * [GPU Resource Enablement (Optional Step)](other-resources/experimental/build-a-cloud-provider/gpu-resource-enablement-optional-step/README.md)
- * [GPU Provider Configuration](other-resources/experimental/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-provider-configuration.md)
- * [GPU Node Label](other-resources/experimental/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-node-label.md)
- * [Apply NVIDIA Runtime Engine](other-resources/experimental/build-a-cloud-provider/gpu-resource-enablement-optional-step/apply-nvidia-runtime-engine.md)
+ * [GPU Resource Enablement (Optional Step)](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/README.md)
+ * [GPU Provider Configuration](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-provider-configuration.md)
+ * [GPU Node Label](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-node-label.md)
+ * [Apply NVIDIA Runtime Engine](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/apply-nvidia-runtime-engine.md)
* [Update Akash Provider](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/update-akash-provider.md)
- * [GPU Test Deployments](other-resources/experimental/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-test-deployments.md)
+ * [GPU Test Deployments](providers/build-a-cloud-provider/gpu-resource-enablement-optional-step/gpu-test-deployments.md)
* [TLS Certs for Akash Provider (Optional Step)](providers/build-a-cloud-provider/tls-certs-for-akash-provider-optional-step/README.md)
* [Install Let's Encrypt Cert Manager](providers/build-a-cloud-provider/tls-certs-for-akash-provider-optional-step/install-lets-encrypt-cert-manager.md)
* [Configure the Issuer](providers/build-a-cloud-provider/tls-certs-for-akash-provider-optional-step/configure-the-issuer.md)
@@ -329,20 +329,20 @@
* [Provider Upgrade](other-resources/archived-resources/mainnet5-upgrade-docs/provider-upgrade.md)
* [Upgrades](other-resources/archived-resources/upgrades/README.md)
* [Akash Mainnet5 Node Upgrade Guide](other-resources/archived-resources/upgrades/v0.20.0-upgrade-docs.md)
- * [Provider Build With GPU](providers/build-a-cloud-provider/provider-build-with-gpu/README.md)
- * [Prepare Kubernetes Hosts](providers/build-a-cloud-provider/provider-build-with-gpu/prepare-kubernetes-hosts.md)
- * [Disable Search Domains](providers/build-a-cloud-provider/provider-build-with-gpu/disable-search-domains.md)
- * [Install NVIDIA Drivers & Toolkit](providers/build-a-cloud-provider/provider-build-with-gpu/install-nvidia-drivers-and-toolkit.md)
- * [NVIDIA Runtime Configuration](providers/build-a-cloud-provider/provider-build-with-gpu/nvidia-runtime-configuration.md)
- * [Create Kubernetes Cluster](providers/build-a-cloud-provider/provider-build-with-gpu/create-kubernetes-cluster.md)
- * [Confirm Kubernetes Cluster](providers/build-a-cloud-provider/provider-build-with-gpu/step-7-confirm-kubernetes-cluster.md)
- * [Helm Installation on Kubernetes Node](providers/build-a-cloud-provider/provider-build-with-gpu/step-4-helm-installation-on-kubernetes-node.md)
- * [Apply NVIDIA Runtime Engine](providers/build-a-cloud-provider/provider-build-with-gpu/apply-nvidia-runtime-engine.md)
- * [Test GPUs](providers/build-a-cloud-provider/provider-build-with-gpu/test-gpus.md)
- * [Akash Provider Install](providers/build-a-cloud-provider/provider-build-with-gpu/akash-provider-install.md)
- * [Ingress Controller Install](providers/build-a-cloud-provider/provider-build-with-gpu/step-8-ingress-controller-install.md)
- * [Domain Name Review](providers/build-a-cloud-provider/provider-build-with-gpu/step-5-domain-name-review.md)
- * [GPU Test Deployments](providers/build-a-cloud-provider/provider-build-with-gpu/gpu-test-deployments.md)
+ * [Provider Build With GPU](other-resources/archived-resources/provider-build-with-gpu/README.md)
+ * [Prepare Kubernetes Hosts](other-resources/archived-resources/provider-build-with-gpu/prepare-kubernetes-hosts.md)
+ * [Disable Search Domains](other-resources/archived-resources/provider-build-with-gpu/disable-search-domains.md)
+ * [Install NVIDIA Drivers & Toolkit](other-resources/archived-resources/provider-build-with-gpu/install-nvidia-drivers-and-toolkit.md)
+ * [NVIDIA Runtime Configuration](other-resources/archived-resources/provider-build-with-gpu/nvidia-runtime-configuration.md)
+ * [Create Kubernetes Cluster](other-resources/archived-resources/provider-build-with-gpu/create-kubernetes-cluster.md)
+ * [Confirm Kubernetes Cluster](other-resources/archived-resources/provider-build-with-gpu/step-7-confirm-kubernetes-cluster.md)
+ * [Helm Installation on Kubernetes Node](other-resources/archived-resources/provider-build-with-gpu/step-4-helm-installation-on-kubernetes-node.md)
+ * [Apply NVIDIA Runtime Engine](other-resources/archived-resources/provider-build-with-gpu/apply-nvidia-runtime-engine.md)
+ * [Test GPUs](other-resources/archived-resources/provider-build-with-gpu/test-gpus.md)
+ * [Akash Provider Install](other-resources/archived-resources/provider-build-with-gpu/akash-provider-install.md)
+ * [Ingress Controller Install](other-resources/archived-resources/provider-build-with-gpu/step-8-ingress-controller-install.md)
+ * [Domain Name Review](other-resources/archived-resources/provider-build-with-gpu/step-5-domain-name-review.md)
+ * [GPU Test Deployments](other-resources/archived-resources/provider-build-with-gpu/gpu-test-deployments.md)
* [Akash v0.24.0 Network Upgrade](other-resources/archived-resources/akash-v0.24.0-network-upgrade/README.md)
* [Akash v0.24.0 Node Upgrade Guide](other-resources/archived-resources/akash-v0.24.0-network-upgrade/v0.24.0-upgrade-docs.md)
* [Akash v0.26.1 Node Upgrade Guide](other-resources/archived-resources/akash-v0.24.0-network-upgrade/v0.26.1-upgrade-docs.md)
diff --git a/akash-nodes/akash-node-deployment-via-omnibus.md b/akash-nodes/akash-node-deployment-via-omnibus.md
index 5db057c8..972fe809 100644
--- a/akash-nodes/akash-node-deployment-via-omnibus.md
+++ b/akash-nodes/akash-node-deployment-via-omnibus.md
@@ -24,12 +24,12 @@ If you have not used Cloudmos Deploy previously, please use [this guide](https:/
* To install the Akash Node we will use a custom SDL file
* Select the “Empty” option so that we can copy/paste the Akash Node SDL in the next step
-.png>)
+
* Copy and paste the SDL from [this site](https://github.com/akash-network/cosmos-omnibus/blob/master/akash/deploy.yml) into the Cloudmos SDL editor window
* **NOTE -** the SDL within GitHub currently has a storage > size value of 120Gi. Omnibus uses a compressed snapshot of the blockchain and when expanded 120GB of storage for the deployment will not be enough. At the time of this writing adjusting the storage size to 350GB will suffice and allow some growth. Please adjust the storage appropriately and as shown in the screenshot below.
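+
+* As an illustration, the storage block inside the SDL's `profiles` section would be adjusted roughly as follows (the service name `node` is an assumption - match whatever name the Omnibus SDL actually uses):
+
+```
+profiles:
+  compute:
+    node:
+      resources:
+        storage:
+          size: 350Gi
+```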
-
+.png>)
* Accept the initial deposit of 5 AKT into the deployment’s escrow account
* The escrow can be refilled easily within Cloudmos Deploy at any time
@@ -38,7 +38,7 @@ If you have not used Cloudmos Deploy previously, please use [this guide](https:/
* Approve the transaction fee to allow the deployment to continue
-
+.png>)
* Select a provider from the bid list
@@ -46,13 +46,13 @@ If you have not used Cloudmos Deploy previously, please use [this guide](https:/
* Accept the transaction fee to create a lease with the provider
-.png>)
+
### Cloudmos Deploy Complete
* Once the deployment is complete the lease details are shown
-
+.png>)
* After some time the Available/Ready Replicas fields will update to show the current node count. It may be necessary to refresh the screen for this count to update.
@@ -67,14 +67,14 @@ With the install of the Akash node complete, it must sync with the blockchain. O
* While the blockchain snapshot is downloading, the following logs should be visible within Cloudmos
* We can know that the snapshot download is not yet complete if we see this message in the logs
-
+.png>)
### Snapshot Download Completed
* After the snapshot download completes the logs will begin showing blockchain sync activity
* Example output shown should look something like this but with the current blocks on the chain
-
+.png>)
### Akash Node Verifications
@@ -84,7 +84,7 @@ With the install of the Akash node complete, it must sync with the blockchain. O
* Begin by capturing the deployment’s public URI from Cloudmos
-
+.png>)
#### Confirm Blockchain Sync
@@ -101,7 +101,7 @@ With the install of the Akash node complete, it must sync with the blockchain. O
* Open [Mintscan](https://www.mintscan.io/akash), a popular blockchain explorer, to compare the captured “latest\_block\_height” value to the latest block displayed in the explorer
* The block height from the Akash Node and Mintscan will not be exactly match but should be close to each other
-
+.png>)
#### Confirm Peer Nodes
diff --git a/deploy/chia-on-akash.md b/deploy/chia-on-akash.md
index d993f4d2..fb9da006 100644
--- a/deploy/chia-on-akash.md
+++ b/deploy/chia-on-akash.md
@@ -20,7 +20,7 @@ For the following providers who are participating in the sale, expect to see the
### Required SDL
-To make sure you get the sale price from the providers, please Copy and Paste the SDL into [Cloudmos](broken-reference) :
+To make sure you get the sale price from the providers, please Copy and Paste the SDL into [Cloudmos](broken-reference/) :
[BladeBit Summer Sale SDL](chia-on-akash.md#bladebit-ram-plotting)
@@ -34,7 +34,7 @@ Please wait up to 60 seconds to see bids from all the providers.
2. Install [Cloudmos Deploy](https://cloudmos.io/cloud-deploy) and import your AKT wallet address from Keplr.
3. [Fund your wallet](https://github.com/akash-network/awesome-akash/tree/chia/chia#Quickest-way-to-get-more-AKT)
-For additional help we recommend you [follow our full deployment guide](broken-reference) in parallel with this guide.
+For additional help we recommend you [follow our full deployment guide](broken-reference/) in parallel with this guide.
## How does this work?
@@ -275,7 +275,7 @@ deployment:
To access the Chia Plot Manager, click on the \`Uri\` link on the deployment detail page.\
To download plots, click an invididual plot in the Chia Plot Manager and click on Download/Open.
-.png>)
+.png>)
\*Once your download has finished - Delete the plot from the container - to make room for new plots! Plots will continue to be created as long as there is enough free space available in the container (Max 32Tb) and the deployment is fully funded.
@@ -334,7 +334,7 @@ Upload your plots to any SSH destination by modifying the `env:`
### Rclone
-Upload your plots to any [Rclone](https://rclone.org/) endpoint! You need to first create a connection to your endpoint on a standard client so that you have a valid configuration in `~/.config/rclone/rclone.conf` You need to modify this block and add to the end of each line to make it valid for Akash. Below you can find examples of how the `env:` should look.
+Upload your plots to any [Rclone](https://rclone.org/) endpoint! You need to first create a connection to your endpoint on a standard client so that you have a valid configuration in `~/.config/rclone/rclone.conf`. You need to modify this block and add to the end of each line to make it valid for Akash. Below you can find examples of how the `env:` should look.
### Rclone to Dropbox
diff --git a/deploy/kava-rpc-node-deployment/kava-rpc-node-deployment.md b/deploy/kava-rpc-node-deployment/kava-rpc-node-deployment.md
index 0b17854a..2855bc6d 100644
--- a/deploy/kava-rpc-node-deployment/kava-rpc-node-deployment.md
+++ b/deploy/kava-rpc-node-deployment/kava-rpc-node-deployment.md
@@ -48,7 +48,7 @@
* Accept the Keplr prompt to approve small blockchain fee for lease creation with the selected cloud provider
-
+
## Kava RPC Node Deployment Complete
diff --git a/deploy/mine-raptoreum-on-akash-network.md b/deploy/mine-raptoreum-on-akash-network.md
index cedcd707..576bd968 100644
--- a/deploy/mine-raptoreum-on-akash-network.md
+++ b/deploy/mine-raptoreum-on-akash-network.md
@@ -4,7 +4,7 @@ description: How to Mine Raptoreum (RTM) on Akash Network
# Mine Raptoreum on Akash Network
-
+.png>)
## Why use Akash?
@@ -99,7 +99,7 @@ deployment:
Akash is a marketplace of compute. Providers set their own prices for compute resources. We recommend you try different providers and check your logs after deployment to determine the hashrate.
-.png>)
+
## How to speed up mining?
diff --git a/guides/cloudmos-deploy/wordpress-deployment-example.md b/guides/cloudmos-deploy/wordpress-deployment-example.md
index d266b280..4ec5e68f 100644
--- a/guides/cloudmos-deploy/wordpress-deployment-example.md
+++ b/guides/cloudmos-deploy/wordpress-deployment-example.md
@@ -20,7 +20,7 @@ In this section we will use Cloudmos Deploy to launch an example Minecraft deplo
* The tool provides several sample templates launch of popular applications
* Select the `Minecraft` template for our initial deployment
-
+
#### **STEP 4 - Proceed with Deployment**
diff --git a/guides/deploy/minesweeper-deployment-example.md b/guides/deploy/minesweeper-deployment-example.md
index d5b85599..12d71c76 100644
--- a/guides/deploy/minesweeper-deployment-example.md
+++ b/guides/deploy/minesweeper-deployment-example.md
@@ -90,4 +90,4 @@ Access the Deployment's URL via the exposed link.
Example display of the Minesweeper web app within the Akash Deployment.
-
+
diff --git a/other-resources/archived-resources/provider-build-with-gpu/README.md b/other-resources/archived-resources/provider-build-with-gpu/README.md
new file mode 100644
index 00000000..8d8ca662
--- /dev/null
+++ b/other-resources/archived-resources/provider-build-with-gpu/README.md
@@ -0,0 +1,18 @@
+# Provider Build With GPU
+
+Use this guide and follow the sequential steps to build your Testnet Akash Provider with GPU support.
+
+* [Prepare Kubernetes Hosts](prepare-kubernetes-hosts.md)
+* [Disable Search Domains](disable-search-domains.md)
+* [Install NVIDIA Drivers & Toolkit](install-nvidia-drivers-and-toolkit.md)
+* [NVIDIA Runtime Configuration](nvidia-runtime-configuration.md)
+* [Create Kubernetes Cluster](create-kubernetes-cluster.md)
+* [Confirm Kubernetes Cluster](step-7-confirm-kubernetes-cluster.md)
+* [Helm Installation on Kubernetes Node](step-4-helm-installation-on-kubernetes-node.md)
+* [Apply NVIDIA Runtime Engine](apply-nvidia-runtime-engine.md)
+* [Test GPUs](test-gpus.md)
+* [Akash Provider Install](akash-provider-install.md)
+* [Ingress Controller Install](step-8-ingress-controller-install.md)
+* [Domain Name Review](step-5-domain-name-review.md)
+* [GPU Test Deployments](gpu-test-deployments.md)
+
diff --git a/other-resources/archived-resources/provider-build-with-gpu/akash-provider-install.md b/other-resources/archived-resources/provider-build-with-gpu/akash-provider-install.md
new file mode 100644
index 00000000..e53765b7
--- /dev/null
+++ b/other-resources/archived-resources/provider-build-with-gpu/akash-provider-install.md
@@ -0,0 +1,246 @@
+# Akash Provider Install
+
+> _**NOTE**_ - all steps in this guide should be performed from a Kubernetes control plane node
+
+## Install Akash Provider Services Binary
+
+```
+wget https://github.com/akash-network/provider/releases/download/v0.4.6/provider-services_0.4.6_linux_amd64.zip
+
+unzip provider-services_0.4.6_linux_amd64.zip
+
+install provider-services /usr/local/bin/
+
+rm provider-services provider-services_0.4.6_linux_amd64.zip
+```
+
+## Confirm Akash Provider Services Install
+
+* Issue the following command to confirm successful installation of the binary:
+
+```
+provider-services version
+```
+
+#### Expected/Example Output
+
+```
+root@node1:~# provider-services version
+v0.4.6
+```
+
+## Specify Provider Account Keyring Location
+
+```
+export AKASH_KEYRING_BACKEND=test
+```
+
+## Create Provider Account
+
+The wallet created in this step will be used for the following purposes:
+
+* Pay for provider transaction gas fees
+* Pay for bid collateral which is discussed further in this section
+
+> _**NOTE**_ - Make sure to create a new Akash account for the provider and do not reuse an account used for deployment purposes. Bids will not be generated from your provider if the deployment orders are created with the same key as the provider.
+
+> _**NOTE**_ - capture the mnemonic phrase for the account to restore later if necessary
+
+> _**NOTE**_ - in the provided syntax we are creating an account with the key name of `default`
+
+```
+provider-services keys add default
+```
+
+## Fund Provider Account via Faucet
+
+Ensure that the provider account - created in the prior step - is funded. Avenues to fund an account are discussed in this [document](https://docs.akash.network/guides/cli/detailed-steps/part-3.-fund-your-account).
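+
+* Once funded, the account balance can be confirmed from the provider node. This is a sketch; it assumes the `provider-services` binary exposes the standard Cosmos SDK query commands and that a reachable RPC node is configured:
+
+```
+provider-services query bank balances $(provider-services keys show default -a)
+```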
+
+## Export Provider Key for Build Process
+
+### STEP 1 - Export Provider Key
+
+* Enter pass phrase when prompted
+* The passphrase used will be needed in subsequent steps
+
+```
+cd ~
+provider-services keys export default
+```
+
+#### Expected/Example Output
+
+```
+provider-services keys export default
+
+Enter passphrase to encrypt the exported key:
+Enter keyring passphrase:
+-----BEGIN TENDERMINT PRIVATE KEY-----
+kdf: bcrypt
+salt: REDACTED
+type: secp256k1
+
+REDACTED
+-----END TENDERMINT PRIVATE KEY-----
+```
+
+### STEP 2 - Create key.pem and Copy Output Into File
+
+* Copy the contents of the prior step into the `key.pem` file
+
+> _**NOTE -**_ the file should contain the `-----BEGIN TENDERMINT PRIVATE KEY-----` line, the `-----END TENDERMINT PRIVATE KEY-----` line, and everything between them:
+
+```
+vim key.pem
+```
+
+#### Verification of key.pem File
+
+```
+cat key.pem
+```
+
+#### Expected/Example File
+
+```
+cat key.pem
+-----BEGIN TENDERMINT PRIVATE KEY-----
+kdf: bcrypt
+salt: REDACTED
+type: secp256k1
+
+REDACTED
+-----END TENDERMINT PRIVATE KEY-----
+```
+
+## Provider RPC Node
+
+Akash providers must run their own blockchain RPC node to remove dependence on public nodes. This is a strict requirement.
+
+We have recently released documentation guiding you through the process of building an [RPC node via Helm Charts](../../../akash-nodes/akash-node-via-helm-charts/) with state sync.
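+
+As a rough sketch of that process - the repository URL and chart name below are assumptions drawn from the akash-network/helm-charts repository, so follow the linked guide for the authoritative steps:
+
+```
+helm repo add akash https://akash-network.github.io/helm-charts
+helm repo update
+
+# Install an RPC node into the akash-services namespace (values per the linked guide)
+helm install akash-node akash/akash-node -n akash-services
+```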
+
+## Declare Relevant Environment Variables
+
+* Update `RPC-NODE-ADDRESS` with your own value
+
+```
+export AKASH_CHAIN_ID=akashnet-2
+export AKASH_NODE=http://<RPC-NODE-ADDRESS>:26657
+export AKASH_GAS=auto
+export AKASH_GAS_PRICES=0.025uakt
+export AKASH_GAS_ADJUSTMENT=1.5
+```
+
+* Update the following variables with your own values
+* The `KEY_PASSWORD` value should be the passphrase used during the account export step
+* Further discussion of the Akash provider domain is available [here](step-5-domain-name-review.md)
+
+```
+export ACCOUNT_ADDRESS=
+export KEY_PASSWORD=
+export DOMAIN=
+```
+
+## Create Provider Configuration File
+
+* Provider attributes must be updated in order to bid on GPUs.
+
+### GPU Attributes Template
+
+* The GPU model template below is used in the subsequent `Provider Configuration File`
+* Multiple such entries should be included in the `Provider Configuration File` if the provider has multiple GPU types
+* Currently an Akash provider may host only one GPU type per worker node, but different GPU models/types may be hosted on separate Kubernetes nodes.
+
+```
+capabilities/gpu/vendor/<vendor-name>/model/<model-name>: true
+```
+
+### Example Provider Configuration File
+
+* In the example configuration file below the Akash Provider will advertise availability of NVIDIA GPU model A4000
+* Steps included in this code block create the necessary `provider.yaml` file in the expected directory
+* Ensure that the attributes section is updated with your own values
+
+```
+cd ~
+mkdir provider
+cd provider
+cat > provider.yaml << EOF
+---
+from: "$ACCOUNT_ADDRESS"
+key: "$(cat ~/key.pem | openssl base64 -A)"
+keysecret: "$(echo $KEY_PASSWORD | openssl base64 -A)"
+domain: "$DOMAIN"
+node: "$AKASH_NODE"
+withdrawalperiod: 12h
+attributes:
+ - key: host
+ value: akash
+ - key: tier
+ value: community
+ - key: capabilities/gpu/vendor/nvidia/model/a4000
+ value: true
+EOF
+```
+
+## **Provider Bid Defaults**
+
+* When a provider is created, the default bid engine settings are used to derive pricing per workload. These settings can be updated if desired, but we recommend initially using the default values.
+* For a thorough discussion of customized pricing please visit this [guide](../../../providers/build-a-cloud-provider/akash-cloud-provider-build-with-helm-charts/step-6-provider-bid-customization.md).
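+
+* The `helm upgrade` command in the next section reads a pricing script from `/root/provider/price_script_generic.sh`. One way to fetch the default script is sketched below - the raw URL is an assumption based on the akash-network/helm-charts repository layout and should be verified before use:
+
+```
+cd ~/provider
+wget https://raw.githubusercontent.com/akash-network/helm-charts/main/charts/akash-provider/scripts/price_script_generic.sh
+```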
+
+## Create Provider Via Helm
+
+```
+export CRDS="manifests.akash.network providerhosts.akash.network providerleasedips.akash.network"
+kubectl delete crd $CRDS
+
+kubectl apply -f https://raw.githubusercontent.com/akash-network/provider/v0.4.6/pkg/apis/akash.network/crd.yaml
+
+for CRD in $CRDS; do
+ kubectl annotate crd $CRD helm.sh/resource-policy=keep
+ kubectl annotate crd $CRD meta.helm.sh/release-name=akash-provider
+ kubectl annotate crd $CRD meta.helm.sh/release-namespace=akash-services
+ kubectl label crd $CRD app.kubernetes.io/managed-by=Helm
+done
+
+helm upgrade --install akash-provider akash/provider -n akash-services -f provider.yaml \
+--set bidpricescript="$(cat /root/provider/price_script_generic.sh | openssl base64 -A)"
+```
+
+#### Verification
+
+* Verify the image is correct by running this command:
+
+```
+kubectl -n akash-services get pod akash-provider-0 -o yaml | grep image: | uniq -c
+```
+
+#### Expected/Example Output
+
+```
+root@node1:~/provider# kubectl -n akash-services get pod akash-provider-0 -o yaml | grep image: | uniq -c
+ 4 image: ghcr.io/akash-network/provider:0.4.6
+```
+
+## Create Akash Hostname Operator
+
+```
+helm upgrade --install akash-hostname-operator akash/akash-hostname-operator -n akash-services
+```
+
+## Verify Health of Akash Provider
+
+* Use the following command to verify the health of the Akash Provider and Hostname Operator pods
+
+```
+kubectl get pods -n akash-services
+```
+
+#### Example/Expected Output
+
+```
+root@node1:~/provider# kubectl get pods -n akash-services
+NAME READY STATUS RESTARTS AGE
+akash-hostname-operator-5c59757fcc-kt7dl 1/1 Running 0 17s
+akash-provider-0 1/1 Running 0 59s
+```
diff --git a/other-resources/archived-resources/provider-build-with-gpu/apply-nvidia-runtime-engine.md b/other-resources/archived-resources/provider-build-with-gpu/apply-nvidia-runtime-engine.md
new file mode 100644
index 00000000..a08f9e0a
--- /dev/null
+++ b/other-resources/archived-resources/provider-build-with-gpu/apply-nvidia-runtime-engine.md
@@ -0,0 +1,136 @@
+# Apply NVIDIA Runtime Engine
+
+> _**NOTE**_ - conduct these steps on the control plane node that Helm was installed on via the previous step
+
+## Create RuntimeClass
+
+#### Create the NVIDIA Runtime Config
+
+```
+cat > nvidia-runtime-class.yaml << EOF
+kind: RuntimeClass
+apiVersion: node.k8s.io/v1
+metadata:
+ name: nvidia
+handler: nvidia
+EOF
+```
+
+#### Apply the NVIDIA Runtime Config
+
+```
+kubectl apply -f nvidia-runtime-class.yaml
+```
+
+## Upgrade/Install the NVIDIA Device Plug In Via Helm - GPUs on All Nodes
+
+> _**NOTE**_ - in some scenarios a provider may host GPUs only on a subset of Kubernetes worker nodes. Use the instructions in this section if ALL Kubernetes worker nodes have available GPU resources. If only a subset of worker nodes host GPU resources - use the section `Upgrade/Install the NVIDIA Device Plug In Via Helm - GPUs on Subset of Nodes` instead. Only one of these two sections should be completed.
+
+```
+helm upgrade -i nvdp nvdp/nvidia-device-plugin \
+ --namespace nvidia-device-plugin \
+ --create-namespace \
+ --version 0.14.2 \
+ --set runtimeClassName="nvidia" \
+ --set deviceListStrategy=volume-mounts
+```
+
+#### Expected/Example Output
+
+```
+root@ip-172-31-8-172:~# helm upgrade -i nvdp nvdp/nvidia-device-plugin \
+ --namespace nvidia-device-plugin \
+ --create-namespace \
+ --version 0.14.2 \
+ --set runtimeClassName="nvidia" \
+ --set deviceListStrategy=volume-mounts
+
+Release "nvdp" does not exist. Installing it now.
+NAME: nvdp
+LAST DEPLOYED: Thu Apr 13 19:11:28 2023
+NAMESPACE: nvidia-device-plugin
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+```
+
+## Upgrade/Install the NVIDIA Device Plug In Via Helm - GPUs on Subset of Nodes
+
+> _**NOTE**_ - use the instructions in this section if only a subset of Kubernetes worker nodes have available GPU resources.
+
+* By default, the nvidia-device-plugin DaemonSet may run on all nodes in your Kubernetes cluster. If you want to restrict its deployment to GPU-enabled nodes only, you can leverage Kubernetes node labels and selectors.
+* Specifically, you can use the `allow-nvdp=true` label to limit where the DaemonSet is scheduled.
+
+#### STEP 1: Label the GPU Nodes
+
+* First, identify your GPU nodes and label them with `allow-nvdp=true`. You can do this by running the following command for each GPU node
+* Replace `<node-name>` with the name of the node you're labeling
+
+> _**NOTE**_ - if you are unsure of the `<node-name>` to be used in this command - issue `kubectl get nodes` from one of your Kubernetes control plane nodes and obtain it from the `NAME` column of the command output
+
+```
+kubectl label nodes <node-name> allow-nvdp=true
+```
+
+#### STEP 2: Update Helm Chart Values
+
+* By setting the node selector, you are ensuring that the `nvidia-device-plugin` DaemonSet will only be scheduled on nodes with the `allow-nvdp=true` label.
+
+```
+helm upgrade -i nvdp nvdp/nvidia-device-plugin \
+ --namespace nvidia-device-plugin \
+ --create-namespace \
+ --version 0.14.2 \
+ --set runtimeClassName="nvidia" \
+ --set deviceListStrategy=volume-mounts \
+ --set-string nodeSelector.allow-nvdp="true"
+```
+
+#### STEP 3: Verify
+
+```
+kubectl -n nvidia-device-plugin get pods -o wide
+```
+
+_**Expected/Example Output**_
+
+* In this example only nodes node1, node3, and node4 have the `allow-nvdp=true` label, and that is where the `nvidia-device-plugin` pods are spawned:
+
+```
+root@node1:~# kubectl -n nvidia-device-plugin get pods -o wide
+
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+nvdp-nvidia-device-plugin-gqnm2 1/1 Running 0 11s 10.233.75.1 node2
+```
+
+## Verification - Applicable to all Environments
+
+```
+kubectl -n nvidia-device-plugin logs -l app.kubernetes.io/instance=nvdp
+```
+
+#### Example/Expected Output
+
+```
+ root@node1:~# kubectl -n nvidia-device-plugin logs -l app.kubernetes.io/instance=nvdp
+ "sharing": {
+ "timeSlicing": {}
+ }
+}
+2023/04/14 14:18:27 Retreiving plugins.
+2023/04/14 14:18:27 Detected NVML platform: found NVML library
+2023/04/14 14:18:27 Detected non-Tegra platform: /sys/devices/soc0/family file not found
+2023/04/14 14:18:27 Starting GRPC server for 'nvidia.com/gpu'
+2023/04/14 14:18:27 Starting to serve 'nvidia.com/gpu' on /var/lib/kubelet/device-plugins/nvidia-gpu.sock
+2023/04/14 14:18:27 Registered device plugin for 'nvidia.com/gpu' with Kubelet
+ "sharing": {
+ "timeSlicing": {}
+ }
+}
+2023/04/14 14:18:29 Retreiving plugins.
+2023/04/14 14:18:29 Detected NVML platform: found NVML library
+2023/04/14 14:18:29 Detected non-Tegra platform: /sys/devices/soc0/family file not found
+2023/04/14 14:18:29 Starting GRPC server for 'nvidia.com/gpu'
+2023/04/14 14:18:29 Starting to serve 'nvidia.com/gpu' on /var/lib/kubelet/device-plugins/nvidia-gpu.sock
+2023/04/14 14:18:29 Registered device plugin for 'nvidia.com/gpu' with Kubelet
+```
diff --git a/other-resources/archived-resources/provider-build-with-gpu/create-kubernetes-cluster.md b/other-resources/archived-resources/provider-build-with-gpu/create-kubernetes-cluster.md
new file mode 100644
index 00000000..fb335bfe
--- /dev/null
+++ b/other-resources/archived-resources/provider-build-with-gpu/create-kubernetes-cluster.md
@@ -0,0 +1,79 @@
+# Create Kubernetes Cluster
+
+## Create Cluster
+
+> _**NOTE**_ - This step should be completed from the Kubespray host only
+
+With inventory in place we are ready to build the Kubernetes cluster via Ansible.
+
+> _**NOTE**_ - the cluster creation may take several minutes to complete
+
+* If the Kubespray process fails or is interrupted, run the Ansible playbook again and it will complete any incomplete steps on the subsequent run
+
+```
+cd ~/kubespray
+
+source venv/bin/activate
+
+ansible-playbook -i inventory/akash/hosts.yaml -b -v --private-key=~/.ssh/id_rsa cluster.yml
+```
+
+## GPU Node Label (Kubernetes)
+
+Each node that provides GPUs must be labeled correctly.
+
+> _**NOTE**_ - these configurations should be completed on a Kubernetes control plane node
+
+### Label Template
+
+* Use this label template in the `kubectl label` command in the subsequent Label Application subsection below
+
+> _**NOTE**_ - please do not assign any value other than `true` to these labels. Setting the value to `false` may have unexpected consequences on the Akash provider. If GPU resources are removed from a node, simply remove the Kubernetes label completely from that node.
+
+```
+akash.network/capabilities.gpu.vendor.<vendor-name>.model.<model-name>=true
+```
+
+### Label Application
+
+#### Template
+
+> _**NOTE**_ - if you are unsure of the `<node-name>` to be used in this command - issue `kubectl get nodes` from one of your Kubernetes control plane nodes and obtain it from the `NAME` column of the command output
+
+```
+kubectl label node