This repo contains participant information for the Countdown Innovation Challenge event. Please use the information in this repo to explore the data and develop operational apps.
EMBRACE A SUSTAINABLE FUTURE AND HARNESS YOUR INNOVATIVE IDEAS TO CREATE A GREENER ENVIRONMENT FOR ALL

The focus for the event will be anchored to our strategic theme of Good and Green. A few examples of topics that align with the theme include:
- Reduction of food waste from farm to fork
- Increased energy efficiency in stores
- Decarbonisation of our supply chain
- Reduction of water usage
- Other environmental sustainability areas
Please use this repo to explore further ideas and as a source of examples for your team.
Please find the project links for your team.
Every team will be provisioned with a dedicated GCP project with project owner permission. The project naming convention is `<team_name>-<random_suffix>`.
By default, the following services/APIs will be enabled for these team projects. Other services/APIs can be enabled when required (see the example command after this list).
- "aiplatform.googleapis.com"
- "artifactregistry.googleapis.com"
- "bigquery.googleapis.com"
- "compute.googleapis.com"
- "cloudbuild.googleapis.com"
- "cloudfunctions.googleapis.com"
- "datacatalog.googleapis.com"
- "dataflow.googleapis.com"
- "datastudio.googleapis.com"
- "dlp.googleapis.com"
- "eventarc.googleapis.com"
- "logging.googleapis.com"
- "sourcerepo.googleapis.com"
- "run.googleapis.com"
- "pubsub.googleapis.com"
- "monitoring.googleapis.com"
- "notebooks.googleapis.com"
By default, each team will be provided a data landing zone Cloud Storage bucket. The naming convention is `<team_name>-bigquery-csv-import`. Please note that we only support CSV format at the moment, and those CSV files must be uploaded to the bucket root. We are working on supporting more data formats and nested directory structures.
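As a minimal sketch, a CSV file can be copied to the bucket root with gsutil; the file name below is only an illustration:

```shell
# Upload a CSV file to the root of your team's landing zone bucket.
# sales_data.csv is an illustrative file name.
gsutil cp sales_data.csv gs://<team_name>-bigquery-csv-import/
```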
By default, each team will be provided a Cloud Source Repository. Please use this command to clone it to your local machine: `gcloud source repos clone countdown-<team_name>-repo --project=<your_project_id>`.
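A minimal workflow sketch follows, assuming the repository name `countdown-<team_name>-repo` and your team project ID; the directory, commit message, and branch name are illustrative:

```shell
# Clone the team repository into a local directory.
gcloud source repos clone countdown-<team_name>-repo --project=<your_project_id>

# Commit and push changes back to the Cloud Source Repository.
cd countdown-<team_name>-repo
git add .
git commit -m "Add initial data exploration notebook"
git push origin main   # or master, depending on the repository's default branch
```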
By default, each team will be provisioned a Vertex AI managed notebook instance (JupyterLab notebook) with owner permission. The imported datasets should be accessible via the GCS bucket browser on the instance.
These datasets are available to teams, either loaded into your BigQuery instance or stored in the storage bucket `gs://<team_name>-bigquery-csv-import`.
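As an example, a CSV from the landing zone bucket can be loaded into a BigQuery table with the bq CLI; the dataset, table, and file names below are illustrative placeholders, not pre-created resources:

```shell
# Create a BigQuery dataset (if it does not already exist).
bq mk --dataset hackathon_data

# Load a CSV from the landing zone bucket into a table, letting BigQuery infer the schema.
bq load --autodetect --source_format=CSV \
  hackathon_data.sales \
  gs://<team_name>-bigquery-csv-import/sales_data.csv
```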
The following custom datasets are available in the GCS bucket:
- https://www.kaggle.com/datasets/manjeetsingh/retaildataset
- https://data.world/xfu022/australia-grocery-product-dataset
- https://data.world/hxchua/waste-in-singapore
- https://www.kaggle.com/datasets/wastebase/plastic-bottle-waste
- https://data.world/makeovermonday/2020w16
- https://www.kaggle.com/datasets/skyliecampos/food-and-empact
In addition to the supplied datasets, Google also has a repository of sample datasets, available as BigQuery Public Datasets.
Please feel free to add them via the "ADD DATA" button at the top left of your BigQuery Explorer.
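Public datasets can also be queried directly from the command line; the table used below (`bigquery-public-data.samples.shakespeare`) is just one well-known example:

```shell
# Query a BigQuery public dataset directly from the CLI.
bq query --use_legacy_sql=false \
  'SELECT word, SUM(word_count) AS total
   FROM `bigquery-public-data.samples.shakespeare`
   GROUP BY word
   ORDER BY total DESC
   LIMIT 10'
```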
The following framework boilerplates are available as simple one-click deployments for hosting applications on managed Cloud Run. A manual deployment sketch follows the table.
| Framework | Description | Deploy |
|---|---|---|
| [React.js](boilerplate-react) | React Sample | |
| Sapper.js | Sapper Sample | |
| Svelte Kit | SvelteKit with TailwindCSS | |
| Nuxt.js | Nuxt.js with TailwindCSS and TypeScript | |
| Next.js | Next.js with TailwindCSS | |
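If you prefer to deploy from source manually instead of using the one-click buttons, something like the following should work from inside a boilerplate directory; the service name and region are illustrative assumptions:

```shell
# Build and deploy the app in the current directory to Cloud Run.
# my-green-app and australia-southeast1 are illustrative placeholders.
gcloud run deploy my-green-app \
  --source . \
  --region=australia-southeast1 \
  --allow-unauthenticated \
  --project=<your_project_id>
```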
Additional information and resources are available at the links below: