This benchmark structure is designed to be extensible: you can add Q&A datasets for any XR platform and toolkit. However, this repository currently includes only one dataset, with Unity as the platform and XRI version 2 as the toolkit.
The repository also includes a Python utility script for loading, validating, and querying the dataset.
- benchmark.json – The benchmark dataset in JSON format.
- benchmark_reader.py – Python code for reading and validating the benchmark (see the sketch below).
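The exact API of benchmark_reader.py is not documented here, so the following is a minimal sketch, using only the standard json module, of how loading and validating the file might look. The load_benchmark function and REQUIRED_TOP_LEVEL_KEYS are hypothetical names for illustration, not the script's actual interface.

```python
import json

# Hypothetical validation rule: the keys we expect at the top of benchmark.json,
# based on the structure documented below.
REQUIRED_TOP_LEVEL_KEYS = {"benchmark_info", "platforms"}

def load_benchmark(path="benchmark.json"):
    """Load the benchmark file and check that its top-level keys are present."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    missing = REQUIRED_TOP_LEVEL_KEYS - data.keys()
    if missing:
        raise ValueError(f"benchmark.json is missing top-level keys: {sorted(missing)}")
    return data
```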
The benchmark is organized as a hierarchy:
- benchmark_info – General metadata.
- platforms[] – E.g., Unity, Web(Mock).
  - toolkits[] – E.g., XRIv2, MRTK3(Mock), A-Frame(Mock).
    - dataset – List of Q&A pairs, with optional metadata.
{
  "benchmark_info": {
    "name": "XRI-benchmark",
    "description": "Text-based, Q&A Benchmark for Virtual Reality applications...",
    "version": "0.1",
    "date": "2024-09-15",
    "author": "CG3HCI (https://cg3hci.dmi.unica.it/lab/)",
    "email": "[email protected]"
  },
  "platforms": [
    {
      "name": "Unity",
      "toolkits": [
        {
          "name": "XRIv2",
          "dataset": [
            {
              "question": "What is ... ?",
              "answer": "... is a ...",
              "metadata1": "A value",
              ...
              "metadataN": "Another value"
            }
          ]
        }
      ]
    }
  ]
}
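Given this hierarchy, querying amounts to walking platforms, then toolkits, then reading the dataset list. The sketch below shows one way to do that; get_qa_pairs is a hypothetical helper for illustration, not a function exported by benchmark_reader.py.

```python
import json

def get_qa_pairs(data, platform_name, toolkit_name):
    """Return the list of Q&A dicts for one platform/toolkit pair, or [] if absent."""
    for platform in data.get("platforms", []):
        if platform.get("name") != platform_name:
            continue
        for toolkit in platform.get("toolkits", []):
            if toolkit.get("name") == toolkit_name:
                return toolkit.get("dataset", [])
    return []

# Example: print every question/answer in the dataset shipped with this repository.
with open("benchmark.json", encoding="utf-8") as f:
    data = json.load(f)
for pair in get_qa_pairs(data, "Unity", "XRIv2"):
    print(pair["question"], "->", pair["answer"])
```

Because extra keys on each Q&A entry (metadata1 ... metadataN) are optional, consumers should read them with pair.get(...) rather than assuming they exist.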