Installation with pip? #5
Hi @alberthli, I think there are several steps that need to be done to install this package:
Here's a modified `CMakeLists.txt`:

```cmake
cmake_minimum_required(VERSION 3.23)
project(volume_rendering_jax LANGUAGES CXX CUDA)

# use `cmake "-DCMAKE_CUDA_ARCHITECTURES=61;62;75"` to build for compute capabilities 61, 62, and 75
# set(CMAKE_CUDA_ARCHITECTURES "all")
message(STATUS "Enabled CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
message(STATUS "Using CMake version ${CMAKE_VERSION}")

set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} --extended-lambda")

find_package(Python COMPONENTS Interpreter Development REQUIRED)
find_package(pybind11 CONFIG REQUIRED)
find_package(fmt REQUIRED)

include_directories(${CMAKE_CURRENT_LIST_DIR}/lib)
include_directories(${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES})

pybind11_add_module(
  tcnnutils
  ${CMAKE_CURRENT_LIST_DIR}/lib/impl/hashgrid.cu
  ${CMAKE_CURRENT_LIST_DIR}/lib/ffi.cc
)

# e.g. `cmake -DTCNN_MIN_GPU_ARCH=61`
message(STATUS "TCNN_MIN_GPU_ARCH=35")
target_compile_definitions(tcnnutils PUBLIC -DTCNN_MIN_GPU_ARCH=35)
target_link_libraries(tcnnutils PRIVATE tiny-cuda-nn fmt::fmt)

install(TARGETS tcnnutils DESTINATION jaxtcnn)
```

To add all required headers, you can first clone tiny-cuda-nn and symlink its headers into the repository:

```shell
$ git clone https://github.com/nvlabs/tiny-cuda-nn.git --recursive
$ cd tiny-cuda-nn
$ git checkout v1.6
$ cd ..
$ git clone https://github.com/blurgyy/jaxngp.git
$ cd jaxngp/deps/jax-tcnn/lib
$ ln -s /path/to/tiny-cuda-nn/include/tiny-cuda-nn /path/to/tiny-cuda-nn/dependencies/* .
```

You should then build tiny-cuda-nn to obtain the static library.
@blurgyy Thanks for the edits/instructions - I finally got back around to looking at this and successfully built … I have a follow-up question: my goal is to train a … However, when I initialize … Do you have any idea where this parameter discrepancy comes from in the hash grid implementation? Could this be resolved just by building the most recent version of …?

EDIT: In my fork of this branch, I've made some minor modifications to allow … To initialize the encoder, I'm using the parameters …
Hi @alberthli, I recently encountered a use case where I need to calibrate the JAX's … I used the following parameters to initialize and train a HashGridEncoder from the JAX side, and used tcnn's PyTorch bindings to load it; the parameter count and per-level hashgrid output were checked to match (there is still an absolute error, but it is no larger than 1e-3).

The … I hope this helps.
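For anyone comparing parameter counts between the two implementations, the sketch below estimates the size of an Instant-NGP-style multiresolution hash grid. The names `L`, `T`, `F`, `N_min`, and `per_level_scale` follow the Instant-NGP paper, not this package's API, and the exact rounding differs slightly between tiny-cuda-nn versions (tiny-cuda-nn rounds each level's entry count up to a multiple of 8, which alone can produce small parameter-count discrepancies against a naive count):

```python
import math

def hashgrid_param_count(L=16, T=2**19, F=2, N_min=16,
                         per_level_scale=1.39, dim=3):
    """Rough estimate of the parameter count of an Instant-NGP-style
    multiresolution hash grid (sketch, not tiny-cuda-nn's exact logic)."""
    total = 0
    for level in range(L):
        # resolution grows geometrically across levels
        res = int(math.floor(N_min * per_level_scale ** level))
        # a dense grid is used when it fits; otherwise the hash table caps it
        entries = min((res + 1) ** dim, T)
        # tiny-cuda-nn rounds each level's entry count up to a multiple of 8
        entries = -(-entries // 8) * 8
        total += entries * F
    return total

print(hashgrid_param_count())
```

Comparing this estimate against the counts reported by both bindings for the same hyperparameters can show whether a discrepancy is a rounding artifact or a genuinely different configuration.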
Hi, great work on this package!
I'm looking to use this package in conjunction with other dependencies in a larger project and have no experience with nix. Since there are multiple top-level packages, it's not clear to me how to correctly install `jaxngp`. Further, when I try to install from a subdirectory (e.g., trying to install only `jax-tcnn`), I'm unable to do so. For example, when trying to install only the `jax-tcnn` subpackage using `pip install "git+https://github.com/blurgyy/jaxngp.git#egg=jax-tcnn&subdirectory=deps/jax-tcnn"`, I get the error …

Some guidance on installation would be helpful! If it helps, I only really need a `jax` version of the TCNN hash encoder.
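For readers who, like the asker, mainly need the hash-encoding idea rather than the full package, here is a minimal single-level lookup in NumPy. It uses the spatial hash from the Instant-NGP paper but snaps each point to its nearest grid vertex instead of trilinearly interpolating the 8 cell corners, so it is only an illustration of the indexing scheme, not jaxngp's or tiny-cuda-nn's actual implementation:

```python
import numpy as np

def hash_encode_level(xyz, table, resolution):
    """Illustrative single-level hash-grid lookup (nearest-vertex variant).
    `xyz`: (N, 3) points in [0, 1); `table`: (T, F) learned feature table."""
    T = np.uint64(table.shape[0])
    # per-axis primes from the Instant-NGP spatial hash
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    # snap each point to its nearest grid vertex at this resolution
    idx = np.floor(xyz * resolution).astype(np.uint64)          # (N, 3)
    # hash: xor of (coordinate * prime) per axis, modulo the table size
    h = idx * primes                                            # wraps mod 2**64
    key = (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % T
    return table[key.astype(np.int64)]                          # (N, F)

rng = np.random.default_rng(0)
table = rng.standard_normal((1024, 2)).astype(np.float32)
feats = hash_encode_level(rng.random((5, 3)), table, resolution=16)
print(feats.shape)  # (5, 2)
```

A full encoder stacks many such levels at geometrically increasing resolutions and concatenates the per-level features.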