kill off GPU artifacts for now

yuzawa-san committed Nov 25, 2024
1 parent 82d3a3a commit 31cf1c2

Showing 4 changed files with 6 additions and 12 deletions.
9 changes: 3 additions & 6 deletions README.md
Original file line number Diff line number Diff line change
@@ -43,9 +43,7 @@ A collection of native libraries with CPU support for several common OS/architecture combinations

#### onnxruntime-gpu

[![maven](https://img.shields.io/maven-central/v/com.jyuzawa/onnxruntime-gpu)](https://search.maven.org/artifact/com.jyuzawa/onnxruntime-gpu)

A collection of native libraries with GPU support for several common OS/architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like `osx-x86_64` to provide specific support.
See https://github.com/yuzawa-san/onnxruntime-java/issues/258

### In your library

@@ -58,7 +56,7 @@ This puts the burden of providing a native library on your end user.
There is an example application in the `onnxruntime-sample-application` directory.
The library should use `onnxruntime` as an implementation dependency.
The application needs to have access to the native library.
You have the option of providing it via a runtime dependency using either a classifier variant from `onnxruntime-cpu` or `onnxruntime-gpu`
You have the option of providing it via a runtime dependency using a classifier variant from `onnxruntime-cpu`.
Otherwise, the Java library path will be used to load the native library.
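
The dependency wiring described above can be sketched in Gradle roughly as follows (the version `1.X.0` and the `osx-x86_64` classifier are placeholders; substitute the release and platform you actually target):

```groovy
dependencies {
    // Compile against the bindings only; this artifact ships no native code.
    implementation "com.jyuzawa:onnxruntime:1.X.0"
    // Optionally provide platform-specific CPU natives at runtime via a classifier.
    runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"
}
```

If no runtime artifact is present, the native library is resolved from the Java library path instead.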


@@ -74,7 +72,6 @@ Since this uses a native library, this will require the runtime to have the `--e
### Execution Providers

Only those which are exposed in the C API are supported.
The `onnxruntime-gpu` artifact supports CUDA and TensorRT, since those are built off of the GPU artifacts from the upstream project.
If you wish to use another execution provider which is present in the C API, but not in any of the artifacts from the upstream project, you can choose to bring your own onnxruntime shared library to link against.
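
A minimal sketch of the bring-your-own-library path: load a shared library from an explicit location before using the bindings. The `ONNXRUNTIME_LIB` environment variable and the example path are assumptions for illustration, not part of this project's API; `System.load` is standard Java.

```java
// Sketch: explicitly loading a user-provided onnxruntime shared library.
// ONNXRUNTIME_LIB is a hypothetical environment variable holding the
// absolute path to the library (e.g. /opt/onnxruntime/lib/libonnxruntime.so).
public class BringYourOwnRuntime {

    // Returns true if a custom library was loaded, false to fall back to
    // the normal java.library.path resolution.
    static boolean loadCustomRuntime() {
        String custom = System.getenv("ONNXRUNTIME_LIB");
        if (custom == null || custom.isEmpty()) {
            return false;
        }
        // Throws UnsatisfiedLinkError if the path does not point to a loadable library.
        System.load(custom);
        return true;
    }

    public static void main(String[] args) {
        System.out.println("custom runtime loaded: " + loadCustomRuntime());
    }
}
```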

## Versioning
@@ -86,4 +83,4 @@ Upstream major version changes will typically be major version changes here.
Minor version will be bumped for smaller, but compatible changes.
Upstream minor version changes will typically be minor version changes here.

The `onnxruntime-cpu` and `onnxruntime-gpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
The `onnxruntime-cpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
2 changes: 2 additions & 0 deletions build.gradle
@@ -282,6 +282,7 @@ publishing {
artifact tasks.named("osArchJar${it}")
}
}
/*
onnxruntimeGpu(MavenPublication) {
version = ORT_JAR_VERSION
artifactId = "${rootProject.name}-gpu"
@@ -293,6 +294,7 @@ publishing {
artifact tasks.named("osArchJar${it}")
}
}
*/
onnxruntime(MavenPublication) {
from components.java
pom {
2 changes: 0 additions & 2 deletions onnxruntime-sample-application/build.gradle
@@ -9,8 +9,6 @@ dependencies {
// For the application to work, you will need to provide the native libraries.
// Optionally, provide the CPU libraries (for various OS/Architecture combinations)
// runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"
// Optionally, provide the GPU libraries (for various OS/Architecture combinations)
// runtimeOnly "com.jyuzawa:onnxruntime-gpu:1.X.0:osx-x86_64"
// Alternatively, do nothing and the Java library path will be used
}

5 changes: 1 addition & 4 deletions src/main/java/module-info.java
@@ -11,12 +11,9 @@
* <li>The {@code onnxruntime-cpu} artifact provides support for several common operating systems / CPU architecture
* combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like
* {@code osx-x86_64} to provide specific support.
* <li>The {@code onnxruntime-gpu} artifact provides GPU (CUDA) support for several common operating systems / CPU
* architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers
* like {@code osx-x86_64} to provide specific support.
* <li>The {@code onnxruntime} artifact contains only bindings and no libraries. This means the native library will need
 * to be provided. Use this artifact as a compile dependency if you want to allow your project's users to use
* {@code onnxruntime-cpu}, {@code onnxruntime-gpu}, or their own native library as dependencies provided at runtime.
* {@code onnxruntime-cpu} or their own native library as dependencies provided at runtime.
* </ul>
*
* @since 1.0.0
