Actions: Adriankhl/llama.cpp

Showing runs from all workflows: 70 workflow runs


ggml: run llama_get_device_count before llama_default_buffer_type_off…
Python check requirements.txt #3: Commit bd00902 pushed by Adriankhl
May 27, 2024 02:27 3m 26s fix_vulkan_device
vulkan: fix MSVC debug build by adding the _ITERATOR_DEBUG_LEVEL=0 de…
Python check requirements.txt #2: Commit 4ed6a96 pushed by Adriankhl
May 21, 2024 01:41 3m 32s fix-msvc-vulkan-debug
llava-cli: fix base64 prompt
flake8 Lint #2: Commit 2cafc20 pushed by Adriankhl
May 13, 2024 03:32 19s fix-llava
llava-cli: fix base64 prompt
Code Coverage #2: Commit 2cafc20 pushed by Adriankhl
May 13, 2024 03:32 1m 59s fix-llava
vulkan: fix ggml_soft_max_ext parameter
Python check requirements.txt #1: Commit 7742d72 pushed by Adriankhl
May 12, 2024 11:52 4m 30s khl
vulkan: fix ggml_soft_max_ext parameter
flake8 Lint #1: Commit 7742d72 pushed by Adriankhl
May 12, 2024 11:52 18s khl
vulkan: fix ggml_soft_max_ext parameter
Code Coverage #1: Commit 7742d72 pushed by Adriankhl
May 12, 2024 11:52 1m 57s khl