
llama : add support for Deepseek-R1-Qwen distill model (partial) #101

Triggered via push: January 28, 2025 21:22
Status: Failure
Total duration: 28m 25s
Artifacts: 5

docker.yml

on: push
Matrix: Push Docker image to Docker Hub

Annotations

7 errors
Push Docker image to Docker Hub (full, .devops/full.Dockerfile, linux/amd64,linux/arm64)
buildx failed with: ERROR: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c make -j$(nproc)" did not complete successfully: exit code: 2
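For context, the failing step cross-builds the arm64 image under QEMU user-mode emulation, where `make -j$(nproc)` runs inside the emulator and an exit code of 2 from make indicates a recipe (typically compilation) failure. A minimal sketch of reproducing the multi-arch build locally is below; the image tag and the exact buildx invocation are assumptions, since only the Dockerfile path and platforms appear in the log.

```shell
# Hypothetical local reproduction of the failing matrix entry.
# Assumes docker with the buildx plugin and binfmt/QEMU handlers installed.
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Create and select a builder that supports multi-platform builds.
docker buildx create --name multiarch --use

# Rebuild the "full" image for both platforms from the workflow's Dockerfile.
# The tag "llama.cpp:full" is illustrative, not taken from the log.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -f .devops/full.Dockerfile \
  -t llama.cpp:full \
  .
```

Building for the non-native platform this way runs the compiler under emulation, so a source-level error (or an emulator-specific toolchain issue) surfaces only in the arm64 half of the build; restricting `--platform` to one architecture at a time helps isolate which side fails.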
Push Docker image to Docker Hub (server-cuda, .devops/llama-server-cuda.Dockerfile, linux/amd64)
The job was canceled because "full__devops_full_Dockerf" failed.
Push Docker image to Docker Hub (light-cuda, .devops/llama-cli-cuda.Dockerfile, linux/amd64)
The job was canceled because "full__devops_full_Dockerf" failed.
Push Docker image to Docker Hub (full-cuda, .devops/full-cuda.Dockerfile, linux/amd64)
The job was canceled because "full__devops_full_Dockerf" failed.

Artifacts

Produced during runtime
Name                                      Size
nomic-ai~llama.cpp~6HDLPV.dockerbuild     103 KB
nomic-ai~llama.cpp~BXBQEN.dockerbuild     97.6 KB
nomic-ai~llama.cpp~SIN3AN.dockerbuild     104 KB
nomic-ai~llama.cpp~SK5HT6.dockerbuild     79.8 KB
nomic-ai~llama.cpp~WGRDPO.dockerbuild     98.3 KB