detectGpu
Detects the best available GPU backend on the current system.
Detection order:
1. NVIDIA CUDA — runs `nvidia-smi` and checks the output for a valid GPU name.
2. Vulkan — runs `vulkaninfo --summary` and checks the output for a GPU device.
3. NONE — if neither is available, falls back to CPU-only.
macOS is excluded because llama.cpp uses Metal natively: the standard macOS binary already includes Metal/GPU support, so no separate build is needed.
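The detection order above can be sketched in Kotlin. This is a minimal, hedged sketch, not the actual implementation: the `GpuBackend` enum variants (`CUDA`, `VULKAN`, `NONE`) are assumed from the description, and the `runsOk` helper is a hypothetical utility introduced here for illustration.

```kotlin
enum class GpuBackend { CUDA, VULKAN, NONE }

/**
 * Detects the best available GPU backend, trying CUDA first, then Vulkan,
 * and falling back to CPU-only (NONE).
 */
fun detectGpu(log: (String) -> Unit = ::println): GpuBackend {
    // CUDA check: nvidia-smi prints the GPU name when a working driver is present.
    if (runsOk("nvidia-smi", "--query-gpu=name", "--format=csv,noheader")) {
        log("CUDA GPU detected via nvidia-smi")
        return GpuBackend.CUDA
    }
    // Vulkan check: vulkaninfo --summary lists physical devices. A real
    // implementation would parse the output for a GPU device entry; this
    // sketch only checks that the tool ran and produced output.
    if (runsOk("vulkaninfo", "--summary")) {
        log("Vulkan GPU detected via vulkaninfo")
        return GpuBackend.VULKAN
    }
    log("No GPU backend detected; falling back to CPU-only")
    return GpuBackend.NONE
}

/** Hypothetical helper: true if the command exits 0 with non-blank output. */
private fun runsOk(vararg cmd: String): Boolean = try {
    val process = ProcessBuilder(*cmd).redirectErrorStream(true).start()
    val output = process.inputStream.bufferedReader().readText()
    process.waitFor() == 0 && output.isNotBlank()
} catch (e: Exception) {
    // Command not found or not executable on this system.
    false
}
```

If either tool is missing from `PATH`, `ProcessBuilder.start()` throws, which the helper treats as "backend unavailable", so detection degrades gracefully to `NONE`.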
Return
The detected GpuBackend.
Parameters
log
Log function for diagnostic output.