detectGpu

Detects the best available GPU backend on the current system.

Detection order:

  1. NVIDIA CUDA — Runs nvidia-smi and checks for a valid GPU name.

  2. Vulkan — Runs vulkaninfo --summary and checks for a GPU device.

  3. NONE — If neither is available, falls back to CPU-only.
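The detection order above can be sketched as follows. This is a minimal, hypothetical implementation, assuming a Node.js environment, a string-union `GpuBackend` type, and that probing each backend means shelling out to `nvidia-smi` and `vulkaninfo` and inspecting their output; the helper names here are illustrative, not the actual API.

```typescript
import { execFileSync } from "node:child_process";

type GpuBackend = "cuda" | "vulkan" | "none";

// Run a probe command, returning its stdout or null if it is
// missing or exits with an error.
function probe(cmd: string, args: string[]): string | null {
  try {
    return execFileSync(cmd, args, {
      encoding: "utf8",
      stdio: ["ignore", "pipe", "ignore"],
    });
  } catch {
    return null;
  }
}

// Hypothetical sketch of detectGpu's detection order.
function detectGpuSketch(log: (msg: string) => void = () => {}): GpuBackend {
  // 1. NVIDIA CUDA: nvidia-smi prints a GPU name when a device is present.
  const smi = probe("nvidia-smi", ["--query-gpu=name", "--format=csv,noheader"]);
  if (smi && smi.trim().length > 0) {
    log(`CUDA GPU detected: ${smi.trim()}`);
    return "cuda";
  }

  // 2. Vulkan: vulkaninfo --summary lists device names for available GPUs.
  const vk = probe("vulkaninfo", ["--summary"]);
  if (vk && /deviceName/i.test(vk)) {
    log("Vulkan GPU detected");
    return "vulkan";
  }

  // 3. NONE: neither backend is available, fall back to CPU-only.
  log("No GPU backend detected; falling back to CPU");
  return "none";
}
```

Because each probe swallows failures and returns `null`, the fallthrough to `"none"` is reached on systems with neither tool installed, so the function never throws.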

macOS is excluded because llama.cpp uses Metal natively: the standard macOS binary already includes Metal/GPU support, so no separate build is needed.

Returns

The detected GpuBackend.

Parameters

log

Log function for diagnostic output.