LocalAI version:
master / 2.1.20
Environment, CPU architecture, OS, and Version:
Docker environment on Debian 12, with Arc A380 GPU passthrough
Describe the bug
Running a model with the llama backend, in my case Hermes-2-Pro-Mistral-7B.Q4_0.gguf, gives the error llama_model_load: can not find preferred GPU platform.
To Reproduce
docker-compose file:
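The compose file itself was not captured in this report. A minimal sketch of what a LocalAI service with Intel GPU passthrough typically looks like is below; the image tag, volume paths, and device mapping are assumptions for illustration, not the reporter's actual file:

```yaml
# Hypothetical docker-compose.yaml for LocalAI with an Intel Arc GPU.
# Image tag and paths are assumptions; the reporter's real file was not captured.
services:
  localai:
    image: quay.io/go-skynet/local-ai:master-sycl-f16   # assumed SYCL-enabled tag
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models        # host directory holding the .gguf model
    devices:
      - /dev/dri:/dev/dri             # Intel GPU device nodes passed into the container
    environment:
      - DEBUG=true
```

The devices mapping of /dev/dri is what exposes the Arc A380 to the container; without it the backend would only see CPU platforms.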
Use the above-mentioned model (also tested with others) and make a chat request.
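A chat request against LocalAI uses the OpenAI-compatible endpoint; a sketch of the kind of request that triggers the error (host, port, and model name are assumptions based on the report) might be:

```shell
# Hypothetical chat request to a local LocalAI instance; assumes the
# default port 8080 and the model name from this report.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Hermes-2-Pro-Mistral-7B.Q4_0.gguf",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```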
Logs
Additional context