I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_K quantized .gguf files. Naturally, my experience with the “fidelity” of my models re: hallucinations would be better than that of people running more aggressively quantized ones.
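For what it's worth, here's a minimal sketch of that kind of setup, assuming the llama-cpp-python bindings (one of several GGUF-capable runtimes); the model path and parameters are hypothetical placeholders, not a specific recommendation:

```python
# Minimal sketch: loading a high-bit (Q8_0) GGUF quant with llama-cpp-python.
# Assumes llama-cpp-python was installed with GPU (e.g. CUDA) support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-model.Q8_0.gguf",  # hypothetical path; use your own .gguf
    n_gpu_layers=-1,  # offload all layers to the GPU (given enough VRAM)
    n_ctx=8192,       # context window; raise it if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF file?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```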
Self-hosted LLMs are the way.
deleted by creator
Oof, ok, my apologies.
I actually think that (presently) self-hosted LLMs are much worse for hallucination.