Home assistant with local ollama AI

I’m firmly in the absorbing-info-and-planning stage, i.e. the “cheap to change my mind now” stage.

With regard to this:

I’m looking at P40 GPUs. Pros: double the VRAM (24 GB vs the 3060’s 12 GB), about the same cost, and it’s a passively cooled server card physically designed to fit chassis like the R720. Cons: older Pascal architecture, and its FP16 throughput is heavily cut down, so inference generally has to run in FP32 or quantized formats, which may cause limitations.

Basically, it looks like the 3060 has a slight edge in speed, while the P40’s extra VRAM gives it the edge in the size of model it can hold.
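To make the “size of model it can hold” tradeoff concrete, here’s a rough back-of-the-envelope sketch I’ve been using. The numbers are assumptions, not measurements: weights at roughly (parameter count × bits-per-weight ÷ 8) bytes, plus a guessed flat overhead for KV cache and runtime buffers (real overhead varies with context length and quantization).

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for a quantized model.

    params_billion: model size, e.g. 13 for a 13B model.
    bits_per_weight: ~4-5 for common Q4/Q5 quants, 16 for fp16.
    overhead_gb: assumed flat allowance for KV cache/buffers (a guess).
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb


def fits(params_billion: float, bits_per_weight: float, vram_gb: float) -> bool:
    """True if the estimated footprint fits in the card's VRAM."""
    return estimate_vram_gb(params_billion, bits_per_weight) <= vram_gb


# 13B at ~5 bits/weight: ~9.6 GB estimated, fits either card.
print(fits(13, 5.0, 24))  # P40, 24 GB
print(fits(13, 5.0, 12))  # 3060, 12 GB
# 33B at ~5 bits/weight: ~22 GB estimated, only the P40 has room.
print(fits(33, 5.0, 24))
print(fits(33, 5.0, 12))
```

By this crude estimate, a quantized 33B-class model is the sort of thing the P40’s 24 GB opens up and the 3060’s 12 GB rules out, which is the whole appeal for me.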

Currently leaning toward P40. Still reading and learning though.

Thoughts?