Local AI - Beginner's quest

Hello. Thanks for clicking and offering some of your time.

I am building a dedicated local machine and am wondering if there is a guide on what OS to use so that I can host/run “stuff” (sorry, I don’t know what that stuff is yet; I’m still learning). The machine is a 16-core Ryzen with 64 GB RAM and an RTX 5060 12 GB. A fairly good starting point without costing $10k+!

My aim is to learn about inference, training, and eventually multi-agent setups. I wanted to invest in my own hardware so I can have unlimited use locally, even if it “takes a bit longer”, and also maybe share it with friends if they want.

I am pretty overwhelmed by all the different types of software out there, and by models that run on x and y but not on z, so I don’t know what is best to start with. I want to develop on my own machine and connect to this remote one to do the heavy lifting, or log in to the remote machine to run dedicated apps there.

Much appreciated for reading this far, and sorry for being so vague. I spent some time looking and found llama builds, but they seem quite new and there is limited info there. Searching the web, I think I’m asking the wrong questions and not finding much.


When building a new PC for AI, if you can choose the OS, you should absolutely go with Linux. Mac is the next best option. In short, the machine should be Unix-based…

I use Windows myself because I want to work in my usual environment, but you should expect backend library support to lag several months behind Linux, and that’s the better case…
Some critical libraries still don’t offer full Windows support at all.
For testing, I mostly use Linux in the cloud.

Anyway, avoid Windows as the OS for running generative AI if possible… You could run Linux via WSL2, but if you’re going to do that, you might as well just use native Linux.

Regarding the Blackwell GPU, the only part of your build I was slightly concerned about: it’s not much of an issue on either Windows or Linux as of today. However, note that you’ll need a recent version of PyTorch (built against a new enough CUDA toolkit) and compatible software.
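
For example, once the OS is set up, a quick sanity check like the sketch below will tell you whether your PyTorch build actually has kernels for a Blackwell card. This is a minimal sketch, not an official install guide: the cu128 wheel index in the comment is an assumption based on current PyTorch releases, so check the official install selector at pytorch.org for the exact command.

```python
# Minimal sanity check for Blackwell (RTX 50-series) support.
# Assumes a recent PyTorch wheel built against CUDA 12.8, installed with
# something like (verify against the official pytorch.org selector):
#   pip install torch --index-url https://download.pytorch.org/whl/cu128
import torch

print("PyTorch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Blackwell consumer cards report compute capability (12, 0);
    # older wheels without sm_120 kernels will warn or error out here.
    print("Compute capability:", torch.cuda.get_device_capability(0))
    # Run a tiny matmul on the GPU to confirm the kernels actually work.
    x = torch.randn(512, 512, device="cuda")
    print("Matmul OK:", (x @ x).sum().item())
```

If the capability prints as (12, 0) and the matmul runs without warnings, your stack should be ready for most local inference tools.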