

So LXC containers and not VMs
Could you explain further with a bit more detail? I haven't looked at this in a while, but back then the options were virtiofs or NFS
So you mount the pool to each VM that needs the shared data? AFAIK ZFS is not made for concurrent access from multiple hosts
Scrap that - after upgrading it went bonkers and will always use one of my "knowledge" collections no matter what I try. The web search fails even with DuckDuckGo as the engine. It always seemed like the UI was made by unskilled labour, but this is just horrible. 2/10, not recommended
Possibly. Been running it since last summer, but like I say the small models don't do much good for me. I have tried Llama 3.1, OLMo 2, DeepSeek-R1 in a few variants, Qwen2, Qwen2.5-Coder, Mistral, CodeLlama, StarCoder2, Nemotron-Mini, Llama 3.2, Gemma 2 and LLaVA.
I use Perplexity and Mistral as paid services, with much better quality. Open WebUI is great though, but my hardware is lacking
Edit: saw that my mate is still using it a bit, so I'll update Open WebUI from 0.4 to 0.5.20 for him. He's a bit anxious about sending data to the cloud, so he doesn't mind the lower quality
I have the same setup, but it's not very usable as my graphics card has 6 GB of RAM. I want one with 20 or 24 GB, as the 6B models are a pain and the tiny ones don't give me much.
Ollama was pretty easy to set up on Windows, and it's easy to download and test the models Ollama has available
It always predicts the next word based on its tokenisation, training data and context handling. So accuracy is all there is.
People are misled by the name. It's not making stuff up, it's just less accurate
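The point above can be illustrated with a toy next-token step. This is a sketch, not a real model: the vocabulary and the scores (logits) are made up for illustration; a real LLM produces one such score per token in its vocabulary and the "hallucination" is just this step picking a plausible but wrong continuation.

```python
import math

# Made-up vocabulary and logits for the prompt "The capital of France is ..."
vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.5, 0.1]  # higher score = model considers it more likely

# Softmax turns raw scores into a probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the most probable token as the next word.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_token)  # → Paris
```

Every output token comes from a step like this, so "making stuff up" and "being right" are the same mechanism with different probabilities behind it.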
I don't get it - are you trying to mimic VMs with your Docker containers? Docker works great using the normal way of exposing ports from the internal Docker network through the host. Making technology work in ways it wasn't designed for usually gives you a hard-to-maintain setup
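For reference, the normal way is just a port mapping. A minimal docker-compose fragment as a sketch (service and image names are only examples; any image listening on port 80 inside the container would do):

```yaml
services:
  web:
    image: nginx        # example image, serves HTTP on port 80 inside the container
    ports:
      - "8080:80"       # host port 8080 -> container port 80 on the internal Docker net
```

Clients talk to the host on 8080 and Docker forwards the traffic into the container network, so there is no need to treat the container like a VM with its own routable address.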
You have a mighty big hand if you can reach L and A with the same one
Does it even fucking matter what’s banned in what echo chamber??