• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • You mean drag them from platforms that have a vested interest in keeping them locked in and squashing competitors like the Fediverse?

    From platforms that spend billions on engagement-optimization algorithms whose sole purpose is keeping users addicted, basically with government and business backing?

    Look, I’m optimistic about the Fediverse; it’s a great refuge from the hellscape that is the internet. But you can’t make people want to change. I’ve learned this IRL, and I see it with (for example) persecuted people continuing to use Twitter even though its owner basically has a gun to their heads. There’s a big gulf between being a fantastic refuge and taking the internet from Facebook and Google. Even if every phone on the planet had an easy button to switch to Fediverse alternatives in one click, many would not press it, and that button is an utter fantasy anyway.





  • One problem is volunteers and critical mass.

    Open source “hacks” need a big pool of people who want something, out of which a few brilliant souls emerge to develop it in their free time. That pool has to be at least proportional to the size of the problem.

    This kinda makes sense for robot vacuums: a lot of people have them, and the cloud service is annoying, relatively simple, and not life-critical.

    Teslas are a whole different deal. They are very expensive, and fewer people own them. Replicating even part of the cloud API is a completely different scope of work, and the pool of Tesla owners willing to dedicate their free time to it is just… smaller.

    Also, for many, I think buying a Tesla was an implicit vote of trust in the company and its software. Someone cynical about its cloud dependence is much less likely to end up owning an entire luxury automobile in the first place.





  • To go into more detail:

    • Exllama is faster than llama.cpp, all other things being equal.

    • Exllama’s quantized KV cache implementation is also far superior: it’s nearly lossless at Q4, while llama.cpp’s is nearly unusable at Q4 and needs to be turned up to Q5_1/Q4_0 or Q8_0/Q4_1 for good quality (see the sketch after this list for what the quantized-cache setup looks like).

    • With ollama specifically, you get locked out of a lot of knobs like this enhanced llama.cpp KV cache quantization, more advanced quantization (like iMatrix IQ quantizations or the ARM/AVX-optimized Q4_0_4_4/Q4_0_8_8 quantizations), advanced sampling like DRY, batched inference, and so on.
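
    For context, here is roughly what the Q4 cache path looks like when you drive exllamav2 (the engine behind tabbyAPI) directly. This is a minimal sketch based on exllamav2’s published examples; the exact class and method names may differ between versions, and the model path is hypothetical.

    ```python
    # Minimal sketch: loading a model in exllamav2 with the Q4 quantized KV cache.
    # Names follow exllamav2's published examples; exact signatures may vary by version.
    from exllamav2 import (
        ExLlamaV2,
        ExLlamaV2Config,
        ExLlamaV2Cache_Q4,   # quantized KV cache; ExLlamaV2Cache is the FP16 version
        ExLlamaV2Tokenizer,
    )
    from exllamav2.generator import ExLlamaV2DynamicGenerator

    model_dir = "/models/Qwen2.5-32B-Instruct-exl2-4.0bpw"  # hypothetical local path

    config = ExLlamaV2Config(model_dir)
    model = ExLlamaV2(config)

    # The Q4 cache stores keys/values at ~4 bits instead of 16, which is what makes
    # 64K+ contexts fit on a single 24 GB card with little quality loss.
    cache = ExLlamaV2Cache_Q4(model, max_seq_len=65536, lazy=True)
    model.load_autosplit(cache, progress=True)

    tokenizer = ExLlamaV2Tokenizer(config)
    generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

    print(generator.generate(prompt="Summarize Moby-Dick in two sentences.", max_new_tokens=200))
    ```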

    It’s not evidence or options… it’s missing features, and that’s my big issue with ollama. I simply get far worse, and far slower, LLM responses out of ollama than from tabbyAPI/EXUI on the same hardware, and there’s no way around it.

    Also, I’ve been frustrated with implementation bugs in llama.cpp specifically, like how Llama 3.1 (for instance) was bugged past 8K context at launch because llama.cpp didn’t properly support its rope scaling. Ollama inherits all of these quirks.

    I don’t want to get into the issues I have with the ollama devs’ behavior, though, as that’s far more subjective.


  • It’s less optimal.

    On a 3090, I simply can’t run Command-R or Qwen 2.5 32B well at 64K-80K context with ollama. It’s slow even at lower context, and the lack of DRY sampling and some other things majorly hurts quality.

    Ollama is meant to be turnkey, and that’s fine, but LLMs are extremely resource-intensive. Sometimes the manual setup/configuration is worth it to squeeze out every ounce of extra performance and quantization quality.

    Even on CPU-only setups, you are missing out on (for instance) the CPU-optimized quantizations llama.cpp offers now, the more advanced sampling kobold.cpp offers (sketched below), more fine-grained tuning of flash attention configs, and batched inference, just to start.
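
    As a concrete example of the sampling gap, here is a rough sketch of enabling DRY through kobold.cpp’s KoboldAI-compatible API. The endpoint and core fields are the standard ones; the dry_* parameter names and values are assumptions based on how the DRY sampler is commonly exposed, so check your kobold.cpp version’s API docs.

    ```python
    # Hedged sketch: requesting a completion from a local kobold.cpp server with
    # DRY (Don't Repeat Yourself) sampling enabled -- one of the knobs ollama hides.
    # The dry_* keys are assumptions and may differ by kobold.cpp version.
    import requests

    payload = {
        "prompt": "Write a short scene about a lighthouse keeper.",
        "max_length": 300,       # tokens to generate
        "temperature": 0.8,
        "dry_multiplier": 0.8,   # 0 disables DRY; >0 penalizes verbatim repetition
        "dry_base": 1.75,
        "dry_allowed_length": 2, # repeats shorter than this are not penalized
    }

    resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
    resp.raise_for_status()
    print(resp.json()["results"][0]["text"])
    ```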

    And as I hinted at, I don’t like some other aspects of ollama, like how they “leech” off llama.cpp and kinda hide the association without contributing upstream, some hype and controversies in the past, and hints that they may be cooking up something commercial.