Instead of using character.ai, which will send all my private conversations to governments, I found this solution. Any thoughts on this? 😅

  • tal@lemmy.today · edited · 1 day ago

    I’ve run KoboldAI on local hardware, and it has some erotic models. From my fairly quick skim of character.ai’s syntax, I think that KoboldAI has more-powerful options for creating worlds and triggers. KoboldAI can split layers across all available GPUs and your CPU, so if you’ve got the electricity, the power supply, and the room cooling, and are willing to blow the requisite money on multiple GPUs, you can probably make it respond about as quickly as you want.
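
    To make the layer-splitting idea concrete, here’s roughly what the equivalent knobs look like in llama-cpp-python (a different backend than KoboldAI itself; the model path and split ratios are made up, and KoboldAI/KoboldCpp expose the same settings through their launcher instead):

    ```python
    from llama_cpp import Llama

    # Hypothetical model path and split ratios; adjust for your hardware.
    llm = Llama(
        model_path="models/your-model.gguf",
        n_gpu_layers=-1,          # offload all layers that fit onto the GPUs
        tensor_split=[0.6, 0.4],  # share of layers per GPU (two-GPU example)
        n_ctx=4096,               # context window, in tokens
    )

    out = llm("Describe the tavern the party just walked into.", max_tokens=200)
    print(out["choices"][0]["text"])
    ```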

    But more-broadly, I’m not particularly impressed with what I’ve seen of sex chatbots in 2025. They can only feed a limited number of tokens from earlier in the conversation into each new response, so as a conversation progresses, the model increasingly loses track of what came before. It’s possible to get into loops, or to have it forget facts about characters or the environment that were established earlier.
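
    To illustrate why that happens (this is just a toy sketch of the general mechanism, not any particular frontend’s code), a fixed token budget forces the oldest turns out of the prompt:

    ```python
    MAX_CONTEXT_TOKENS = 2048  # typical small-model budget; real values vary

    def count_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer: one token per word.
        return len(text.split())

    def build_prompt(history: list[str], new_message: str) -> str:
        """Keep only as many recent turns as still fit in the budget."""
        budget = MAX_CONTEXT_TOKENS - count_tokens(new_message)
        kept: list[str] = []
        for turn in reversed(history):  # walk from newest to oldest
            cost = count_tokens(turn)
            if cost > budget:
                break                   # everything older is silently dropped
            kept.append(turn)
            budget -= cost
        return "\n".join(list(reversed(kept)) + [new_message])
    ```

    Anything that falls outside that budget never reaches the model at all, which is why characters and facts established early on simply stop existing for it.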

    Maybe someone could make some kind of system to try to summarize and condense material from earlier in the conversation or something, but…meh.

    As far as generating pornography goes, I think that image generation is a lot more viable.
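
    If you go the image route, the usual local setup is Stable Diffusion via the diffusers library; a minimal sketch (the checkpoint name is just the stock SD 1.5 example, not a recommendation):

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 keeps SD 1.5 within a modest amount of VRAM; larger models want more.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("a lighthouse on a cliff at sunset, oil painting").images[0]
    image.save("lighthouse.png")
    ```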

    • fishynoob@infosec.pub · edited · 1 day ago

      Thanks for the edit. You have a very intriguing idea; a second LLM in the background that maintains a summary of the conversation, plus the static context, might make long conversations hold together a lot better. I don’t know if anyone has implemented it, or how one could DIY it with Kobold/Ollama. I think it would be a great idea for code assistants too, if you’re doing a long coding session.
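
      For what it’s worth, something like this looks DIY-able against a local Ollama instance; this is only a sketch of the idea, assuming the default port and a placeholder model name:

      ```python
      import requests

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
      MODEL = "llama3"  # placeholder; substitute whatever model you have pulled

      def summarize(old_turns: list[str]) -> str:
          """Ask a second model to compress older turns into a short summary."""
          prompt = (
              "Summarize the key facts, characters and events in this chat log "
              "in under 150 words:\n\n" + "\n".join(old_turns)
          )
          resp = requests.post(
              OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False}
          )
          resp.raise_for_status()
          return resp.json()["response"]

      def build_prompt(static_context: str, summary: str, recent_turns: list[str]) -> str:
          # Static character/world notes + rolling summary + only the latest turns.
          return "\n\n".join([static_context, "Summary so far: " + summary] + recent_turns)
      ```

      Prepending a few hundred tokens of summary instead of the whole transcript keeps the prompt small, at the cost of losing whatever detail the summarizer didn’t keep.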

    • fishynoob@infosec.pub · 1 day ago

      I had never heard of KoboldAI. I was going to self-host Ollama and try that, but I’ll take a look at Kobold. I had never heard about controls for world-building and dialogue triggers either; there’s a lot to learn.

      Will more VRAM solve the problem of not retaining context? Can I throw 48GB of VRAM towards an 8B model to help it remember stuff?

      Yes, I’m looking at image generation (Stable Diffusion) too. Thanks.

      • tal@lemmy.today · 1 day ago

        Will more VRAM solve the problem of not retaining context?

        IIRC (I ran KoboldAI with 24GB of VRAM, so I wasn’t super-constrained), there are some VRAM-imposed limits on the number of tokens that can be sent as a prompt, which I did not hit. However, there are also limits imposed by the software; you can only increase the number of tokens that get fed in so far, regardless of VRAM. More VRAM does let you use larger, more “knowledgeable” models.

        I’m not sure whether those limits are purely arbitrary, there to keep performance reasonable, or whether there are other technical issues with very large prompts.

        It definitely isn’t capable of feeding the entire previous conversation (once it gets to any length) back in as input for each new response, though.
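
        If you want to poke at the cap yourself, the KoboldAI-style HTTP API takes the limits as request parameters; a rough sketch (the port and numbers are assumptions, check your own install):

        ```python
        import requests

        API = "http://localhost:5001/api/v1/generate"  # KoboldCpp's default port

        payload = {
            "prompt": "<character notes>\n<recent conversation>\nUser: hello\nBot:",
            "max_context_length": 4096,  # prompt tokens the backend will consider
            "max_length": 200,           # tokens to generate for the reply
        }
        r = requests.post(API, json=payload)
        r.raise_for_status()
        print(r.json()["results"][0]["text"])
        ```

        Past whatever the software supports, raising max_context_length doesn’t buy you anything, no matter how much VRAM is free.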

        • fishynoob@infosec.pub · 1 day ago

          I see. Thanks for the note. I think diminishing returns set in very quickly beyond 48GB of VRAM, so I’ll likely stick to that limit. I wouldn’t want to use models hosted in the cloud, so that’s out of the question.