• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: March 8th, 2024

  • You didn’t, I did. The starting models cap at 24 GB, but you can spec the biggest one up to 64 GB. I should have clicked through to the customization page before reporting what was available.

    That is still cheaper than a 5090, so it’s not that clear cut. I think it depends on what you’re trying to set up and how much money you’re willing to burn. Sometimes literally: the Mac will also be more power efficient than a honker of an Nvidia 90-series card.

    Honestly, all I have for recommendations is that I’d rather scale up than down. I mean, unless you also want to play kickass games at insane framerates with path tracing or something. Then go nuts with your big boy GPUs, who cares.

    But for LLM stuff strictly I’d start by repurposing what I have around, hitting a speed limit and then scaling up to maybe something with a lot of shared RAM (including a Mac Mini if you’re into those), and keep rinsing and repeating. I don’t know that I personally am in the market for AI-specific multi-thousand-dollar APUs with a hundred-plus gigs of RAM yet.


  • Thing is, you can trade off speed for quality. For coding support you can settle for Llama 3.2 or a smaller deepseek-r1 and still get most of what you need on a smaller GPU, then scale up to a bigger model that will run slower if you need something cleaner. I’ve had a small laptop with 16 GB of total memory and a 4060 mobile serving as a makeshift home server with an LLM and a few other things and… well, it’s not instant, but I can get the sort of thing you need out of it.

    Sure, if I’m digging in and want something faster I can run something else in my bigger PC GPU, but a lot of the time I don’t have to.

    Like I said below, though, I’m in the process of trying to move that to an Arc A770 with 16 GB of VRAM that I had just lying around because I saw it on sale for a couple hundred bucks and I needed a temporary GPU replacement for a smaller PC. I’ve tried running LLMs on it before and it’s not… super fast, but it’ll do what you want for 14B models just fine. That’s going to be your sweet spot on home GPUs anyway, anything larger than 16GB and you’re talking 3090, 4090 or 5090, pretty much exclusively.
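    That 14B sweet spot checks out with back-of-the-envelope math. Here’s a rough sketch; the numbers are assumptions on my part (4-bit quantized weights, ~20% overhead for KV cache and runtime), and actual usage depends on the backend and context length:

    ```python
    # Back-of-the-envelope VRAM estimate for a quantized LLM.
    # Assumptions (not from any vendor spec): 4-bit weights,
    # ~20% extra for KV cache, activations and runtime overhead.
    def est_vram_gb(params_billion, bits_per_weight=4, overhead=1.2):
        weight_gb = params_billion * bits_per_weight / 8  # weights alone
        return weight_gb * overhead

    print(f"14B: ~{est_vram_gb(14):.1f} GB")  # comfortably inside 16 GB
    print(f"32B: ~{est_vram_gb(32):.1f} GB")  # over 16 GB, 3090/4090/5090 territory
    ```

    By this estimate a 14B model at 4-bit lands around 8–9 GB, which is why a 16 GB card handles it with room to spare, while anything much bigger spills past 16 GB.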


  • This is… mostly right, but I have to say, Macs with 16 gigs of shared memory aren’t all that. You can get many other alternatives with similar memory distributions, although not as fast.

    A bunch of vendors are starting to lean into this by providing small, weaker PCs with a BIG pool of shared RAM. That new Framework desktop with an AMD APU specs up to 128 GB of shared memory, while the Mac Minis everybody is hyping up for this cap at 24 GB instead.

    I’d strongly recommend starting with a mid-sized GPU on a desktop PC. Intel ships the A770 with 16 GB of RAM and the B580 with 12, and they’re both dirt cheap. You can still get a 3060 with 12 GB for similar prices, too. I’m not sure how they benchmark relative to each other on LLM tasks, but I’m sure one can look it up. Cheap as the entry-level Mac Mini is, all of those are cheaper if you already have a PC up and running, and the total amount of dedicated RAM you get is very comparable.


  • Cool.

    So?

    I mean, you are assuming “decentralized” is good, but it’s only as good as what it gets you. On paper, and until proven otherwise, I may choose less decentralized and more “capable of proper, effective moderation” instead. Especially if “less decentralized” is actually “somewhat decentralized”. I haven’t seen a case that fundamental decentralization trumps all so far.


  • See, but as I was saying above about the privacy stuff, the perception is supposed to be that this is somehow “the algorithm’s fault” or caused on purpose by corporate media to boost engagement.

    Even your take is letting Fedi design off the hook, IMO. The answer here isn’t “oh, well, what can you do?” it’s designing proper moderation tools.

    I know people get mad when you praise Bluesky around these parts, but they have an actually good block system, compared to Masto, Lemmy and Fedi in general. It really helps cut this crap short.


  • Well, where are you all when the Fedi cheerleading squad keeps posting about how bad it is that this or that competitor stores this or that information and how secure and private and great it is in Fedi servers because they don’t store anything?

    Because I’ve spent years chiming in to explain these things in those threads, and it normally just gets people angry and complaining that you’re shilling for corporate social media or whatever. The image being projected, both accidentally and on purpose, is that no centralized data collection means your data on the Fedi is private, when it is extremely not.


  • Well, for one thing it only works asymmetrically. It’s fine if you have a very specific source of issues that you can isolate and cut off, but it’s not really useful if what you have is hostile users across the network. And it only protects the larger space. For smaller instances it’s a choice between functioning as social media or not existing at all.

    It’s extremely far from a magic bullet, it is not resilient to large scale, systemic issues and the only reason its limitations haven’t been apparent is that the AP ecosystem is too small to suffer most of the issues of larger social media.

    Aaaaand it’s designed to function via the petty squabbles of FOSS developer arguments, which I hate anyway. But that’s a me thing.


  • Again, doesn’t matter. There’s data on logged in users and it’s also many orders of magnitude larger than Fedi.

    By most independent metrics Reddit has more visits than Netflix. Than Pornhub, while we’re at it. It’s one of the top ten most visited sites on the Internet, and by most accounts it’s actually grown since the “exodus”.

    I don’t use it and I do like it here, but the idea that Lemmy is somehow encroaching on it is absurd. And self-defeating, too. Lemmy and its satellites are very worthwhile for what they are… but just a gnat in the wind as a Reddit alternative. Better to measure them on their own merits.


  • You know, for all the complaints about phones all being the same, I don’t see anybody trying to get rid of the stupid punch hole anymore. I haven’t taken a selfie since 2014, Sony is certainly looking like it doesn’t have many more Xperias in its back pocket and I really, really would like a replacement that isn’t afraid of having a thin forehead where you can put sensors without defacing the display. I would take something with expandable storage and a headphone jack for the complete package, but let’s start with a usable screen without holes in it. It’s gotten to the point where I haven’t seen a single phone in years I didn’t look at and immediately go “nope, not for me”.