I’d like to self-host a large language model (LLM).

I don’t mind if I need a GPU and all that; at least it will be running on my own hardware, and it will probably even be cheaper than the $20/month everyone is charging.

What LLMs are you self-hosting? And what are you using to do it?

  • astrsk@fedia.io · 4 days ago

    If you don’t need to host a server and can just run it locally, GPT4All is nice: it has several plug-and-play models to download, with different purposes and descriptions, and it doesn’t require a GPU.
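    As a minimal sketch of what that looks like with GPT4All’s Python bindings (the model filename here is just one example from their downloadable model list; the exact names and sizes change over time):

    ```python
    from gpt4all import GPT4All

    # Downloads the model on first run (roughly 2 GB for this one).
    # Runs on CPU by default; no GPU needed.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    with model.chat_session():
        reply = model.generate("Explain self-hosting an LLM in one paragraph.",
                               max_tokens=200)
        print(reply)
    ```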

    • theshatterstone54@feddit.uk · 4 days ago

      I second that. Even my lower-midrange laptop from 3 years ago (8GB RAM, integrated AMD GPU) can run a few of the smaller LLMs, and it’s true that you don’t even need a GPU, since they can run entirely in RAM. Depending on how much RAM you have and which GPU, you might even find models performing better in RAM than on the GPU. Just keep in mind that when a model says, for example, 8GB of memory required, you can’t run it with only 8GB of RAM, because your operating system and other applications also need memory. If you have 8GB of video memory on your GPU, though, you should be golden (I think).
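      To put rough numbers on that “8GB required” point, here is a back-of-the-envelope sketch (the ~20% overhead for context and runtime buffers is an assumption, not an exact figure):

      ```python
      def estimate_memory_gib(params_billion: float, bits_per_weight: float,
                              overhead: float = 1.2) -> float:
          """Rough RAM/VRAM needed to load a model's weights, plus runtime overhead."""
          weight_bytes = params_billion * 1e9 * bits_per_weight / 8
          return weight_bytes * overhead / 2**30

      # A 7B model with 4-bit quantization: ~3.3 GiB of weights, ~3.9 GiB in practice
      print(f"7B @ 4-bit: {estimate_memory_gib(7, 4):.1f} GiB")

      # The same model at fp16: ~13 GiB of weights, ~15.6 GiB in practice
      print(f"7B @ fp16:  {estimate_memory_gib(7, 16):.1f} GiB")
      ```

      By that estimate, a 4-bit 7B model fits comfortably in 8GB of VRAM, but on a machine with 8GB of total RAM it gets tight once the OS takes its share.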