Developer, 11-year reddit refugee

Zetaphor

  • 2 Posts
  • 31 Comments
Joined 4 months ago
Cake day: March 12th, 2024



  • I’m really enjoying Otterwiki. Everything is saved as markdown, attachments sit next to the markdown files in a folder, and version control is integrated via a git repo. Everything lives in a single directory, and the application runs from a docker container.

    It’s the perfect amount of simplicity: really just a UI on top of fully portable, standard tech, as the sketch below shows.
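
    For what it’s worth, that portability is easy to demonstrate. Here’s a minimal Python sketch that treats the wiki’s data directory as nothing more than markdown files plus a git repo; the /srv/otterwiki path is just an assumption for wherever you happen to mount the volume:

    ```python
    from pathlib import Path
    import subprocess

    # Assumed mount point for the wiki's data volume (adjust to taste).
    WIKI_DIR = Path("/srv/otterwiki")

    # Every page is a plain markdown file, so ordinary filesystem tools work.
    for page in sorted(WIKI_DIR.rglob("*.md")):
        print(page.relative_to(WIKI_DIR))

    # The version history is a normal git repo, so plain git works too.
    log = subprocess.run(
        ["git", "-C", str(WIKI_DIR), "log", "--oneline", "-5"],
        capture_output=True, text=True, check=True,
    )
    print(log.stdout)
    ```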







  • > but if you need me to leave, I can. I get that a lot.

    I don’t think OP is suggesting this. It’s simply a reminder to those who have the privilege of having extra income that contributing to the core devs improves the experience for everyone, regardless of their individual ability to contribute.

    I’m personally happy to donate if it means everyone gets to continue enjoying the growth of the platform, as the real value of the threadiverse is user activity.





  • Setting aside the obvious answer of “because capitalism”, there are a lot of obstacles to democratizing this technology. Training of these models is done on clusters of A100 GPUs, which are priced at $10,000 USD each. On top of that, much of the progress is being made by highly specialized academics, often backed by the resources of large corporations like Microsoft.

    Additionally, the curation of datasets is another massive obstacle. We’ve mostly reached the point of diminishing returns from simply throwing all available data at model training; it’s quickly becoming apparent that the quality of the data is far more important than its quantity (see TinyStories as an example). This means a lot of work and research has to go into qualitative analysis when preparing a dataset. You need a large corpus of input, each item of which is above a quality threshold, while the corpus as a whole still represents a wide enough variety of circumstances for you to reach emergence in the domain(s) you’re trying to train for (see the toy sketch after this comment).

    There is a large and growing body of open source model development, but even that only exists because Meta “leaked” the original Llama models and, more recently, released Llama 2 under a commercial license. Practically overnight, an entire ecosystem sprang up producing higher-quality fine-tunes and specialized datasets, but all of that was only possible because Meta invested the resources and made the models available to the public.

    Actually in hindsight it looks like the answer is still “because capitalism” despite everything I’ve just said.
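
    To make the curation point concrete, here’s a toy Python sketch of that two-step filter: score each document against a quality threshold, then check that the survivors still cover a spread of domains. The quality_score heuristic is purely hypothetical; real pipelines use trained classifiers, perplexity filters, or human ratings:

    ```python
    from collections import Counter

    def quality_score(doc: str) -> float:
        # Hypothetical stand-in for a real quality model: here, just the
        # fraction of unique words, so degenerate repetition scores low.
        words = doc.split()
        return len(set(words)) / len(words) if words else 0.0

    def curate(corpus, threshold=0.5):
        # Step 1: keep only documents above the quality bar.
        kept = [(domain, doc) for domain, doc in corpus
                if quality_score(doc) >= threshold]
        # Step 2: check the survivors still span a variety of domains.
        coverage = Counter(domain for domain, _ in kept)
        return kept, coverage

    # Toy usage: (domain, text) pairs standing in for a real corpus.
    corpus = [
        ("code", "def add(a, b): return a + b"),
        ("prose", "the the the the the"),  # degenerate, gets filtered out
        ("dialogue", "Hello there! How are you feeling today?"),
    ]
    kept, coverage = curate(corpus)
    print(len(kept), dict(coverage))  # 2 {'code': 1, 'dialogue': 1}
    ```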