![](https://lemmy.keychat.org/pictrs/image/1f08ce6b-6c40-4066-b51c-7abb6cdc5ad6.jpeg)
![](https://sh.itjust.works/pictrs/image/045a2049-eb61-4960-88ba-97e7f1ffbf31.jpeg)
Oh wow, I didn’t realize nearly any of that detail about the current system. That explains why my fluid systems would always be unbalanced crap and sometimes require inexplicable pumps to be added.
FTS? fuck that
MAAAN would be a much better acronym though
That’s when you update your sig with your address and a link to a local delivery venue
Alternatively all 504 Gateway Timeout
That joke was constant in the early 00s.
Oh wow that looks fucking awesome. Major Legends & Lattes vibes but with a dark undercurrent.
Data size and user expectations are the main differences. It’s possible, but there’d be a lot of latency and overhead just for scrolling down a page with a bunch of images. Maybe there’s fancy stuff you could do by batching images together and reusing connection pools, but it feels sisyphean.
Mastodon and Lemmy handle this in slightly different ways. Mastodon (according to the link) replicates media on every instance, while Lemmy (mostly) only replicates thumbnails. That means a popular post doesn’t concentrate load on one server on Mastodon but does on Lemmy. But Mastodon has a higher aggregate cost due to all the replicated data, which is what the linked proposal addresses by making that growth sublinear.
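A rough back-of-envelope (completely made-up numbers, just to show the shape of the problem) of what full replication costs in aggregate versus keeping one shared copy:

```python
# Back-of-envelope: aggregate media storage under full replication vs a single shared store.
# All numbers are invented for illustration, not real instance stats.
instances = 1_000        # federated servers that see the posts
posts = 100_000          # federated posts with media
media_mb = 5             # attachment size per post, in MB

full_replication_tb = instances * posts * media_mb / 1_000_000  # every instance keeps a copy
shared_store_tb = posts * media_mb / 1_000_000                  # one copy in shared/object storage

print(f"full replication: {full_replication_tb:.0f} TB")  # grows linearly with instance count
print(f"shared store:     {shared_store_tb:.1f} TB")      # independent of instance count
```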
If the torrent is instance-to-instance I don’t see any real benefit (and instance-to-client is infeasible). On the Mastodon side you still have data duplication driving storage costs and bandwidth usage regardless of whether it’s delivered via direct HTTP or torrent. On the Lemmy side it wouldn’t gain much (the asymmetric load is based on subscription count and so isn’t very bursty) but would add a lot of non-determinism and complexity to the already fragile federation process.
Conventional solutions like a cache/CDN/object storage, or switching to a shared hosting setup (decoupled from instances, like your link proposes), seem like a more feasible way to address things.
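For what it’s worth, the cache/CDN idea boils down to something like this toy sketch (the cache path, helper name, and use of requests/hashlib are just my assumptions for illustration, not how any actual instance serves media):

```python
# Toy "fetch once, serve from cache after that" sketch -- the basic idea behind putting
# a cache/CDN or object store in front of instance media.
import hashlib
from pathlib import Path

import requests

CACHE_DIR = Path("./media-cache")  # hypothetical local cache location
CACHE_DIR.mkdir(exist_ok=True)

def get_media(url: str) -> bytes:
    # Cache key is just a hash of the upstream URL.
    cached = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if cached.exists():
        return cached.read_bytes()  # cache hit: no traffic to the origin instance
    data = requests.get(url, timeout=10).content  # cache miss: hit the origin once
    cached.write_bytes(data)
    return data

# First call fetches from the origin instance; every later call is served locally.
img = get_media("https://sh.itjust.works/pictrs/image/045a2049-eb61-4960-88ba-97e7f1ffbf31.jpeg")
```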
I wish I could drop 2024
No
You could hire a team of security experts to audit it for you
I love the concept. I hate many of the language design choices.
I guess their advertising campaign wasn’t very effective!
It’s popcorn not salad dressing.
Looking at the NASA and Webb sites, it appears this is a poorly cropped version of pictures from over a year ago, not something new like the article claims.
There are absolutely laptops with fingerprint sensors.
I’d say the main reason it’s more common in phones than computers is the different markets. Phones are mostly consumer purchases; the business market is smaller and the software is more locked down, so a software-level disable is usually enough for those cases. Laptops are increasingly dominated by business use cases, and businesses have IT groups that care about security and would prefer models without biometrics.
Secondarily, you log in to your phone a lot more often than to your laptop, so the convenience factor matters less for laptops. People just don’t consider a fingerprint sensor as mandatory a feature as they do on phones.
I’ve been using it for the last month. It’s autocomplete and does what autocomplete should. It doesn’t guess utterly insane shit like certain other tools do.
I got one of those desks with a vertical pneumatic lift so I can stack the computers vertically in a rack and just raise/lower it so the right one is at eye height