BlueSky is its own thing with its own federated protocol called ATproto. They have an explanation in their docs on how it works, different features. There’s a bridge between the two as well, a bit janky but effective.
You just put both in the server_name line and you’re good to go.
I think a part of it is that English is just the default language and already leans strongly American, so there’s just no demand for a USA instance and people just use the popular or thematic ones for that content. There’s no legal advantage to preferring US hosting either.
The country ones make sense because they’re also a different language, like jlai.lu in French, and the feddits for European languages.
That should mostly be the default. My secondary Vega 64 reports using only 3W, which would be worth chasing on a laptop, but I doubt 3W affects your electricity bill. It’s nothing compared to the overall power usage of the rest of the desktop or the monitors; pretty sure even my fans use more.
The best way to address this would be to take proper measurements first. Maybe get a Kill A Watt and measure usage with and without the card installed to get the true draw at the wall, and also get a baseline with as little hardware as possible. With that data you can calculate roughly how much it costs to run the PC and how much each component costs, and from there it’s easier to decide whether it’s worth it.
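For a rough sense of scale, here’s a back-of-the-envelope sketch of that math (the wattages and the $0.15/kWh rate are assumed example numbers, not measurements):

```rust
// Back-of-the-envelope monthly cost: kWh = W / 1000 * hours, cost = kWh * price.
fn monthly_cost_usd(watts: f64, price_per_kwh: f64) -> f64 {
    let hours_per_month = 24.0 * 30.0;
    watts / 1000.0 * hours_per_month * price_per_kwh
}

fn main() {
    // Example: an idle secondary GPU at 3 W vs. a whole desktop at 150 W, at $0.15/kWh.
    println!("idle GPU: ${:.2}/month", monthly_cost_usd(3.0, 0.15));   // ~$0.32
    println!("desktop:  ${:.2}/month", monthly_cost_usd(150.0, 0.15)); // ~$16.20
}
```

A few watts of idle draw rounds to pennies per month, which is why measuring the whole box matters more than worrying about one card.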
Just the electric bill being higher isn’t a lot to go on. Could just be that it’s getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me because I have electric heat: 500W into a heating strip or 500W into mining crypto both produce the same amount of heat in the room, but one of them also made me a few cents as a byproduct. You have to consider that kind of thing when you’re optimizing for cost rather than maximizing battery life on a laptop.
Guarantee there will be questions of cost of setup, maintenance, and risks.
And time moderating it, especially if they run their own. At least with Twitter/Facebook/YouTube, you get a lot of moderation for free whether you agree with it or not.
And if they use another instance, there are other liability questions about which particular instance to choose. If they’re gonna represent an official city account, you’d expect some cybersecurity certifications to be a requirement and all kinds of stuff, even if it’s a free service. Plus concerns like the instance admins interfering, possibly steering opinions during city elections, etc.
Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and ways to reach and engage the most users, and then fold the fediverse into it as one more channel worth keeping an eye on. They’ll need a way to post the same content to all those channels with the least effort, something easy enough that a trained intern or clerk can do it.
In this case IMO it might even be better to use something like WordPress with the ActivityPub plugin, or alternatives to that. I imagine a city mostly posts announcements and stuff, so a blog that serves as both an official website and something you can follow and interact with from the comfort of your preferred social service sounds a lot more appealing than just another social network without that many users. You can even use more plugins to post to Facebook and Twitter as well, all from one place. Given the age of the board, they’re also more likely to know and care about Threads and Bluesky compatibility just because those have more users, and bureaucratic decisions are based on numbers: a nice graph showing that by supporting AP and AT they’d capture all the users fleeing Twitter goes a long way.
With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of Docker managing its own iptables rules for that bridge and container traffic being forwarded through the host rather than hitting the INPUT chain most rules target.
The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a port on the host and send it to the container.
When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can still firewall Docker containers; the rules just need to be in the right place to work (with iptables, that’s the DOCKER-USER chain rather than INPUT).
The problem with a different spoof for each domain is that this behavior on its own can be used as a fingerprint based on timestamp and IP in access logs.
Hiding among the crowd is probably better, especially since newer versions of Chrome all report the same UA, so you blend in even more.
You can block them and over time it should get better, or you can write a script that does some checks and blocks them for you.
Also, series F but they’re only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.
And also no way their process passes ISO/SOC 2/PCI certifications. CI/CD isn’t just “push a button and make things happen”; it’s also the process, the logs, all the checks that were run, the mandatory peer reviews. You can’t just deploy without an audit log of who pushed what, when, and who approved it.
My point was really that data can’t be that expensive even including transit fees like Cogent and Level3, because I can use TBs of bandwidth every month and OVH doesn’t even bother measuring it.
If my home ISP gives me a gigabit link, yes, I pay for all the cabling and equipment to carry that traffic. But that’s it: I already pay for infrastructure capable of providing me with gigabit connectivity. So why do they also want me to pay per GB?
In Europe they can provide gigabit connectivity dirt cheap with no caps; they don’t even bother with tiered speed plans there. How come my $120+/mo Internet in the US isn’t enough to cover the bandwidth costs? It’s ridiculous; even Starlink doesn’t have data caps.
But somehow communities with crappy DSL that can barely do 10 Mbps still have ridiculously low data caps. It’s somehow not a problem for most ISPs in the world, except US ISPs, in the supposedly richest and most advanced country in the world.
Yeah sure, then why is it that my entire bare metal server leased from OVH costs less than my Internet connection, and comes with fully unmetered access too?
I pay for a data rate, and I should be able to use the full rate as I please. If we’re actually paying for the amount of data, then why are plans advertised and priced by speed?
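To put numbers on it, here’s a quick sketch of how much data an advertised rate could move if you actually ran it flat out all month (illustrative figures only):

```rust
// Max data an always-saturated link moves in a 30-day month.
fn max_monthly_tb(mbps: f64) -> f64 {
    let seconds_per_month = 30.0 * 24.0 * 3600.0; // 2,592,000 s
    mbps / 8.0 * seconds_per_month / 1_000_000.0  // megabytes -> terabytes
}

fn main() {
    println!("1 Gbps:  {:.0} TB/month", max_monthly_tb(1000.0)); // ~324 TB
    println!("10 Mbps: {:.1} TB/month", max_monthly_tb(10.0));   // ~3.2 TB
    // Against that, a 1 TB cap on a gigabit line is roughly 0.3% of what the
    // advertised speed could actually carry.
}
```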
More information about storing electrons and light and other information like with most likely aliens abducting and exploiting people as a resource in a text document called “Information about totalitarian and manipulative aliens.odt”, also with picture in the post perhaps also prove these aliens are real:
That’s more like cocaine and meth levels than Adderall at this point
Why does the government keep trying to regulate fake Internet money? The whole point of it was that it was a free-for-all. Who the fuck cares if crypto bros get fucked; if you want real securities you go to a real bank and open a real investment account.
The data set is paywalled so it’s hard to know. If they picked shovelware most people would rather pirate, then yeah, they could reach that conclusion easily.
Denuvo could also just be making people forget about the game once the hype dies down, so they never end up trying it, which means they never end up buying it either.
Some people also end up buying the game on sale later, or well after they played it. I personally ended up buying a lot of the games I pirated a while back, well after their release.
You have to keep in mind, when you write JavaScript, there’s an entire runtime written in C++ to run it under the hood, with some crazy optimizations to make it reasonably performant. What kind of language do you use to write that runtime? A systems programming language like Rust or C++.
You don’t have to use Rust if you don’t like it; not everything must be written in Rust. Picking a language also involves picking your tradeoffs. Choosing an interpreted/JIT language for speed of development is a perfectly valid tradeoff, but not one you can make universally. Sometimes the performance cost gets really expensive in actual currency, where you can save thousands of dollars on server costs simply by having a more efficient application that only needs a fraction of the hardware to run it. Even in JavaScript, a fair chunk of the libraries you use end up calling into C++ native code because it would be too slow in pure JavaScript. Sometimes the tradeoff is picking the popular language so it’s easier to hire for cheaper.
Even at the dawn of time, most computers shipped with a variant of BASIC so people could write simple applications easily. But if you wanted to squeeze out every bit of power in your Apple II or C64, you sure did reach for assembly. Assembly sucks so we made C, then C++. Rust is still a language that’s made to eventually compile to assembly/binary and have the same performance as if you wrote it in assembly.
And low spec hardware still exists: the regular Pis have gotten pretty fast, but if you run on an RP2040 then suddenly you’re back in 133 MHz dual-core land with pitiful amounts of memory, so you do need to write optimized and fast code for those.
Rust’s type system is actually really, really good. Most of the time, if it compiles, it runs. It eliminates a ton of errors beyond memory safety: the type system is powerful enough that you can straight up make invalid state unrepresentable. You can’t forget to close a connection, you can’t pass the wrong data, you can’t forget to unlock a lock. It does a lot to enforce the correctness of a program well beyond memory safety.
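A minimal sketch of what “invalid state unrepresentable” can look like (the connection type and method names are made up for illustration, not from any real library): closing the connection consumes it, so using it afterwards is a compile error rather than a runtime bug.

```rust
// You can only send on an open connection, and close() takes the connection
// by value, so the handle can't be used after it's closed.
struct ClosedConnection;
struct OpenConnection { /* socket handle, buffers, etc. */ }

impl ClosedConnection {
    fn open(self) -> OpenConnection {
        OpenConnection {}
    }
}

impl OpenConnection {
    fn send(&mut self, _data: &[u8]) {
        // ... write to the underlying socket ...
    }

    fn close(self) -> ClosedConnection {
        ClosedConnection
    }
}

fn main() {
    let mut conn = ClosedConnection.open();
    conn.send(b"hello");
    let _closed = conn.close();
    // conn.send(b"again"); // <- does not compile: `conn` was moved by `close()`
}
```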
It’s sitting at around 46GB at the moment, not too bad.
Instance is a year and a few months old, so I could probably trim down the storage a bit if needed by purging anything older than 6 months or something.
I think it initially grows as your users table fills up and pictrs caches the profile pictures, and then it stabilizes a bit. I definitely saw much more growth initially.
I subscribe to a few more communities and my DB dump is about 3GB plain text, but same story, box sits at 5-15% most of the time.
A few woes at the beginning but it’s been running smoothly since. If you have experience setting up stuff in Docker and exposing it to the Internet over HTTPS, it pretty much just works.
I had to block ByteSpider at work because it can’t even parse HTML correctly, just hammers the same page, and sometimes accounts for 80% of the traffic hitting a customer’s site, taking it down.
The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that’s not cached and use up the majority of the CPU time on the web servers.
Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they’re basically DDoS’ing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often because of ByteSpider.
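One simple way to do that kind of block is matching the user agent before the request reaches the app. A rough sketch of the check (the blocklist entries are real crawler UA tokens, but the helper itself is just illustrative):

```rust
// Return true if the request's User-Agent matches a known aggressive scraper.
fn is_blocked_scraper(user_agent: &str) -> bool {
    const BLOCKLIST: &[&str] = &["Bytespider", "GPTBot", "CCBot", "ClaudeBot"];
    BLOCKLIST.iter().any(|bot| user_agent.contains(bot))
}

fn main() {
    assert!(is_blocked_scraper("Mozilla/5.0 (compatible; Bytespider)"));
    assert!(!is_blocked_scraper("Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0"));
}
```

It only catches bots that identify themselves honestly, but in practice that alone cuts out a lot of the traffic.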
My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.
This. They even provide the cover image to use. If they don’t want embedding they could just block the request.
But they don’t want to. They want to sell the cake and eat it too.