Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 411 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • That should mostly be the default. My secondary Vega 64 reports drawing only 3W, which would be worth chasing on a laptop, but I doubt 3W shows up on your electricity bill. It’s nothing compared to the overall power usage of the rest of the desktop and the monitors. Pretty sure even my fans use more.

    The best way to address this is to take proper measurements first. Maybe get a Kill A Watt and measure usage at the wall with and without the card installed to get the true draw, and get a baseline with as little hardware as possible while you’re at it. With that data you can calculate roughly how much the PC costs to run and how much each component contributes (rough math sketched below), and from there it’s much easier to decide if it’s worth it.

    The electric bill being higher isn’t much to go on by itself. It could just be that it’s getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me because I have electric heat: 500W into a heating strip and 500W into mining produce the same amount of heat in the room, but one of them also makes me a few cents as a byproduct. You have to consider that sort of thing when you’re optimizing for cost rather than, say, maximizing battery life on a laptop.
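
    A minimal sketch of that math, with hypothetical numbers (a 3W card and $0.15/kWh; substitute what you measure and what your utility actually charges):

        /// Rough monthly cost of a component drawing `watts` continuously.
        fn monthly_cost_usd(watts: f64, usd_per_kwh: f64) -> f64 {
            let kwh_per_month = watts / 1000.0 * 24.0 * 30.0;
            kwh_per_month * usd_per_kwh
        }

        fn main() {
            // 3W around the clock at $0.15/kWh: about $0.32/month.
            println!("${:.2}/month", monthly_cost_usd(3.0, 0.15));
        }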


  • Guaranteed there will be questions about setup costs, maintenance, and risks.

    And about the time spent moderating it, especially if they run their own instance. At least with Twitter/Facebook/YouTube you get a lot of moderation for free, whether you agree with it or not.

    And if they use another instance, there are other liability questions about the particular instance they choose. For an official city account you’d expect things like cybersecurity certifications to be a requirement, even if it’s a free service, plus concerns about the instance admins interfering or steering opinions during city elections.

    Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and on ways to reach and engage the most users, then fold the fediverse into that as one more channel worth keeping an eye on. They’ll need a way to post the same content to all those channels with the least effort, something easy enough that a trained intern or clerk can do it.

    In this case IMO it might even be better to use something like WordPress with the ActivityPub plugin, or an alternative to it. I imagine a city mostly posts announcements and the like, so a blog that serves as the official website and that people can follow and interact with from the comfort of their preferred social service sounds a lot more appealing than yet another social network without many users. More plugins can cross-post to Facebook and Twitter as well, all from one place. Given the age of the board, they’re also more likely to know and care about Threads and Bluesky compatibility just because those have more users, and bureaucratic decisions are based on numbers. A nice graph showing that by supporting AP and AT they capture all the users fleeing Twitter would go a long way.


  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container gets a blank iptables ruleset of its own.
    • Traffic to and from containers goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to make sure all of this works as expected.

    The only thing the host firewall should affect is the userland proxy Docker runs to listen on a host port and forward traffic into the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can absolutely firewall Docker containers; the rules just need to be in the right place to work (see the sketch below).
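
    That place is the DOCKER-USER chain, which Docker evaluates before its own FORWARD rules. A sketch rather than a drop-in config: the interface name and subnet below are assumptions for illustration.

        # Block traffic reaching containers from the physical NIC (eth0 here),
        # except established flows and one allowed LAN subnet.
        # `-I` prepends, so the last rule inserted is evaluated first.
        iptables -I DOCKER-USER -i eth0 -j DROP
        iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        iptables -I DOCKER-USER -i eth0 -s 192.168.1.0/24 -j ACCEPT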




  • Also, they’re series F but only deploying to one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well it goes.

    And there’s no way their process passes ISO/SOC 2/PCI certifications. CI/CD isn’t just “make it do things”; it’s also the process, the logs, all the checks that ran, the mandatory peer reviews. You can’t just deploy without an audit trail of who pushed what, when, and who approved it.


    My point was really that data can’t be that expensive, even including transit fees to carriers like Cogent and Level3, because I can use TBs of bandwidth every month and OVH doesn’t even bother measuring it.

    If my home ISP gives me a gigabit link, yes, I pay for all the cabling and equipment it takes to carry that traffic. But that’s it: I already pay for infrastructure capable of providing me with gigabit connectivity. So why do they also want me to pay per GB?

    In Europe they can provide gigabit connectivity dirt cheap with no caps; they often don’t even bother with tiered speed plans. How come my $120+/mo Internet in the US isn’t sufficient to cover the bandwidth costs? It’s ridiculous; even Starlink doesn’t have data caps.

    Yet communities stuck with crappy DSL that can barely do 10 Mbps still get ridiculously low data caps. Somehow it’s not a problem for most ISPs in the world, except US ISPs, in the supposedly richest and most advanced country in the world.






  • You have to keep in mind that when you write JavaScript, there’s an entire runtime written in C++ running it under the hood, with some crazy optimizations to make it reasonably performant. What kind of language do you use to write that runtime? A systems programming language like Rust or C++.

    You don’t have to use Rust if you don’t like it; not everything must be written in Rust. Picking a language is largely picking your tradeoffs. Picking an interpreted/JIT language for speed of development is a perfectly valid tradeoff, but not one you can make universally. Sometimes the performance cost gets genuinely expensive in dollars: you can save thousands on server costs simply by having a more efficient application that only needs a fraction of the hardware. Even in JavaScript, a fair chunk of the libraries you use end up calling into native C++ code because pure JavaScript would be too slow. And sometimes the tradeoff is picking the popular language so hiring is easier and cheaper.

    Even at the dawn of time, most computers shipped with a variant of BASIC so people could write simple applications easily. But if you wanted to squeeze every bit of power out of your Apple II or C64, you sure did reach for assembly. Assembly sucks, so we made C, then C++. Rust is still a language designed to compile down to assembly/binary and perform as if you had written it in assembly.

    And low-spec hardware still exists: the regular Pis have gotten pretty fast, but run on an RP2040 and suddenly you’re back in 133MHz dual-core land with pitiful amounts of memory, so you do need to write optimized, fast code there.

    Rust’s type system is actually really, really good. Most of the time, if it compiles, it runs. It eliminates a ton of errors beyond memory safety: the type system is powerful enough that you can straight up make invalid states unrepresentable. You can’t forget to close a connection, you can’t pass the wrong data, you can’t forget to unlock a lock. It does a lot to enforce the correctness of a program well beyond memory safety (sketch below).
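
    A minimal sketch of the “invalid states are unrepresentable” idea using the typestate pattern; every name here is made up for illustration, not from any particular library:

        use std::marker::PhantomData;

        struct Open;
        struct Closed;

        // A connection whose state is tracked by the type system.
        struct Connection<State> {
            _state: PhantomData<State>,
        }

        impl Connection<Closed> {
            fn connect() -> Connection<Open> {
                Connection { _state: PhantomData }
            }
        }

        impl Connection<Open> {
            fn query(&self, _sql: &str) { /* ... */ }

            // Consumes `self`, so the open handle can't be reused.
            fn close(self) -> Connection<Closed> {
                Connection { _state: PhantomData }
            }
        }

        fn main() {
            let conn = Connection::<Closed>::connect();
            conn.query("SELECT 1");
            let _closed = conn.close();
            // _closed.query("SELECT 1"); // compile error: no `query` on Connection<Closed>
            // conn.query("SELECT 1");    // compile error: `conn` was moved by `close`
        }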





  • I had to block ByteSpider at work (blocking rule sketched below) because it can’t even parse HTML correctly, just hammers the same pages, and sometimes accounts for 80% of the traffic hitting a customer’s site, taking it down.

    The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that isn’t cached and eat up the majority of the CPU time on the web servers.

    Scraping is okay; using up a whole 8-vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they’re basically DDoS’ing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often because of ByteSpider.

    My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.
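
    For reference, the block itself can be as simple as a user-agent match at the reverse proxy. A sketch for nginx; the exact UA token is an assumption, so check your access logs for what the bot actually sends:

        # Inside a server {} block: return 403 to anything identifying as Bytespider.
        if ($http_user_agent ~* "bytespider") {
            return 403;
        }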