Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 387 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • I don’t think you can, and I think it makes sense: it would be weird for the compiler to unexpectedly generate hidden variables for you.

    For all the compiler knows, that temporary could hold a file handle, a database transaction, a Mutex guard, or anything else whose Drop implementation has side effects that make the timing of the drop matter. Or it could just be a very large struct you might not expect to keep around until the end of the function (or even the end of the program, if that's a main loop).

    So you should be aware of it, which is why you need the explicit temporary variable like you wrote, even if you immediately shadow it. At least that way you know you're holding on to it until the end of the function.
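    A minimal sketch of why the drop point matters, using a Mutex guard (the names here are just for illustration):

    ```rust
    use std::sync::Mutex;

    fn main() {
        let m = Mutex::new(0);

        // `let _ = ...` does not bind: the guard is dropped immediately
        // and the lock is released on this same line.
        let _ = m.lock().unwrap();

        // `let _guard = ...` binds the guard, keeping the mutex locked
        // until the end of the function.
        let _guard = m.lock().unwrap();

        // Still locked: try_lock() fails while _guard is alive.
        assert!(m.try_lock().is_err());
    } // _guard dropped here, lock released
    ```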


  • Yep, and I’d guess there’s a huge component of “it must be as easy as possible”, because the primary target is self-hosters who don’t really want to learn how to set up Docker containers properly.

    The AIO Docker image is an abomination. The other ones are slightly saner, but they still fundamentally mix code and data in the same folder, so it’s not trivial to just swap out the app.

    In Docker, the auto-updater should be completely neutered: it’s the wrong way to update a containerized app.

    The packages in the Arch repo are legit saner than the Docker version.


  • I’ve heard very good things about resold HGST Helium enterprise drives, and they can be found fairly cheap for what they are on eBay.

    > I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days; there are 32TB HDDs out there. Just avoid the shingled (SMR) archival drives. I think the belief about higher-capacity drives failing sooner is really about the maturity of the technology rather than the capacity itself: a 4TB drive made today is much better than the very first 4TB drives from back when they were pushing the limits of the technology.

    Backblaze has pretty good drive reports as well, with real-world failure rate data and all.


  • It doesn’t, by default and by design, but that doesn’t mean it can’t be implemented through a special protocol or a compositor plugin.

    Also, this protects windows from each other, not processes of the same user. You can bypass the problem entirely by wrapping the game and your overlay in a nested compositor like gamescope: from there you control the compositor, so you can do whatever you want.

    And it’s still secure, because you can only draw over the Wayland clients spawned inside your wrapper, not over any of the user’s other windows, so you can still trust your password manager and such. A rough sketch of that wrapping is below.
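    For illustration only (the gamescope flags are real; the nested socket name and the program names are assumptions to check against your setup):

    ```sh
    # Run the game inside a nested gamescope compositor.
    gamescope -W 1920 -H 1080 -- ./game &

    # Connect the overlay to gamescope's nested Wayland socket, so it can
    # only ever sit on top of clients inside that compositor.
    WAYLAND_DISPLAY=gamescope-0 ./my-overlay
    ```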


  • There are two main options to approach this: either you inject into the game, like MangoHUD does, or you make an overlay window that your window manager/compositor helpfully places on top for you.

    The second option is pretty simple, as any GUI library will give you the window; you just need some help from the compositor side to do the rest. That part can also be pretty easy thanks to Wayland nesting: you can use whatever compositor you want for this, not just the one from your DE. Gamescope, for example, might work; if not, wlroots plus layer-shell.

    You can also be your own nested compositor by using Smithay, a Rust library that System76 picked as the foundation of COSMIC’s compositor. Looking at the examples, it doesn’t seem too crazy to use. Then you just run the game under yourself and you can do whatever you want.

    Injecting into the game isn’t too crazy either: you compile your code to a shared library, force-load it with LD_PRELOAD, and override some OpenGL/Vulkan functions so you can add your own draw commands on top before the frame is finished. Something like the sketch below.
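    A hedged sketch of that approach in Rust (assumes a cdylib crate with the libc dependency; glXSwapBuffers is the real GLX symbol, but everything overlay-specific is stubbed out):

    ```rust
    // Build as a cdylib, then run: LD_PRELOAD=./liboverlay.so ./game
    use std::ffi::{c_ulong, c_void, CString};

    #[no_mangle]
    pub unsafe extern "C" fn glXSwapBuffers(dpy: *mut c_void, drawable: c_ulong) {
        // Look up the real glXSwapBuffers the game would have called.
        let name = CString::new("glXSwapBuffers").unwrap();
        let real = libc::dlsym(libc::RTLD_NEXT, name.as_ptr());
        let real: unsafe extern "C" fn(*mut c_void, c_ulong) =
            std::mem::transmute(real);

        // The frame is fully rendered at this point: issue our own GL draw
        // calls on top of it before handing it over for display.
        // draw_overlay();

        real(dpy, drawable);
    }
    ```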

    I think avoiding injection is cleaner and easier, since you’re completely independent from the game client: you won’t crash the game, you can test your code as a regular window, and you can restart it mid-game. Injecting would also trigger anticheat if present. Which is fair enough, since what you’re doing kind of is cheating, even though anyone could also just do it with pen and paper.


  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite common, especially in big corps. Big corps also seem to like the gatekeeping of a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on read-only diskless VMs at work because otherwise customers would refuse to sign, since “we have no security systems”. Everything has to be TLS-encrypted, even if it goes to localhost. Box-checkers vs common sense.



  • Neither do Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare, and we have Fortune 100 companies served with LetsEncrypt certs.

    I haven’t seen an EV cert in years; browsers stopped displaying them specially ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs which IP requested a certificate, that’s a lot more than what regular CAs do.

    I guess one more to the pile of why everyone hates Zscaler.


  • To be fair, that’s more of a general DevOps/server-admin learning curve than anything specific to Vaultwarden.

    It looks a bit complicated at first, as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going: each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup carries over to, say, Jitsi, which is an absolute mess of components (some Java stuff and all) to install and make work together. With Docker? Just docker compose up -d, wait a minute or two, and it’s good to go; you only need to point your reverse proxy at it. A minimal compose file looks something like the sketch below.
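    For illustration, a hypothetical minimal docker-compose.yml for Vaultwarden (the image name is the real one; the volume path and service name are assumptions):

    ```yaml
    services:
      vaultwarden:
        image: vaultwarden/server:latest
        restart: unless-stopped
        volumes:
          # All persistent state (database, attachments, config) lives in
          # /data inside the container, mapped to ./vw-data next to this file.
          - ./vw-data:/data
    ```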

    Why do you need a reverse proxy? Because it’s the centralized location where everything comes in: instead of 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine; you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and HAProxy as well. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and forwards it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common setup in self-hosting; see the Caddyfile sketch below.
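    As an example, a hypothetical Caddyfile entry for the Vaultwarden container above (hostname and container port are assumptions; Caddy obtains the certificate automatically):

    ```
    vault.example.com {
        # HTTPS terminates here; the container only ever sees plain HTTP.
        reverse_proxy vaultwarden:80
    }
    ```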

    As for your backups: if you used a Docker compose file, the volume data should be in the same directory. But the app is probably using some sort of database, so you might want to look into periodic data exports instead, as databases don’t like being backed up live: the file is constantly being updated, so you can’t get a consistent snapshot of it in one go.
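    For example, if it’s the default SQLite database (an assumption about your setup, as are the paths), SQLite’s own backup command can produce a consistent snapshot even while the app is running:

    ```sh
    # .backup copies the database safely, honoring SQLite's locking.
    sqlite3 ./vw-data/db.sqlite3 ".backup '/backups/vaultwarden.sqlite3'"
    ```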

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it; Caddy picks it up automagically, and boom, it’s live and good to go!

    Moving services to a new server is pretty easy as well. Copy over your configs, composes, and volumes if applicable; start them all, and they should come back in exactly the same state as on the other box. No services to install and configure, no repos to add, no distro to maintain: it’s all built into the container by someone else, so you don’t have to worry about any of it. Each update of the app brings with it a whole matching OS image with the right packages in the right versions.

    As a DevOps engineer, I love the whole thing, because I can have a Kubernetes cluster running on a whole rack, tell it “here are the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down, the containers respawn on another one and keep going as if nothing happened. We never have to manually log into any of those servers to install services. More upfront work, minimal work afterwards. A sketch of what that declaration looks like is below.
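    For a taste, a hypothetical minimal Kubernetes Deployment (names and replica count are just for illustration): you declare the desired state, and the cluster keeps it true, rescheduling pods if a node dies.

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vaultwarden
    spec:
      replicas: 2                 # keep two copies running at all times
      selector:
        matchLabels:
          app: vaultwarden
      template:
        metadata:
          labels:
            app: vaultwarden
        spec:
          containers:
            - name: vaultwarden
              image: vaultwarden/server:latest
    ```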