Exactly what I was going to say because this hit me a while back. I still have no good solution; I have to delete shows/movies from the *arr and then manually delete them from qBittorrent too.
Yeah, it was all tapes. We only had to use them once while I worked there: after finding out the UPS connected to the mainframe was a dud. And then it really was roulette, because the first two tapes were unreadable, so we ended up with three-week-old data.
I’d believe it’s real. In 2016 I was at a company trying to migrate off an old IBM mainframe and green screens. It wasn’t like an airline with complex or critical code; it was just a barely functional ERP for a warehouse. Source control was the furthest thing from their minds. Some companies and IT departments are very reluctant to change, regardless of how much time and money it saves.
It gets worse if you use Microsoft D365 AX products. Then you have to provision an entire build server just for builds. To do a build, you run a pipeline in Azure DevOps, which runs the compiler in a full Visual Studio 2019 environment, which has to run on a special Azure virtual environment running Windows 10 hosted by Microsoft. It’s so fragile.
He narrates the Minecraft audiobooks my kid listens to as well.
I used to write extensively with C++, but it has been a long time since speed mattered that much to one of my applications. I still marvel at the cache-level optimizations some people come up with, but I’m in the same mindset as you now.
My workload split of Data Movement vs Data Transformation is like 95:5 these days, which means almost all the optimizations I do are changing batch/cache/page/filter settings. I can do that in any language with http bindings, so I choose those that are faster to write.
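To make that concrete, here’s a minimal sketch of what that kind of tuning looks like, where the only real knob is the batch/page size. `fetch_page` is a hypothetical stand-in for any paginated HTTP endpoint, not a real API:

```python
def fetch_page(offset, limit):
    # Hypothetical stand-in for an HTTP call like GET /items?offset=..&limit=..
    data = list(range(1000))  # pretend this is the remote dataset
    return data[offset:offset + limit]

def move_all(page_size=200):
    """Pull everything in batches; page_size is the main tuning knob.

    The logic is identical in any language with HTTP bindings, which is
    why a faster-to-write language wins for this kind of workload.
    """
    rows = []
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        rows.extend(page)
        offset += page_size
    return rows
```

The result is the same regardless of `page_size`; only throughput and memory change, which is exactly the 95% data-movement tuning described above.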
It felt like the wild west for a while because there were so many open problems and each implementation seemed to be focusing on a subset of them. Git handles all of them with decent enough speed that there isn’t much incentive to go against the grain.
I think Git is good enough and so ubiquitous that we won’t see a competitor until coding itself drastically changes shape. Who knows what that will look like, but if it’s not collections of relatively flat files then Git may someday be replaced.
We need a Thanks I Hate It (TIHI) community on Lemmy. I don’t like this news you have posted, but I do appreciate that you posted it.
Very interesting. It would certainly make doom scrolling harder. Email always feels more personal, like each message was sent specifically to me for a reason. As opposed to feeds, which feel like watching cars drive by.
I think this system pushes against those boundaries. This sort of concrete brainstorming at the edges is such a crucial part of software evolution, so thank you.
“don’t have to throw the whole thing out” is what convinced me to get one. I’m not going to make a big difference on my own, but minimizing what I recycle, throw out, or chuck in the basement is still worthwhile.
My cheap old 3D printer requires constant fiddling before and after every print, yet still fails probably half the time. I avoid printing things sometimes just because I don’t want to deal with it.
I would still agree with you 100%. I hate my HP printer so much.
That’s my primary gripe too. I could theoretically work around it if the chat search worked. I’ll try searching for a specific word to see who said it to me and when, but if it was more than a couple days ago I’m out of luck. Later I’ll remember who said it, eventually find them in the sidebar, scroll up 40 pages in the chat, and find the exact word Teams claimed it’s never heard of.