Thank you for the feedback. I had a lot of fun playing with the model (and I still have some improvements in mind that might require porting it outside of Shadertoy).
Is there any part that was especially hard to understand? I’m trying to make it as clear as possible for developers without a scientific background.
I can recommend some stuff I’ve been using myself:
I design, deploy and maintain such infrastructures for my own customers, so feel free to DM me with more details about your business if you need help with this
It’s not that I don’t believe you; I was genuinely interested in knowing more. I don’t understand what’s so “precious” about a random stranger’s thought on the internet if it’s not backed up by any source.
Moreover, I did try searching around for this and could not find any result that seemed to answer my question.
Can you give examples of countries where mainstream media is not owned by billionaires?
On this day, exactly 12 years ago (9:30 EDT, 1 Aug 2012), the most expensive software bug ever struck, in terms of both dollars lost per second and total dollars lost. The company managed to pare its losses through the heroics of Goldman Sachs and “only” lost $457 million (which still led to its dissolution).
Devs were tasked with porting their HFT bot to an upcoming NYSE API service that was announced to go live less than 33 days in the future. So they started a death-march sprint of 80-hour weeks. The HFT bot was written in C++. Because they didn’t want to force even a single recompile of the clients, the lead architect decided to keep the exact same class and method signature for their PowerPeg::trade() method (PowerPeg being the automated testing bot they had been using since 2003). This also meant that they did not have to update the WSDL for the clients that used the bot, either.
They ripped out the old dead code and put in the new code: code that actually called real trading logic, instead of the test logic, which was designed, by default, to buy the highest offer given to it.
They tested it, they wrote unit tests, everything looked good. So they decided to deploy it at 8 AM EDT, 90 minutes before market open. QA testers tested it in prod and gave the all clear. Everyone was really happy. They’d done it. They’d made the tight deadline and deployed with 90 minutes to spare…
They immediately went to a sprint standup and then sprint retro meeting. Per their office policy, they left their phones (on mute) at their desks.
During the retro, the markets opened at 9:30 EDT, and the new bot went WILD (!!) It just started buying the highest offer it was given for every stock in its buy list. The markets didn’t react very abnormally, because it just looked like someone was bullish. But the bot was buying about $5 million of shares per second… Within 2 minutes, warning alarms were going off in their internal finance department… a huge percentage of their $2.5 billion in operating cash was being depleted, and fast!
Many people tried to contact the devs, but they were in a remote office in Hoboken (thanks to the high price of real estate in Manhattan), their phones were off, and no one was at their computer.
The CEO was seen getting people to run through the halls of the building, yelling, and finally the devs noticed. 11 minutes had gone by, and the bots had bought over $3 billion of stock. The total cash reserves were depleted. The company was in SERIOUS trouble…
None of the devs could find the source of the bug. The CEO, desperate, asked for solutions. “KILL THE SERVERS!!” one of the devs shouted!!
They got techs at the datacenter next to the NYSE building to find all 8 servers that ran the bots and DESTROY them with fire axes, just ripping the wires out… And finally, after 28 minutes, the bots stopped trading. Total paper loss: $10.8 billion.
The SEC + NYSE refused to rewind the trades for all but 6 stocks, so the on-paper losses were still around $8 billion. There was no way they could pay. Goldman Sachs stepped in and offered to buy all the stock at a price that cost the company $457 million, which they agreed to. All in all, the company lost close to $500 million, all of its corporate clients left, and it went out of business a few weeks later.
Now what was the cause of the bug? Fat-fingered human error during the release.
The sysop had declined to implement CI/CD, which was still in its infancy, probably because manual releases were his full-time job and he was making something like $300,000 in 2012 dollars ($500k today). There were 8 servers that housed the bot, plus a few clients on the same servers.
The sysop had typed out and pasted the rsync commands to get the new C++ binary onto the servers, and they were correct except for server 5 of 8: that one had an extra 5 in the server name. The rsync failed, but because he had pasted all of the commands at once, he didn’t notice…
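A hedged sketch of the kind of paste that can bite you this way (hostnames are invented for illustration; the point is the silent failure in the middle of the batch):

```sh
# Hypothetical reconstruction of the pasted batch.
rsync -av trade_bot deploy@prod-trade-4:/opt/bot/
rsync -av trade_bot deploy@prod-trade-55:/opt/bot/   # extra "5": host doesn't resolve, rsync fails
rsync -av trade_bot deploy@prod-trade-6:/opt/bot/
# Pasted as one block, the single non-zero exit status scrolls by unseen.
# A loop that checks exit codes would have caught it:
for i in 1 2 3 4 5 6 7 8; do
  rsync -av trade_bot "deploy@prod-trade-$i:/opt/bot/" \
    || { echo "DEPLOY FAILED on server $i" >&2; exit 1; }
done
```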
Because the new code used the exact same method signature for the trade() method, server 5 was happy to keep buying up the most expensive offer it was given: it was still running the sad-path test trading software. If they had changed the method signature, the stale binary wouldn’t have run, and the bug wouldn’t have happened.
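To illustrate (a hypothetical sketch, not their actual code): with an identical signature, a stale binary is indistinguishable from the new one to every caller.

```cpp
#include <iostream>

struct Quote { double highest_offer; };
struct Order { double price; };

// The old 2003 test stub. Its "sad path" behaviour is to lift whatever
// the highest offer is. The replacement exposed the exact same signature,
// so the one server still running this binary looked perfectly healthy.
struct PowerPeg {
    Order trade(const Quote& q) {
        return Order{q.highest_offer};  // always overpays, by design
    }
};

int main() {
    PowerPeg bot;
    std::cout << "bought at " << bot.trade({101.25}).price << '\n';
}
```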
At 9:43 EDT, the devs collectively decided to do a “rollback” to the previous release. This was the worst possible mistake: it put the Power Peg dead code back onto the other 7 servers, multiplying the damage. And it took about 3 minutes for anyone in Finance to actually inform them. By that point, roughly $5 million per second was being lost to the bug.
It wasn’t until 9:58 EDT, when the servers had all been destroyed, that the trading stopped.
Here is a description of the aftermath:
> It was not until 9:58 a.m. that Knight engineers identified the root cause and shut down SMARS on all the servers; however, the damage had been done. Knight had executed over 4 million trades in 154 stocks totaling more than 397 million shares; it assumed a net long position in 80 stocks of approximately $3.5 billion as well as a net short position in 74 stocks of approximately $3.15 billion.
28 minutes. $6.65 billion in positions it should never have taken ($3.5 billion long + $3.15 billion short). ~1,680 seconds. ~$4 million/second.
And after the rollback at 9:43, about $4.4 billion of that accumulated in ~900 seconds: ~$4.9 million/second.
That was the story of how a bad software decision and a fat-fingered manual production release destroyed the most profitable stock trading firm of its time: the most expensive software bug in human history.
I’m pretty sure they are actually hosting it. The tech is quite different (Cofractal uses URLs ending with `{z}/{x}/{y}`, while their tile server uses this stuff that works quite differently).
They told me about hosting their own tile server earlier today. I’m really impressed by how fast they moved!
A pull request for a privacy page during the onboarding is in the works, and I’ve been working with them to update the settings page and documentation (with the goal of providing an easy way to switch map providers). They are also working on a privacy policy, and want to ship all of this in a few weeks as part of a single release.
Once again, I’m really impressed with how well they’re handling this
With all the botting going on on Reddit, this whole Google AI deal makes me think of the recent paper demonstrating that, as common sense would suggest, deep learning models collapse when successive generations are trained on the previous generations’ output.
> never stopped POSTing, even though I configured nginx to always respond 403 to anything from them for about a year now.
Lol, there are definitely some stubborn user agents out there. I’ve been serving 418 to a bunch of SEO crawlers for a few months now, with fail2ban configured to drop all packets from their IPs/CIDR ranges after a few attempts. They come back at the same rate as soon as they get unbanned, so I guess they keep sending requests into the void for the whole ban duration.
Using 418 for undesirable requests instead of a more common status code (such as 403) lets me easily filter these blocks in fail2ban, which can help weed out a lot of noise in server logs.
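Roughly what that looks like (a sketch rather than my exact config; the user agents, paths, and filter name are placeholders):

```nginx
# in the http block: map the offending user agents to a flag
map $http_user_agent $blocked_bot {
    default                            0;
    "~*(AhrefsBot|SemrushBot|MJ12bot)" 1;
}

server {
    # ...
    if ($blocked_bot) {
        return 418;  # easy to grep for, unlike 403s that legit clients also get
    }
}
```

And the matching fail2ban filter and jail:

```ini
# /etc/fail2ban/filter.d/nginx-418.conf (hypothetical filter name)
[Definition]
failregex = ^<HOST> .+ "\S+ \S+ \S+" 418

# /etc/fail2ban/jail.d/nginx-418.conf
[nginx-418]
enabled   = true
filter    = nginx-418
logpath   = /var/log/nginx/access.log
maxretry  = 3
bantime   = 86400
banaction = iptables-allports
```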
Your sensitive data and logins are tied to email addresses, which are tied to domains. Lose your domain, and someone can access everything.
I recently stumbled upon an article showing how bad this can be when the expired domains were used for important/serious stuff
I think they do get marked as dead once the Bodis subdomain stops acting as a Lemmy instance. But I was wondering whether a large number of instances “waking up from the dead” and acting maliciously could cause some trouble. Or would such “undead” instances pose no more threat to the fediverse than the same number of newly created malicious instances? I’m mainly thinking about things like being in a privileged position to DoS most instances at once, or impersonating accounts that used to actually exist on these “undead” instances.
Is `named` actually running as the `bind` user inside the container? Maybe a `USER bind` line below the `RUN` lines will help.
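Something like this, assuming a Debian-based image (a minimal sketch; package name and paths are assumptions):

```dockerfile
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends bind9 \
    && rm -rf /var/lib/apt/lists/*
# the bind9 package creates the bind user; give it ownership of what named needs
RUN mkdir -p /run/named \
    && chown -R bind:bind /etc/bind /var/cache/bind /run/named
USER bind
# -g keeps named in the foreground as the container's main process
CMD ["named", "-g"]
```

If I remember correctly, binding port 53 as non-root works on recent Docker because `net.ipv4.ip_unprivileged_port_start` defaults to 0 inside containers; on older setups you may need to grant `CAP_NET_BIND_SERVICE`.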
I’ll probably look into newer, fancier options such as Caddy one day, but as far as I remember Nginx has never failed me: it’s stable, battle-tested, and extremely mature. I can’t remember a single time when I’ve been affected by a breaking change (I could not even find one by searching the changelogs), and the feature set makes it very versatile. Newer alternatives seem really interesting, but it seems to me they have quite frequent breaking changes and are not as feature-rich.
That being said, I’d love to see a side-by-side comparison of Nginx and Caddy configs (if anyone wants to translate the Nginx caching proxy for OSM I shared earlier this week into Caddy, that would make a good and useful example), as well as examples of features missing from Nginx. That might give me enough motivation to actually try Caddy :)
(edit: ad->and)
I don’t use nginx-proxy-manager, but if you want to share what you tried, I’ll try to help you figure out what’s not working.
It’s the clients (web/android app, probably iOS too) that are making these requests.
To the best of my knowledge, the Immich server inside the container is not making requests to the outside. It is merely sending a `style.json` to the client displaying a map, which then fetches tiles from the Cofractal URL inside this JSON.
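Roughly like this (illustrative only; the real style is more elaborate, and the tile URL here is a placeholder rather than the actual Cofractal endpoint):

```json
{
  "version": 8,
  "sources": {
    "tiles": {
      "type": "raster",
      "tiles": ["https://tiles.example.com/{z}/{x}/{y}.png"],
      "tileSize": 256
    }
  },
  "layers": [
    { "id": "tiles", "type": "raster", "source": "tiles" }
  ]
}
```

The server never fetches a tile itself; the client reads the URL template out of this JSON and requests tiles directly.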
Or you can quite easily configure nginx as your personal caching proxy with an arbitrarily long TTL/retention duration (you can check out my follow-up post for instructions on doing that)
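The gist of it looks like this (a trimmed-down sketch with placeholder paths and upstream; the follow-up post has the full version):

```nginx
# in the http block: a persistent on-disk cache that survives reboots
proxy_cache_path /var/cache/nginx/tiles levels=1:2 keys_zone=tiles:10m
                 max_size=10g inactive=365d;

server {
    listen 80;

    location /map_proxy/ {
        proxy_pass https://tiles.example.com/;      # placeholder upstream
        proxy_ssl_server_name on;
        proxy_cache tiles;
        proxy_cache_valid 200 365d;                 # arbitrarily long TTL
        proxy_ignore_headers Cache-Control Expires; # keep tiles as long as *we* want
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```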
I used to wonder what kind of nerd notices this kind of thing. Now I’m one of them.
Edit: If you want to join us:
I don’t use Traefik myself, but this documentation page seems to suggest that Traefik only allows in-memory caching (which would eat RAM and not persist across reboots). You can probably run Nginx with this config inside a container to do the caching, then use Traefik to route requests for `immich.your-domain.tld/map_proxy/*` to the caching proxy container.
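If it helps, here’s roughly what I mean as a docker-compose sketch (service names, domain, and paths are placeholders):

```yaml
services:
  map-cache:
    image: nginx:alpine
    volumes:
      - ./map_proxy.conf:/etc/nginx/conf.d/default.conf:ro
      - tile-cache:/var/cache/nginx/tiles   # on-disk cache, survives reboots
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.map-cache.rule=Host(`immich.your-domain.tld`) && PathPrefix(`/map_proxy`)"
      - "traefik.http.services.map-cache.loadbalancer.server.port=80"

volumes:
  tile-cache:
```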
What do you mean? Can you give me the exact link that’s not working?
I did not expect such a detailed code review (the fact that you wrote it on mobile impresses me even more), but I strongly agree with everything you mentioned. I think I was so caught up in learning GLSL and its quirks, then playing and experimenting with the simulation, that I “forgot” my coding standards. Anyway, I’ll make sure to take some time to update both the code and the article following your recommendations.