• 0 Posts
  • 192 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • When I was young I remember that banks often had large drive-thrus with pneumatic tube systems at each car stall.

    There would be only one teller, but they could serve quite a few lanes.

    If you wanted a cash withdrawal, you might put your ID and your withdrawal slip in the tube, and a few minutes later it would come back with cash in it.

    It was pretty rad. But ATMs seem like a better bet overall.



  • This Wired article is an interesting read, well worth the time.

    I wish we could see into the head of Stockton Rush a little bit more. The job of an entrepreneur is, to a large degree, knowing who to listen to and who to ignore, and figuring out which rules you can break. Usually, though, your own life and the lives of your passengers are not on the line, and that’s why so many of the highly competent engineers left his team.

    A lot of his decision making seemed money driven. He got quotations for testing services but declined because of the cost. Salvaging the titanium rings from the old, failed hull for the new one was a risky choice, but new ones were surely very expensive. Perhaps a much larger budget would have led to a more committed team of experts and the resources to test things to a higher degree of confidence.

    As this article points out, OceanGate just never came up with a design that was good enough for the job at hand.

    But what can you say? The ocean floor is littered with countless dreams.


    Of all the billionaires who do exist, Bill and Melinda would probably agree with you. Bill has been pretty clear that he always played the game to win, but he’s also stated he intends to give it all away, and he’s openly recruiting other billionaires to give it all away as well.

    I suppose evil billionaires could give it away to make the world a worse place, say by developing something like sharks with lasers on their heads. But these two are giving it away to help eliminate malaria around the world.

    If all billionaires were like Bill and Melinda, I suppose the world would be a significantly better place.





  • There’s a program called XEvil that can solve even hCaptcha reliably, and it can solve these first-gen captchas by the thousands per second. It’s been solving Google’s reCAPTCHA v3 for a long time already, too.

    People who write automation tools (unfortunately, usually SEO spammers and web scrapers) have been using these apps for a long time.

    Captchas haven’t been effective at protecting important websites for years; they just keep away the script kiddies who can’t afford the tools.







  • Well thought-out and articulated opinion, thanks for sharing.

    If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

    When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label the result as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. To be clear, I don’t mean models trained on CSAM; I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

    Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

    To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.



  • Well stated and explained. I’m not an AI researcher but I develop with LLMs quite a lot right now.

    Hallucination is a huge problem we face when we’re trying to use LLMs for non-fiction. It’s a little bit like having a friend who can lie straight-faced and convincingly: you cannot tell whether they’re being truthful or lying until you act on their answer.

    I think one of the nearest solutions may be the addition of extra layers, or observer engines, that are very deterministic and trained only on extremely reputable sources: peer-reviewed journals, for example, or other sources we deem trustworthy. Unfortunately, this could only improve our confidence in the facts, not remove hallucination entirely.

    It’s even feasible that we could have multiple observers with different domains of expertise (i.e., different training sources) that vote to fact-check and rate the trustworthiness of the LLM’s output; a toy sketch of the idea follows.
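
    For what it’s worth, here’s a minimal sketch of what that voting layer could look like. Everything in it is hypothetical: the observer functions are stubs standing in for separately trained, domain-grounded models, and the scoring is a plain majority vote.

    ```python
    from collections import Counter
    from typing import Callable, List

    # All hypothetical: each observer stands in for a deterministic engine
    # grounded in its own trusted corpus (medical journals, case law, etc.)
    # and returns a verdict on a claim: "support", "refute", or "unsure".

    def observer_medical(claim: str) -> str:
        # Stub: a real observer would retrieve from peer-reviewed sources.
        return "support" if "exercise" in claim.lower() else "unsure"

    def observer_legal(claim: str) -> str:
        return "unsure"  # abstains outside its domain

    def observer_general(claim: str) -> str:
        return "support"

    def trust_score(claim: str, observers: List[Callable[[str], str]]) -> float:
        """Majority vote across observers; abstentions don't count either way.

        Returns the fraction of opinionated observers that support the claim,
        or 0.5 when every observer abstains.
        """
        votes = Counter(obs(claim) for obs in observers)
        opinionated = votes["support"] + votes["refute"]
        if opinionated == 0:
            return 0.5
        return votes["support"] / opinionated

    llm_output = "Regular exercise reduces the risk of heart disease."
    score = trust_score(llm_output, [observer_medical, observer_legal, observer_general])
    print(f"trust score: {score:.2f}")  # 1.00: two support, one abstains
    ```

    A real version would weight observers by domain relevance and confidence, but even this toy shows the shape of the idea: the LLM’s output becomes a claim to be adjudicated rather than an answer to be trusted.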

    But in the short term, all this will accomplish is rolling the dice in our favor a bit more often.

    The perceived results for end users, however, may significantly improve. Consider some human examples: sometimes people disagree with their doctor, so they see another doctor, and another, until they get the answer they want. Sometimes two very experienced lawyers look at the same facts and disagree.

    What prevents me from knowingly stating something as true, without the ability to back up my claim, is my reputation and my personal values and ethics. LLMs can only pretend to have those traits when we tell them to.