I vote for xX-[X]-Xx
Alas, this being the darkest timeline, we’ll probably end up with X Social.
He literally just fixed it, and he learned nothing from this; Dunning-Kruger as strong as ever.
Instead of simply blurring them, it’d be technically possible to feed their images through a Stable Diffusion prompt, like “humanoid lizards” or “frantic lemmings”…
Also, I understand that a large language model could be made to rewrite articles about them with a matching prompt.
That would be very silly, of course.
Yes, it really was renamed after the Zuckerbergs, as buildings sometimes are at the request of a large donor seeking posterity.
See Wikipedia:
In November 2008, San Francisco voters approved an $887.4 million general obligation bond for the General Hospital rebuild; work began in 2009 and was expected to be finished in 2015.
In 2015, Facebook founder and CEO, Mark Zuckerberg, and his wife Priscilla Chan gave $75 million to help fund equipment and technology for the new hospital.
More appropriate tools to detect AI-generated text, you mean?
It’s not a thing. I don’t think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.
The punishment for a teacher deciding you cheated on a test or an assignment? I don’t know, but I imagine it sucks. Best case, you’d probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.
If you’re asking what teachers should do to defend against AI-generated content, I’m afraid I don’t have an answer. It’s akin to giving students math homework but demanding that they not use calculators. That could have been reasonable before calculators were a thing, but not anymore, so teachers no longer expect that rule to make sense and don’t impose it on students.
There are stories upon stories of students getting shafted by gullible teachers who took one of those AI detectors at face value and decided their students were cheating based solely on its output.
And somehow those teachers are not getting the message that they’re relying on snake oil to harm their students. They certainly won’t see this post, and there just isn’t enough mainstream pushback explaining that AI detectors are entirely inappropriate tools to decide whether to punish a student.
No True Christian would ever activate a fully automated sentry killbot that doesn’t use at least one of its compute cores to pray to the Almighty on a loop.
Presumably because they don’t have a single delivery employee. They just provide “tech” that lets drivers and customers find each other.
Of course, if those companies were to become responsible for providing a living wage to their “gig workers”, it would become harder to keep calling them mere “tech” companies (and some might argue that an article using that label to describe them is implicitly picking a side in that lawsuit).
The term AI was coined many decades ago to encompass a broad set of difficult problems, many of which have become less difficult over time.
There’s a natural temptation to remove solved problems from the set of AI problems, so playing chess is no longer AI, diagnosing diseases through a set of expert system rules is no longer AI, processing natural language is no longer AI, and maybe training and using large models is no longer AI nowadays.
Maybe we do this because we view intelligence as a fundamentally magical property, and anything that has been fully described has necessarily lost all its magic in the process.
But that means that “AI” can never be used to label anything that actually exists, only to gesture broadly at the horizon of what might come.
I’ll note that there are plenty of models out there that aren’t LLMs and that are also being trained on large datasets gathered from public sources.
Image generation models, music generation models, etc.
Heck, it doesn’t even need to be about generation. Music recognition and image recognition models can also be trained on the same sort of datasets, and arguably come with similar IP right questions.
It’s definitely a broader topic than just LLMs, and attempting to enumerate exhaustively the flavors of AIs/models/whatever that should be part of this discussion is fairly futile given the fast evolving nature of the field.
One of my guilty pleasures is to rewrite trivial functions to be statement-free.
Since I’d be too self-conscious to put those in a PR, I mostly keep them to myself.
For example, here’s an XPath wrapper:
const $$$ = (q,d=document,x=d.evaluate(q,d),a=[],n=x.iterateNext()) => n ? (a.push(n), $$$(q,d,x,a)) : a;
Which you can use as $$$("//*[contains(@class, 'post-')]//*[text()[contains(.,'fedilink')]]/../../..")
to get an array of matching nodes.
If I was paid to write this, it’d probably look like this instead:
function queryAllXPath(query, doc = document) {
  const array = [];
  const result = doc.evaluate(query, doc);
  let node = result.iterateNext();
  while (node) {
    array.push(node);
    node = result.iterateNext();
  }
  return array;
}
Seriously boring stuff.
Anyway, since var/let/const are statements, I have no choice but to use optional parameters instead, and since loops are statements as well, recursion saves the day.
Would my quality of life improve if the lambda body could be written as => if n then a.push(n), $$$(q,d,x,a) else a
? Obviously, yes.
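To make the pattern concrete, here’s a hypothetical minimal example of the same trick applied to a different trivial function (the `range` name and the function itself are mine, not from the snippet above): default parameters stand in for `let`/`const`, and recursion stands in for the loop.

```javascript
// Statement-free range builder: default parameters replace variable
// declarations, recursion replaces the loop, and the comma operator
// sequences the push with the recursive call.
const range = (n, i = 0, a = []) =>
  i < n ? (a.push(i), range(n, i + 1, a)) : a;
```

So `range(4)` yields `[0, 1, 2, 3]`, with the same caveat that nobody should have to review this in a PR.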
ViolentMonkey is open source, TamperMonkey is not.
You don’t have to, but it’d be a lot cooler if you did.
Let that trashcan in.
It’s weirdly difficult to remap the “Office” key so that pressing it won’t open an ad for Microsoft 365, pressing Office+L won’t open linkedin.com, and so on for a few more equally valuable core OS features.
In the end I just had to grab a small bit of C code from GitHub, compile it, move the exe to the startup folder, let Windows Defender yell at me for having obviously installed a particularly nasty brand of trojan, and make Windows Defender put the executable I had just compiled back.
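For anyone wondering why this takes custom C at all: the Office key doesn’t have its own scancode. The keyboard sends the chord Ctrl+Shift+Alt+Win, which is why the classic registry Scancode Map remap can’t catch it; tools like the one I grabbed install a low-level keyboard hook instead. A minimal sketch of the chord test such a hook performs (the constants and function name here are illustrative, not real Windows virtual-key codes):

```c
#include <stdbool.h>

/* Illustrative modifier bitmask -- not actual Windows virtual-key codes. */
enum { MOD_CTRL = 1, MOD_SHIFT = 2, MOD_ALT = 4, MOD_WIN = 8 };

/* The Office key emits Ctrl+Shift+Alt+Win all at once, so a remapper
   tracks modifier state and checks whether the full chord is down. */
bool is_office_chord(unsigned mods) {
    return mods == (MOD_CTRL | MOD_SHIFT | MOD_ALT | MOD_WIN);
}
```

The real tool then swallows the chord in a low-level keyboard hook (`SetWindowsHookEx` with `WH_KEYBOARD_LL`) before Windows sees it, and that hook-installing behavior is exactly the kind of thing Defender flags.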
But really, I deserve this for using a Microsoft natural keyboard in the first place.