

Oh yeah no fair enough, thanks for hearing me out. Those kinds of people are exhausting
I am several hundred opossums in a trench coat
I agree, it feels like we’ve been arguing over semantics. When I (and, I’m assuming, the person you originally responded to) say race isn’t “real”, I don’t mean to claim that it doesn’t have material effects; I mean that it has no biological basis - i.e. it is socially constructed.
You do not need to believe race is a biological reality to acknowledge that others perceiving you (and your ancestors) as members of a race has materially affected your identity
I don’t really think I can come up with a more concise way of summarizing the idea than anthropologist Audrey Smedley did in the first result of the Google search “race social construct”:
Race is a culturally structured, systematic way of looking at, perceiving, and interpreting reality.
I would recommend you read something like “Feminism and ‘Race’” from Oxford Readings in Feminism or some of bell hooks’ work to understand the idea better.
Saying that race isn’t real is not the same as saying that we live in a post-racial society.
There are plenty of legitimate reasons for Google to provide extra support and exceptions to parts of their guidelines for certain parties, including themselves. No one is claiming this is a consequence-neutral decision, and it’s right not to inherently trust these exceptions, but it is not a black and white issue.
In this case, placing extra barriers around sensitive permissions like MANAGE_EXTERNAL_STORAGE for untrusted parties is perfectly reasonable, but the process they implemented should be competent and appealable to a real support person. What Google should be criticized for (and “heavily fined” by the EU if that were to happen) is their inconsistent and often incorrect baseline review process, as well as their lack of any real support. They are essentially part of a duopoly and should thus be forced to act responsibly.
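For anyone unfamiliar with why that particular permission is such a big deal: since Android 11, MANAGE_EXTERNAL_STORAGE (“All files access”) grants broad access to shared storage, and there is no runtime dialog for it - the user has to flip a toggle on a system settings screen, and Play requires a separate declaration-and-review process on top of that. A minimal Kotlin sketch of the app-side flow (the helper name ensureAllFilesAccess is mine, not an Android API; the platform calls are real):

```kotlin
// AndroidManifest.xml needs:
//   <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" />

import android.content.Intent
import android.net.Uri
import android.os.Build
import android.os.Environment
import android.provider.Settings
import androidx.appcompat.app.AppCompatActivity

fun AppCompatActivity.ensureAllFilesAccess() {
    // Pre-Android 11: the old READ/WRITE_EXTERNAL_STORAGE model applies instead.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) return
    // Already granted: nothing to do.
    if (Environment.isExternalStorageManager()) return

    // No runtime permission dialog exists for this; send the user to the
    // system "All files access" settings screen for this specific package.
    startActivity(
        Intent(
            Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION,
            Uri.parse("package:$packageName")
        )
    )
}
```

The code side is trivial; the hard part is Google’s policy review of the manifest declaration, which is exactly the process that needs to be competent and appealable.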
Oh yeah for sure. Google, extremely large companies, and government apps essentially have different streams and access to support than the rest of us mere mortals. They all receive scrutiny, and may have slightly altered guidelines depending on the app, but the most consequential difference is that they have much more ability to access real support. I just don’t think it was an intentional and specific attempt to be anti-competitive; this is better explained by incompetence and the consequences of well-intentioned but poorly implemented policy.
I’ve experienced this exact issue with the Google Play Store with some clients and it’s just the worst. This kinda thing happens because Google is essentially half-arsing an Apple-style comprehensive review of apps. For context, Apple offers thorough reviews pointing to exactly how the app violates policy/was rejected, with mostly free one-on-one support with a genuine Apple engineer to discuss or review the validity of the report/how to fix it. They’re restrictive as hell and occasionally make mistakes, but at the end of the road there is a real, extremely competent human able to dedicate time to assist you.
Google uses a mix of human and automated reviewers that are even more incompetent than Apple’s frontline reviewers. They will reject your app for what often feels like arbitrary reasons, and you’re lucky if their reason amounts to more than a single sentence. Unlike Apple, from that point you have few options. I have yet to find an official way to reach an actually useful human unless you happen to know someone in Google’s Android/Developer Relations team.
I’m actually certain that the issues facing Nextcloud are not some malicious anti-competitive effort, but yet more sheer and utter incompetence from every enterprise/business facing aspect of Google.
Doesn’t help that he’s got the build and demeanor of a Hitler cabinet member
You’re right, it looks like they didn’t (at least for most things?). They do mention raytracing briefly, and that the sampling stage can “combine point samples from this algorithm with point samples from other algorithms that have capabilities such as ray tracing”, but it seems like they describe something like shadow mapping for shadows and regular raster shading techniques (“textures have also been used for refractions and shadows”)?
Maybe, what I said is admittedly mostly based on the experience I have with Blender’s Cycles renderer, which is definitely not real time.
I’m not a computer graphics expert (though I have at least a little experience with video game dev), but considering Toy Story uses ray-traced lighting I would say it at least depends on whether you have a ray-tracing capable GPU. If you don’t, probably not. Otherwise, I would guess you could get something at least pretty close out of a modern day game engine.
I’ve used it most extensively for non-professional projects, where if I wasn’t using this kind of tooling to write tests they would simply not be written. That means no tickets to close either. That said, I am aware that the AI is almost always at best testing for regression (I have had it correctly realise my logic is incorrect and write tests that catch it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be correct is actually correct according to intent.
I essentially use the AI to generate a variety of scenarios and complementary test data, then further evaluate its validity and expand from there.
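To make that concrete, here’s a hypothetical sketch of what the end result tends to look like, using Kotlin and JUnit 5 (normalisePhone and every test row are invented for illustration): the AI drafts the scenario table, and the review pass is where you confirm each expectation actually matches intent and add the rows it missed.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.CsvSource

// Made-up function under test: strips formatting and assumes AU numbers.
fun normalisePhone(raw: String): String {
    val digits = raw.filter { it.isDigit() || it == '+' }
    return when {
        digits.isEmpty() -> ""
        digits.startsWith("+") -> digits
        digits.startsWith("0") -> "+61" + digits.drop(1)
        else -> digits
    }
}

class NormalisePhoneTest {
    @ParameterizedTest
    @CsvSource(
        // AI-drafted rows, each audited by hand before being kept:
        "'0412 345 678', '+61412345678'",    // local mobile -> E.164
        "'+61 412 345 678', '+61412345678'", // already international
        // Rows added during review, covering cases the AI skipped:
        "'', ''",                            // empty input stays empty
    )
    fun `normalises numbers to E164`(input: String, expected: String) {
        assertEquals(expected, normalisePhone(input))
    }
}
```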
I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.
I think it’s most useful as an (often wrong) line completer more than anything else. It can take in an entire file and just try to figure out the rest of what you are currently writing. Its context window simply isn’t big enough to understand an entire project.
That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand holding though.
Fifth, they could simply write checks to Treasury that help us finance global public goods.
You have to be fucking kidding me.
C’mon, that’s what PRs, RCs, and betas are for
It’s important to note that this is them moving in-development branches/features “behind closed doors”, not making Android closed source. Whenever a feature is ready they then merge it publicly. I know this community tends to be filled with purists, many of whom are well informed and reasoned, but I’m actually totally fine with this change. This kind of structure isn’t crazy uncommon, and I imagine it’s mainly an effort to stop tech journalists analysing random in-progress features for an article. Personally, I wouldn’t want to develop code with that kind of pressure.
It’s weird you think China is some kind of gotcha, because if the best the Canadian government could do in the unlikely future where “China is parked on [Australia’s] coastline” is a symbolic gesture that hurts its own citizens, I would rather you didn’t. So again, why do you expect us to damage our own economy for the sake of a symbolic gesture?
In my experience, an LLM can write small, basic scripts or equally small and isolated bits of logic. It can also do some basic boilerplate work and write nearly functional unit tests. Anything else and it’s hopeless.