• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Ironically enough, Aurora city water consistently wins awards for its quality lol.

    I think the legitimate reason is that Aurora is a physically massive city, has lower housing costs than the rest of the metro area, and Denver has a habit of forcing its homeless population out and into Aurora. The police department is also an absolute good ole boys’ club whose officers are terrified of city residents, to the point where they drive unmarked/undercover vehicles by default (at least it seems that way: I see very few marked police cars, but whenever there’s a collection of cop cars with lights going, the majority are undercover).

    Sauce: Current Aurora, CO resident. It’s not all bad


  • Embedded systems run into this a lot, especially on low-level communication buses. It’s pretty common to have a bus architecture where a single device is supposed to be in control of both the communication happening on the bus and what the other devices are actually doing. SPI and I2C are both examples of this, but both of those buses also have architectures where there isn’t one single controller, or where the devices have some other way to arbitrate who is talking on the bus. It’s functionally useful to have terms that differentiate between the two.

    I’ve seen Master/Servant used before, which in my experience just trips people up and doesn’t really address the cultural reason for not using the terms.

    Personally I’m a fan of the MIL-STD-1553 terminology, Bus Controller and Remote Terminal, but the letters M and S are baked into so much literature and so many designs at this point (e.g. MISO and MOSI) that entirely swapping them out would be costly and so few people will do it, so the old terms stick around. A rough sketch of how baked in those names are is below.
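
    To make that concrete, here’s a toy pin-map header in the style you’d see in a board support package. The pin numbers and macro names are made up for illustration; the point is that the M/S letters live in headers, datasheets, and silkscreen all at once, so any rename (like the Controller/Peripheral wording some vendors have moved to) has to keep the legacy names around as aliases or break every downstream driver.

    ```c
    /* Hypothetical board pin map, purely illustrative.
     * Legacy names: Master Out Slave In / Master In Slave Out. */
    #define SPI0_PIN_MOSI  11
    #define SPI0_PIN_MISO  12
    #define SPI0_PIN_SCK   13
    #define SPI0_PIN_CS    10

    /* Newer Controller/Peripheral-style aliases (e.g. COPI/CIPO).
     * Both sets have to coexist for a long time, otherwise existing
     * drivers and example code that reference MOSI/MISO stop building. */
    #define SPI0_PIN_COPI  SPI0_PIN_MOSI  /* Controller Out, Peripheral In */
    #define SPI0_PIN_CIPO  SPI0_PIN_MISO  /* Controller In, Peripheral Out */
    ```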


  • For graphics, the problem to be solved is that the N64 compiled code is expecting that if it puts value X at memory address Y it will draw a particular pixel in a particular way.

    Emulators solve this problem by having a virtual CPU execute the game code (kinda difficult), and then emulator code reads the virtual memory space the game code is interacting with (easy), interprets those values (stupid crazy hard), and replicates the graphical effects using custom code/modern graphics API (kinda difficult).

    This program decompiles the N64 code (easy), searches for known function calls that interact with the N64 GPU (easy), swaps them with known valid modern graphics API calls (easy), then compiles for the local machine (easy). Knowing what function signatures to look for and what to replace them with in the general case is basically downright impossible, but because a lot of N64 games used common code, if you go through the laborious process for one game you get a bunch of extra games for free or with way less effort (a rough sketch of the swap idea is below).

    As one of my favorite engineering phrases goes: the devil is in the details
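
    To give a feel for what that “swap known calls” step looks like, here’s a heavily simplified sketch. The function names are hypothetical stand-ins, not the actual recompiler output or the real N64/libultra API; the real thing deals in display lists and RDP commands rather than single triangles.

    ```c
    /* Toy sketch of statically recompiled N64 code with a graphics call
     * swapped out. All names are made up for illustration. */
    #include <stdio.h>

    /* Replacement for a recognized N64 graphics routine. In a real port
     * this would issue Vulkan/OpenGL/D3D calls; printing keeps the sketch
     * self-contained. */
    static void shim_draw_triangle(int v0, int v1, int v2) {
        printf("draw triangle with vertices %d %d %d\n", v0, v1, v2);
    }

    /* Recompiled game function: originally it wrote RDP command words into
     * a display list in N64 memory; because that routine's signature was
     * recognized, the recompiler rewired the call to the shim above. */
    static void game_draw_hud(void) {
        shim_draw_triangle(0, 1, 2);
        shim_draw_triangle(2, 3, 0);
    }

    int main(void) {
        game_draw_hud();
        return 0;
    }
    ```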


  • Ada

    It has a lot of really nice features for creating data types, and amazing static analysis at compile time.

    But all the tooling around it is absolute crap, which makes using the language unbearable. If it had better tooling, I could see it having taken a decent chunk of development away from C and C++.


  • As someone who is in the aerospace industry and has dealt with safety-critical code under NASA oversight, it’s a little disingenuous to pin NASA’s coding standards entirely on attempting to make things memory safe. That’s part of it, yeah, but it’s a very small part. There are a ton of other things NASA is trying to protect against.

    Plus, Rust doesn’t solve the underlying problem NASA is looking to prevent by banning the C++ standard library. Part of it is DO-178 compliance (or the lack thereof); the other part is that dynamic memory has the potential to cause all sorts of problems on resource-constrained embedded systems. Statically analyzing dynamic memory usage is virtually impossible, testing for it gets cost-prohibitive real quick, so it’s just easier to blanket-ban the STL.

    Also, writing memory-safe code honestly isn’t that hard. It just requires a different approach to problem solving that, like any other design pattern, becomes easy once you learn it and get used to it.
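
    To make that “different approach” concrete, here’s a minimal sketch of the static-allocation style that a dynamic-memory ban pushes you toward. Sizes and names are made up, not from any real flight code: everything is sized up front, so worst-case memory use is visible in the source instead of something you have to test for.

    ```c
    /* Fixed-capacity message buffer: no malloc, no free, no fragmentation.
     * All sizes and names are illustrative. */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_TELEMETRY_MSGS 32

    typedef struct {
        uint32_t timestamp;
        uint16_t channel;
        uint16_t value;
    } telemetry_msg_t;

    static telemetry_msg_t msg_pool[MAX_TELEMETRY_MSGS];
    static size_t msg_head;   /* index of oldest message   */
    static size_t msg_count;  /* number of messages stored */

    /* Returns 0 on success, -1 if full; the caller decides the policy
     * (drop, overwrite oldest, raise a fault, ...). */
    int telemetry_push(const telemetry_msg_t *m) {
        if (msg_count == MAX_TELEMETRY_MSGS) {
            return -1;
        }
        msg_pool[(msg_head + msg_count) % MAX_TELEMETRY_MSGS] = *m;
        msg_count++;
        return 0;
    }

    /* Returns 0 on success, -1 if empty. */
    int telemetry_pop(telemetry_msg_t *out) {
        if (msg_count == 0) {
            return -1;
        }
        *out = msg_pool[msg_head];
        msg_head = (msg_head + 1) % MAX_TELEMETRY_MSGS;
        msg_count--;
        return 0;
    }
    ```

    The tradeoff is that you have to pick the capacity up front and decide what happens when you hit it, which is exactly the kind of decision the review process wants made explicitly rather than left to whatever the allocator does under pressure.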


  • I think part of the “what do I do with this” factor for the iPad was that Apple (and other companies, still to this day) were so hell-bent on making everything smaller and more compact that releasing a larger product was marketing whiplash. Not to mention that smartphones were being pitched as this “do everything” device, so why would you need anything else?

    After you get over that marketing sugarcoating, it becomes pretty obvious what you’d use an iPad for: internet and media consumption at a larger scale than your phone, easier on your eyes, while keeping at least some of the lightweight form factor that separates it from a regular laptop. Sure, you didn’t have the stick-it-in-your-pocket advantage of a phone or the full keyboard and computational power of a laptop, but there was this in-between where, for a modest fee, you could have the conveniences if you could live with/ignore the sacrifices.


  • I don’t think the MacBook Air’s launch is a good comparison.

    Sure, there was an early-adopter tax on being one of the first “thin and light” laptops, but people already knew what you could use a MacBook for, there was already a large value proposition in having a MacBook, and the extra cost was entirely about being more portable than its full-size counterparts. Everything you can do on a Mac, just way easier to take on the go.

    I’ve read a few reviews of it and watched MKBHD’s initial review, and outside of a few demo apps they point to the Vision Pro having no real point to it. If that’s true, then it falls in line with existing VR headsets that are a fraction of its cost, and in a niche market, being three times the cost of your competitors is not a good position to be in.


  • It’s IEEE misinterpreting the guy’s original paper.

    https://liuyang12.github.io/proj/privacy_dual_imaging/ (can’t find the full paper, but here’s the abstract at least)

    The paper author straight up says the light sensor is impractical to use as an attack vector, but when you use it in conjunction with other sensors you might be able to glean more information than most would think. It leaves me with the question: what other sensors can you combine to start getting behavioral information that is a security threat?

    I’ll say it worked for me. I read the IEEE headline, called bullshit, dug into it and yeah you can only get a tiny bit of information that you have to stretch pretty far to get useful conclusions from… But it’s more than the zero I initially thought. So props to the paper author, he met his goal. IEEE wanted sensationalized clicks, which they too unfortunately got.