• abhibeckert@lemmy.world
    2 months ago

    the google cars few years ago had the boot occupied by big computers

    But those were prototypes. These days you can get an NVIDIA H100: several inches long, a few inches wide, about an inch thick. It has 80GB of memory running at 3.5TB/s and 26 teraflops of compute (for comparison, Tesla Autopilot runs on a 2-teraflop GPU).

    The H100 is designed to be run in clusters, with eight GPUs on a single server, but I don’t think you’d need that much compute. You’d have two or maybe three servers with one GPU each, all running the same workload (for redundancy).
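    The redundancy scheme described above can be sketched in a few lines: several replicas compute the same decision and a checker only acts on a majority. Everything here is hypothetical (the `plan` function is a stand-in for a full perception-and-planning stack on one GPU server), a minimal sketch of the idea rather than any real vehicle software.

```python
# Minimal sketch of N-replica redundancy: identical servers compute the same
# control decision and a checker requires majority agreement before acting.
# All names and logic here are illustrative stand-ins, not a real stack.
from collections import Counter

def plan(sensor_frame: int) -> str:
    # Stand-in for one server's full perception + planning pipeline.
    return "brake" if sensor_frame % 10 == 0 else "cruise"

def redundant_decision(sensor_frame: int, replicas: int = 3) -> str:
    votes = [plan(sensor_frame) for _ in range(replicas)]
    decision, count = Counter(votes).most_common(1)[0]
    if count <= replicas // 2:
        # No majority: a real system would fall back to a safe stop.
        raise RuntimeError("replicas disagree - fail safe")
    return decision
```

    In practice the replicas would run on physically separate servers so a single hardware fault can be outvoted.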

    They’re not cheap… you couldn’t justify putting one in a Tesla that only drives 1 or 2 hours a day. But a car/truck that drives 20 hours a day? Yeah, that’s affordable.
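    The utilization argument is simple amortization arithmetic. A back-of-envelope sketch, with an assumed GPU price and depreciation window (both numbers are placeholders, not quoted prices):

```python
# Back-of-envelope amortization: hardware cost spread over driving hours.
# GPU_COST and LIFETIME_YEARS are assumed placeholder values.
GPU_COST = 30_000        # assumed price of one data-center GPU, USD
LIFETIME_YEARS = 5       # assumed depreciation window

def cost_per_driving_hour(hours_per_day: float) -> float:
    """Hardware cost per hour the vehicle is actually on the road."""
    total_hours = hours_per_day * 365 * LIFETIME_YEARS
    return GPU_COST / total_hours

personal_car = cost_per_driving_hour(1.5)   # 1-2 h/day commuter
robotaxi = cost_per_driving_hour(20)        # near-continuous service
```

    With these assumed numbers the commuter car pays over ten times more per driven hour for the same hardware, which is the whole point: utilization, not sticker price, decides whether the GPU is affordable.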

    • FortuneMisteller@lemmy.worldOP
      2 months ago

      Real self-driving software must do a lot of things in parallel; computer vision is just one of its many tasks. I don’t think a single H100 will be enough. The fact that current self-driving vehicles didn’t use that much processing power doesn’t mean a lot: they are prototypes running in controlled environments or under strict supervision.
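      The "many tasks in parallel" point can be sketched like this: each frame, several subsystems (vision, lidar, prediction, planning) must all finish within the same real-time budget. The task functions below are hypothetical stand-ins, and a thread pool stands in for work dispatched to separate accelerators; this is a sketch of the workload shape, not a real driving stack.

```python
# Sketch: one "frame" of a driving stack fans out to several subsystems
# that run concurrently, then merges their results for the planner.
# All subsystem functions are illustrative stand-ins.
from concurrent.futures import ThreadPoolExecutor

def vision(frame):     return {"objects": ["car", "pedestrian"]}
def lidar(frame):      return {"points": 120_000}
def prediction(frame): return {"tracks": 7}
def planning(frame):   return {"action": "yield"}

TASKS = [vision, lidar, prediction, planning]

def run_frame(frame: int) -> dict:
    # Every subsystem must complete within the frame's real-time budget;
    # the slowest one sets the latency floor for the whole stack.
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        results = pool.map(lambda task: task(frame), TASKS)
    merged = {}
    for partial in results:
        merged.update(partial)
    return merged
```

      Sizing the GPU for vision alone therefore undercounts the budget: the other subsystems compete for the same compute every frame.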