I’ve finished a small project that is rather non-standard for me: only a few hundred lines of logic are written by me, and most of the code is fairly banal functions I picked up from various articles and reference sheets (you know, those quite “atomic” functions like “check if a process is running”, “get a process name by PID by reading the /proc dir”, or “get the mount point for a filename”).
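
To give an idea of what I mean by “atomic”, here is roughly the kind of function I collected (a from-memory sketch in Python, not my actual code):

```python
import os

def is_process_running(pid: int) -> bool:
    """Linux-only: a process exists if its /proc/<pid> directory does."""
    return os.path.isdir(f"/proc/{pid}")

def get_process_name(pid: int) -> str | None:
    """Read the short command name from /proc/<pid>/comm."""
    try:
        with open(f"/proc/{pid}/comm") as f:
            return f.read().strip()
    except FileNotFoundError:
        return None  # no such process

def get_mount_point(path: str) -> str:
    """Walk up from a path until we hit a mount point."""
    path = os.path.abspath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path
```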

The code was written with an “OK, let’s experiment and see if I can do this” approach, so now it is a complete mess.

So the question is: is there some AI that can do an initial code review for me? I’ve tried ChatGPT, but its output was completely banal and useless.

  • hallettj@leminal.space · 24 hours ago

    My work is using Coderabbit, and I’ve found its feedback to be pretty helpful, especially since I’m working with a language I don’t have a whole lot of experience with (Python). I double-check what it tells me, but it has taught me some new things. I still want human reviews as well, but the AI can pick up on details that are easy to skim over.

    It doesn’t cover bigger-picture stuff like maintainability, architecture, or test coverage. Recently I reviewed a PR that was likely AI-generated; I saw a number of cases where logic duplication invited future bugs (stuff like duplicating access predicates across CRUD handlers for the same resource, or repeating the same validation logic in multiple places), as well as magic strings instead of enums and tests of dubious value. Coderabbit did not comment on those issues.
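
    To illustrate the magic-strings point, a made-up sketch (not code from that PR):

    ```python
    from enum import Enum

    class Status(Enum):
        ACTIVE = "active"
        ARCHIVED = "archived"

    # Scattering the literal "active" across CRUD handlers means a typo or a
    # renamed status silently breaks one handler and not the others.
    def list_active(items):
        return [i for i in items if i["status"] == "active"]

    # A shared enum gives a single definition to update, and a mistyped
    # member name fails loudly instead of returning wrong results.
    def list_active_enum(items):
        return [i for i in items if i["status"] == Status.ACTIVE.value]
    ```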

    I’m also getting feedback from Sonarqube on the same project, which I think is a static analysis tool. It’s much less helpful: it has less to say, and a lot of what it does flag is security issues in test code.