I don’t want to dismiss your point overall, but I see that example so often and it irks me so much.
Unit tests are your specification. So, 1) ideally you should write the specification before you implement the functionality. But also, 2) this is the one part where you really should be putting your critical thinking to work, figuring out what the code needs to be doing.
An AI chatbot or autocomplete can help you put down some of the boilerplate that gets the specification automatically checked against the implementation. Or you could try to formulate the specification in plain language and have an AI translate it into code. But an AI with neither knowledge of the context nor critical thinking cannot write the specification for you.
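To make that concrete, here’s a rough sketch (normalize_username and the module it lives in are made up): the actual decisions live in the plain-language spec, and the test code under it is just the mechanical translation an AI can help with.

    # Spec, written by a human in plain language:
    #   normalize_username(s) lowercases s, trims surrounding whitespace,
    #   and raises ValueError if nothing is left.
    import unittest
    from myproject.users import normalize_username  # hypothetical; doesn't exist yet

    class NormalizeUsernameSpec(unittest.TestCase):
        def test_lowercases_and_trims(self):
            self.assertEqual(normalize_username("  Alice "), "alice")

        def test_rejects_empty_input(self):
            with self.assertRaises(ValueError):
                normalize_username("")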
Tests are probably both the best and worst things to use LLMs for.
They’re the best because of all the boilerplate. Unit tests tend to have so much of that, setting things up and tearing them down. You want that to be as consistent as possible so that someone looking at it immediately understands what they’re seeing.
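The kind of thing I mean, as a sketch (the file name and contents are invented for illustration): the temp-directory setup and teardown below is pure scaffolding, and the single assertion is the only part that needed any thought.

    import os
    import shutil
    import tempfile
    import unittest

    class ConfigFileTests(unittest.TestCase):
        # Setup/teardown scaffolding: repetitive, consistent, fine to let an LLM fill in.
        def setUp(self):
            self.tmpdir = tempfile.mkdtemp()
            self.path = os.path.join(self.tmpdir, "settings.ini")

        def tearDown(self):
            shutil.rmtree(self.tmpdir)

        # The assertion itself is the part that needs actual thought.
        def test_roundtrip(self):
            with open(self.path, "w") as f:
                f.write("debug=true\n")
            with open(self.path) as f:
                self.assertEqual(f.read(), "debug=true\n")

    if __name__ == "__main__":
        unittest.main()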
OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don’t understand anything, so I’d never trust one to come up with a good set of things to test.
Unit tests become the specification once they are written. ChatGPT can easily write unit tests from whatever your specification is before that – such as documentation, a bunch of comments and stubs, or even a first draft of the function itself, given enough context from the rest of the project.
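E.g. a stub like this (the name and behaviour are made up), plus enough of the surrounding project, is usually all it needs to produce plausible tests:

    def parse_duration(text: str) -> int:
        """Parse strings like "1h30m" or "45s" into a number of seconds.

        Raises ValueError on empty or malformed input.
        """
        # Stub: the docstring above is the specification the model writes tests from.
        raise NotImplementedError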
Unit tests are too clunky to think in. You don’t prototype the specification by implementing unit tests. And you really only lay down a few critical paths even if you “write the tests first”, because code paths always come up during implementation that demand more test coverage anyway.
Also, after your fourth proper project you won’t be as confused about “how to get started” anymore anyway.
You already have a language of choice, a GUI framework, and a build system you’re comfortable with in mind; just start making folders.
Feels like there are some fine people here only working on new projects. Getting started could mean breaking down a 20-year-old program, written in some weird manner because the original developer used to do functional programming but was told to use Java and OOP. No comments, no tests, no normal patterns, no documentation.
The argument was “AI helps when starting up new projects by making unit tests etc.”
Also, forget 20 years: even building 10-year-old libraries using AI is unhelpful. It just keeps hallucinating non-existent packages and functions.
At this point, just drive yourself crazy while promising to become a goose farmer and commenting every single line in your own words, like God intended programmers to do.
You read “new projects” into it, actually. And the whole unit test thing was just an example demonstrating how AI use has to be tightly bounded to be arguably useful.