Yeah, there are people who can imagine "in general" how this will happen, but programming is 99% not about "in general"; it's about specific "dumb" conflicts with objective reality.
People think the task as they broadly imagine it is the most important part, and since they don't actually do programming, or anything else that requires dealing with those small details, they simply ignore them, because their conversations and opinions live in a subjective, bendable reality.
But objective reality doesn't bend. Their general ideas, without every little bloody detail, simply won't work.
Not really, it's doable with ChatGPT right now for programs that have a relatively small scope. If you set very clear requirements and decompose the problem well, it can generate fairly high-quality solutions.
It’s just a tool like any other. An experienced developer knows that you can’t apply every tool to every situation. Just like you should know the difference between threads and coroutines and know when to apply them. Or know which design pattern is relevant to a given situation. It’s a tool, and a useful one if you know how to use it.
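To make the threads-versus-coroutines point concrete: the difference is mostly about who schedules the switch, the OS (threads) or the program itself at each `await` (coroutines). A minimal Python sketch (function names are illustrative, not from the thread):

```python
import asyncio
import threading
import time

def blocking_work(results, i):
    # A thread suits blocking calls: the OS preempts and resumes it for us.
    time.sleep(0.01)
    results.append(i)

async def cooperative_work(i):
    # A coroutine yields control explicitly at each await point.
    await asyncio.sleep(0.01)
    return i

def run_threads(n):
    results = []
    threads = [threading.Thread(target=blocking_work, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

def run_coroutines(n):
    async def main():
        # One thread, many concurrent tasks multiplexed by the event loop.
        return await asyncio.gather(*(cooperative_work(i) for i in range(n)))
    return sorted(asyncio.run(main()))
```

Both produce the same result here; the point is knowing which model fits (blocking C libraries favor threads, large numbers of I/O-bound tasks favor coroutines).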
This is like using a tambourine made of optical discs as a storage solution. A bit better, actually, because punctured discs are no good for storage anyway.
A full description of what a program does is the program itself, have you heard that? (Except for UB, libraries, …, but an LLM is no better than a human at those either.)
deleted by creator
Right now, not a chance. It's okay-ish at simple scripts, and it's alright as an assistant for getting a buggy draft of anything even vaguely complex.
AI doing any actual programming is a long way off.
I've heard a lot of programmers say it.
Edit: why is everyone downvoting me lol. I'm not agreeing with them, but I've seen and met a lot who do.
deleted by creator
Had to do some bullshit AI training for work. Tried to get the thing to remake cmatrix in Python.
Yeah no, that's not replacing us anytime soon, lmao.
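For context, the core of a cmatrix-style effect is just repainting columns of random glyphs frame by frame; here's a rough Python sketch of rendering a single frame (function name, trail length, and the `drops` representation are all made up for illustration):

```python
import random

def matrix_frame(width, height, drops, seed=None):
    """Render one frame of cmatrix-style 'rain' as a list of strings.

    `drops` maps a column index to the current head row of its stream.
    A real implementation would advance each head and repaint in a loop
    (cmatrix itself does this with ncurses).
    """
    rng = random.Random(seed)
    glyphs = "abcdefghijklmnopqrstuvwxyz0123456789"
    grid = [[" "] * width for _ in range(height)]
    for col, head in drops.items():
        # Draw a short trail ending at the head of each drop.
        for row in range(max(0, head - 4), min(height, head + 1)):
            grid[row][col] = rng.choice(glyphs)
    return ["".join(row) for row in grid]
```

This only covers the data side; the terminal loop, timing, and color handling are where an LLM's draft tends to fall apart.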
People use visual environments to draw systems and then generate code for specific controllers; that's common in control systems design and similar fields.
In that sense there are already situations where people don't write code directly.
But this has nothing to do with LLMs.
It's just that for designing systems in that one niche, visual block-based environments may be more optimal.
And often you still have actual developers reimplementing this shit, because EE majors don't understand that dereferencing null pointers is bad.