I experimented by asking it to write a SQL query for a platform that has its entire database map available online. The data I asked for was impossible to get without exporting some of the data in those tables into temp tables using subqueries and then running a comparative omissions analysis.
Instead of doing that, it just made up fake tables and wrote a query that claimed the data was in those fake tables.
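For illustration, here's a rough sketch of the temp-table-plus-subquery approach described above, using SQLite since the actual platform isn't named. The table and column names (`orders`, `archived_orders`) are invented, as the real schema isn't shown in this thread:

```python
import sqlite3

# Hypothetical schema standing in for the platform's real tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
CREATE TABLE archived_orders (id INTEGER PRIMARY KEY, customer TEXT);
INSERT INTO orders VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
INSERT INTO archived_orders VALUES (1, 'alice'), (3, 'carol');
""")

# Step 1: export a slice of the source data into a temp table via a subquery.
cur.execute("""
CREATE TEMP TABLE live_ids AS
SELECT id FROM orders WHERE customer IS NOT NULL;
""")

# Step 2: the omissions analysis -- rows present in the temp table
# but missing from the archive (an anti-join).
cur.execute("""
SELECT l.id FROM live_ids l
WHERE l.id NOT IN (SELECT id FROM archived_orders);
""")
missing = [row[0] for row in cur.fetchall()]
print(missing)  # → [2]
```

The point is that a correct answer requires real, existing tables at every step; a model that invents a convenient table skips the hard part entirely.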
I asked it to write a review of Beowulf in the style of Beowulf. It wrote something rhyming, which is not the style of Beowulf. I said “rewrite this so it doesn’t rhyme” and it gave me something rhyming. I tried several times in several different ways, including reasoning with it, and it just kept kicking out a rhyming poem.
It’s good to remember that many of these chatbot AIs want to give an answer to the prompt instead of saying “sorry, that’s not possible” and will then generate complete garbage as a result.
Out of curiosity, are you using 3.5 or 4? I’ve found that GPT-4 is pretty good at these tasks, while 3.5 is almost useless. One thing that often helps is to ask it “is your answer correct?” That seems to make it find the errors and fix them.
It’s basically a souped-up autocomplete system. Do not expect it to apply any independent thinking at all.