
What exactly does reinforcement learning from human feedback
reinforce? The goal may well be engagement, which is the opposite of
how a good professional programmer behaves.
But my main concern with the code *produced* with ChatGPT is that the
code is *produced* at all. The model is generative, not
transformative.
An anecdote: https://www.folklore.org/StoryView.py?story=Negative_2000_Lines_Of_Code.txt
I have yet to see an example of ChatGPT reducing the size of a
codebase. If such examples exist, did the prompts have to explicitly
direct it to do so?
Code is a liability: it has to be supported, maintained, kept around,
and so on. I take pride when my changeset has more deletions than
insertions, a rare occasion.
2023-04-01 17:09 GMT+03:00, Will Yager:
> On Apr 1, 2023, at 09:57, MigMit wrote:
>> Well, human programmers can be shamed, yelled at, fired (which would
>> actually hurt them), or, in extreme cases, prosecuted. They have
>> every incentive to do their job right.
>
> ChatGPT has RLHF. It has incentives to do its job right as well.