Not sure about you, but when I write code I want it to be better – way better even – than average code.
The problem with any LLM-based generative AI is that it generates text reflecting the average of the corpus it was trained on, as of the time it was trained.
This is exactly why I have been advocating caution for a while: when you use generative AI, the text you get is the result of averaging over the LLM's corpus, combined with the relatively small prompt you phrased to reflect a tiny bit of the model of the reality you are trying to write software for.
So I was not at all surprised by this article: [Wayback/Archive] New GitHub Copilot Research Finds ‘Downward Pressure on Code Quality’ — Visual Studio Magazine.
