Posted by jpluimers on 2025/11/26
Some 3 years after ChatGPT was introduced, I wonder if this still holds for current language and knowledge models:
[Wayback/Archive] We come to bury ChatGPT, not to praise it.
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, GPT-3, Software Development | Tagged: ChatGPT
Posted by jpluimers on 2025/10/01
[Wayback/Archive] Steve Troughton-Smith: “I’m sure everybody is sick of …” – Mastodon
I’m sure everybody is sick of hearing about ChatGPT by now, but one OP feature it has is the capability to translate code between different programming languages, for example from Objective-C to Swift (or vice versa). An incredible accelerant, if nothing else. This was indeed the final push I needed to get @broadcastsapp over the finish line on its migration, and Broadcasts v3.1 doesn’t have a single line of ObjC left 🫡
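As an aside, here is a minimal sketch of how such a translation could be scripted, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the Objective-C snippet and model name are my own illustrative assumptions, not what Steve used:

# Hedged sketch: ask a chat model to translate Objective-C to Swift.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; snippet and model are illustrative.
from openai import OpenAI

client = OpenAI()

objc_snippet = '''
- (NSString *)greetingForName:(NSString *)name {
    return [NSString stringWithFormat:@"Hello, %@!", name];
}
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model will do
    messages=[
        {"role": "system",
         "content": "Translate the given Objective-C code to idiomatic Swift. "
                    "Return only the Swift code."},
        {"role": "user", "content": objc_snippet},
    ],
)

print(response.choices[0].message.content)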
–jeroen
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, GPT-3, ObjectiveC, Software Development, Swift
Posted by jpluimers on 2025/01/17
A few quotes from Laurens Verhagen's column [Wayback/Archive] ChatGPT is een soort waarzegger, die net iets vaker de waarheid vertelt | de Volkskrant (translated: "ChatGPT is a kind of fortune teller, one that tells the truth just a little more often"):
'People fear that computers will become too smart and take over the world, but the real problem is that they are too dumb and have already taken it over.' This often-quoted remark by computer scientist Pedro Domingos is about ten years old, but has lost none of its relevance.
…
At its core, ChatGPT is a clever word predictor.
…
Perhaps even worse is that at Harper's Bazaar there is apparently no one left who still reads a book. They are too busy there making lists with the help of chatbots, and above all busy making themselves redundant at a rapid pace.
That last point of course applies to everyone who leans heavily on generative AI without putting much effort into verifying that what the word and image predictors produce is actually correct. That effort keeps growing precisely because of the pollution of training data: AI training models cannot tell whether their source material was made by humans or by AI.
The training pool of human data (fact or fiction, which of course includes satire; the models cannot tell those apart either), whether or not it contains material taken without permission, has meanwhile bled dry anyway:
"Het had waar kunnen zijn" ("It could have been true") could have come from ChatGPT. If that phrase fascinates you, by all means read [Wayback/Archive] Van Aarsvink tot Zeebreker – O.C. Hooymeijer.
Links from the quoted parts of the column:
--jeroen
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, Generative AI, GPT-3, LLM, Software Development
Posted by jpluimers on 2024/10/29
Plagiarized Information Synthesis System as per [Wayback/Archive] Spot on! AI can piss off. – electronics post – Imgur
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, Fun, GitHub Copilot, GPT-3, GPT-4, LLM, Meme, Software Development
Posted by jpluimers on 2024/03/31
Starting in the 2022-2023 period, more and more generative AI content has entered search engines.
The queries below give you some pointers on how to spot it. They return scholarly articles from 2023 and later.
Note that the list is in alphabetical order for easier reading, but the number of results (in parentheses) follows a very different order. I was quite amazed to see "As an AI language model" scoring 45 results.
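For illustration only (these are not the actual Scholar queries), a small Python sketch of the same idea applied to a local folder of papers: count how often telltale LLM boilerplate phrases show up. The phrase list and folder name are assumptions.

# Illustrative sketch, not the post's queries: count telltale LLM phrases
# in a folder of plain-text papers. Phrase list and folder are assumptions.
from pathlib import Path

TELLTALE_PHRASES = [
    "as an ai language model",
    "regenerate response",
    "as of my last knowledge update",
    "i'm sorry, but i cannot",
]

def count_phrases(folder: str) -> dict[str, int]:
    counts = {phrase: 0 for phrase in TELLTALE_PHRASES}
    for path in Path(folder).rglob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        for phrase in TELLTALE_PHRASES:
            counts[phrase] += text.count(phrase)
    return counts

for phrase, n in sorted(count_phrases("./papers").items()):
    print(f"{n:5d}  {phrase}")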
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, GPT-3, GPT-4, LLM, Software Development
Posted by jpluimers on 2024/03/21
[Wayback/Archive] https://arxiv.org/pdf/2307.09009.pdf ([Wayback] Google Docs PDF view: 2307.09009.pdf) is interesting. The abstract confirms my thought: LLMs drift over time and seem to become worse at knowledge tasks.
How Is ChatGPT’s Behavior Changing over Time?
Lingjiao Chen†, Matei Zaharia‡, James Zou†
†Stanford University ‡UC Berkeley
Abstract
GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services.
However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on several diverse tasks: 1) math problems, 2) sensitive/dangerous questions, 3) opinion surveys, 4) multi-hop knowledge-intensive questions, 5) generating code, 6) US Medical License tests, and 7) visual reasoning. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was reasonable at identifying prime vs. composite numbers (84% accuracy) but GPT-4 (June 2023) was poor on these same questions (51% accuracy). This is partly explained by a drop in GPT-4’s amenity to follow chain-of-thought prompting. Interestingly, GPT-3.5 was much better in June than in March in this task. GPT-4 became less willing to answer sensitive questions and opinion survey questions in June than in March. GPT-4 performed better at multi-hop questions in June than in March, while GPT-3.5’s performance dropped on this task. Both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. We provide evidence that GPT-4’s ability to follow user instructions has decreased over time, which is one common factor behind the many behavior drifts. Overall, our findings show that the behavior of the “same” LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLMs.
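A hedged sketch of what the paper's prime-vs-composite drift check boils down to, assuming the OpenAI Python SDK and sympy; the dated snapshot names match the paper's March and June 2023 versions, but may no longer be served:

# Sketch of a prime/composite drift check across two dated GPT-4 snapshots.
# Assumes OPENAI_API_KEY is set; the snapshots may have been retired since.
import random
from openai import OpenAI
from sympy import isprime

client = OpenAI()

def model_says_prime(model: str, n: int) -> bool:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer Yes or No only."}],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

numbers = random.sample(range(1_000, 20_000), 50)
for model in ("gpt-4-0314", "gpt-4-0613"):
    correct = sum(model_says_prime(model, n) == isprime(n) for n in numbers)
    print(f"{model}: {correct / len(numbers):.0%} on prime vs. composite")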
Later on, Eric Topol had a very interesting conversation with James Zou (below) which covers many AI aspects, including a lot of LLM ones. The basic takeaways for me are that LLMs are good at repeating things from their training data, making them OK at generating text and sort of OK for grammar, but far from OK at reproducing knowledge, and that it will become harder over time to distinguish LLM-generated content from human-created content.
The video of the conversation is below the blog signature; here is the link: [Wayback/Archive] James Zou: one of the most prolific and creative A.I. researchers in both life science and medicine – YouTube
Almost all LLMs are trained on a corpus without curation (curation is far too expensive), so at best they average that corpus: at its foundation, an LLM is "monkey see, monkey do" on steroids, without the means to self-curate its way to above-average output. Given that more and more online content is being generated by LLMs, and that newer LLMs will be trained on a corpus encompassing that content (without a way to filter out LLM-generated content), I think LLMs will perform worse over time instead of better.
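To make that intuition concrete, here is a toy simulation (mine, not from the tweets below) of what repeatedly training on your own output does to a distribution: fit a Gaussian to samples drawn from the previous generation's fit and watch the spread collapse.

# Toy illustration of generational collapse: each "model" is fitted only on
# samples produced by the previous one. With few samples per generation the
# tails get lost and the spread shrinks towards zero (np.std's default
# population estimator adds a small systematic shrink on top of that).
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0            # generation 0: the "human" corpus
samples_per_generation = 20     # deliberately small to make the effect quick

for generation in range(1, 501):
    data = rng.normal(mu, sigma, samples_per_generation)  # content the current model produces
    mu, sigma = data.mean(), data.std()                    # next model trained only on that content
    if generation % 100 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.3e}")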
Via the below series of interesting tweets, which were quoted by a slightly less pessimistic Erik Meijer: [Wayback/Archive] Erik Meijer on X: "Regression to the mean.". Note some interesting replies as well; I found the one mentioning Eternal September especially fitting. It made me discover [Wayback/Archive] www.eternal-september.org:
Today is September 11160, 1993, the september that never ends
No pr0n, no warez, just Usenet
Anyway, the tweets:
Posted in AI and ML; Artificial Intelligence & Machine Learning, Awareness, ChatGPT, Development, GPT-3, GPT-4, LLM, Software Development
Posted by jpluimers on 2024/01/24
Having gotten there from the reasonable ChatGPT use below, I was negatively surprised by what people use ChatGPT for while relying totally on the ChatGPT responses: [Wayback/Archive] Tim Urban on Twitter: "What, if anything, do you regularly use ChatGPT (or another LLM) for that has provided a dramatic improvement over your previous workflow?"
I think this is about the only reasonable ChatGPT use today: [Wayback/Archive] Barry Kelly on Twitter: “@waitbutwhy – minor scripts for things like ffmpeg or Image/GraphicsMagick – trying to do something with an API I’m not familiar with; often gets screwy when it’s obscure though Things I’m not using it for: any kind of creative writing. Execrable.“
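For what it is worth, the kind of "minor script for things like ffmpeg" meant here might look like the following Python sketch; the folder name and encoding settings are my assumptions, not Barry's.

# Hypothetical "minor ffmpeg script": batch-convert .mov recordings to H.264
# .mp4 files. Folder name and encoding settings are assumptions.
import subprocess
from pathlib import Path

for mov in Path("./recordings").glob("*.mov"):
    mp4 = mov.with_suffix(".mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(mov),
         "-c:v", "libx264", "-crf", "23",   # re-encode video as H.264
         "-c:a", "aac",                     # re-encode audio as AAC
         str(mp4)],
        check=True,
    )
    print(f"converted {mov} -> {mp4}")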
Remember that ChatGPT is a text generation model that averages the quality of the text in its corpus, which was collected in the past; this means that at its release, the "knowledge" was already dated.
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, GPT-3, Software Development
Posted by jpluimers on 2023/09/23
Long live the Clippy bot!
[Waybacksave/Archive] Elle Cordova on X: “Alexa, Siri and the other bots hanging out in the server break room again”
Posted in AI and ML; Artificial Intelligence & Machine Learning, Bookmarklet, ChatGPT, Development, GPT-3, GPT-4, JavaScript/ECMAScript, Office, Power User, Scripting, Software Development, Web Browsers
Posted by jpluimers on 2023/05/03
For a while now, there has been a free [Wayback/Archive] ChatGPT which works around the paywall by relaying the chat through 3rd parties.
I wonder how long it will exist.
The cease-and-desist letter was from OpenAI to the repository owner who, paraphrased, maintains the stance that the 3rd parties pay license fees to OpenAI, and that if those parties have issues with his tool basically scraping them, they should contact the repository owner to work things out.
This is all part of a bigger discussion on licensing and copyright around AI-based LLMs (Large Language Models), which are sourced from a large corpus of text that we all publish for free on the internet, without a way to trace back from ChatGPT responses to the texts that were used.
Links:
Posted in AI and ML; Artificial Intelligence & Machine Learning, ChatGPT, Development, GPT-3, Software Development