AI and ML are just as smart as the training data, which for large sets of data usually gives biased or outright results
Posted by jpluimers on 2022/06/15
Kris phrases a thought that has been lingering in my head for decades: [Archive.is] Kristian Köhntopp on Twitter: “”AI” ist nicht intelligent, sondern reproduziert das Trainingsmaterial und die Vorurteile darin. Es handelt sich um automatisierten Aberglauben und Verschwörungsquatsch. Je größer das Netzwerk, um so wirrer.” / Twitter
Basically there are two kinds of AI:
- a bunch of
if
/then
/else
statements - a model based engine that is as bad as it’s training data; the larger the set of training data, the worse it gets.
A few of the images in the excellent thread that Kris quoted (more in the [Wayback/archive.is] PDF): [Archive.is] Owain Evans on Twitter: “Paper: New benchmark testing if models like GPT3 are truthful (= avoid generating false answers). We find that models fail and they imitate human misconceptions. Larger models (with more params) do worse! PDF: https://t.co/3zo3PNKrR5 with S.Lin (Oxford) + J.Hilton (OpenAI)… “
–jeroen
Leave a Reply