LLM are just statistic text generators which depend on the texts they have been trained which and alleviating this usually makes things worse: [Wayback/Archive] AI generates covertly racist decisions about people based on their dialect | Nature
Related:
- [Wayback/Archive] LLMs produce racist output when prompted in African American English
- [Wayback/Archive] AI worse than humans at summarising information, trial finds “A test of AI for Australia’s corporate regulator found that the technology might actually make more work for people, not less.”
Of course these issues are not limited to natural language LLM: artificial computer language LLM are also full of training issues that are likely very hard to resolve. What if covert organisations succeed poisoning LLM platforms with malicious code?
Via





