Artificial intelligence (AI) has long been positioned (by its creators) as a force for good: a labour-saving, cost-reducing tool with the potential to improve accuracy and efficiency in all manner of fields. It has already been deployed across sectors – from finance to healthcare – and it is changing the way we work. But the time has come to ask, 'at what cost?' Because the more AI is developed and deployed, the more examples emerge of a darker side to the technology. And the recent ChatGPT data input scandal showed that our understanding to date is just the tip of a very large and problematic iceberg.

The more sinister side of AI

There is a range of issues with AI that has been either unacknowledged or brushed under the industry carpet. Each is cause for varying degrees of concern, but together they paint a bleak picture of the technology's future.

Bias

Bias is the most talked-about of these issues, and the one actually being addressed, largely due to public pressure. With the likes of Amazon left blushing after the uncovering of its sexist recruitment AI, and an American healthcare algorithm found to discriminate against black patients, AI bias had become too dangerous to ignore, both ethically and reputationally. The cause is easy to identify: because AI 'learns' from human-produced data, the biases of the people who create that data can inadvertently be absorbed during training. The industry has already admitted that all AI systems are at risk of becoming biased (with disclaimers splashed across every model now being produced). And notwithstanding efforts like NVIDIA's Guardrails initiative, there is no instant fix. One possible route out, made more feasible by the emergent reasoning capabilities of LLMs, is explainable AI (XAI), which allows a user to question the logic behind an AI's decision-making and get a sensible answer. But with this approach still at a very nascent stage, the problem remains rife.
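The idea that a model trained on skewed data can end up favouring one group over another can be made concrete with a simple audit. The sketch below is purely illustrative – the data, function names and the choice of metric (demographic parity difference, one common fairness measure) are assumptions for illustration, not drawn from any of the systems named above:

```python
# Illustrative sketch: auditing hypothetical screening-model outputs for
# group-level bias using the demographic parity difference metric.

def selection_rate(predictions, groups, target_group):
    """Fraction of positive predictions received by one demographic group."""
    preds = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(preds) / len(preds)

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates.

    0.0 means every group is selected at the same rate; larger values
    indicate the model favours some groups over others.
    """
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical recruitment-screening outputs: 1 = shortlisted, 0 = rejected.
predictions = [1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

Here group "a" is shortlisted 75% of the time against 25% for group "b" – exactly the kind of disparity a post-hoc audit can surface, even when the training data that caused it is opaque.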

Unethical data collection practices

This is where ChatGPT joins the conversation. Capable of generating text on almost any topic or theme, it is widely viewed as one of the most remarkable tech innovations of the last 10 years. It is a truly outstanding development. But that development was only possible thanks to extensive human data labelling and the hoovering up of vast swathes of human-generated data. For ChatGPT to become as uniquely complex and versatile as it is, millions – billions – of pieces of data had to be sourced and, in many cases, labelled. Because of the immense toxicity of its earlier models, OpenAI, creator of ChatGPT, needed to introduce a significant amount of human-labelled data to show the models what toxicity looked like. And quickly.

Was this done by the same cappuccino-drinking, highly-paid Silicon Valley hipsters who thought the models up? No, it was "outsourced" to a workforce who felt coerced into viewing some of the most disturbing material on the planet, all for the price of the foam on a California coffee. In January 2023, a Time investigation uncovered that the job had been done by a Kenyan workforce earning less than $2 an hour, often handling extremely graphic and highly disturbing data without training, support, or any consideration for their well-being.

To Know More, Read Full Article @ https://ai-techpark.com/understanding-the-darker-side-of-ai/ 
