Kimberly Springer, curator at Columbia University, moderated a discussion with Swedish panelists Martin Gelin (journalist and author) and Martin Adolfsson (artist and photographer), both based in New York. The discussion was lively, and the many questions from the audience reflected both curiosity and a healthy skepticism about the impact of the explosive growth of AI since the late-2022 launch of ChatGPT, built on Large Language Models (LLMs).
What happens to trust when we seem to be surrounded by an ever-growing stream of manipulated stories and enhanced or fake images? What and who can we trust? Is more regulation the answer, and is it even feasible? Is media education the answer? Will the emerging AI world self-regulate in the interest of the common good? These are big questions that nobody can answer today. But it's not only a matter of what AI researchers and companies do.
The deeper question is how we as human beings react in this new environment. Will we cynically take a huge step back and seek comfort in tribalism and conspiracy theories? Will we find a way to navigate a world where confusion grows despite the explosion of facts and theories?
I have put together a few links to articles that provide background on these and other questions.
Why Facts Don’t Change Our Minds - New discoveries about the human mind show the limitations of reason. By Elizabeth Kolbert
Don’t Believe What They’re Telling You About Misinformation. By Manvir Singh
The Fake Fake-News Problem and the Truth About Misinformation. By Manvir Singh
Thoughts on “The Enigma of Reason”. By Arthur Juliani
The cerebral mystique. By Alan Jasanoff
Will A.I. Be a Creator or a Destroyer of Worlds? By Thomas B. Edsall