Hardly a day goes by lately without a new way to use a language model, a new version of an image generator, a search engine or ChatGPT. In short, there is so much AI news that in the whirlwind it is easy to miss the bigger trends and the fundamental questions we need to ask ourselves as a society. The current boom raises a number of questions that ethics and the social sciences have yet to answer.
Polish futurist, philosopher of artificial intelligence and vice-rector of Kozminski University in Warsaw Aleksandra Przegalińska deals with them in her research. She also works as a researcher at Harvard.
“We are still the conscious beings here, and as humans we give tasks to artificial intelligence. We are the main actors, and we still use it as a tool,” is how Przegalińska describes the role artificial intelligence plays today. At the same time, she points to situations where we are stepping onto thin ice. “I worry about people relying on language models to make decisions. Still, it’s worth noting that tools like GPT are useful, especially if you are experienced in the subject. For example, I worry about children who can use the program without realizing the need for critical thinking,” the Polish futurologist explains.
In an interview with CzechCrunch, she talks about the pitfalls of adopting large language models in today’s society, about her own research into ways to reduce the chance of artificial intelligence hallucinating, and about whether the development of artificial intelligence should be slowed down.
What is the main question you ask yourself as a philosopher when you see the current hype about artificial intelligence?
I think the biggest problem is the hallucination of language models, their ability to construct knowledge that doesn’t exist. Even when their answers are completely fictional, they deliver them in a very confident and convincing way. OpenAI claims that the new GPT-4 model is capable of achieving more accurate results. This is true, but we still have a long way to go.
OpenAI also claims that the model now argues better, which I interpret as being more convincing even when it is wrong.
That may indeed be true, but I think what is meant by a better argument is a better prediction on a given task. Obviously, AI systems can in principle be programmed to lie or deceive humans, but in this case the lie is unfortunately a by-product of the very generality of the model. OpenAI presents itself as an AI company that is trustworthy and respects human values, so I’m sure they want to avoid hallucinations. But so far they have achieved only modest results.
Is there a technical way to get rid of hallucinations? Because it’s kind of a prerequisite for AI to be reliable…
There is. In my research groups, students build small transformers, small and very simple versions of tools like ChatGPT. The way we try to combat hallucinations is to deliberately make the models less capable, more conservative. In practice, this works by having the model ask additional questions before answering. It learns from the replies on the spot and is thus able to give a more complete answer.
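A minimal sketch of what such an “ask before answering” loop might look like in practice. The llm() helper and the prompts are illustrative assumptions, not the tooling used in Przegalińska’s research group; any chat-style model client could be plugged in.

```python
# A minimal sketch of the "ask before answering" loop described above.
# The llm() helper is a placeholder for any chat-style model call (OpenAI,
# a local model, etc.); it is an assumption, not a specific vendor API.

def llm(messages: list[dict]) -> str:
    """Placeholder for a chat-model call; replace with a real client."""
    raise NotImplementedError("plug in your model client here")

CLARIFY_PROMPT = (
    "Before answering, list up to three clarifying questions you would need "
    "answered to respond accurately. If none are needed, reply with 'none'."
)

ANSWER_PROMPT = (
    "Answer using the question and the extra context below. "
    "If the context is still insufficient, say that you don't know."
)

def conservative_answer(user_question: str, ask_user) -> str:
    # Step 1: have the model ask for missing context instead of guessing.
    questions = llm([
        {"role": "system", "content": CLARIFY_PROMPT},
        {"role": "user", "content": user_question},
    ])

    # Step 2: collect answers from the user (or another trusted source).
    context = "" if questions.strip().lower() == "none" else ask_user(questions)

    # Step 3: answer only with the gathered context, allowing "I don't know".
    return llm([
        {"role": "system", "content": ANSWER_PROMPT},
        {"role": "user", "content": f"{user_question}\n\nContext:\n{context}"},
    ])
```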
Przegalińska at the Metaverse Day 2023 event in Warsaw
By limiting their abilities, but also by making certain decisions for them. And those decisions later show up as bias in the answers, don’t they?
Of course, algorithmic bias is a problem in general, but in this case it is actually an intended bias. Another option is to localize the models more, to give them your own information, or simply to hardwire them to give shorter answers and ask more questions. That moves them towards collaboration rather than pure input-based output.
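As a rough illustration of that “localization” idea, the sketch below grounds the model in a user-supplied set of documents and constrains the answer length. The generate() and retrieve() helpers are assumptions for illustration; the naive word-overlap retrieval stands in for whatever search or embedding method a real deployment would use.

```python
# A rough sketch of "localizing" a model: ground it in your own documents and
# keep answers short. generate() is a placeholder for any model call, and the
# word-overlap retrieval is a stand-in for real search or embeddings.

def generate(prompt: str) -> str:
    """Placeholder for a model call; replace with a real client."""
    raise NotImplementedError("plug in your model client here")

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def localized_answer(question: str, documents: list[str]) -> str:
    # Build the answer only from the user's own material.
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer in at most three sentences, using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```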
Are we as a society ready for the arrival of artificial intelligence? Let me give you an example: imagine what the world would look like if the economy grew by a hundred percent a year instead of three or four percent. According to AI safety expert Holden Karnofsky, this could be achieved through a loop in which one AI teaches another AI.
I think we are ready. People already type questions into the models and share their best prompts with each other. It is a very natural way of communicating; we talk through chat. However, the development of new generative models is very fast. I think we need to step back and ask ourselves the important ethical and big-picture questions that come with it, and not just focus on what the next version of GPT or Midjourney can do.
So do you think we should slow down the pace of AI development?
On the one hand, a slowdown could help address some potential risks, such as the ethical and social implications of automation, privacy concerns, and the impact on employment. It would also give researchers and policymakers more time to develop effective legal frameworks and ethically responsible approaches. On the other hand, slowing the pace of AI development could hinder progress in healthcare, education and sustainability, areas where AI has the potential to make important positive contributions. A slowdown could also hurt countries or companies that have invested heavily in AI research, leading to economic and technological imbalances.
Do we really have too much faith in the existing systems?
One way to think about this question is that we are still the conscious beings here, and as humans we are the ones giving tasks to artificial intelligence. We are still the main actors using it as a tool.
Useful concepts
- Language model: A machine learning model that, given a sequence of words, predicts the most likely word or phrase to follow (a toy sketch of this idea follows after this list).
- General AI: A theoretical concept of artificial intelligence capable of solving any problem a human can solve.
- Generative AI: A branch of artificial intelligence that focuses on generating new data such as images, video, or sound. Generative AI uses deep neural networks and other machine learning techniques to create data that looks as if it were created by humans.
- OpenAI: A company originally co-founded by Elon Musk, among others, to search for a path to general artificial intelligence. Today it focuses on developing language models and generative artificial intelligence, with Microsoft as a major investor.
More: Glossary of terms from the world of artificial intelligence
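As a toy sketch of the next-word prediction mentioned in the glossary, the snippet below counts which word most often follows another in a tiny corpus. Real language models such as GPT use transformer neural networks rather than raw frequency counts; this is only an illustration of the basic idea.

```python
# Toy illustration of "predict the next word". Not how ChatGPT works
# internally, just a minimal frequency-based sketch of the concept.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in following:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat", the most frequent follower of "the"
```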
Are we really? As you say, it is mostly about conversation, and that is a two-way process.
Admittedly, I am a bit concerned when people rely on a language model to make decisions. But it is worth noting that tools like GPT are useful, especially if you already have experience with the subject. For example, I worry about children who may use the software without realizing the need for critical thinking.
That said, my colleague Dariusz Jemielniak and I wrote a book about generative AI models, on AI strategy in business and education, with the help of GPT itself. We gave the models various prompts and they helped us a great deal in the creative process. For example, we posed a question together with some arguments, and they produced valuable counterarguments. However, we are still in the process of learning how to work with these models.
So in this case you used artificial intelligence as a tool. But what about jobs that could be replaced entirely?
As AI researchers, we are usually asked which professions will disappear because of ChatGPT. But I think the creative professions will remain safe precisely because of the need for critical thinking. Rather than losing their jobs, those people will have more powerful tools at their disposal. The situation is different for people such as developers, who really can worry about their position. For example, a colleague told me about a doctor who fed ChatGPT some information about a patient, and it suggested the correct diagnosis and a possible treatment.
From a patient’s point of view, I find that a little scary. Shouldn’t it be the doctor who has the necessary knowledge?
In complex cases, doctors always make decisions collectively, so you can think of this kind of use as an extension of a medical consultation. In the end, the decision will always rest with the individual. We also know that GPT is trained on academic articles, including those from the ScienceDirect repository.
Creative professions will remain safe precisely because of the need for critical thinking.
One of the innovations in the GPT-4 model is that OpenAI has reduced by eighty-two percent the chance of it responding to inputs that violate the terms of use, for example racist or discriminatory questions. But isn’t it dangerous that these restrictions are imposed not by law, but by a private company, for a tool used by hundreds of millions of people?
There are still ways to get it to respond anyway. Since GPT-4 can play characters quite well, the easiest way is to wrap the request in a story or a character. But to be honest, I don’t have a clear answer. Self-regulation by big tech companies has proven somewhat problematic in the past, as in the case of social media, but Microsoft or OpenAI would probably tell you that if they don’t do it, no one will.
Do you think hallucinations could be the reason the GPT ecosystem is having trouble with mainstream adoption? Isn’t the basic assumption that we will believe its answers?
Hallucinations can be a problem for some language model applications, and while they are a source of concern, they are not a general barrier to adoption. Many uses, such as translation, summarizing texts, and answering questions, do not require a high degree of coherence or contextuality, so they may not be affected by occasional hallucinations. However, for applications that require a high level of accuracy and consistency, such as medical diagnosis or legal decision-making, hallucinations can be a real problem. That is why we need to develop strategies for reducing hallucinations in language models. The trouble is that we often don’t know what is going on inside the models or why the algorithm invented a particular answer.
But this is a typical problem of all recommendation algorithms, right? No one knows what they are optimizing for. They are black boxes.
Exactly, which is why those black boxes need to be made transparent. No one knows, because it is not in the interest of big tech companies to open up the principles of their valuable algorithms to the public. That is what regulation should be for.
What to take away from the interview:
- The biggest problem with language models is their hallucination – the construction of knowledge that doesn’t exist.
- In GPT models, lying is a by-product of the model’s generality.
- Hallucination can be reduced by deliberately making the models less capable and more conservative, for example by having them ask additional questions before giving a final answer.
- Inaccurate answers are especially dangerous when humans leave the entire decision-making process to artificial intelligence.