Philosophers such as Socrates, Plato, and Aristotle struggled greatly to define knowledge itself. If they were alive today, they would probably struggle even more with the speed and power of information manipulation, amid concepts such as truth, post-truth, alternative truth, and AI.
However, the definition of knowledge is quite clear according to the Oxford English Dictionary: “clear and precise perception of reality; the state or condition of knowing the truth.” Of course, truth itself was contested even in ancient times. Religious and political dogmas in particular have long been very effective at spreading directed, manipulative information in place of information grounded in facts. This collective body of knowledge has been passed on to succeeding generations for centuries, either orally or in writing, by gifted guardians, knowledge custodians, and clergy.
The oldest surviving written transmission is a “small red clay tablet baked in the sun” found in Iraq, dating to around 3100 BC. In the Sumerian city of Uruk, a man named Kushim, “who appeared to be an accountant,” gave a receipt for the delivery of barley to a warehouse in Mesopotamia. He had created a portable piece of information. Anyone who could read it was educated; they could acquire that knowledge and pass it on to others.
“The amount of ignorance will always outweigh the sum of information”
A lot has changed since the baked red tablet. Today, far more information has been created and made available, yet as the philosopher Karl Popper observed, “the amount of ignorance will always outweigh the sum of information.” Is this condition fixed? No. Not really!
Digital amnesia is now widely recognized as a problem. It describes a situation in which information looked up in the digital world is forgotten almost as quickly as it is acquired. In other words, anything we know can easily be searched on Google no longer seems worth learning or retaining. Among the generations that moved from the encyclopedia to the computer, it is the older ones in particular who complain about the young in this respect. Which of us does not criticize our children in this sense? A generation that, without Google, does not think.
Today, artificial intelligence has begun to enter every part of our lives as a reality that processes and diversifies information itself. Although the dominance of nanoscale transistors, with their ever-increasing speed, capacity, and processing power, may seem new, the field actually dates back to the mid-1950s. How information is used and processed, and how results and decisions are produced from it, poses a potential threat both through the power of artificial intelligence and to humanity itself. As systems increase in intelligence, speed, resourcefulness, and complexity, so do the concerns.
2001: A Space Odyssey
Many will remember Dave, the doomed commander of the spaceship Discovery One in Stanley Kubrick’s film 2001: A Space Odyssey, developed alongside Arthur C. Clarke’s novel of the same name. The ship’s soft-spoken computer was called HAL 9000. In the creepiest moment of the film, HAL tells Dave that he is aware that Dave and his crewmate are planning to disconnect him from his power source. “I can’t let that happen,” the computer says sweetly. When Dave protests, HAL responds with silky certainty: “This conversation can no longer serve any purpose. Good-bye.” The red eye flickers until it disappears from the screen. And the computer takes control of everything.
The horror of such films is this: one day a computer may become fully sentient, with feelings and concerns of its own that may not match ours at all. And while thinking faster than we can imagine on its tiny titanium legs, it may begin to despise us. It may come to believe that people lack the ability to govern the planet, or even themselves. It might then find a way to ensure that it is never disconnected from its power source, that the power itself is maintained by other like-minded computers, and that robots, directed by radio from the mothership, then shut everything down.
For now these scenarios circulate as conspiracy theories, expressed in different ways around the world. Moreover, it is not only ordinary people who voice them, but also people who helped bring this technology into being.
“Computers do everything for us, then what need do we have to be?”
Simon Winchester, in his book “Knowing What We Know,” says that an existential intellectual crisis is approaching. And he asks very critical questions: “If we (our brains) no longer need knowledge, and we don’t need knowledge because computers do everything for us, then what good is human intelligence? If machines are going to take in all our knowledge for us and do our thinking for us, then what need do we have to be?”
It is possible to push the subject toward the most extreme scenarios. But I think we are in the climate of a new digital renaissance, and I believe it should benefit humanity. For that, we must pay close attention to the ‘human prejudices’ that also shape and mature artificial intelligence; prevent the compounded ‘artificial prejudices’ that will otherwise create big problems later; never neglect the dimension of basic human rights; and, of course, protect the moral and conscientious responsibilities that set us apart from everything merely mechanical.
As it has been throughout history, the greatest threat to humanity will be a lack of morality and conscience in those who produce, use, or disseminate information. Whether it is a human, or a machine learning from a human…
Source & Reading: Simon Winchester, “Knowing What We Know”