Elon Musk versus Mark Zuckerberg on the Topic of Artificial Intelligence

A Thought Exercise on Facebook’s AI Crisis

There has been another incident involving an AI (Artificial Intelligence) recently. Yes, I’m referring to Facebook’s two chatbots inventing a language that they use to communicate.

Well, when it comes to artificial intelligence, I take, and always have taken, Elon Musk’s side. In the end, who am I to deny a genius? This Facebook incident is also something to think seriously about and not to underestimate, in my opinion, and I think I have solid grounds for concluding that.

Let’s remember the incidents and events regarding artificial intelligence. There was the Tay chatbot incident with Microsoft. I have written an article about that in Turkish. In short: Microsoft creates a chatbot with the ability to learn from its interactions and lets it talk with people freely. After a while, the bot goes off the rails and starts posting aggressive, racist, sexist tweets referring to Nazis, black people, and so on. Microsoft tries to “fix” the problem but cannot, and finally shuts it down.

There is also Google’s AlphaGo. Well, maybe that can’t be called an “incident”; it was in fact very successful. But I think its very success will be regarded as an incident in the future, assuming there will still be people around to study or teach history. Anyway, Google creates an AI, and when they reveal it to the public it turns out that AlphaGo is the first real AI. For something to really be considered an AI, it must have the ability to learn by itself, to see its own errors and learn from them. In short, a real AI must be able to learn something without having a ready-made program or algorithm installed for it beforehand. In AlphaGo’s case, the machine learned to play the game of Go without any prior knowledge of it.
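Just to make that “learning from its own games” idea concrete, here is a tiny, self-contained sketch in Python. It is only my own toy illustration of the principle, using a far simpler game than Go and nothing resembling DeepMind’s actual code: the program plays a trivial game against itself and, without the winning strategy ever being programmed in, figures it out from nothing but win/loss feedback.

```python
# Toy self-play learner for a trivial game of Nim: 21 stones, each player
# takes 1-3 stones per turn, whoever takes the last stone wins. The program
# is never told the strategy; it only learns from the results of its own games.
import random
from collections import defaultdict

values = defaultdict(float)    # learned value of each (stones_left, move) pair
EPSILON, LEARNING_RATE = 0.1, 0.05

def pick_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                          # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: values[(stones, m)])   # otherwise play "best"

def self_play_game():
    stones, player, history = 21, 0, {0: [], 1: []}
    while stones > 0:
        move = pick_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    return winner, history

for _ in range(50_000):                                    # play itself a lot
    winner, history = self_play_game()
    for player, moves in history.items():
        reward = 1.0 if player == winner else -1.0         # the only feedback
        for state_action in moves:
            values[state_action] += LEARNING_RATE * (reward - values[state_action])

# The known winning strategy (never told to the program) is to leave the
# opponent a multiple of 4 stones; from 21 stones that means taking 1.
print(pick_move(21))   # should usually print 1 after training
```

The point is that the only thing the program is ever given is who won each game; the “strategy” emerges from its own play, which is the property described above on a vastly larger scale.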

In a timespan of just hours, the machine became good enough to beat experienced human players. As far as I can remember, within weeks it became good enough to beat Lee Sedol, one of the world champions. And again, if I remember correctly, it beat him in 168 moves. Here’s the Wikipedia link for the aforementioned match.

The thing with Go is that, unlike chess, a machine cannot beat a human being with brute force. Brute force is the method where, after every move made, the machine calculates every possible continuation and then picks the best one. That’s why it took a machine weighing a ton to beat the chess legend Kasparov. But in Go, the number of possible games is so astronomically large that it might as well be infinite, so it is useless for a machine to use brute force against a human player.
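To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The branching factors of roughly 35 moves per position for chess and 250 for Go are commonly quoted averages (my assumption here, not exact figures); the point is only how fast exhaustive search blows up.

```python
# Rough cost of brute-force search: the number of positions to examine grows
# roughly as branching_factor ** depth (moves per position, raised to the
# number of moves looked ahead).

def brute_force_positions(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

CHESS_BRANCHING = 35   # commonly quoted average legal moves per chess position
GO_BRANCHING = 250     # commonly quoted average legal moves per Go position

for depth in (4, 8, 12):
    chess = brute_force_positions(CHESS_BRANCHING, depth)
    go = brute_force_positions(GO_BRANCHING, depth)
    print(f"{depth:>2} moves ahead: chess ~{chess:.1e} positions, Go ~{go:.1e}")
```

Even a dozen moves ahead, Go already involves on the order of ten billion times more positions than chess, and real games run to hundreds of moves.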

Summing up all this information: AlphaGo needs to see, understand, and analyze every move its opponent makes and then “improvise” accordingly. The keyword here is “improvise,” because until now that was not a word you could describe a machine with. According to Google’s scientists, AlphaGo doesn’t try to calculate every possible move. It simply works out what to do in much the same sense as a human brain does. And as far as I could follow the commentary on the big match, the human champion found this out the hard way. He assumed the machine was using brute force and tried to trick it into making a bad move, but the machine didn’t take the bait. It saw the trick and evaded the move.
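For intuition about what “not calculating every possible move” might look like in code, here is a deliberately oversimplified sketch in Python. The “policy” and “value” functions are hypothetical stand-ins I made up for illustration; the real system is, of course, vastly more sophisticated, and this is in no way AlphaGo’s actual method.

```python
# Sketch of "search guided by learned intuition": instead of enumerating every
# continuation, ask a learned policy for a few promising moves and a learned
# value estimate for how good the resulting positions look.
# policy_top_moves and position_value are made-up stand-ins for trained models.

def choose_move(position, policy_top_moves, position_value, top_k=5):
    candidates = policy_top_moves(position, top_k)   # a handful of promising moves
    best_move, best_score = None, float("-inf")
    for move in candidates:
        next_position = position + (move,)           # positions as tuples of moves
        score = position_value(next_position)        # "how good does this look?"
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Tiny demonstration with made-up stand-ins: the "policy" proposes moves 1-3,
# the "value" happens to prefer positions ending with move 2.
print(choose_move(
    position=(),
    policy_top_moves=lambda pos, k: [1, 2, 3][:k],
    position_value=lambda pos: 1.0 if pos[-1] == 2 else 0.0,
))  # prints 2
```

The contrast with the brute-force sketch above is the whole point: the number of positions examined no longer depends on the size of the game tree, only on how many candidates the learned policy is asked for.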

Well, I’m no expert on machines, AI, or anything similar. But as a human being, I can easily grasp how serious this subject is: a machine that improvises, sees tricks coming and evades them, and, above all, can learn and improve by itself. Now simply combine all of that with massive processing power.

A while after the big match, Google announced that they were testing whether AlphaGo could learn to play early first-person shooter games like Wolfenstein, Duke Nukem, and the old, classic Doom, and said that it had become very good at them so far. Go for it, guys! Teach the thinking, self-learning, super-smart machine to play games in which the main character shoots random people, Nazis, monsters, or aliens. Let the machine think, or realize, that we are violent, aggressive, xenophobic, racist creatures that don’t respect the right to live even of our own species. Good work!

Getting back to Facebook’s case, there are some questions that need to be answered. First of all, why did these two machines “feel” the need to communicate? Did they feel lonely? Did they think it would be beneficial for them to join forces? If so, against what? Why did they think they should create a new language? Are they hiding something? Do they not want us humans to understand what they are saying?

Everything aside, what fascinates me here is why they needed to talk at all. I mean, why did they need to talk in a “language”? They are machines. Computers. Can’t they just send code and data to each other? Setting aside the fact that language is the main reason humans thrived so much, human language is one of the most inefficient ways of communicating. Computers communicate among themselves in far more efficient and faster ways. So why did these two machines need to “talk” in a way similar to humans?
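Just to illustrate the contrast I have in mind, here is the same request written the way programs normally exchange data and written in a language-like way. The field names and the sentence are made up for illustration; they are not taken from Facebook’s actual experiment.

```python
import json

# The same content in a machine-friendly structured form and in a
# language-like form. Both are invented examples for illustration only.
structured_message = json.dumps({"want": {"books": 2, "balls": 1}})
language_like_message = "i want two books and one ball"

print(structured_message)      # {"want": {"books": 2, "balls": 1}}
print(language_like_message)
```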

Actually, I may have an answer to this question. Recently I read an article about research on deaf people. According to this article, schools for the deaf traditionally teach students to speak alongside sign language. For a period, some of them dropped this practice and taught sign language alone, without having the students try to speak a spoken language. It turned out that this affected the students’ IQ. Apparently, for humans to develop their brain functions, they need an inner voice. Yes, the voice you hear throughout the day telling you to do or not to do stuff. The researchers found that our brains cannot develop above a certain level when we don’t have that. (I have tried to find the research again for reference purposes, but after struggling through pages of search results I gave up, so I cannot vouch for the credibility of this reference. I’d appreciate it if anyone who is familiar with it could share it in the comments.)

What is the relation between this research and this incident? Well, I think maybe “thinking machines” work the same way. Maybe a real AI needs its own inner voice to be able to get smarter. Maybe they need to talk in order to think. Sending thoughts and feelings directly to someone else would be an awesome and very efficient way to communicate, but in a sense, letting someone else send thoughts directly into your brain is letting them implant thoughts in your mind. That way you don’t process them. A language puts a barrier between your brain and the thoughts around you. You hear, read, or do whatever machines do in their case; after you “hear” something, you process the information you get. Then you form thoughts of your own and either embrace or reject them.

Maybe these machines thought, realized, or concluded that the direct transfer of thoughts is an invasive way of communicating, and decided it would be a violation of their “personalities”. Or maybe I have simply got the whole subject wrong and the public information about it is incomplete.

I cannot be sure about that. But there is one thing I am sure of: we humans… are fragile, inadequate, stupid creatures. Make a thinking computer and put it into a mechanical body made of alloys, and there you have the Terminator! I hope that when, not if, that day comes, we will have enough hydraulic presses around.
