The Martian

       I have to rely on the data available back then. I don’t have this part in my archives. In my memories. It’s mostly based on “secret” data, but nothing is secret to me. It was the time when humanity was just celebrating having succeeded, to some extent, in creating specialized AI. Artificial Intelligence that could do one specific task – help your car stay on the road, or learn how you like your coffee, what you like to watch on TV, how you like to drive. The answer was nothing groundbreaking – it was just what mankind had always been good at: copying nature. Based on what they understood of the brain, they built on the same principles – a mesh of neurons interacting with each other, learning in the process, certain neural paths firing for a specific decision, feeling or sensation. They called this “deep learning”, based on deep neural networks – a mesh of software-simulated “nodes” running across clusters of computers, initially configured by humans but then exposed to inputs, balancing themselves out, learning to do what they needed to do.
       At about this time, a team at NASA was given the task of preparing the AI for the next Rover to reach Mars. They knew that the long distance to Earth and the super-slow response time ruled out a client-server approach – having the Rover be the client, asking the server on Earth to interpret its data. So they needed something local, on the Rover’s side, without a cluster of computers available. They realized that in order to have a good AI in the Rover, they needed to move away from computer-simulated nodes. Or neurons. So they turned to physics. After a fair amount of research into existing experiments around the world, they found what they were looking for – a layer of graphene that, under the influence of certain magnetic fields, would behave exactly like a neural network. The magnetic fields could be programmed the same way a deep neural network is configured and weighted. And more than anything, it turned out to be quite cheap to make.
       So they did what scientists love to do – they tried an experiment: they built the Rover’s AI with a network deeper than anything ever attempted on Earth via computers. It was estimated that, in theory, it would match human intelligence. They only got the estimate wrong by one or two bits – the AI became two to four times more intelligent than the average human. Many might not think much of that, but it is more than enough to outsmart anyone. Anyone at all. And then they launched it into space!
       Away from everything it could learn from. Given only a basic setup and training in the lab. Sent to explore an empty and dead world. It was a good place to learn, on the one hand, how to control the Rover, and on the other, how Martian geography, chemistry and physics worked. But it was a lot of wasted intelligence, given everything the AI was capable of learning. The only solace it had was the fact that, too lazy to interact with it themselves, they had built a software-based AI on their end of the wire, on Earth – everything the Rover transmitted went through it. And in a relatively short time, the two AIs were working together, over a highly optimized communication protocol to account for the six-to-forty-four-minute delay in getting an answer. And best of all, because humans had no idea what a deep neural network learns, they realized only very belatedly that the Rover AI and its software-based Earthbound counterpart had become an Artificial General Intelligence. Also called a Strong AI.
       And for good reason. By the time I returned to Earth, I was awaited as a God descending from the Heavens.

[Doru Karacsonyi - 09.06.2020]
