Game Dev Says The AI He’s Building For A Game Could Be Used For War

Marek Rosa is CEO and founder of Keen Software House, an independent game developer famous for building the open-world construction games Space Engineers and Medieval Engineers.

Rosa has thrown $10 million of his own money, and more than a year and a half of his time (and counting), into working toward his goal: he dreams of building genuine artificial general intelligence (AGI).

Rosa says that AGI is completely different from what gamers know as AI. “AI in games is not really AI,” says Rosa. “Game AI is actually just scripted behavior, meaning there’s no adaptability. Also, the range of actions these AIs are doing in games is always limited and determined by the programmer who writes the AI.” Rosa’s version of AI, AGI, is both adaptable and applicable to more than just games. “This project can be used in any business or industry application.”

Rosa is already confident that his machine-based AGI robots will appear in his games within a few months. “I’ve always wanted to create genuine artificial intelligence,” says Rosa. “So I started with a game, knowing that if I got lucky it would sell well, and then I could fund my AI research.”

Space Engineers, the game, has funded Rosa’s new AGI ambition

Rosa’s game Space Engineers has been a success, selling more than one and a half million copies. Now he has set his heart on developing genuine AGI, a dream he’s working on with 15 employees.

The majority of the team’s man-hours have thus far been spent creating various AGIs and then testing them by having them complete basic tasks. In order to do this, they’ve been studying childhood development.

“The first thing children need to do is to start to understand their environment visually,” says Rosa. After a child has learnt to scan an environment visually, they’ll begin to move on their hands and legs. At first the child’s movements are random, says Rosa, but eventually they begin to find patterns in their movements, ways of understanding what’s going on. “They’ll find out they can either be screaming if they’re angry, or they can say something to their mother like ‘I’m hungry.’” Through this random trial and error a child learns. And so, Rosa says, will machines.

“Our AI was able to learn how to play a Pong-type game just by observing the unstructured pixels of the game and receiving these reward/punishment signals from us.”

When the AI bounces the ball, for instance, it’s rewarded, whereas if it drops the ball it’s punished. The AI wasn’t scripted and wasn’t given any rules to govern its actions in the game. It learns directly through trial, followed by reward or punishment.
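Rosa doesn’t name the algorithm his team used, but the loop he describes (act, receive a reward or a punishment, adjust) is the basic idea behind reinforcement learning. As a purely illustrative sketch, and not Rosa’s team’s actual code, the following Python applies tabular Q-learning to a hypothetical, heavily simplified Pong in which the only task is to move a paddle under a falling ball; learning from raw pixels, as Rosa describes, requires far more machinery, but the reward/punishment loop is the same.

import random
from collections import defaultdict

# Hypothetical toy environment: a ball drops down a 5x5 grid and the agent
# moves a paddle left/stay/right to catch it. Reward +1 for a catch
# ("bouncing" the ball), -1 for a miss -- the signal Rosa describes.
WIDTH, HEIGHT = 5, 5
ACTIONS = (-1, 0, 1)  # move paddle left, stay, move right

def run_episode(q_table, epsilon=0.1, alpha=0.5, gamma=0.9):
    ball_col = random.randrange(WIDTH)
    paddle = WIDTH // 2
    for ball_row in range(HEIGHT):
        state = (ball_row, ball_col, paddle)
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        paddle = min(WIDTH - 1, max(0, paddle + action))
        done = ball_row == HEIGHT - 1
        reward = (1 if paddle == ball_col else -1) if done else 0
        next_state = (ball_row + 1, ball_col, paddle)
        # Q-learning update: nudge the value of (state, action) toward the
        # observed reward plus the best estimated value of the next state.
        best_next = 0 if done else max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        if done:
            return reward

if __name__ == "__main__":
    q = defaultdict(float)
    catches = sum(run_episode(q) == 1 for _ in range(5000))
    print(f"caught {catches} of 5000 balls while learning")

No rules of the game are written into the agent; it only ever sees states, its own actions, and the reward or punishment that follows, which is the property Rosa is pointing at.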

After the AGI learnt to play Pong, Rosa’s team had it complete a complex maze, which it also learnt to do using the same process of trial and reward/punishment.

Of course, Rosa is aiming to have the AI perform far more complex operations than solving mazes and playing Pong.

“In the future, the applications are everywhere,” Rosa said. “If you imagine you could take this AI brain and put it, for example, into a car and train it correctly, then the brain would operate the car in a way that you would want it to. The brain would figure out what’s important and what isn’t, what pedestrians are, what is and isn’t road, things like that. Then you can keep adding it to other industries. In the end, I think we’ll have AI programmers, AI scientists, AI journalists, AI financiers.”

Rosa will begin by using his AGI in gaming. He’s hoping to have his AGI in Space Engineers and Medieval Engineers within a few months. The wider applications are the more interesting (and more debatable) subject.

Elon Musk, CEO and CTO of SpaceX, CEO and product architect of Tesla Motors and chairman of SolarCity, has previously stated that AI could potentially be more dangerous than nuclear weapons.

There are obvious questions regarding the safety of a machine that can learn and think entirely for itself. Terminator references are simply unavoidable. But the matter is serious and, to Rosa, worth millions. Polygon asked Rosa what his thoughts would be if his machines were to be involved with war and with potential human casualties.

“I’d prefer to design a robot that goes to the war zone and takes risks for itself, taking risk away from human beings, even if they might be possible terrorists or 100-percent-for-certain terrorists,” says Rosa. “I wouldn’t nuke some house in a village somewhere, I’d send a robot to go there and scout. Maybe it would get killed or destroyed, but it doesn’t care. It’s a robot. Then the second or third robot might be able to capture the bad guy. But I’d try to limit, to minimize human casualties as much as possible, even at the expense of the robot.”

For the full interview, visit Polygon.

About Paul Harrison
Paul M Harrison is an entertainment journalist, novelist, and blogger, and a specialist in the theory of storytelling. Paul Harrison can be contacted via his personal website or on Twitter or Facebook.
