Meanwhile, one of the issues the Computational Creativity Group hopes to tackle in the future is developing AI to the point where it can explain its work. Aside from the purely technical challenges of getting an AI to communicate in this way, there is an additional philosophical problem: the gap between humans and computers when it comes to self-awareness.
That is, while computers "know" themselves utterly, in the sense that their decisions can always be traced back through their code, humans do not. We may come up with explanations for why we arrived at particular decisions, but there's no certainty that they are correct, even to ourselves, because we lack that depth of self-knowledge. Computers, meanwhile, are so transparent that they can seem automatic rather than creative; their explanations inevitably trace back to their programming -- or their programmer.
It's a line of reasoning Michael is often confronted with: observers counter claims of ANGELINA's creativity by pointing out that he wrote the code that directs it.
"They recognize that [ANGELINA's games] can vary, but they say that when they know the story of how it was made then it stops being creative because they can see the system was driving toward that output," Michael explains. "Whereas if Alan Hazelden sits down to create a newsgame then you start by knowing that there are many ways he could diverge from the path he ended up on."
Unless you hold that Alan's creations were also determined by a larger force, such as destiny, this is a problem without a simple solution. Michael could give ANGELINA the capacity to diverge further from the source, but he would still have created that system, and its limits would still determine the output.
It's an example of what Michael calls 'the infinite rabbit hole of criticisms' - solutions which only delay or lead to further problems.
Michael's work on ANGELINA is mirrored by Simon Colton's painting AI, The Painting Fool.
Instead, a better way to confront the issue may be to remove ANGELINA's traceability altogether. The Computational Creativity Group is currently experimenting with flowcharting systems for higher-level programming, for example, and the hope is to allow AIs such as ANGELINA to build larger programs out of small, modular components while also facilitating divergence.
"Let's say that ANGELINA makes one of these programs, generates a game with it, but then deletes all trace of that program. The random seeds that affect what ANGELINA read and saw to generate such a program are not known. All we'd have to go on is a single text file describing the justification for what you are about to play. Or even better, a little "Ask The Developer" feature in-game that lets you chat to ANGELINA. All you have to go on is what it/she says in response to your questions - that puts us in a new kind of situation."
While ANGELINA's games are currently very low-fi, Michael hopes to change that in the future.
Even apart from the issue of traceability, however, ANGELINA's ability to explain decisions may still be thwarted by its inability to relate those decisions to real-world concepts. It would be unable, for example, to say that it designed a game to play off ideas of religion, because it has no concept of how it could interface with religion. It's possible to use a semantic network such as MIT's ConceptNet to give ANGELINA a way to relate ideas to one another, but translating those relations into mechanics -- rather than just images or definitions -- is another problem entirely.
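To illustrate what a semantic-network lookup gives an AI to work with, here is a minimal sketch in Python. It assumes hand-written edge data shaped like ConceptNet 5's JSON ("rel", "start", and "end" objects with "label" fields) rather than calling the live API, and the function name is invented for the example:

```python
# Sketch: pulling related concepts for a theme from ConceptNet-style data.
# The edge structure loosely mirrors ConceptNet 5's JSON output; the
# sample edges below are hand-written stand-ins, not real API results.

def related_concepts(edges, theme):
    """Return (relation, other-concept) pairs for edges touching `theme`."""
    related = []
    for edge in edges:
        start = edge["start"]["label"]
        end = edge["end"]["label"]
        rel = edge["rel"]["label"]
        if start == theme:
            related.append((rel, end))
        elif end == theme:
            related.append((rel, start))
    return related

sample_edges = [
    {"rel": {"label": "HasA"},
     "start": {"label": "space"}, "end": {"label": "planet"}},
    {"rel": {"label": "HasProperty"},
     "start": {"label": "space"}, "end": {"label": "low gravity"}},
    {"rel": {"label": "AtLocation"},
     "start": {"label": "astronaut"}, "end": {"label": "space"}},
]

print(related_concepts(sample_edges, "space"))
# → [('HasA', 'planet'), ('HasProperty', 'low gravity'), ('AtLocation', 'astronaut')]
```

Associations like these are easy to retrieve; as the article notes, the unsolved step is turning a fact such as "low gravity" into a playable mechanic rather than a graphic or a definition.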
Hidden object games are likely the first which ANGELINA will be able to create on a believably human level.
It's an issue Michael has been pondering deeply, especially with the growing number of game jams serving as such a perfect example.
"Say that the theme for a game jam was 'Space', for example. ConceptNet could tell an AI lots of things about space -- that it has planets in it and low gravity and so on. ANGELINA could then take that and use it; it could use planets for graphics and maybe cast the player as a planet... but it can't translate it into a real-world, mechanical concept. Likewise, Mechanic Miner can come up with new mechanics and it can discover low gravity for itself... but it can't explain it to you, let alone connect it to a real world action."
"Humans, though? If you were designing a game where you picked up a power-up and that let you jump higher, what would you depict that as? Jump-boots, maybe a jetpack?"
Caffeine, I say.
"See? Exactly. It'd be funny to have a game where you're a journalist who has to drink coffee in order to jump higher and better quality coffees let you jump more... but I have no idea how we're going to broach that problem with an AI."
"But, then again, 12 months ago I didn't know how we'd mine the mechanics either."