The team at Unity published a brief blog post today outlining what they see as the six principles which should guide the development of "ethical" artificial intelligence tools and systems.
While these are just suggestions, they may help jumpstart some useful conversations within game development teams about how and why advanced AI tools and applications (like machine learning and natural language processing) are used.
However, Unity's main focus seems to be on AI used outside the game industry, in fields like healthcare, engineering, and media. That makes sense given Unity's ongoing efforts to push its game development tools into the hands of creatives in other industries, and given that cutting-edge AI techniques are comparatively rare in game dev, since game AI is typically designed not to excel at its task but to foster a good experience for players.
"These principles are meant as a blueprint for the responsible use of AI for our developers, our community, and our company," reads an excert of the blog post. "We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology."
Without further ado, here are Unity's six guiding principles for AI:
This comes well over a year after Google DeepMind established its own research group, the DeepMind Ethics & Society unit, to explore and address the big questions posed by cutting-edge AI development. That effort continues even as Google DeepMind trains AI agents to excel at games like Quake III Arena, StarCraft, and Go.