IBM Demonstrates Neural Net Chip With Pong AI
August 19, 2011 | By Kyle Orland
A new computer chip now being developed by IBM emulates the structure of the human brain to achieve artificial intelligence and machine learning goals, including teaching a computer to win at Pong.

As reported by EETimes, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) chip uses a crossbar array of digital synapses to simulate the human brain: it distributes processing work across synapses as they overload, strengthening those used most often and letting unused pathways wither.
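The plasticity described here (often-used pathways strengthened, idle ones withering) resembles a Hebbian-style update rule. A minimal illustrative sketch of that idea, not IBM's actual hardware mechanism:

```python
# Hebbian-style plasticity sketch: synapses whose pre- and post-neurons
# are active together get stronger; idle synapses slowly decay.
# All names and constants here are illustrative, not from the chip.

def update_synapses(weights, pre_active, post_active,
                    strengthen=0.1, decay=0.01):
    """Return new weights: reinforce co-active pairs, decay the rest."""
    new_weights = []
    for w, pre, post in zip(weights, pre_active, post_active):
        if pre and post:
            w += strengthen      # pathway used: strengthen it
        else:
            w -= decay           # pathway idle: let it wither
        new_weights.append(max(0.0, min(1.0, w)))  # clamp to [0, 1]
    return new_weights

weights = [0.5, 0.5]
weights = update_synapses(weights, pre_active=[1, 0], post_active=[1, 1])
```

Run repeatedly, the used synapse saturates toward 1.0 while the unused one drifts toward 0.0, which is the "strengthen/wither" behavior the article describes.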

IBM is currently partway through work on a $41 million development contract with the U.S. Defense Advanced Research Projects Agency (DARPA) that seeks to create a 10 billion neuron, 100 trillion synapse cognitive computer that would mimic the human brain in power and size.

But a smaller prototype being shown now, only 4 square millimeters in area, uses a few million transistors to perform simple machine learning tasks, including cursive handwriting recognition and a self-generated strategy to win at Pong.

Researchers hope the full chip will help with complex problems such as machine vision, pattern recognition and classification.


Samuel Batista
I'm very intrigued to find out more about how this machine will be programmed to perform its tasks. This is also somewhat concerning, considering that the human brain contains 200 billion neurons, and a CPU with trillions of "neurons" that mimics the human brain in behavior is bound to become self-aware.

I'm really excited to see how this CPU will change the computing landscape in the next 20 or 30 years.

Thomas Engelbert
Hm, sapient AIs. Sounds awesome. Still, I'm rather sceptical that this 'artificial brain' will be enough to get there...

To become self-aware, you need to have needs. And how will a computer ever have an intrinsic need for anything?

Edit: Ok, ignore what I wrote above. My girlfriend (who has a BSc in Neuroscience) just slapped me for that comment...

Josiah Dicharry
Other MachineBrain projects:

HTM (Hierarchical Temporal Memory, from Jeff Hawkins, founder of Palm): approaches brain simulation by abstracting the neural network away as a Bayesian network or probability model, focusing on modeling the cortex as a converging hierarchy with feedback loops.

Blue Brain: models the brain in software on supercomputers, focusing on the neocortical columns, for medical research purposes.

FACETS: a European venture to model the brain on a chip, started back in 2005 and intended to last 5 years.

Josiah Dicharry
On a side note, you probably won't program a chip like this. You will end up training it, just like other neural network models.

Duong Nguyen
A little behind on the timeline but Skynet is coming along nicely.. :)

Thomas Grove
They're trying too hard with the name.

Mark Taylor
bat.y = ball.y
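The one-liner above is a perfect Pong policy only if the paddle can teleport; with a realistic per-frame speed limit the paddle must chase the ball, which is closer to the kind of strategy the chip reportedly had to learn. A hedged sketch (names and the speed limit are illustrative, not from the article):

```python
def follow_ball(bat_y, ball_y, max_speed=4):
    """Move the paddle toward the ball, at most max_speed units per frame.
    With an unbounded max_speed this reduces to bat.y = ball.y."""
    delta = ball_y - bat_y
    step = max(-max_speed, min(max_speed, delta))  # clamp the per-frame move
    return bat_y + step
```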

Liam Boyle
I would like to know more about the details of this processor, including which binary/assembly instruction set it uses. I had a thought about developing an AI based on "Hamilton's Rule" from biology, game theory, and statistical analysis; it seems like this chip architecture would be perfect for the biologically based algorithms I want to use.

Bojan Urosevic
"...strengthening those used most often and letting unused pathways wither."

Finally we'll be able to blame computers for errors in our programs.

(OK, I'm not entirely fair here, because not using certain pathways enough to strengthen them can still be counted as a programmer's error.)

Btw, this also explains why HAL 9000 forgot to wish Frank a happy birthday. It (he?!?) didn't think about it that often, so those particular pathways were weak.

All jokes aside, this is really exciting news.