For some time now there has been a trend in narrative-based console games towards "cinematization" -- or, put simply, towards "sounding like film". What exactly does sounding "as good as film" mean? Is it simply being technically able to do the same things that film does, at the same resolution? Or are there more creative and collaborative elements involved?
In the previous generation of consoles (PS2, GameCube, and Xbox), memory was sparse enough to limit sample rates, and with them the perceived quality of the audio content. Games with a large amount of audio content, such as Grand Theft Auto: San Andreas on the PS2, had to compromise on the quality of their ADPCM audio assets, often resulting in scratchy-sounding material, particularly dialogue.
With advancements in audio compression, particularly the adoption of MP3, Ogg Vorbis, XMA and the like, sample rates on the PS3 and Xbox 360 can now rival those of a film soundtrack (depending on the amount of content required).
To add to this, the limited sound RAM of previous-generation consoles strictly capped the number of sounds that could physically be loaded and played at any one time. Current consoles can play more sounds simultaneously (more voices are available) and can hold around ten times more sounds in RAM.
Of course there is also the addition of software DSP, such as reverbs and high- and low-pass filters, which can process sounds and be tweaked at run-time. This replaces the crude reverbs that shipped with the development kits of previous-generation machines, which made permanently baking effects processing into sound files a necessity.
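To make the idea of run-time-tweakable DSP concrete, here is a minimal sketch of a one-pole low-pass filter whose cutoff can be changed while audio is playing. This is an illustrative example, not code from any particular engine or middleware; the class and parameter names are assumptions.

```python
import math

class OnePoleLowPass:
    """A one-pole low-pass filter with a cutoff that can be
    retuned at run-time, e.g. to muffle occluded or distant sounds
    without re-authoring the source assets."""

    def __init__(self, sample_rate=48000, cutoff_hz=1000.0):
        self.sample_rate = sample_rate
        self.z1 = 0.0  # filter state: previous output sample
        self.set_cutoff(cutoff_hz)

    def set_cutoff(self, cutoff_hz):
        # Recompute coefficients; cheap enough to call every frame
        # as gameplay conditions change.
        x = math.exp(-2.0 * math.pi * cutoff_hz / self.sample_rate)
        self.a0 = 1.0 - x
        self.b1 = x

    def process(self, sample):
        # One multiply-accumulate per sample: y[n] = a0*x[n] + b1*y[n-1]
        self.z1 = self.a0 * sample + self.b1 * self.z1
        return self.z1
```

The point of the sketch is the `set_cutoff` call: because the coefficients are recomputed on the fly, the same dry asset can sound open or muffled depending on game state, which is exactly what baked-in processing could not do.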
With more sophisticated mixing and post-production techniques (not necessarily exclusive to next-gen) now being used in the development of sound for cinematic game titles (as seen recently in the dedicated post-production audio time applied to Scarface: The World Is Yours, and more recently in the mixing time afforded in post-production to Heavenly Sword), there is also the opportunity to use and tune sound in such a way that it focuses attention very clearly on particular sounds at particular moments in gameplay. This is done by reducing or removing unnecessary sounds at any given moment using real-time interactive mixers and DSP effects.
Heavenly Sword was given post-production audio time in order to do a full in-game mix
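The real-time interactive mixing described above can be sketched as a set of bus gains that are "ducked" toward targets each frame. The bus names, the duck amount, and the smoothing speed below are illustrative assumptions, not values from any shipped title.

```python
class InteractiveMixer:
    """Sketch of a run-time mix: focusing one bus ducks the others,
    and gains glide toward their targets to avoid audible steps."""

    def __init__(self):
        # Linear gain per bus; 1.0 = unity.
        self.gains = {"dialogue": 1.0, "music": 1.0, "sfx": 1.0, "ambience": 1.0}
        self.targets = dict(self.gains)

    def focus(self, bus, duck_to=0.3):
        # Pull every other bus down so the chosen one reads clearly,
        # e.g. focus("dialogue") during a story beat.
        for name in self.targets:
            self.targets[name] = 1.0 if name == bus else duck_to

    def update(self, dt, speed=8.0):
        # Called once per game frame: exponential glide toward targets.
        t = min(1.0, speed * dt)
        for name, g in self.gains.items():
            self.gains[name] = g + (self.targets[name] - g) * t
```

A snapshot scheme like this is what lets a mixer "remove unnecessary sounds at any given moment" without cutting them abruptly: the ducked buses fade over a few frames rather than dropping out.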
So these are the "price of entry" features in next-gen audio development, but where can innovation and improvements be made?