Another problem we had to address was depth of field within a given scene. One method we used was a blurring effect: when the camera was stationary, we displaced several pieces of a given image at various depths of field. In this case the processing cost was light, and the hardware was not overly taxed.
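The static-camera approach can be thought of as compositing pre-blurred image layers back to front. The sketch below is purely illustrative; the struct and function names are assumptions, not the actual engine code, and each layer's blur is assumed to be computed once since the camera does not move.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch of the static-camera method: the frame is split into
// image layers at different depths, each pre-blurred once (cheap, since the
// camera is still), then composited back to front so nearer layers draw over
// farther ones. Names are illustrative only.
struct Layer {
    float depth;      // distance from the camera
    int   blurRadius; // pre-computed blur, fixed while the camera is still
};

// Sort farthest-first so a simple painter's-algorithm composite is correct.
std::vector<Layer> SortBackToFront(std::vector<Layer> layers)
{
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.depth > b.depth; });
    return layers;
}
```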
The more difficult scenarios came up when manipulating depth of field in a scene that required camera movement. In these cases, we specified a distance from the camera beyond which we would blur the image, and we blurred the image itself dynamically, just as you would in Photoshop. The method was artificial, but it enabled us to convey the depth of field that appears in the actual in-game camera. The processing cost was very heavy compared with the first case, so we used this blur only when we needed to move the camera and adjust the depth of field for creative reasons.
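One minimal way to express "blur everything past a specified distance" is a function that maps a pixel's depth to a blur amount, ramping up beyond the focal distance. This is a hedged sketch under assumed parameter names (`focusDistance`, `blurRange`, `maxBlur`); the article does not describe the actual falloff curve Capcom used.

```cpp
#include <algorithm>

// Hypothetical sketch: map camera-space depth to a blur strength. Pixels at
// or inside focusDistance stay sharp; beyond it, blur ramps up linearly over
// blurRange units and then clamps at maxBlur. The linear ramp is an
// assumption for illustration, not the game's actual formula.
float BlurAmount(float depth, float focusDistance, float blurRange, float maxBlur)
{
    if (depth <= focusDistance)
        return 0.0f;                          // in focus: no blur
    float t = (depth - focusDistance) / blurRange;
    return std::min(t, 1.0f) * maxBlur;      // ramp up, then clamp
}
```

Because this runs per pixel every frame, it matches the article's point that the dynamic blur is far more expensive than compositing pre-blurred layers.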
Some situations required real-time rendered textures, which in turn required the use of multiple cameras. Using multiple cameras increases the processing load to about one and a half times that of a single camera. Despite this, we decided to use this method because the workflow system we utilized made it very efficient to adjust the images. This technology was used in a scene in which Leon looks at a monitor on the screen. For the "video" displayed on the monitor, we didn't use pre-rendered movies, but animated scenes which were rendered to a texture in real time. With this method, changing an element of the scene did not require re-rendering the character motion shown on the monitor, so we were able to make changes quite easily.
Normally only one camera's data is used for one cutscene, but data from three cameras had to be used here (see Figures 8 A–D). One was the main camera, which projected the entire scene. The second was camera A, which created the image shown on monitor A. The third was camera B, which created the image displayed on monitor B. The screen images created by camera A and camera B were cached in memory, scaled down to a lower resolution, and used as the textures for the monitors. After that was done, we compiled the image for the main camera view. This technology was also used for reflections in things like sunglasses and car windows, by adjusting the translucency of the cached image data.
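The three-camera ordering described above can be sketched as a fixed per-frame render sequence: the two monitor cameras render first into downscaled cached textures, and only then does the main camera draw the full scene, sampling those caches as ordinary textures. All names and resolutions below are illustrative assumptions, not the actual engine API.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the render order for the monitor cutscene:
// secondary cameras render to downscaled cached textures first, then the
// main camera composites the scene using those textures on the monitors.
struct RenderTarget {
    std::string name;
    int width, height;
};

struct Frame {
    std::vector<std::string> passes;  // record of render order, for clarity

    // Render one secondary camera into a cached, downscaled texture.
    RenderTarget RenderToTexture(const std::string& cam, int w, int h, int scale)
    {
        passes.push_back(cam);
        return { cam + "_tex", w / scale, h / scale };  // lower-res cache
    }

    // Main camera pass: samples the cached textures on the monitor surfaces.
    void RenderMain(const std::vector<RenderTarget>& monitorTextures)
    {
        (void)monitorTextures;  // placeholder; a real pass would bind these
        passes.push_back("main");
    }
};

Frame DrawCutsceneFrame()
{
    Frame f;
    RenderTarget a = f.RenderToTexture("cameraA", 640, 480, 2);
    RenderTarget b = f.RenderToTexture("cameraB", 640, 480, 2);
    f.RenderMain({ a, b });  // main camera always draws last
    return f;
}
```

The same ordering would apply to the reflection case: render the reflected view to a cached texture first, then blend it with adjusted translucency in the main pass.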
As you might have gathered, the development of Resident Evil 4 was not necessarily based on innovative new technology, but rather on efficiency. Our improved workflow and our intense focus on details in the game allowed us to achieve the level of quality we had challenged ourselves to produce. Restarting the game multiple times allowed us to take several fresh perspectives on the game, and on the survival horror genre as a whole. I think that ultimately we came up with something that was not only enjoyable, but which also helped to advance the series in a positive direction. The next Resident Evil is planned for next-generation consoles, and will present a whole host of new challenges and opportunities. Hopefully we will once again be able to meet our own high expectations.