Gamasutra: The Art & Business of Making Games
Optimizing Asset Processing
October 29, 2008 (Page 3 of 3)

Memory Mapped Files

The use of serial I/O is a throwback to the days of limited memory and tape drives. But a combination of factors means it’s still useful to think of your file conversion essentially as a serial process.

First, since file operations can proceed asynchronously, you can be processing data while it’s being read in and begin writing it out as soon as some is ready. Second, memory is slow, and processors are fast. This can lead us to think of normal random access memory as just a very fast hard disk, with your processor’s cache memory as your actual working memory.

While you could write some complex multi-threaded code to take advantage of the asynchronous nature of file I/O, you can get the full advantage of both asynchronous I/O and optimal cache usage by using Windows’ memory mapped file functions to read in your files.

The process of memory mapping a file is very simple. All you are doing is telling the OS that you want a file to appear as if it is already in memory. You can then process the file exactly as if you had just loaded it yourself, and the OS takes care of making sure the file data actually shows up as needed.

This gives you the advantage of asynchronous I/O because you can start processing as soon as the first page of the file is loaded, and the OS will take care of reading the rest of the file as needed. It also makes the best use of the memory cache, especially if you process the file in a serial manner. Memory mapping a file also keeps data movement to a minimum: no intermediate buffers need to be allocated.

Listing 3 shows the same program converted to use memory mapped I/O. Depending on the state of virtual memory and the file cache, this is several times faster than the “whole file” approach in Listing 2. It looks annoyingly complex, but you only have to write it once. The amount of speed-up will depend on the nature of the data, the hardware, and the size and architecture of your build pipeline.


LISTING 3  Using Memory Mapped Files

// (Error checking omitted for brevity)
HANDLE hInFile = ::CreateFile(L"IMAGE.JPG", GENERIC_READ,
    FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
DWORD dwFileSize = ::GetFileSize(hInFile, NULL);
HANDLE hMappedInFile = ::CreateFileMapping(hInFile, NULL,
    PAGE_READONLY, 0, 0, NULL);
LPBYTE lpMapInAddress = (LPBYTE) ::MapViewOfFile(
    hMappedInFile, FILE_MAP_READ, 0, 0, 0);
// The output file is created empty, so the mapping must specify
// the final size, which extends the file to that length.
HANDLE hOutFile = ::CreateFile(L"IMAGE.BIN",
    GENERIC_READ | GENERIC_WRITE, 0, NULL,
    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE hMappedOutFile = ::CreateFileMapping(hOutFile, NULL,
    PAGE_READWRITE, 0, dwFileSize, NULL);
LPBYTE lpMapOutAddress = (LPBYTE) ::MapViewOfFile(
    hMappedOutFile, FILE_MAP_WRITE, 0, 0, 0);
// Example transform: replace zero bytes with 0xff
unsigned char *p_in  = (unsigned char *)lpMapInAddress;
unsigned char *p_out = (unsigned char *)lpMapOutAddress;
for (DWORD x = 0; x < dwFileSize; x++, p_in++) {
    unsigned char c = *p_in;
    if (c == 0) c = 0xff;
    *p_out++ = c;
}
::UnmapViewOfFile(lpMapInAddress);
::UnmapViewOfFile(lpMapOutAddress);
::CloseHandle(hMappedInFile); ::CloseHandle(hMappedOutFile);
::CloseHandle(hInFile); ::CloseHandle(hOutFile);



[EDITOR'S NOTE: This article was independently published by Gamasutra's editors, since it was deemed of value to the community. Its publishing has been made possible by Intel, as a platform and vendor-agnostic part of Intel's Visual Computing microsite.]
