
Multithreaded Content Loading

May 14, 2010 at 5:47 PM

Hi guys,

I am working on a procedurally generated terrain engine for an "infinite" world (inspired by Minecraft). I need chunks of terrain to be loaded/generated whenever the player walks into chunks that have not been generated yet. However, a single "chunk" can take several frames to generate, so it is important to generate chunks inside the camera's view frustum first, and near chunks before far ones. Here is a crude mspaint diagram:

 
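In code terms, the ordering I'm after is roughly this. Just a sketch; ChunkRequest and the field names are made up, not code from my project:

    using Microsoft.Xna.Framework;

    // Sketch only: order pending chunk requests so chunks inside the view
    // frustum come first, and nearer chunks before farther ones.
    class ChunkRequest
    {
        public Point Coord;          // chunk grid coordinate
        public BoundingBox Bounds;   // world-space bounds of the chunk
    }

    static class ChunkPriority
    {
        // Lower score = generate sooner.
        public static float Score(ChunkRequest request, BoundingFrustum frustum, Vector3 cameraPos)
        {
            Vector3 centre = (request.Bounds.Min + request.Bounds.Max) * 0.5f;
            float distSq = Vector3.DistanceSquared(cameraPos, centre);

            // Chunks completely outside the frustum go to the back of the queue.
            bool visible = frustum.Contains(request.Bounds) != ContainmentType.Disjoint;
            return visible ? distSq : distSq + 1e12f;
        }

        public static int Compare(ChunkRequest a, ChunkRequest b,
                                  BoundingFrustum frustum, Vector3 cameraPos)
        {
            return Score(a, frustum, cameraPos).CompareTo(Score(b, frustum, cameraPos));
        }
    }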

I have looked at tutorial 26, but it does not provide what I need: there, all of the content is loaded at once, and no new content can be queued while it is busy loading. I am not sure how to extend that approach to what I need. Should I have one background thread that waits for chunks that need to be loaded, or should I create a task for each chunk in the ThreadPool?
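
The single-background-thread version I have in mind looks roughly like this (building on the sketch above; GenerateChunk is just a placeholder for the slow generation work):

    using System.Collections.Generic;
    using System.Threading;
    using Microsoft.Xna.Framework;

    // Sketch only: one background thread that sleeps until chunk requests
    // arrive, then always generates the highest-priority one.
    class ChunkLoaderThread
    {
        readonly List<ChunkRequest> pending = new List<ChunkRequest>();
        readonly object sync = new object();
        BoundingFrustum frustum = new BoundingFrustum(Matrix.Identity);
        Vector3 cameraPos;
        Thread worker;

        public void Start()
        {
            worker = new Thread(Run);
            worker.IsBackground = true;   // so it won't keep the process alive on exit
            worker.Start();
        }

        // Game thread: call whenever the player reaches an ungenerated chunk.
        public void Enqueue(ChunkRequest request)
        {
            lock (sync)
            {
                pending.Add(request);
                Monitor.Pulse(sync);      // wake the worker if it is waiting
            }
        }

        // Game thread: call once per frame so priorities stay up to date.
        public void UpdateCamera(BoundingFrustum viewFrustum, Vector3 position)
        {
            lock (sync)
            {
                frustum = viewFrustum;
                cameraPos = position;
            }
        }

        void Run()
        {
            while (true)
            {
                ChunkRequest next;
                lock (sync)
                {
                    while (pending.Count == 0)
                        Monitor.Wait(sync);

                    // In-frustum, nearest chunk first (Compare from the sketch above).
                    pending.Sort((a, b) => ChunkPriority.Compare(a, b, frustum, cameraPos));
                    next = pending[0];
                    pending.RemoveAt(0);
                }

                GenerateChunk(next);      // the slow part runs outside the lock
            }
        }

        void GenerateChunk(ChunkRequest request)
        {
            // placeholder: build the voxel data and mesh for this chunk here
        }
    }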

 

Also, here is a screenshot of what I have so far =)

May 18, 2010 at 11:22 PM

Using ThreadPool tasks should be OK; however, you will have to be careful. For a start, to be efficient, ThreadPool tasks should generally complete within a frame (otherwise they can start saturating the worker threads).
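
One way to keep each task short is to split a chunk's generation into small slices and re-queue the remainder, so no single work item holds a worker for several frames. A rough sketch (ChunkData and GenerateColumn are made-up names, not a real API):

    using System;
    using System.Threading;

    // Sketch only: generate a chunk in small slices so each ThreadPool work
    // item finishes quickly instead of holding a worker for several frames.
    class ChunkData
    {
        public int ColumnCount { get { return 32 * 32; } }
        public void GenerateColumn(int index) { /* noise sampling, voxel fill, etc. */ }
        public void MarkReadyForUpload()      { /* flag picked up by the game thread */ }
    }

    class SlicedChunkJob
    {
        const int ColumnsPerSlice = 64;   // tune so one slice fits comfortably in a frame
        readonly ChunkData chunk;
        int nextColumn;

        public SlicedChunkJob(ChunkData chunk) { this.chunk = chunk; }

        public void Start()
        {
            ThreadPool.QueueUserWorkItem(DoSlice);
        }

        void DoSlice(object state)
        {
            int end = Math.Min(nextColumn + ColumnsPerSlice, chunk.ColumnCount);
            for (; nextColumn < end; nextColumn++)
                chunk.GenerateColumn(nextColumn);       // the expensive per-column work

            if (nextColumn < chunk.ColumnCount)
                ThreadPool.QueueUserWorkItem(DoSlice);  // re-queue the remainder
            else
                chunk.MarkReadyForUpload();             // whole chunk done
        }
    }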

Doing this sort of thing is always very tricky. Often it's a case of looking at the algorithms and moving the expensive bits out of the per-frame runtime work.

For example, if you are constructing geometry for each object dynamically, then frankly you will struggle to get good performance in any case. With plain cubes this isn't an issue, since you are really just generating positions to instance them by.
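
i.e. something along these lines for a chunk of cubes (a sketch only; the byte[,,] voxel layout is just an assumption):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch only: produce one position per solid block, which an instancing
    // shader then uses to place a shared unit cube.
    static class CubeInstances
    {
        public static Vector3[] Collect(byte[,,] voxels, Vector3 chunkOrigin)
        {
            var positions = new List<Vector3>();
            for (int x = 0; x < voxels.GetLength(0); x++)
                for (int y = 0; y < voxels.GetLength(1); y++)
                    for (int z = 0; z < voxels.GetLength(2); z++)
                        if (voxels[x, y, z] != 0)                    // 0 = empty
                            positions.Add(chunkOrigin + new Vector3(x, y, z));
            return positions.ToArray();
        }
    }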

Even for rolling terrain, you'd want to break the level up into (say) 32x32 blocks. Calculate how many you need to keep in memory to maintain a decent draw distance (say, 64), allocate them up front as dynamic vertex buffers, and then queue them for updating.
The advantage of doing this is that the memory overhead stays fairly static, and, most importantly, the video driver isn't thrashing about allocating and deallocating large chunks of memory over and over again.
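
As a rough sketch of what I mean (the sizes and the VertexPositionNormalTexture format are just guesses, and a real version would need to handle running out of free buffers):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch only: allocate a fixed pool of dynamic vertex buffers once and
    // recycle them as chunks come into and out of range, instead of creating
    // and destroying buffers at runtime.
    class ChunkBufferPool
    {
        const int BufferCount = 64;                    // enough blocks for the draw distance
        const int MaxVerticesPerChunk = 32 * 32 * 6;   // depends on how you mesh a block

        readonly DynamicVertexBuffer[] buffers = new DynamicVertexBuffer[BufferCount];
        readonly Stack<int> free = new Stack<int>();

        public ChunkBufferPool(GraphicsDevice device)
        {
            for (int i = 0; i < BufferCount; i++)
            {
                buffers[i] = new DynamicVertexBuffer(device,
                    typeof(VertexPositionNormalTexture), MaxVerticesPerChunk,
                    BufferUsage.WriteOnly);
                free.Push(i);
            }
        }

        // Called from the main thread when a generated chunk is ready to upload.
        // A real version needs to handle the pool being empty.
        public int Upload(VertexPositionNormalTexture[] vertices, int vertexCount)
        {
            int index = free.Pop();
            buffers[index].SetData(vertices, 0, vertexCount, SetDataOptions.Discard);
            return index;
        }

        // Called when a chunk falls out of the draw distance.
        public void Release(int index)
        {
            free.Push(index);
        }

        public DynamicVertexBuffer Get(int index) { return buffers[index]; }
    }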

You can then look at other optimisations; for example, you'd ideally only want a dynamic buffer for the position and normal data within the mesh. The texture coordinates, indices, etc. will most likely be constant across all instances, so they can go in static buffers shared by every chunk.
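
With an XNA 4.0-style API that could look roughly like this (just a sketch, not code from the tutorial; the effect that consumes these streams is not shown):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch only: positions + normals live in a per-chunk dynamic buffer;
    // one static texture-coordinate buffer and one static index buffer are
    // shared by every chunk.
    public struct VertexPositionNormal : IVertexType
    {
        public Vector3 Position;
        public Vector3 Normal;

        public static readonly VertexDeclaration Declaration = new VertexDeclaration(
            new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal,   0));

        VertexDeclaration IVertexType.VertexDeclaration { get { return Declaration; } }
    }

    static class ChunkDrawing
    {
        // Assumes a suitable effect pass has already been applied, and that
        // sharedTexCoords / sharedIndices were built once at startup.
        public static void Draw(GraphicsDevice device,
                                DynamicVertexBuffer chunkPositionsNormals,
                                VertexBuffer sharedTexCoords,
                                IndexBuffer sharedIndices,
                                int vertexCount, int triangleCount)
        {
            device.SetVertexBuffers(
                new VertexBufferBinding(chunkPositionsNormals),
                new VertexBufferBinding(sharedTexCoords));
            device.Indices = sharedIndices;
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, vertexCount, 0, triangleCount);
        }
    }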

Does that make sense?