Quick idea I’ve had for a while, but likely won’t have time to try out, for getting rid of UV sets once and for all by just looking up textures with a 3D position.

Start with a coarse-grained 3D hash grid (i.e. a hash map keyed on grid cell coordinates), where each cell would represent, say, a 256^3 volume texture indexed by your vertex position. For compression (this volume data would be extremely sparse, usually little more than a flat 2D slice through it), it would use 3D Random Access trees (read this if you haven’t heard about primal vs dual subdivision – eye opener for me!), but probably with a higher branching factor to reduce the number of levels required. Probably 4^3 child pointers per node, or maybe 3^3. The grid cells are best viewed as “short cuts” into the RA-tree, rather than as each cell being its own tree.
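To make the shape of this concrete, here’s a rough sketch of the hash-grid-as-shortcut idea in Python (all names, the cell size, and the child-selection details are my own invention, not a real implementation):

```python
BRANCH = 4          # 4^3 child pointers per node, as suggested above
CELL_SIZE = 256.0   # each hash-grid cell covers a 256^3 world-space region

class Node:
    def __init__(self):
        self.children = {}   # (i, j, k) -> Node; sparse, most slots absent

def cell_coords(p):
    """World-space position -> coarse hash-grid cell coordinates."""
    return tuple(int(c // CELL_SIZE) for c in p)

def lookup(grid, p):
    """Follow the cell's shortcut node down toward the leaf containing p."""
    node = grid.get(cell_coords(p))                    # shortcut into the tree
    local = [(c % CELL_SIZE) / CELL_SIZE for c in p]   # position in cell, [0, 1)
    while node is not None and node.children:
        # pick which of the BRANCH^3 children contains the local position
        idx = tuple(min(int(l * BRANCH), BRANCH - 1) for l in local)
        local = [l * BRANCH - i for l, i in zip(local, idx)]
        node = node.children.get(idx)
    return node
```

A real version would of course live on the GPU with flat node pools and point samples rather than Python dicts, but the descent structure would be the same.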

This means we get a properly quadrilinear-filtered 3D lookup into a sparse tree at the cost of 8 fetches per level in the tree (if you’re going “huh, how does he get 8?” here, then read the aforementioned paper regarding the magic of primal subdivision as it pertains to filtering), so in the higher branching case it would be 3*8=24 fetches total. *But* we can cache a lot of this because of locality. You’d start traversing the tree in the *geometry shader* for the whole triangle (possibly tessellating it so it fits entirely within one hash grid cell, and obviously making sure that the nodes referred to by each such cell cover the neighbourhood, and not just what’s strictly internal to that cell, since a triangle may stick out a bit). Most triangles are small, so hopefully you could do 1-2 levels of traversal at the triangle level, and then only do the final 1-2 levels in the pixel shader (you may also store the leaf nodes as plain old 3D volume textures, or rather as parts of a single giant one).
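To see where the 8 fetches per level come from, here’s plain trilinear filtering written out (a generic sketch, not the paper’s scheme): filtering at any 3D position needs the 2x2x2 block of samples around it, and with primal subdivision you need that same 8-sample neighbourhood at each level of the descent, hence 3 levels * 8 = 24.

```python
import math

def trilinear(sample, x, y, z):
    """sample(i, j, k) fetches one texel; returns the filtered value.
    Touches exactly the 2x2x2 = 8 texels surrounding (x, y, z)."""
    i, j, k = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - i, y - j, z - k   # fractional position inside the cell
    v = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                v += w * sample(i + di, j + dj, k + dk)   # one of the 8 fetches
    return v
```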

If you want to get clever you could do some sort of tetrahedral tessellation of the volume (so rather than keeping a “cube” that you progressively make smaller to zero in on the desired sample, you keep a tetrahedron). This means you need fewer samples per level, but also makes my head hurt a bit so you’d need to think that through carefully (in particular how to sort out a primal-esque subdivision). This obviously would make the filtering look “different” in some sense, but it would still be smooth.
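The sample-count win is easy to see in isolation: inside a tetrahedron, barycentric weights blend just 4 corner values instead of the 8 corners of a trilinear cube. A toy illustration for the canonical tetrahedron (this sidesteps the actual hard part, which is making the subdivision scheme work):

```python
def tet_filter(values, p):
    """Blend the 4 corner values of the canonical tetrahedron with corners
    (0,0,0), (1,0,0), (0,1,0), (0,0,1), at a point p inside it."""
    x, y, z = p
    w = (1 - x - y - z, x, y, z)   # barycentric weights: 4 fetches, not 8
    return sum(wi * vi for wi, vi in zip(w, values))
```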

Clearly, you’d stop early in this hierarchy based on LOD.

For streaming, you’d store the active nodes of the trees in a big pool and just request the unavailable ones on the fly a la standard 2D virtual texturing.

Maybe you’d build a simple 3D volume texture “brick” to substitute for the most detailed node currently loaded to cut down on “manual” filtering.

Maybe you’d store multiple hash grids. For very small triangles (the majority of them, these days) you may be able to “short cut” much deeper into the tree than for big triangles. These grids would be extremely sparse so storage should be okay if you have some very coarse streaming on top of this. For very distant triangles you may need to start at the real root of the RA-tree instead of using the grids as a short cut.

This is very similar to what the Sparse Voxel Octree crowd (including myself) are doing, but we also rely on the tree itself to encode spatial occupancy, i.e. whether a cell in space has color or is empty. This eliminates the need for polygons to describe which points in space should be rendered.

I read through the slides and found that I had already derived most of the techniques from scratch for use in my voxel-octree rasterizer. The only major exception is that I’m not (yet) using codebooks for differential color compression. I’ll have to try it out, as storing CPU-friendly RGB24 values for every voxel accounts for about 80% of my memory usage, and this will get even worse if I add per-voxel normals.

The hierarchical incremental calculation provided by primal-subdivision trees is a very powerful tool. I use it to replace the traditional model->view matrix transformation, and it takes about 1/4 as many cycles to compute the screen-space position of a tree node. I’m currently looking at using quadratic interpolation of position instead of linear to allow “space warps”, which would let me animate these voxel octrees.
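For readers wondering how the incremental transform beats a matrix multiply per node, here is my reconstruction of the idea (not the commenter’s actual code): with an affine model->view transform, a child node’s transformed centre is the parent’s transformed centre plus a precomputed per-level offset along the transformed axes, so descending the tree costs a few adds per node instead of a full matrix transform.

```python
def make_level_axes(basis, depth):
    """Precompute transformed half-axis offsets for each subdivision level.
    basis = the three model->view-transformed axis vectors of the root cube."""
    axes = []
    for d in range(depth):
        s = 0.25 / (2 ** d)   # child centres sit +/- quarter-size from parent
        axes.append([[c * s for c in v] for v in basis])
    return axes

def child_centre(parent_centre, axes, level, octant):
    """octant = (sx, sy, sz), each +1 or -1, selecting one of the 8 children.
    Just 9 multiplies-by-sign and adds -- no matrix transform per node."""
    ax = axes[level]
    return [parent_centre[i]
            + octant[0] * ax[0][i]
            + octant[1] * ax[1][i]
            + octant[2] * ax[2][i]
            for i in range(3)]
```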

Still, it saddens me to see that the RATrees paper came out in 2007, and that you had this idea in 2010, yet AFAIK nobody has actually used this technique, even for 2D textures, for a production game.

Side note: I’m amazed at how prolific you are on the internet. I saw “ssylvan” commenting on Rust on github, recognizing the name but not remembering where from, and tracked you here. I used to look out for your comments on Reddit as they were some of the best. Definitely subscribing to this blog.

You’re rasterizing the tree directly? That’s pretty awesome! Are you documenting this somewhere or is it a private thing? I’d love to see how it goes!

Yep, walking the tree, culling, finding the transformed XYZ of each non-culled node, then drawing it as a cube on screen. My performance isn’t great, but it’s within an order of magnitude of being useful for game engines. I currently get 30-40FPS @ 720p on a derivative of the Sponza scene, with a 1024^3 grid. It uses 80MB for the octree, which currently only holds RGB24 color data.

There are two other people AFAIK working on this, or something similar: Atomontage and Euclideon, but they’re both closed source and hoping to make money from it (though Atomontage is much less greedy-sounding). I’m open source, but only doing this as a hobby.

The source is at https://bitbucket.org/BinarySplit/voxel but I frequently forget to push, and setting it up for compilation would take ages to explain >_<. My plan is just to post it on Reddit and similar places when I actually get something worth being excited over.