DEElekgolo
December 13th, 2009, 11:05 PM
All Taken from Polycount by d1ver. (http://boards.polycount.net/member.php?u=20394)
Extremely laconic exposition of the following text, which I figured might be useful to have in one place:
If you're interested in more details, I suggest you just take it from the top. And ready yourself to spend some time reading.
------------------------------------top------------------------------------------
Hey.
I felt like doing this paper, because.
Because becoming a video game artist isn't all that easy. And information, unfortunately, isn't all that widely available. I learned a lot from the materials other people were so kind to provide, and, being incredibly thankful, I see no other way to repay them but to spread the knowledge.
Information is too valuable a resource, and it's pretty much the only thing, besides confidence, that separates a newbie from a professional.
As time went by I've managed to accumulate some amount of information which I've never seen presented in one place and somewhat carefully systematized. It's just stuff that I've picked up all over the internet (especially from you guys), crammed together with what came out of my personal experience. This paper is aimed at people who are almost totally unfamiliar with the technical aspects of a videogame artist's work, and will hopefully help them paint a much clearer picture once they are done reading. I also tried to be pretty thorough, because I want to translate this paper into Russian for all those artists who can't get useful knowledge due to not knowing English. I won't be able to link them to all the great articles and forum threads, which means it had to be pretty much all in one place. So please pardon me if this won't become a discovery for you. Even though this goes out to those who need knowledge most, I hope that even some hardcore veterans might find something small but useful in here, or just have a place they can link somebody to when they encounter some familiar questions.
Aaand. I have one last thing to ask of you. Providing people with knowledge is a very responsible task. I don't pretend to know everything, and the last thing I want is to provide somebody with false information. This paper is just how I came to see things. But things vary so much and change so fast that I am worried. So if you encounter something in this text that you feel doubtful about and can clearly explain why, please don't hesitate to point it out. I didn't write this for the sake of writing it, but for the sake of people using it, so feedback would be really appreciated. And now on to the paper:
Video Game Artist's Hygiene. Theory and Practice.
A video game artist's work is all about efficiently producing an incredible-looking asset. In this article I won't speak about what makes an asset look good, but only about what makes its production efficient.
But before we proceed to speak about efficiency, it is crucial to clearly define what the term "efficient" actually means in relation to videogame asset production:
1. Your practice, whether you realize it or not, is all about learning to spend less of your resources to do more work. And speaking of resources spent to get things done, the most significant and valuable one is time. And not even just in your work, but in your whole life, and I suggest you don't forget that. From this, without any argument I hope, we can conclude that saving asset production time is efficient.
2. On the other hand, video game graphics, being displayed in real time, have to face certain restrictions. With each project each team tries to surpass the others and hopefully themselves, but the amount of memory and processing power consoles can offer doesn't get any bigger until the next generation hits the shelves. So in the name of making better, prettier games we need to make our assets engine-friendly. In other words, we could say that saving asset render time is efficient too.
From here this article splits into two parts. The first one is about things you need to know to produce an efficient asset. The second one is about things you need to do to make sure that the asset you've produced is efficient. Let's get rolling, people=)
Things you need to know
Brain Cells Do Not Recover
Even though most of the things to follow are about making your models more engine-friendly, please don't forget that this stuff is just an insight into how things work, which you should use as a guideline to know which side to approach your work from. And all of this shouldn't be time-consuming at all. It has to come naturally while you're working.
Saving your or your teammates' time is a much more important guideline. I would say that if an optimization would cause an additional hour of work for you or anyone further down the production pipeline, then you probably don't want it (provided you've been building your assets with the following information in mind from the start). An extra hundred tris won't make the fps drop through the floor. Unless you're going to save another draw call on an object that is heavily instanced or onscreen all the time, it's not worth it. Feeling work go smoothly makes for a happier team that is able to produce a lot more stuff a lot faster. Don't turn work into a struggle for anyone, including yourself. Always consider the needs of the people working with the stuff you made.
There's surely an endless amount of stuff you could do to save your or your colleagues' time, so I won't even try to cover it all, but I'd still like to throw some in as examples (which I may expand upon until it becomes worthy of some personal space):
- Check everything twice when you're done. If you're doing a model for rigging, for example, then any little thing you missed might force the animator to skin the model again.
- Use your UV for selection while making LoD models, if your model has a lot of repetitive parts.
- Your texture artists will thank you if you put the UV seams someplace hard to spot. Plus, please take some time to straighten your lines and keep them mostly perpendicular or parallel, to "utilize the square nature of the pixels" better.
http://img190.imageshack.us/img190/6408/squarenature.jpg
This adds incredibly to workability, while the distortion usually goes completely unnoticed. And don't forget some padding, especially if your piece will be seen from a distance a lot (see the small sketch after this list).
- Level designers will think you're the best guy to work with if you keep to the grid (if your game uses one) and always place the pivot point in the most convenient place, which you can ask them about.
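On the padding point above: a rule of thumb I use (my own, not from the original article) is that every mip level halves your padding, so a piece that is mostly seen from far away, i.e. on a small mip, needs more texels of padding at mip 0 to survive. A tiny C++ sketch of that arithmetic:

#include <cstdio>

int main()
{
    int padding = 16; // texels of padding around a UV shell at mip 0
    for (int mip = 0; padding > 0; ++mip, padding /= 2)
        printf("mip %d: %d texel(s) of padding left\n", mip, padding);
    // 16 texels at mip 0 still leave 1 texel at mip 4 (a 1024 map shrunk to 64).
    return 0;
}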
In the beginning there was Vertex.
Remember geometry for a second. When we have a dot, well, we have a dot. A dot, or a point, is the simplest element there is. If we move up to 2-dimensional space, we can operate with points in it too, but if we take two of them, we can define a line, the building block of 2-dimensional shapes. And if you take a closer look, you can see that a line is simply an endless number of points put alongside each other according to a certain rule, or a linear function as you would've said in high school. Now let's move a level up again. In 3-dimensional space we can operate with both points (or vertices) and lines. But if we add one more point to the two that defined a line, we can define a face. And a face is the building block that forms 3-dimensional shapes, the ones we are able to look at from different angles.
I think we are all used to receiving a triangle count as the main guideline for creating an asset or a character. And I think the fact that the triangle is the building block of 3-dimensional shapes has something to do with it. )
But that's the human way of thinking. We, humans, also have digits from 0 to 9, but hardware processors don't. It's just 0 and 1 - binary - the most basic number representation system. In order for a processor to execute anything at all, you have to break it into the smallest and simplest operations that it can solve consecutively. And I am terribly sorry for dragging you through these little technical and geometrical issues, but it was necessary to make you see that processors display 3d graphics the same way – from the basics. And even though a triangle is the building block of 3-dimensional shapes, it is still composed of 3 lines, which in their turn are defined by 3 vertices. So basically, it's not the tris that you have to save, but the vertices. Someone by now would say "Who cares? The lower the tri count, the fewer vertices there are!" And he'd be absolutely right. But unfortunately, the number of tris is not the only thing affecting your vert count. There's also some subtler stuff going on.
And I'm sorry we have to do this again, but here comes the programmer stuff. Keep it together, people=)
A 3d model stored in memory actually presents itself as a number of objects based on a vertex structure. Structures, speaking the language of object-oriented programming (figuratively), are predefined groups of different types of data and functions composed together to represent a single entity. There can be tons of such entities which all share the same variable types and functions, just with different values stored in them. Such entities are called objects. That may be a lousy explanation, and there's much more depth to it programming-wise. Here's a simplified example of what such a structure could look like:
struct Vertex
{
    float position[3];      // vertex coordinates
    float normal[3];        // vertex normal
    float uv[2];            // UV coordinates
    unsigned char color[4]; // vertex color (RGBA)
};
Very simplified.
If you think about it, it's really obvious that a vertex structure should only contain the necessary data. Anything redundant becomes a great waste of memory once your scenes hit the couple-of-million-vertices mark.
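To get a feel for the numbers, here's a tiny sketch (the layouts are my own assumption; real engines pack their vertex data differently) that estimates what one redundant attribute costs at scene scale:

#include <cstdio>

// Two hypothetical layouts, just for the arithmetic.
struct LeanVertex { float pos[3], normal[3], uv[2]; };          // 32 bytes
struct FatVertex  { float pos[3], normal[3], uv[2], uv2[2]; };  // 40 bytes, unused second UV set

int main()
{
    const long long verts = 2000000; // a couple of million vertices in the scene
    printf("lean: %lld MB\n", verts * (long long)sizeof(LeanVertex) / (1024 * 1024));
    printf("fat : %lld MB\n", verts * (long long)sizeof(FatVertex)  / (1024 * 1024));
    // Roughly 61 MB vs 76 MB - the unused UV set alone eats about 15 MB.
    return 0;
}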
There are a great deal of attributes that a vertex structure can hold, but I'll speak only about the artist-related ones (since I don't know anything about the others). As far as I know, a vertex structure has enough variables declared for only one set of attributes. What does that mean for us artists? It means that a vertex can't have 2 sets of UV coordinates, or 2 normals, or two material IDs. Now that's odd, 'cause we've all seen how smoothing groups (soft/hard edges) work, or applied multiple materials to objects, and everything was fine. Or at least it looked fine. How is that possible? Well, it appears that the most affordable way to add that extra attribute to a vertex is to simply create another vertex right alongside it!
Putting the technicalities aside for a moment: every time you set another smoothing group for a selection of polys or make a hard edge in Maya, invisibly to you, the number of border vertices doubles. The same goes for every UV seam you create, and for every additional material you apply to your model. If you want to create engine-friendly pieces you just have to take this into account. The guys at bigger companies sure as hell know this. Epic's Unreal Development Kit automatically compares the number of imported and generated vertices on asset import and warns you if the numbers differ by more than 25 percent. Those are pretty tight shoes to fill, but no one said producing efficient art was easy. If Epic's programmers consider it an issue serious enough to be checked on import, I suggest you think about it too.
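To make the splitting concrete, here's a minimal sketch of what an exporter effectively does (the names and layout are mine, not any particular engine's): each triangle corner is a (position, normal, UV) combination, and only identical combinations can share a GPU vertex. Every hard edge, UV seam or material border makes neighboring combinations differ, so extra vertices appear.

#include <cstdio>
#include <map>
#include <tuple>
#include <vector>

// One triangle corner as the exporter sees it: indices into the source arrays.
struct Corner { int position, normal, uv; };

// Corners that share a position but differ in normal or UV can't be merged.
static int countGpuVertices(const std::vector<Corner>& corners)
{
    std::map<std::tuple<int, int, int>, int> unique;
    for (const Corner& c : corners)
        unique.emplace(std::make_tuple(c.position, c.normal, c.uv), (int)unique.size());
    return (int)unique.size();
}

int main()
{
    // A quad (2 triangles) with a hard edge down the middle:
    // positions 1 and 2 are shared, but each triangle has its own normal.
    std::vector<Corner> corners = {
        {0, 0, 0}, {1, 0, 1}, {2, 0, 2},   // triangle A, normal 0
        {1, 1, 1}, {3, 1, 3}, {2, 1, 2},   // triangle B, normal 1
    };
    printf("source positions: 4, GPU vertices: %d\n", countGpuVertices(corners));
    // Prints 6: the two vertices along the hard edge got duplicated.
    // Make both triangles use the same normal index and it drops back to 4.
    return 0;
}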
Connecting the dots
This small chapter concerns the stuff that keeps those vertices together – the edges. The way they form triangles is important for an artist who wants to produce efficient assets. And not only because they define shape, but because they also define how fast your triangles are rendered, in a pretty non-trivial way.
How would you render a pixel that sits right on an edge shared by 2 triangles? You would render the pixel twice, once for each triangle, and then blend the results. And that leads us to a pretty interesting conclusion: the tighter the edge density, the more re-rendered pixels you'll get, and that means a bigger render time. This issue should hardly affect the way you model, but knowing about it can come in handy in some specific cases.
Triangulation is a perfect example of such a case. It's a pretty well-known issue that thin tris aren't all that good to render. That may be true for modeling, but when it comes to triangulation, making one triangle thinner means that, with the exact same action, you've made another one wider. Imagine zooming out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of re-rendering the same pixels.
http://img706.imageshack.us/img706/6472/triangulation.jpg
But if you neglect uniform triangulation and instead worry about making every triangle have the largest area possible (thus making it cover more pixels), you end up with triangles of consecutively decreasing area. Then, once you zoom out again, the areas with higher edge density are limited to a much smaller number of on-screen pixels. And the smaller the object becomes on screen, the fewer potentially redrawn pixels it'll have. You could also work this the other way around and start by making the triangle edges as short as possible. This makes for a more efficient asset and saves you some render time spent on multiple passes over the same pixel.
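If you'd rather have a number than an eyeballed judgment, a triangle's "fatness" can be measured by comparing its area to its perimeter. This is just my own quick illustration of the idea, not a tool from either package:

#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

static float dist(Vec2 a, Vec2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Returns ~1.0 for an equilateral triangle and approaches 0.0 for a sliver.
static float fatness(Vec2 a, Vec2 b, Vec2 c)
{
    float area = 0.5f * std::fabs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
    float perimeter = dist(a, b) + dist(b, c) + dist(c, a);
    return 12.0f * std::sqrt(3.0f) * area / (perimeter * perimeter);
}

int main()
{
    printf("equilateral-ish: %.2f\n", fatness({0, 0}, {1, 0}, {0.5f, 0.866f}));
    printf("sliver:          %.2f\n", fatness({0, 0}, {1, 0}, {0.5f, 0.02f}));
    return 0;
}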
Eating in portions makes for a fuller stomach.
Just as your engine draws your object triangle by triangle, it draws the whole scene object by object. In order for your object to be rendered, a draw call must be sent. Since hardware is created by humans, it's pretty bureaucratic.) You can't just go ahead and render everything you want; first you've got to do some preparation. The CPU (central processing unit) and the GPU (graphics processing unit) share the duties somewhat like this: while the GPU goes ahead and just renders stuff, the CPU gathers information and prepares the next batches to be sent to the GPU. What's important for us here is that if the CPU is unable to supply the GPU with the next batch by the time it's finished with the current one, the GPU has nothing to do. From this we can conclude that rendering an object with a small number of tris isn't all that efficient. You'll spend more time preparing for the render than on the render itself, and waste the precious milliseconds your graphics card could be spending crunching some sweet stuff.
The number of tris a GPU can render before the next batch is ready to be submitted varies significantly, but here are some examples. I've seen it somewhere on the UDN that for Unreal Engine 3 the numbers are between 1000 and 2000 triangles. While working with the BigWorld engine we set the bar at 800, even though some of the programmers said it could be around 1000.
Defining such a number for your project would be an incredible help for your art team. It would save a real ton of both production and render time. Plus it’ll serve as a perfect guideline for artists to solve some tricky situations completely on their own.
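That "free triangles per draw call" figure is easy to estimate for your own project if you can get two rough throughput numbers from your programmers. Both values below are placeholders I made up for the arithmetic:

#include <cstdio>

int main()
{
    const double trisPerSecondGPU      = 150e6;  // triangles the GPU can push per second (placeholder)
    const double drawCallsPerSecondCPU = 100000; // draw calls the CPU can prepare per second (placeholder)

    // While the CPU sets up one more draw call, the GPU could have rendered this many tris "for free":
    printf("break-even batch size: ~%.0f tris\n", trisPerSecondGPU / drawCallsPerSecondCPU);
    return 0;
}

With these made-up numbers the break-even lands at around 1500 tris, which is in the same ballpark as the UE3 and BigWorld figures above.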
You'd want a less detailed model only when there's really no point in making it more complex and you'd have to spend extra time on things no one would ever notice. And that luckily works the other way around too – you wouldn't want to make your model lower-poly than this, unless you have specific reasons. Plus, the reserve of tris you have could be well spent on shaving off those invisible vertices mentioned earlier. Add some chamfered edges and fearlessly assign one smoothing group to the whole object (make all the edges soft). It may sound weird, but by having a smoother, more tessellated model you could actually help performance.
If you'd like your game to be more efficient, try to avoid making very low-poly objects into single independent assets. If you are making a tavern scene, you really don't want to have every fork, knife and dish hand-placed in the game editor. You'd rather combine them into sets or even combine them with a table. Yes, you'd have less variety, but believe me, when done right, no one will even notice.
Another plus is that objects of such a polycount require no LoD models. Having them would actually hurt performance, because you'll have to spend time and resources on producing, exporting and swapping the models in-game, while their render time will remain identical.
But this in no way means that you should run around applying TurboSmooth to everything. There are some things to watch out for, like stencil shadows, instancing and even vertex lighting, to name a few. Plus, some engines combine multiple objects into a single draw call, so watch out. But I'll speak about that at the very end of the "Things You Need To Know" part of this paper.
Vertex VS Pixel
If I asked you, as an artist, what the main difference is between art production for the last and current generations of consoles, what would you say?
I'm pretty damn sure that the most common answer would be the introduction of per-texel shading, and the use of multiple textures to simulate different physical qualities of a single surface becoming the de facto standard. Yeah, sure, polycounts have grown, animation rigs now have more bones and procedurally generated physical movement is everywhere. But normal and spec maps are the ones contributing the biggest visual difference. And this difference comes at a price. Nowadays I hear the terms "fill rate driven engine", "fill rate bound engine" and "fill rate oriented engine" thrown around more and more. Those terms didn't come out of nowhere; the reason behind their appearance is that, in modern engines, most of an object's render time is spent processing and applying all those maps based on the incoming lights and the camera's position.
From the viewpoint of an artist who strives to produce efficient art, this means the following:
Optimizing your materials is much more fruitful than optimizing vertex counts. Adding an extra 10, 20 or even 500 tris isn't nearly as stressful for performance as applying another material to an object. Shaving hundreds of tris off your model will hardly ever bring a bigger bang than deciding that your object could do without an opacity map, or a glow map, or bump offset, or even a specular map. Kevin Johnstone of Epic Games once said that while working on Unreal Tournament 3 he optimized a single level by 2-3 million triangles just to gain somewhere around 2-3 fps. I think this example makes it obvious. It's not the tri count that affects performance the most. I'd say it's the number of draw calls and the shader and lighting complexity that count. Then there are vertex transformation costs when you have some really complex rigs or a lot of physically controlled objects. And post-processing.
Surely, as an artist you have a lot more control over your materials than over the lighting, but there are still some things you can do, depending on your engine.
If you know you're going to have dynamic lighting in the scene, then you don't want a huge object of which only a small part will be lit dynamically at any given moment. Break it into smaller pieces. For example, if you are doing a haunted hotel scene where the player has to navigate dark hallways, lighting his way with a flashlight, you'd rather have every chandelier on the wall be a separate object, even if it's like 30-50 tris. It may seem logical, in order to optimize things, to go ahead and attach all the chandeliers into a single object, since they are pretty low-poly and share the same material, but all the profit that comes of it wouldn't compare to the cost of processing dynamic lighting every frame for an object so widely dispersed across the level.
Even though I am speaking from my Unreal Engine 3 experience, I believe those guys know what they are doing, and their knowledge can be taken into account. If your engine gives you a choice between vertex lighting and lightmapping, you'd want to go with the latter.
First of all, because with vertex lighting you need to store in memory the data for every single vertex you have, and that kind of makes you wish you had fewer vertices; but since we've figured out that we get them for free until the next batch is ready, we'd rather put them to good use.
You could use a 128, a 64, or in some cases even a 32 by 32 lightmap that would still look smoother than vertex lighting but eat up a lot less memory.
Plus, since a lightmap is pretty much an ordinary bitmap, you can weave it into your texture streaming pipeline and not affect the overall texture memory budget. And I can hardly think of a way to make vertex lighting almost free memory-wise, so lightmaps for the win.
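A back-of-the-envelope comparison (the formats and sizes are my assumptions; engines store both things in all kinds of ways):

#include <cstdio>

int main()
{
    // Assumed storage: 4 bytes of RGBA per vertex vs. a DXT1-compressed lightmap at 0.5 byte per texel.
    const int vertices    = 5000; // a mid-sized static mesh
    const int lightmapDim = 32;   // a 32x32 lightmap

    printf("vertex lighting: %d bytes\n", vertices * 4);
    printf("32x32 lightmap : %d bytes\n", lightmapDim * lightmapDim / 2);
    return 0;
}

Even with a generous vertex count the lightmap side of the scale stays tiny here, and on top of that it can be streamed.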
If you want to make your asset a bit more engine-friendly and your engine supports lightmapping, then I suggest you don't hesitate to make a second UV set for the lightmaps.
The most important thing
After all the things said, there's still one most important thing that you need to know. And that is that things differ. Sometimes dramatically. As with everything in life, there's no universal recipe, and the best thing you can do is figure out what your specific case looks like. Get all the information you can from the people responsible. No one knows your engine better than the programmers. They know a lot of stuff that could be useful for artists, but sometimes, due to a lack of dialogue, this information stays with them. Miscommunication may lead to problems that could've been easily avoided, or be the reason you've done a shit ton of unnecessary work or wasted a truckload of time that could've been spent much more wisely. Speak up; you're all making one game after all, and your success depends on how well you're able to cooperate. Communication with programmers could actually be the job of your lead or a tech artist, so you could just ask them instead. Asking has never hurt anyone, and it's actually the best way to get an answer.
Dalai Lama once said:
“Learn your rules diligently, so you would know where to break them.”
And I can do nothing but agree with him. Obeying rules all the time is the best way to never do anything original. All rules and restrictions have some solid arguments to back them up, and they fit some general conditions. But conditions vary. If you take a closer look, every other asset could be an exception to some extent. Ideally, having faced some tricky situation, artists should be able to make decisions on their own, sometimes even break the rules, if they know that the project will benefit from it and that breaking the rules won't hurt anything. But if you don't know the facts behind the rules, I doubt you would ever go breaking them. So I seriously encourage you to take an interest in your work. There's more to video game production than art.
If you're doing freelance work and you feel like your model would really benefit from those extra tris, then I'd say you ask. It is in the best interest of the people you're working for. Plus, if for some reason they are unaware of all the stuff listed above, then there's a big chance you've helped them a lot – and that's some respect points for you.)
Things you need to do
So I hope this wall of text up here made some sense to you guys.) I find all this information on how stuff works really useful, but it's not exactly what you would use on a day-to-day basis. It's nice to read once, but I doubt I myself would want to go through it again if I forget something. And though I think the "whys" are important, as an artist (or an artist wannabe) I'd love to have a place where all the "hows" are clearly stacked, without any other distracting information. The "whys" section would then serve as a reference you can turn to in case something becomes unclear.
So, while making an asset or a character you'd want to keep these things in mind all the time:
It's not the tri count that matters, but the vertex count. Smoothing groups (soft/hard edges), UV seams and multiple materials increase the number of verts, so you want to have as few of those as possible. Vert counts hardly matter at all as long as they don't add another draw call, so feel free to find the spare ones a proper use. Uber-low-poly models are bad for performance.
No LoDs for objects with fewer tris than the GPU can crunch while the CPU submits the next batch.
Triangulate using “Max Area” or “Shortest Edge” principles.
Materials(Shaders) are the most fruitful thing to optimize.
If you know you’re going to have dynamic lighting, then try to break bigger objects into smaller ones.
Lightmapping is more efficient than vertex lighting, so try to lightmap static meshes and make them lightmap-ready if your engine supports it.
Workability is still king.
Now let's imagine you're finally done with an asset. You'd want to make sure things are clean and engine-friendly. Here's the list of things to check, one after another:
- Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack
Transformation information stored in a model can prevent it from being displayed correctly, making all further checks useless. Plus, it's simply unacceptable for import into some engines. And even if it does import, the object's orientation and normal directions could be messed up.
http://img138.imageshack.us/img138/6554/maxcollapsexform.jpg
In Maya, don’t forget to select your object.
http://img340.imageshack.us/img340/3876/mayafreezehistory.jpg
- Inverted Normals
While mirroring (scaling by a negative number), or actually while performing a ton of other operations, your vertex normals can get turned inside out. You need to have the right settings in your modeling application in order to spot such problems.
In 3ds Max you can go to the object properties and turn "Backface Cull" on. Then examine your mesh.
http://img163.imageshack.us/img163/5918/maxbackfacecull.jpg
In Maya you can just disable "Double Sided Lighting" in the Lighting tab (if it's missing, hit "shift+m"), then make sure that in the Shading tab "Backface Culling" is disabled. Then, if you check out your model shaded, all the places with inverted normals will be black.
http://img193.imageshack.us/img193/4717/mayabackfacecull.jpg
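If you're curious why mirroring does this: a negative scale flips the triangle winding order, which is what "inverted normals" usually boils down to. A quick sketch of the check (my own illustration, not a feature of either package):

#include <cstdio>

int main()
{
    // A mirror along X is a scale of (-1, 1, 1). For a pure scale matrix the
    // determinant is just the product of the three factors; a negative
    // determinant means the winding (and with it the normals) gets flipped.
    const float scale[3] = { -1.0f, 1.0f, 1.0f };
    const float det = scale[0] * scale[1] * scale[2];

    if (det < 0.0f)
        printf("determinant %.1f: winding flipped, normals point the wrong way\n", det);
    return 0;
}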
- Mesh splits/ Open Edges
It sometimes happens that while working we forget to weld some vertices, or accidentally break/split some. Not only can this cause lighting and smoothing issues, it's also a waste of memory and pretty much a sign of sloppy work. You don't want that.
Open edges are an issue you want to think twice about. Not only because in some cases they can put additional stress on dynamic lighting computations, but because they seriously reduce the reusability of your asset. Even if you simply close the gap and find a place on your texture you can throw this new shell onto, that is still preferable.
To detect both of these issues in 3ds Max, simply choose the border selection mode ("3" by default) and hit select all ("ctrl + a" by default).
http://img682.imageshack.us/img682/6083/maxmeshsplits.jpg
In Maya you can use a handy tool called "Custom Polygon Display". Turn on the "Highlight: Borders" option and apply it to your object.
http://img340.imageshack.us/img340/2555/mayaborderedges.jpg
- Multiple edges/ Double faces
This double stuff is a nasty bugger, since it's almost impossible to spot unless you know how. And sometimes, when you modify things, you can get very surprised by them not behaving the way they should.
I can hardly remember ever getting them in Max, but just to be sure I always apply an STL Check modifier. Tick the appropriate radio button and check the "Check" checkbox.)
http://img706.imageshack.us/img706/5271/maxstlcheck.jpg
In Maya the "Cleanup" tool is very useful. Just check "Nonmanifold geometry" and "Lamina faces" and hit apply.
http://img138.imageshack.us/img138/6838/mayacleanup.jpg
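Both of the last two checks boil down to the same bookkeeping: count how many faces use each edge. A border/open edge is used by exactly one face, a clean interior edge by two, and anything used by more than two is non-manifold or doubled. Here's a small sketch of the principle (not how Max or Maya actually implement it):

#include <algorithm>
#include <array>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

int main()
{
    // Two triangles sharing edge 1-2, plus an exact duplicate of the second one (a lamina face).
    std::vector<std::array<int, 3>> tris = { {0, 1, 2}, {1, 3, 2}, {1, 3, 2} };

    std::map<std::pair<int, int>, int> edgeUse; // edge (smaller index first) -> number of faces using it
    for (const auto& t : tris)
        for (int i = 0; i < 3; ++i)
        {
            int a = t[i], b = t[(i + 1) % 3];
            edgeUse[{ std::min(a, b), std::max(a, b) }]++;
        }

    for (const auto& [edge, uses] : edgeUse)
    {
        if (uses == 1) printf("edge %d-%d is open (border)\n", edge.first, edge.second);
        if (uses > 2)  printf("edge %d-%d is shared by %d faces (non-manifold/doubled)\n", edge.first, edge.second, uses);
    }
    return 0;
}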
- Smoothing groups/soft-hard edges
For this one you want to use as few smoothing groups/hard edges as possible. You might consider making it all smooth and just adding some extra chamfers where bad lighting issues start to appear.
Plus, there's one more issue to watch out for, more in Maya than in 3ds Max though, since Max utilizes the smoothing group concept:
Edges on planar surfaces will appear smooth even if they are not. To see which edges are actually unsmoothed, the "Custom Polygon Display" tool comes in handy again. Just click the "Soft/Hard" radio button right alongside "Edges:".
http://img706.imageshack.us/img706/7755/mayasofthard.jpg
- UV splits
You'd like your UVs to have the fewest seams possible, but only as long as they stay nice to work with. No need to go over the top with distortion here; just keep it clean and logical.
Broken/split vertices are a thing to watch out for too. 3ds Max indicates them with a different color inside the "Edit UVWs" window.
http://img682.imageshack.us/img682/5232/maxuvsplit.jpg
In Maya's "UV Texture Editor" window you have the "Highlight Edges" button, which simply checks the "Highlight: Texture Borders" checkbox in the "Custom Polygon Display" tool for you.
http://img138.imageshack.us/img138/4157/mayauvsplits.jpg
- Triangulation
While checking the triangulation, first of all make sure that all the triangles accentuate the shape you're trying to convey rather than contradict it. Then take a quick glance to check whether the triangulation is efficient.
Plus, some engines have their own triangulation algorithms and will re-triangulate a model on import, with no concern for how you thought your triangulation looked. In trickier places this can lead to a messy result, so take care, investigate how your engine works, and connect the vertices by hand if necessary. By the way, Maya more or less helps you find such places if you check "Concave faces" in the "Cleanup Options". In 3ds Max you'll just have to keep an eye out yourself.
http://img189.imageshack.us/img189/3761/mayaconcave.jpg
- Grid Alignment/ Modularity/Pivot point placement
Since the last generation of videogames, graphics production costs have increased significantly, so modularity and extensive reusability are now a very common thing. Ease of implementation and combination with different assets can save a lot of time, maybe not even yours, so don't make your level designers hate you – think about it.
- Material Optimization
Evaluate your textures and materials again, since they're probably the biggest source of optimization. Maybe the gloss map doesn't deliver at all and the specular is doing great on its own? Maybe the asset would still hold up with a specular map half the size? Or maybe you can get away with using the diffuse as the specular, since it's just a small background asset? Maybe that additional tileable normal isn't necessary at all? Or maybe you could go with a grayscale spec and use the spare channels for something else?
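To put rough numbers on those questions (the sizes and compression format are my assumptions; your pipeline will differ):

#include <cstdio>

int main()
{
    // Assumed DXT5 compression: 1 byte per texel, mip chain not counted.
    const int fullSpec = 1024 * 1024 * 1; // a standalone 1024 specular map
    const int halfSpec = 512 * 512 * 1;   // the same map at half resolution
    printf("1024 spec: %d KB, 512 spec: %d KB, packed into an unused channel of an existing map: ~0 KB extra\n",
           fullSpec / 1024, halfSpec / 1024);
    return 0;
}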
- Lightmapping possibility
If your engine supports lightmapping, make sure you have a spare set of uniquely unwrapped UVs that fully meets your engine's requirements.
------------------------------------------------------------------------------------
Afterword
With everything said, there's still one more thing left.
Please remember,
no matter how technical and optimized your model is, it's meant to look beautiful first of all. No optimization can be an excuse for an ugly model. Optimization is not what artists do. Artists do art. And that's what you should concentrate on.
The beauty of technicalities is that they can be precise to a pretty large extent. Which means you can write them down, memorize them and not bother thinking about them again for quite a while, just remember them instead.
But it’s art where you have to evaluate and make millions of decisions every single second.
All this text is not important, and that is exactly why it is written down. Really important things aren't all that easy to express with words. And I hope that maybe, if you don't have to bother thinking about all the tech crap at least for some time, you'll concentrate on the stuff that's much more important, and prettier I hope.
Cheers.
------------------------------------------------------------------------------------
Very helpful links which I owe most of my information to (I suppose most of you know them though):
Beautiful, Yet Friendly Part 1: Stop Hitting the Bottleneck (http://www.ericchadwick.com/examples/provost/byf1.html)
Beautiful, Yet Friendly Part 2: Maximizing Efficiency (http://www.ericchadwick.com/examples/provost/byf2.html) - a series of deep and easily understandable articles by Guillaume Provost
Unreal Developer Network (http://udn.epicgames.com/Main/WebHome.html) - contains a ton of useful information.
Humus on triangulation (http://www.humus.name/index.php?page=Comments&ID=228) - some amazing tests.
Too much optimisation thread at Polycount (http://boards.polycount.net/showthread.php?t=50588) - that's where I first encountered most of the terms I've been talking about...
tl;dr? Sucks for you.
Extremly laconic exposition of the following text, which I figured Might be usefull to have in one place:
If you're interested in more details, I suggest you just take it from the top. And ready yourself to spend sometime reading.
------------------------------------top------------------------------------------
Hey.
I felt like doing this paper, because.
Because becoming a video game artist isn’t all that easy. And information unfortunately isn’t all that widely available. I learned a lot from the materials other people were so kind to provide and being incredibly thankful I see no other way to repay them, but to spread the knowledge.
Information is too much of a valuable resource and it’s pretty much the only thing, except confidence, that separates a nooby from a professional.
As time went buy I’ve managed to accumulate some amount of information, which I’ve never seen being presented in one place and somewhat carefully systematized. It’s just stuff that I’ve picked up all over internet (especially from you guys), and that came out of my personal experience crammed together. This paper is aimed at people, who are almost totally unfamiliar with technical aspects of videogame artists work and, will hopefully help them paint a much clearer picture, once they are done reading. I also tried to be pretty thorough, because I want to translate this paper into russian for all those artist, who can’t get useful knowledge due to not knowing english. I won’t be able to link them to all the great articles and forum threads, which means it had to be pretty much all in one place. So please pardon me if this won’t become a discovery for you. Even though this goes out to those, who need knowledge most, I hope, that even some hardcore veterans might find something small but useful in here, or just have a place where they can link somebody, when they encounter some familiar questions.
Aaand. I have one last thing to ask of you. Providing people with knowledge is a very responsible task. I don’t pretend to know everything and, least of all I want to provide somebody with false information. This paper here is just how I came to see things. But things vary so much and change so fast, that I am worried. So if you encounter in this text something you feel doubtful about and can clearly explain why, please don’t hesitate to point this out. I didn’t write this for the sake of writing it, but for the sake of people using it, so feedback would be really appreciated. And now on to the paper:
Video Game Artists Hygiene. Theory and practice.
Video Game Artists work is all about efficiently producing an incredible looking asset. In this article I won’t speak about what makes an asset look good, but only about what makes it’s production efficient.
But before we proceed speaking about efficiency it is crucial to clearly define, what term efficient actually means, related to videogame asset production:
1. Your practice, whether you realize it or not, is all about learning to spend less of your resources to do more work. And speaking about resources spent to have things done, the most significant and valuable one would be time. And not even in your work, but in your whole life, and I suggest you don’t forget that. This, without any argument I hope, concludes, that saving asset production time is efficient.
2. On the other hand, video game graphics, being displayed in real time, have to face certain restrictions. With each project each team tries to surpass the others and hopefully themselves, but the amount of memory and the processing power consoles can offer doesn’t get no bigger until the next generation hits the shelves. So in the name of making better, prettier games we need to make our assets engine-friendly. In other words, we could say, that saving asset render time is efficient too.
From here this article splits in two parts. The first one will be about things you need to know to produce an efficient asset. The second one will be about things you need to do, to make sure, that the asset you’ve produced is efficient. Let’s get rolling, people=)
Things you need to know
Brain Cells Do Not Recover
Even though most the of things to follow, are about making your models more engine friendly, please don’t forget, that this stuff is just an insight on how things work, that you should use a guideline, to know which side to approach your work from. And all of this shouldn’t be time consuming at all. It has to come naturally while you’re working.
Saving your or your teammates time is much more important of a guideline. I would say if your optimization would cause an additional hour of work for you or anyone further down the production pipeline, than you probably don’t want it. (provided you’ve been building your assets with following information in mind from the start). Extra hundred of tris won’t make fps drop below the floor. If you ain’t gonna save another draw call for an object, which is going to be heavily instanced or onscreen all the time, than it’s not worth it. Feeling work go smooth would make for a happier team, that is able to produce a lot more stuff a lot faster. Don’t turn work into struggle for anyone including yourself. Always take concern in needs of people working with stuff you did.
There’s sure an endless amount of stuff you could do to save your or colleagues time, so I wont even try to cover them all, but still I’d like to throw some in as an example (which I may expand upon until it becomes worthy of some personal space):
- Check everything Twice when you’re done. If you’re doing a model for rigging for example, then any little thing you missed might cause the animator to skin the model again.
- Use your UV for selection while making LoD models, if your model has a lot of repetitive parts.
- Your texture artists would thank you if you’d make that UV seams someplace hard to spot. Plus please take some time and try to straighten your lines and keep them mostly perpendicular or parallel, to “utilize the square nature of the pixels” better.
http://img190.imageshack.us/img190/6408/squarenature.jpg
This incredibly adds to workability, while distortion usually goes completely unnoticed. And don’t forget some padding, especially if your piece will be seen from a distance a lot.
- Level designers would think you’re the best guy to work with if you’ll keep to the grid (if you game uses one) and always place the pivot point in the most comfortable place, which you can ask them about.
In the beginning there was Vertex.
Remember geometry for a second. When we have a dot we, well, we have a dot. A dot is a unit of 1 dimensional space. If we move up to 2 dimensional space, we’d be able to operate with dot’s in it too. But if we take two of them, then we’d be able to define a line. A line is a building block of 2 dimensional space. But if you take a closer look, you can see that a line is simply an endless number of dots put alongside each other according to a certain rule or a linear function as you would’ve said in high school. Now lets move a level up again. In 3 dimensional space we can operate with both dots(or vertices) and lines. But, if we add one more dot to the previous two, that defined a line, we’d be able to define a face. And a face would be a building block in 3 dimensional space, that forms shapes, which we are able to look at from different angles.
I think we are all used to receiving triangle count as a main guideline for creating an asset or a character. And I think the fact of it being a building block of a 3 dimensional space has something to do with it. )
But. That’s in human way of thinking. We, humans, also have numbers from 0 to 9, but hardware processors don’t. It’s just 0 and 1 - binary - the most basic number representation system. In order for a processor to execute any thing at all, you have to break it into the smallest and simplest operations that it can solve consecutively. And I am terribly sorry for dragging you through this little technical and geometrical issues, but it was necessary, to make you see that processors work the same way in order to display 3d graphics – from the basics. And even though a triangle is a building block of 3 dimensional space, it is still composed of 3 lines, which in their turn are defined by 3 vertices. So basically, it’s the not the tris that you have to save, but vertices. Someone by now would say “Who cares? The less the tri count the less vertices there are!” And he’d be absolutely right. But unfortunately, the number of tris is not the only thing affecting your vert count. There’s also some a bit more subtle stuff going on.
And I’m sorry we have to do this again, but here comes programmers stuff. Keep it together, people=)
A 3d model stored in memory actually presents itself as a number of vertex structure based objects. Structures, speaking object oriented programming language(figuratively), are predefined groups of different types of data and functions composed together to present a single entity. There could be tons of such entities which all share the same variable types and functions, just different values stored in them. Such entities are called objects. That could be a lousy explanation, but there’s much more depth to it programming wise. Here’s a simplified example of how a structure would look like:
Vertex structure
{
Vertex Coordinates;
Vertex Color;
Vertex Normals;
UV Coordinates;
};
Very simplified.
If you think of it, it’s really obvious that vertex structure should only contain necessary data. Anything redundant could become a great waste of memory, when your scenes hit a couple of millions vertexes mark.
There are a great deal of attributes that a vertex structure incapacitates but, I’ll speak only about artist related ones (since I don’t know anything about the others). As far as I know, a vertex structure has enough variables declared for only one set of arguments. What does it mean for us, artists? It means that a vertex can’t have 2 sets of uv coordinates, or 2 normals, or two material ID’s. Now that’s odd, ‘cause we all have seen how smoothing groups(soft\hard edges) work, or applied multiple materials at objects and everything was fine. Or at least it looked fine. How is that possible? Well, it appears, that the most affordable way, to add that extra attribute to a vertex is to simply create another vertex right alongside it!
Speaking a bit more unrelated to technicalities, every time you set another smoothing group for a selection of polys or make a hard edge in maya, invisibly to you, the number of border vertices doubles. The same goes for every UV seam you create. And for every additional material you apply to your model. If you want to create engine friendly pieces you just have to take this into account. The guys at bigger companies sure as hell know this. Epics Unreal Development Kit automatically compares the number of imported and generated vertices on assets import and warns you if the numbers differ for more than 25 percents. Those a pretty tight shoes to fill, but no one said producing efficient art was easy. If Epics programmers consider it an issue serious enough to be checked upon import, I suggest you should think about it.
Connecting the dots
This small chapter here concerns the stuff that keeps those vertices together – the edges. The way they form triangles is important for an artist, who wants to produce efficient assets. And not only because they define shape, but because they also define how fast your triangles are rendered in a pretty non trivial way.
How would you render a pixel if it’s right on the edge that 2 triangles share? You would render the pixel twice, for both triangles and then blend the results. And that leads us to a pretty interesting concept, that the tighter edge density, the more rerendered pixels you’ll get and that means bigger render time. This issue should hardly affect the way you model, but knowing about it could come in handy in some other specific cases.
Triangulation would be a perfect example of such a case. It’s a pretty known issue, that thin tris aren’t all that good to render. And that could be right while talking about modeling. But talking about triangulation, if you’re saying you’ve made a triangle thinner would mean, that with the exactly same action, you’ve made another one wider. Imagine if we zoom out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of rerendering same pixels will be.
http://img706.imageshack.us/img706/6472/triangulation.jpg
But, if you neglect uniform triangulation and worry about making every triangle have the largest area possible(thus making it incapacitate more pixels), so in the end you’d get triangles with consecutively decreasing area sizes then once you zoom out again the amount of areas with higher edge density would be limited to a much smaller number of on-screen pixels. And the smaller the object becomes on screen, the smaller amount of potentially redrawn pixels it’ll have. You could also try to work this the other way around, and start with making the triangle edges shortest possible. This would make for a more efficient asset and save you some render time on doing multiple passes for the same pixel.
Eating in portions, makes for a fuller stomach.
Exactly the way your engine draws your object triangle by triangle, it draws the whole scene object by object. In order for your object to be rendered – a draw call must be sent. Since hardware is created by humans it’s pretty much bureaucrtical.) You can’t just go ahead and render everything you want. First you’ve got to have some preparation done. CPU(central processing unit) and GPU(graphics processing unit) share the duties somewhat like this: While GPU goes ahead and just renders stuff, CPU gathers information and prepares next batches to be sent to GPU. What’s important for us here, is that, if CPU is unable to supply GPU with the next batch by the time it’s finished with the current, the GPU has nothing to do. From this we can conclude that rendering an object with a small amount of tris isn’t all that efficient. You’ll spend more time preparing for the render, then on the render itself and waste the precious milliseconds your graphics card could be crunching some sweet stuff.
The number of tris GPU can render until the next batch is ready to be submitted significantly varies, but here are some examples. I’ve seen it somewhere in the UDN that for Unreal Engine 3 numbers are in between a 1000 and 2000 triangles. While working with BigWorld engine we’ve set the barrier at 800 even though some of the programmers said that it could be around a 1000.
Defining such a number for your project would be an incredible help for your art team. It would save a real ton of both production and render time. Plus it’ll serve as a perfect guideline for artists to solve some tricky situations completely on their own.
You’d want to have a less detailed model, only when there’s really no point in making it more complex, and you’d have to spent some extra time on things no one would ever notice. And that luckily works the other way around – you wouldn’t want to make your model lowpolier than this, unless you have your specific reasons. Plus the reserve of tris you have could really be well spent on shaving of those invisible vertices mentioned earlier. Add some chamfered edges and fearlessly assign one smoothing group to the whole object(make all the edges soft). It may sound weird but by having a smother more tessellated models you could actually help the performance.
If you’d like your game to be more efficient try to avoid making very low polygonal objects a single independent asset. If you are making a tavern scene, you really don’t want to have every fork, knife and dishes hand placed in the game editor. You’d rather combine them into sets or even combine them with a table. Yeah, you would have less variety, but believe me, when done right, no one would even notice.
Another plus is that objects of such polycount require no LoD models. And having them, would actually hurt the performance, because you’ll have to spend time and resources on producing, exporting and swapping the models ingame, while their render time will remain identical.
But this in no case means that you should run around applying turbosmooth to everything. There are some things to watch out for. Like stencil shadows, instancing and even vertex lighting, to name a few. Plus some engines combine multiple objects in a single drawcall, so watch out. But I’ll speak about that in the very ending of “Things You Need To Know” part of this paper.
Vertex VS Pixel
If I’d ask you, as an artist, what is the main difference between art production for last and current generation of consoles, what would you say?
I’m pretty damn sure, that the most common answer would be introduction of per texel shading and use of multiple textures, to simulate different physical qualities of a single surface, becoming standard de facto. Yeah sure polycounts have grown, animation rigs now have more bones and procedurally generated physical movement is everywhere. But normal and spec maps are the ones contributing the most visual difference. And this difference comes at a price. Nowadays I hear terms “fill rate driven engine”, “fill rate bound engine” and “fill rate oriented engine” thrown around more and more. All those terms didn’t come out of nowhere and the reason behind there appearance is, that, in modern day engines, most of objects render time is spent processing and applying all those maps based on incoming lights and cameras position.
From a viewpoint of an artist, who strives to produce effective art this means following things:
Optimizing your materials is much more fruitful, than optimizing vertex counts. Adding an extra 10, 20 or even 500 tris ain’t as nearly as stressing for performance as applying another material on an object. Shaving hundreds of tris off your model would hardly ever bring a bigger bang, than deciding that your object could do without an opacity map, or glow map, or bump offset or even a secular map. Kevin Johnstone of Epic Entertainment once said, that while working on Unreal Tournament 3 he optimized a single level by 2-3 millions triangles just to gain somewhat around 2-3 fps. I think this example makes it obvious. It’s not the tri count, that affects performance the most. I’d say it’s the number of draw calls and shader and lighting complexity that counts. Then there are vertex transformation costs when you have some really complex rigs or a lot of physically controlled objects. And post processing.
Surely, as an artist you have a lot more control over your materials, rather then the lighting, but there are still some things you can do, depending on your engine.
If you know you’re going to have dynamic lighting in the scene, then you don’t want to have huge object only a small part of which will be lit dynamically at a single moment of time. Break it into smaller pieces. Or for example if you are doing a haunted hotel scene, where the player will have to navigate dark hallways, lighting his way with a flashlight, you’d rather have every chandelier on the wall a separate object. Even if its like 30-50 tris. It’s may seem logical, in order to optimize things, to go ahead and attach all the chandeliers into a single object, since they are pretty low poly and share the same material, but, all the profit that comes of it, wouldn’t compare with the stress ‘caused to process dynamic lighting every frame for an object so widely dispersed across the level.
Even though I am speaking off my Unreal Engine 3 experience, I believe those guys know what they are doing, and their knowledge could be taken into account. If your engine gives you a choice between vertex lighting and lightmapping you’d want to go with the second.
First of all, because in case of vertex lighting you need to store in memory the data for every single vertex you have, and that kinda makes you wish you had less vertices, but, since we’ve figured out that we have them for free until the next batch is ready, we’d rather use them for good.
You could use a 128 or 64 or even a 32 by 32 light map in some cases that would still look smoother then vertex lighting but eat up a lot less memory.
Plus, since a lightmap is pretty much a usual bitmap you can weave it into your texture streaming pipeline and not affect the overall texture memory budget. And I can hardly think of a way to make vertex lighting almost free memory wise, so lightmaps for the win.
If you want to make your asset a bit more engine friendly and your engine supports lightmapping, then I suggest you don’t hesitate to make a second uv set, for the lightmaps.
The most important thing
After all the things said, there’s still one most important thing that you need to know. And that is that things differ. Sometimes dramatically. As with everything in life, there’s no universal recipe and the best thing you can do is figure out what does your specific case look like. Get all the information you can from the people responsible. No one knows your engine better then the programmers. They know a lot of stuff that could be useful for artists, but sometimes, due to lack of dialogue, this information remains with them. Miscommunication may lead to problems that could’ve been easily avoided, or be the reason you’ve done a shit ton of unnecessary work or wasted a truckload of time that could’ve been spent much wiser. Speak, you’re all making one game after all and your success depends on how well you’re able to cooperate. Communication with programmers could actually be the work of your lead or a tech artist, so you could just ask them instead. Asking has never hurt anyone and it’s actually the best way to get an answer.
Dalai Lama once said:
“Learn your rules diligently, so you would now where to break them.”
And I can do nothing, but agree with him. Obeying rules all the time is the best way to not ever do anything original. All rules or restrictions have some solid arguments to back them up, and fit some general conditions. But conditions vary. If you take a closer look, every other asset could be an exception to some extent. And, ideally, having faced some tricky situation artists should be able to make decisions on their own, sometimes even break the rules if they know that the project will benefit from it, and them braking the rules wouldn’t hurt anything. But, If you don’t know the facts behind the rules I doubt you would ever go breaking them. So I seriously encourage you take interest in your work. There’s more to video games production than art.
If you’re doing freelance work and you feel like your model would really benefit from that extra tris, than I’d say you ask. For it is in the best interest of the people you’re working for. Plus, if for some reason, they are unaware of all the stuff listed above then there’s a big chance, that you’ve helped them a lot – and that’s some respect points for you.)
Things you need to do
So I hope this wall of text up here made some sense for you guys.) I find all this information on how stuff works really useful, but it’s not exactly what you would use on a day by day basis. It’s nice to read once, but I doubt I myself would want to go through this again in case I forget something. And though I think “whys” are important, as an artist(or as an artist wanna be), I’d love to have a place where all the “hows” are clearly stacked, without any other distracting information. And the “whys” section would serve as a reference you can turn to, in case something becomes unclear.
So, while making and asset or a character you’d want to keep these things in mind all the time:
It's not the tri count that matters, but the vertex count. Smoothing groups (soft/hard edges), UV seams and multiple materials all increase the number of verts, so you want as few of those as possible. Extra tris hardly matter as long as they don't add another draw call, so feel free to find them a proper use; uber-lowpoly models are bad for performance. (There's a small count-checking sketch right after this list.)
No LoD is needed for objects with fewer tris than the GPU can crunch while the CPU submits the next batch.
Triangulate using “Max Area” or “Shortest Edge” principles.
Materials (shaders) are the most fruitful thing to optimize.
If you know you’re going to have dynamic lighting, then try to break bigger objects into smaller ones.
Lightmapping is more efficient than vertex lighting, so try to lightmap static meshes and make them lightmapping-ready if your engine supports it.
Workability is still king.
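To put some numbers behind the first point of that list, here's a tiny Maya Python sketch (3ds Max shows the same counts in its statistics overlay). The UV count is a rough lower bound on how many vertices the GPU will actually see once UV seams split verts; hard edges and material borders add more on top:

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    tris  = cmds.polyEvaluate(obj, triangle=True)   # what the polygon counter usually shows
    verts = cmds.polyEvaluate(obj, vertex=True)     # verts as the modeling app counts them
    uvs   = cmds.polyEvaluate(obj, uvcoord=True)    # UV points; every UV seam duplicates verts here
    print('%s: %d tris, %d verts, %d UVs' % (obj, tris, verts, uvs))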
Now let's imagine you're finally done with an asset. You want to make sure things are clean and engine-friendly. Here's the list of things to check, one after another:
- Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack
Transformation information stored in a model can prevent it from being displayed correctly, making all further checks useless. Plus, it's simply unacceptable for import into some engines. And even if it does import, the object's orientation and normal directions could be messed up.
http://img138.imageshack.us/img138/6554/maxcollapsexform.jpg
In Maya, don’t forget to select your object.
http://img340.imageshack.us/img340/3876/mayafreezehistory.jpg
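If you'd rather script the Maya side of this step, it boils down to two commands; a minimal sketch, assuming the objects are selected (Reset XForm and collapsing the stack in Max have to be done in Max itself):

import maya.cmds as cmds

for obj in cmds.ls(selection=True, long=True):
    cmds.delete(obj, constructionHistory=True)  # Delete History
    cmds.makeIdentity(obj, apply=True, translate=True, rotate=True, scale=True)  # Freeze Transformations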
- Inverted Normals
While mirroring (scaling by a negative number), or actually while performing a ton of other operations, your vertex normals can get turned inside out. You need the right display settings in your modeling application in order to spot such problems.
In 3ds Max you can go to object properties and turn “Backface cull” on. Then examine your mesh.
http://img163.imageshack.us/img163/5918/maxbackfacecull.jpg
In Maya you can just disable "Double Sided Lighting" in the Lighting tab (if it's missing, hit Shift+M), then make sure that "Backface Culling" is disabled in the Shading tab. Then, if you check out your model shaded, all the places with inverted normals will show up black.
http://img193.imageshack.us/img193/4717/mayabackfacecull.jpg
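If you'd rather fix than hunt, Maya's Conform operation points all face normals the same way, which catches most mirroring accidents. A minimal Python sketch, assuming the objects are selected; it won't help if the whole mesh is consistently inside out, so still give it a look with the display settings above:

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    cmds.polyNormal(obj, normalMode=2, constructionHistory=False)  # normalMode=2 is 'Conform': make all normals face the same way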
- Mesh splits/ Open Edges
It sometimes happens that, while working, we forget to weld some vertices or accidentally break/split some. Not only can this cause lighting and smoothing issues, it's also a waste of memory and pretty much a sign of sloppy work. You wouldn't want that.
Open edges are an issue you want to think twice about, and not only because in some cases they're extra stress on dynamic lighting computations, but because they seriously reduce the reusability of your asset. If you can simply close the gap and find a place on your texture to throw this new shell onto, that's usually preferable.
To detect both those issues in 3ds Max simply choose border selection mode (“3” by default) and hit select all (“ctrl + a” by default).
http://img682.imageshack.us/img682/6083/maxmeshsplits.jpg
In Maya you can use a handy tool called "Custom Polygon Display". Choose the "Highlight: Borders" option and apply it to your object.
http://img340.imageshack.us/img340/2555/mayaborderedges.jpg
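This check scripts nicely in Maya too. The sketch below (assuming the mesh is selected) uses a selection constraint to grab every border edge, so anything other than zero means there's an open edge somewhere:

import maya.cmds as cmds

cmds.polySelectConstraint(mode=3, type=0x8000, where=1)  # constrain the selection to border edges (0x8000 = edges)
border_edges = cmds.ls(selection=True, flatten=True)
cmds.polySelectConstraint(mode=0, where=0)               # reset the constraint so it doesn't stick around
print('%d open/border edges found' % len(border_edges))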
- Multiple edges/ Double faces
This double stuff is a nasty bugger, since it's almost impossible to spot unless you know how. And sometimes, when you modify things later, you can be very surprised by them not behaving the way they should.
I can hardly remember ever getting them in Max, but just to be sure I always apply an STL Check modifier. Tick the appropriate radio button and check the "Check" checkbox. :)
http://img706.imageshack.us/img706/5271/maxstlcheck.jpg
In Maya the "Cleanup" tool is very useful. Just check "Nonmanifold geometry" and "Lamina faces" and hit Apply.
http://img138.imageshack.us/img138/6838/mayacleanup.jpg
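The same two checks can be queried from Python without opening the Cleanup dialog; a small sketch, assuming the meshes are selected (polyInfo simply returns the offending components, or nothing if the mesh is clean):

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    lamina      = cmds.polyInfo(obj, laminaFaces=True) or []       # faces that share all of their edges (doubled faces)
    nonmanifold = cmds.polyInfo(obj, nonManifoldEdges=True) or []  # edges shared by more than two faces and similar nastiness
    print('%s: %d lamina faces, %d non-manifold edges' % (obj, len(lamina), len(nonmanifold)))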
- Smoothing groups/soft-hard edges
For this one you want to use as few smoothing groups/hard edges as possible. You might consider making everything smooth and just adding some extra chamfers where bad lighting issues start to appear.
Plus there's one more issue to watch out for, more in Maya than in 3ds Max though, since Max uses the smoothing group concept:
Edges on planar surfaces will appear smooth even if they're not. To see which edges are actually unsmoothed, the "Custom Polygon Display" tool comes in handy again. Just enable the "Soft/Hard" option right alongside "Edges:".
http://img706.imageshack.us/img706/7755/mayasofthard.jpg
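If you go with the "make it all smooth, then harden only where it hurts" approach, the first half is a single call in Maya Python; a sketch assuming the objects are selected:

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    cmds.polySoftEdge(obj, angle=180, constructionHistory=False)  # 180 degrees = every edge soft; select edges and rerun with angle=0 to harden them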
- UV splits
You want your UVs to have as few seams as possible, but only as long as they stay nice to work with. No need to go over the top with distortion here; just keep it clean and logical.
Broken/split UV vertices are a thing to watch out for too. 3ds Max indicates them with a different color inside the "Edit UVWs" window.
http://img682.imageshack.us/img682/5232/maxuvsplit.jpg
In Maya's "UV Texture Editor" window there's a "Highlight Edges" button, which simply checks the "Highlight: Texture Borders" option in the "Custom Polygon Display" tool for you.
http://img138.imageshack.us/img138/4157/mayauvsplits.jpg
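The same display toggles can be flipped from Python through polyOptions, which is handy when you want texture borders visible on everything at once. A minimal sketch, assuming the objects are selected and that the displayMapBorder flag (the scripted counterpart of that checkbox) behaves the same in your Maya version:

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    cmds.polyOptions(obj, displayMapBorder=True)  # draw UV/texture border edges as thick lines in the viewport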
- Triangulation
While checking the triangulation, first of all make sure that all the triangles accentuate the shape you're trying to convey rather than contradict it. Then take a quick glance to check whether the triangulation is efficient.
Plus, some engines have their own triangulation algorithms and will re-triangulate a model on import, with no concern for how you thought your triangulation should look. In trickier places this can lead to a messy result, so take caution, investigate how your engine works, and connect the vertices by hand if necessary. By the way, Maya more or less helps you find such places if you check "Concave faces" in the "Cleanup Options"; in 3ds Max you'll just have to keep an eye out yourself.
http://img189.imageshack.us/img189/3761/mayaconcave.jpg
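One common way to dodge import-time re-triangulation surprises is to lock the triangulation down yourself before export. A minimal Maya Python sketch, assuming the objects are selected; do this on an export copy, since triangulated meshes are no fun to keep editing:

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    cmds.polyTriangulate(obj, constructionHistory=False)  # bake the triangulation you see, so the importer can't pick a different one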
- Grid Alignment/ Modularity/Pivot point placement
Since the last generation of video games, graphics production costs have increased significantly, so modularity and extensive reuse are now very common. Ease of placement and combination with other assets can save a lot of time, maybe not even yours, so don't make your level designers hate you: think about it.
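For modular pieces, a pivot at the bottom center of the bounding box is a common convention, since it snaps cleanly to the grid and sits flush on the floor. A small Maya Python sketch of that idea, assuming the objects are selected (your project may prefer a different convention, like a corner pivot):

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    xmin, ymin, zmin, xmax, ymax, zmax = cmds.exactWorldBoundingBox(obj)
    base = ((xmin + xmax) * 0.5, ymin, (zmin + zmax) * 0.5)  # bottom center of the bounding box
    cmds.xform(obj, worldSpace=True, pivots=base)            # move the pivot there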
- Material Optimization
Evaluate your textures and materials again, since they're probably the biggest source of optimization. Maybe the gloss map doesn't deliver at all and the specular does great on its own? Maybe the asset would still hold up with a specular map at half the size? Or maybe you can reuse the diffuse as the specular, since it's just a small background asset? Maybe that additional tileable normal map isn't necessary at all? Or maybe you could go with a grayscale spec and use the spare channels for something else?
- Lightmapping possibility
If your engine supports lightmapping, make sure you have a spare set of uniquely unwrapped UVs that fully meets all of your engine's requirements.
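A quick way to double-check this before export is to list the UV sets on each mesh; a minimal sketch, assuming lightmap UVs simply live in a second UV set (overlap and padding checks are better left to your UV editor or the engine's own tools):

import maya.cmds as cmds

for obj in cmds.ls(selection=True):
    uv_sets = cmds.polyUVSet(obj, query=True, allUVSets=True) or []
    if len(uv_sets) < 2:
        print('%s has only %d UV set(s) - no lightmap UVs yet?' % (obj, len(uv_sets)))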
------------------------------------------------------------------------------------
Afterword
With everything said, there's still one more thing left.
Please remember,
no matter how technical and optimized your model is, it's meant to look beautiful first of all. No optimization can be an excuse for an ugly model. Optimization is not what artists do. Artists do art. And that's what you should concentrate on.
The beauty of technicalities is that they can be precise to a pretty big extent, which means you can write them down, memorize them, and not bother thinking about them again for quite a while; you just remember instead.
But it’s art where you have to evaluate and make millions of decisions every single second.
All this text is not important, and that is exactly why it's written down. The really important things aren't that easy to express with words. And I hoped that maybe, if you didn't have to bother thinking about all the tech crap at least for a while, you'd concentrate on the stuff that's much more important, and prettier, I hope.
Cheers.
------------------------------------------------------------------------------------
Very helpful links which I owe most of my information to (I suppose most of you know them already, though):
Beautiful, Yet Friendly Part 1: Stop Hitting the Bottleneck (http://www.ericchadwick.com/examples/provost/byf1.html)
Beautiful, Yet Friendly Part 2: Maximizing Efficiency (http://www.ericchadwick.com/examples/provost/byf2.html) - a series of deep and easily understandable articles by Guillaume Provost
Unreal Developer Network (http://udn.epicgames.com/Main/WebHome.html) - contains a ton of useful information.
Humus on triangulation (http://www.humus.name/index.php?page=Comments&ID=228) - some amazing tests.
The "Too much optimisation" thread at Polycount (http://boards.polycount.net/showthread.php?t=50588) - that's where I first encountered most of the terms I've been talking about.
tl;dr? Sucks for you.