
Xen 2.0

Sep 26, 2010 at 6:38 AM
Edited Sep 26, 2010 at 7:10 AM

... is working fine for me.  No issues.

(Before editing, this post contained a bunch of nonsense about multithreading which I could have avoided if I'd taken the time to glance at Application.cpp first... d'oh!)

Sep 26, 2010 at 1:56 PM
Edited Sep 26, 2010 at 1:57 PM

I've certainly discovered a bunch of issues today; there are a tonne of changes internally, so things are rather fragile in places :-) I will be putting out an update soonish.

I'm still interested in what the problem was.
I'm very curious for feedback on the (somewhat crazy) shader system changes (there are some very nasty native code hacks in there).
It's not properly documented yet, but all shaders have instancing and animation variants automatically generated (which, if you think about it, is completely mad :).

And where did you get this C++ port of xen? :D

Sep 26, 2010 at 5:50 PM

Heh heh.  I went temporarily (I hope) dain bramaged there for a while.  Crunch time does bad things to one's sleep patterns.  I woke up at about 4:00 AM realizing that I'd typed .cpp, and hoped I'd get a chance to edit it again before anybody noticed :-)

There wasn't a problem with the multithreading.  I wanted to run something on another thread and didn't see that Application published access to its ThreadPool (because I didn't look).

I did notice that Tutorial 24 (Batch Model) is messed up right now, but I figured you already knew about that.  I'm starting a new project with Xen 2.0, so I'm just hitting the basic stuff so far.  I can try out some of my more advanced shaders on a proxy object in a test app though.  Back in a bit.

Sep 27, 2010 at 6:56 AM

Hey I noticed something weird... previous versions of Xen never, ever allocated memory after the initial startup.

2.0 seems to allocate every few hundred frames...  I kinda suspect it's the DrawStatisticsDisplay, and it might always have done that.  I might have just assumed it was my code, too (because some areas of my code were *ugly*).  But I was just viewing a really simple scene with vsync turned off, getting about 950 fps, and the Allocated Bytes trace was increasing by about 16k every tenth of a second or so.  I'll break out CLR Profiler and see if I can track it down... after a brief interval for sleep.

Really I'm just posting again because I just read your comment that all shaders have instancing and animation variants automatically generated.  Comment: say WHAT?!?!?  I'll have to take a look at what that means...  I'll start with my speed blur post-processing shader.  Will I see an animation variant for that??

Sep 27, 2010 at 5:08 PM
Edited Sep 27, 2010 at 5:15 PM

I've noticed the slow memory increase too. I haven't looked into it, but there was a slow(er) increase in XNA 3.1 as well relating to mouse-move events, so it's possible it's an issue within XNA 4 - however, it's also just as likely I've done something dumb :P

And yes, if your shader needs to be used to draw geometry with hardware instancing - or with animation blend weights - then it should 'just work' if you call 'DrawInstances()' or 'DrawBlending()' on the vertices object. :D
Of course, you typically don't do instancing / animation in a post-processing pass :P

Internally it does some incredibly sneaky 'adjustments' to any world-matrix-related shader parameter. Of course, there are cases where it fails, but there are ways to disable the step with a CompileOptions magic comment. (I have a feeling that in the beta, if it fails to generate the variants, the error isn't being properly reported - it's something I've fixed since then.) You will notice (for example) that the character shaders in the HDR tutorial do not have any animation code in the vertex shader :)

The *huge* benefit is that you no longer have to branch all your shaders for these special cases, and (for example) if you set the ShaderProvider on a ModelInstance to null, it'll be perfectly happy to use the currently bound shader - animated or not.
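
To give a rough idea of what it saves you from, this is the kind of hand-written instancing branch you'd otherwise end up maintaining alongside every shader. Purely illustrative - it's not the actual generated output, just the usual XNA-style per-instance world matrix arriving in spare TEXCOORD channels:

// Illustrative only - not XenFX's generated output.
// A hand-maintained instancing variant of a simple vertex shader, with the
// per-instance world transform supplied as four extra TEXCOORD inputs.
float4x4 viewProjection : VIEWPROJECTION;

struct InstancedInput
{
    float4 position  : POSITION;
    float4 worldRow0 : TEXCOORD4;   // per-instance world matrix, row 0
    float4 worldRow1 : TEXCOORD5;
    float4 worldRow2 : TEXCOORD6;
    float4 worldRow3 : TEXCOORD7;
};

float4 InstancedVS(InstancedInput input) : POSITION
{
    float4x4 world = float4x4(input.worldRow0, input.worldRow1,
                              input.worldRow2, input.worldRow3);
    return mul(mul(input.position, world), viewProjection);
}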

Sep 29, 2010 at 8:26 AM

The memory allocation isn't in DrawStatisticsDisplay, but I can't figure out where it is, because CLR Profiler hasn't been updated to handle .NET 4.0.  I wonder if I can compile it against .NET 4.0 and get it to work...

re: shaders and instancing/animation.  You are a crazy man.  :-)  A question about shaders in general, though.  I've recently started using ShaderFX and exported my first shader for use in Xen this evening.  Aaaaand it failed.  Not because of any issue with the shader code, but because ShaderFX likes to create multipass shaders (believe it or not, the one I did this evening has an ambient pass and a diffuse pass), and XenFX wants only single-pass shaders.  I'm not sure how to work around this...  I suppose I can create two materials, export them separately, then render the model twice, switching shaders between each draw.  Is that the idea?  Not a pretty workflow as far as ShaderFX is concerned, but workable.

But I'm also getting errors from other artefacts of ShaderFX's fx file.  (I selected the XNA DXSAS option when exporting.)  ShaderFX likes to put #defines in the middle of the file, and XenFX not only needs #defines at the top of the file, but it's also complaining that #define lighttype 1 should end before the 1.
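
For reference, the offending bit looks roughly like this (reconstructed from memory; the #if is my guess at how ShaderFX uses the define):

// Roughly what ShaderFX emits partway down the exported .fx file:
#define lighttype 1        // XenFX complains this should end before the 1

#if lighttype == 1
    // ...light-type-specific code gets selected here...
#endif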

I'm getting a lot of errors saying that a namespace cannot directly contain members such as fields or methods, for example at the float in the snippet below:

float FresnelPower
<
string UIWidget = "Spinner";
float UIMin = 0.0;
float UIMax = 100.0;
float UIStep = 0.01;
string UIName = "Fresnel Power";
> = 1.0;
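
I'm guessing that if I strip the DXSAS annotation block by hand, the parameter reduces to something XenFX can cope with (untested):

// Hand-stripped version, minus the < ... > annotation block:
float FresnelPower = 1.0;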

Worse:

// output to fragment program
struct v2f {
    float4 position  : POSITION;
    float3 lightVec  : TEXCOORD0;
    float3 eyeVec    : TEXCOORD1;

    float3 cubeCoord : TEXCOORD2;
    float2 texCoord  : TEXCOORD3;
};

//Ambient and Self-Illum Pass Vertex Shader
v2f av(a2v In, uniform float3 lightPos, uniform int lightType, uniform float3 lightDir)
{
    v2f Out = (v2f)0;
    ...

XenFX reports an error at every colon before a semantic in the struct, saying a semicolon is expected, and then gives the "namespace cannot directly contain members" error at the v2f return type of the vertex shader declaration.

Is XenFX not parsing the .fx files correctly? Or are these error messages coming straight from the DirectX shader compiler and just being passed along? Is the compiler itself versioned, and is XenFX using a version that has since seen updates for HLSL or DXSAS? (The shaders are just SM 2.0 HLSL.)

Hmmm, I've just remembered that you rewrote the shader system so you could release the source, so I should probably figure these questions out by reading the code...

Sep 29, 2010 at 9:19 AM
Edited Sep 29, 2010 at 9:22 AM

When you use a multipass shader, it *is* just drawing the geometry twice - with the nasty side effect that it swaps the shader for every pass in every draw call. So having two shaders and drawing your geometry twice will actually be more efficient, because the shader will only change once, not for every draw call.
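
So splitting the ShaderFX output really is the way to go - something along these lines (just a sketch; the technique and function names are made up):

// Sketch only. Instead of one two-pass technique...
//
//   technique AmbientAndDiffuse
//   {
//       pass P0 { VertexShader = compile vs_2_0 AmbientVS(); PixelShader = compile ps_2_0 AmbientPS(); }
//       pass P1 { VertexShader = compile vs_2_0 DiffuseVS(); PixelShader = compile ps_2_0 DiffusePS(); }
//   }
//
// ...export two single-pass techniques and draw the model once with each:

technique AmbientOnly
{
    pass
    {
        VertexShader = compile vs_2_0 AmbientVS();
        PixelShader  = compile ps_2_0 AmbientPS();
    }
}

technique DiffuseOnly
{
    pass
    {
        VertexShader = compile vs_2_0 DiffuseVS();
        PixelShader  = compile ps_2_0 DiffusePS();
    }
}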

All part of the wonder that is the bloated FX system. (Which is why XenFX really only deals with a small subset - there are tonnes of crazy things you can do in an FX file that I simply ignore, because they are so madly overcomplicated*.)

And yes, XenFX does parse the FX file itself. For various reasons, the shader constants need to be treated as a single array - so internally the uniforms are actually removed from the FX and replaced with macros that map into a single constant buffer. This obviously has some pretty heavy implications for the structure of the file.
Further, the instancing/blending generation heavily parses and modifies the FX code - as you'd expect. And finally, the assembly is also parsed to extract all the needed info about register allocations, etc. At one point it even parsed the output assembly to reconstruct it for the Xbox (back when the Xbox compiler was buggered), but it doesn't really do this anymore.
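
To give you a flavour of the constant-buffer trick (purely illustrative - these aren't the real names or layout XenFX uses):

// Purely illustrative - not the actual XenFX output.
// An exported uniform like...
//
//     float4x4 worldViewProjection;
//
// ...gets stripped out and replaced with a macro that indexes one shared array:

float4 _constants[16] : register(c0);    // hypothetical single constant block

#define worldViewProjection float4x4(_constants[0], _constants[1], \
                                     _constants[2], _constants[3])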

So yeah, there are certainly limitations to what you can do - and I didn't even realise that FX supported namespaces!
You may think this is totally overkill, and to an extent it is - but most of this is done simply because of nasty internal limitations in XNA.

If you can send me the files that are causing issues, I can at least look at where and how they're failing.
And I can send you the FX that is actually sent to the compiler - that'll *really* scare you :D

* for example, the following is (roughly) legal FX code:

int Shader = 0;

vertexshader shaders[] = { compile vs_2_0 MyCode() };

technique Shader
{
  pass
  {
    // the pass picks its vertex shader out of the array at runtime,
    // indexed by the global 'Shader' variable above
    vertexshader = shaders[Shader];
  }
}

Oct 1, 2010 at 4:51 AM

I didn't know that about multipass shaders.  I've never really played much with them, but not for any specific reason...  I did assume they'd be more efficient than drawing twice, because DX ought to know what resources would be needed for both passes and have the caches ready for the second pass.  That'd especially be true if you were running the same vertex shader... shouldn't it just drop the second pass's vertex work and stream the results of the first pass to the two pixel shaders, with the second pass delayed by one pipeline step?  Mmmm, maybe that's only possible on DX10 or better hardware...  Or maybe I'm just talking about stuff I know nothing about.  :-)