Xnalara and ray-tracing

General discussion and questions about XNALara go here.

Moderators: ObscureMemories, Runa, Love2Raid

User avatar
XNAaraL
XPS Author
Posts: 120
Joined: Fri Sep 21, 2012 9:01 am

Re: Xnalara and ray-tracing

Post by XNAaraL »

semory wrote:Man, raytracing...
still too slow. I may have to one day look at my code and see where I went wrong because the 2006 journal article claims to have gotten 3-4 frames/sec with a 180K triangle scene using SSE2 lol.
Ingo Wald (Germany), a director of ray tracing at NVIDIA, and Peter Shirley (Illinois), a distinguished scientist at NVIDIA :o Both are well known.

SSE2+MT, SIMD ... so CPU rendering, not GPU rendering like in XPS.

Just 4 minutes to render one model? I guess your model(s) were not posable and don't support render groups (different materials).
Just one diffuse texture?

My memories:
1980: Turner Whitted's balls floating above a checkerboard floor https://www.scratchapixel.com/lessons/3 ... ng-whitted
Turner Whitted's paper "An Improved Illumination Model for Shaded Display" is widely regarded as the first modern description of ray tracing in computer graphics. The paper's famous image of balls floating above a checkerboard floor took 74 minutes to render on a DEC VAX 11/780 minicomputer, a $400,000 computer.

1987: Eric Graham's "The Juggler" http://www.etwright.org/cghist/juggler.html

Each image (frame) requires the calculation of 64,000 light rays and takes approximately 1 hour to generate.

Java code https://meatfighter.com/juggler/
User avatar
semory
Site Admin
Posts: 7755
Joined: Sat Aug 04, 2012 7:38 pm
Custom Rank: Kitty pu tu tu lay!
Location: Torrance, CA

Re: Xnalara and ray-tracing

Post by semory »

Yeah, the history of ray tracing is pretty crazy, and Peter Shirley is "the man." If you want to learn ray tracing, you MUST have his books and his research articles. I'm not calling him a liar; I'm blaming myself for my poor coding hahaha! But he did that NVIDIA scene in 2006 and claimed 3-4 frames per second "in software," and I do believe him as he is "the ray tracing man." The dude pushes what can be done in software, it's super interesting, and it's what I wanted to experiment with back in 2016 before everything in my life started falling apart LMFAO! I have some catching up to do!

The concept of the ray tracing AABB tree is simple. Every triangle has a centroid and a min-max bounding box. You start with the AABB of the whole model and divide it into k bins along the dominant (longest) axis. You assign each triangle to one of the k bins depending on which bin its centroid falls in. After binning you shrink each bin's AABB to enclose only the triangles inside it. When a bin holds n triangles or fewer, it becomes a leaf node and you sort the index buffer (so you only need to store a pointer into the triangle data). If a bin holds more than n triangles, you repeat the subdivision on it.
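Here is a rough sketch of that binned build. This is just an illustration of the scheme, not my renderer's code; the names (Vec3, AABB, Node, build_node) and the k = 8, leaf_size = 4 defaults are made up.

Code: Select all

#include <algorithm>
#include <vector>

struct Vec3 { float v[3]; };

struct AABB {
 Vec3 lo{{ 1e30f, 1e30f, 1e30f }};
 Vec3 hi{{ -1e30f, -1e30f, -1e30f }};
 void grow(const Vec3& p) {
  for(int a = 0; a < 3; a++) {
   lo.v[a] = std::min(lo.v[a], p.v[a]);
   hi.v[a] = std::max(hi.v[a], p.v[a]);
  }
 }
};

struct Node {
 AABB box;                  // bounds of everything under this node
 int first = 0, count = 0;  // leaf: range in the sorted index buffer
 int child[2] = { -1, -1 }; // interior: indices of the two children
};

// recursively bins triangle centroids along the dominant axis; 'indices' is
// the index buffer that gets sorted in place; returns this node's index
int build_node(const std::vector<Vec3>& centroids, std::vector<int>& indices,
               int first, int count, std::vector<Node>& nodes,
               int k = 8, int leaf_size = 4)
{
 // AABB of the centroids in this range (real code would also grow a box
 // over the full triangle AABBs for the node bounds)
 Node node;
 for(int i = 0; i < count; i++) node.box.grow(centroids[indices[first + i]]);

 int id = (int)nodes.size();
 if(count <= leaf_size) { // n triangles or fewer: leaf
  node.first = first;
  node.count = count;
  nodes.push_back(node);
  return id;
 }

 // dominant (longest) axis
 int axis = 0;
 for(int a = 1; a < 3; a++)
  if(node.box.hi.v[a] - node.box.lo.v[a] > node.box.hi.v[axis] - node.box.lo.v[axis])
   axis = a;

 // bin each centroid into one of k bins, split between bins k/2-1 and k/2
 float lo = node.box.lo.v[axis], hi = node.box.hi.v[axis];
 float scale = (hi > lo) ? k/(hi - lo) : 0.0f;
 int* mid = std::partition(indices.data() + first, indices.data() + first + count,
  [&](int tri) { return (int)((centroids[tri].v[axis] - lo)*scale) < k/2; });
 int lcount = (int)(mid - (indices.data() + first));
 if(lcount == 0 || lcount == count) lcount = count/2; // degenerate split: halve it

 nodes.push_back(node); // reserve this slot; patch children after recursion
 int l = build_node(centroids, indices, first, lcount, nodes, k, leaf_size);
 int r = build_node(centroids, indices, first + lcount, count - lcount, nodes, k, leaf_size);
 nodes[id].child[0] = l;
 nodes[id].child[1] = r;
 return id;
}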

Also regarding those pics I posted... yes 4 minutes, but as you mentioned there are some caveats:

#1 = The models are low poly.

#2 = Over 50% of the scene is blank space and there is very little transparency. The AABB tree is very fast when there are large sections of empty space (a sketch of the standard ray-vs-AABB slab test is at the end of this post). Most of the time spent ray tracing goes into ray-triangle intersection tests. Put that model in a single room covering the whole rendering area and yeah, the time will nearly double. There is a great journal article on this by the guy that ray traced Quake III, where he made a heat map of time taken per pixel; it's an interesting read. Transparency, especially things like trees that were far away, destroyed his frame rate.

#3 = Only 4 samples per pixel. The pictures I posted look like shit. Lots of artifacts and aliasing along edges. Sampling kills. The guys that ray traced Quake III only used 1, 2, 4, and 8 samples per pixel. I don't even want to think what these models would look like if I placed them far away LMAO! Look at https://dl.acm.org/doi/fullHtml/10.1145/3447807. They use a minimum of 100 samples per pixel and it takes 30 seconds to render a scene (though they are doing GPU accel). At 100 samples per pixel, notice that some of their pictures still look like CRAP! So yeah, if I used 100 samples per pixel you are indeed looking at an hour! But with game models, do you really need 100 samples per pixel, or even 1,000? Do we really need Blender Cycles quality?

#4 = Also, remember XNALara and XPS only have up to three directional lights. No fancy lighting here. I think those pics I posted used only one of those directional lights, the default one. My source notes have:

Code: Select all

#pragma region LIGHTS
// default XNALara/XPS light
// angle horizontal = 315 (0 = shadow to right, 90 = shadow to front, 180 = shadow to left, 360 = shadow to right)
// angle vertical = 35 (-90 = light on bottom, 0 = no shadow, 90 = light on top)
// intensity = 1.1
// shadow depth = 0.4
// <horz, vert>
// this rotates about z-axis, changing vert
// <0, 90> = <0, -1, 0> // light above
// <0, 0> = <-1, 0, 0> // light on side
// <0, -90> = <0, 1, 0> // light below

struct directional_light {
 bool enabled;
 real32 angle_horz;
 real32 angle_vert;
 vector3D direction;
 vector4D<real32> color;
 real32 intensity;
 real32 shadow_depth;
};

static directional_light lightlist[3];

void ComputeLightDirection(vector3D& D, real32 horz, real32 vert)
{
 real32 r1 = cs::math::radians(horz);
 real32 r2 = cs::math::radians(vert);
 real32 m[16];
 cs::math::rotation_matrix4x4_YZ(m, r1, r2); // order is Z then Y
 real32 v[3] = { 1.0f, 0.0f, 0.0f };
 real32 x[3];
 cs::math::matrix4x4_mul_vector3D(x, m, v);
 cs::math::vector3D_normalize(x);
 D.x = -x[0];
 D.y = -x[1];
 D.z = -x[2];
}

void InitLights(void)
{
 #define L1_DEFAULT lightlist[0].angle_horz = 315.0f; \
                    lightlist[0].angle_vert =  35.0f;
 #define L1_FRONT   lightlist[0].angle_horz = 270.0f; \
                    lightlist[0].angle_vert =   0.0f;

 // light #1
 L1_DEFAULT;
 lightlist[0].enabled = true;
 // lightlist[0].angle_horz = 0.0f;
 // lightlist[0].angle_vert = 0.0f;
 lightlist[0].color.x = lightlist[0].color.y = lightlist[0].color.z = lightlist[0].color.w = 1.0f;
 lightlist[0].intensity = 1.1f;
 lightlist[0].shadow_depth = 0.4f;
 ComputeLightDirection(lightlist[0].direction, lightlist[0].angle_horz, lightlist[0].angle_vert);
 std::cout << "dir = <" << lightlist[0].direction.x << "," << lightlist[0].direction.y << "," << lightlist[0].direction.z << ">" << std::endl;

 // light #2
 lightlist[1].enabled = false;
 lightlist[1].angle_horz = 225.0f;
 lightlist[1].angle_vert = 35.0f;
 lightlist[1].color.x = lightlist[1].color.y = lightlist[1].color.z = lightlist[1].color.w = 1.0f;
 lightlist[1].intensity = 0.0f;
 lightlist[1].shadow_depth = 0.4f;
 ComputeLightDirection(lightlist[1].direction, lightlist[1].angle_horz, lightlist[1].angle_vert);

 // light #3
 lightlist[2].enabled = false;
 lightlist[2].angle_horz = 180.0f;
 lightlist[2].angle_vert = 90.0f;
 lightlist[2].color.x = lightlist[2].color.y = lightlist[2].color.z = lightlist[2].color.w = 1.0f;
 lightlist[2].intensity = 0.0f;
 lightlist[2].shadow_depth = 0.4f;
 ComputeLightDirection(lightlist[2].direction, lightlist[2].angle_horz, lightlist[2].angle_vert);
}

#pragma endregion
My render group code is not very efficient, but it's there:

Code: Select all

vector4D<unsigned char> RG27(const XNAMesh& mesh, uint32 face_index, const cs::math::ray3D<real32>& ray, float u, float v, float w)
{
 using cs::binary32;
 using cs::math::vector2D;
 using cs::math::vector3D;

 // triangle vertices
 const XNAVertex& v1 = mesh.verts[mesh.faces[face_index].refs[0]];
 const XNAVertex& v2 = mesh.verts[mesh.faces[face_index].refs[1]];
 const XNAVertex& v3 = mesh.verts[mesh.faces[face_index].refs[2]];

 // face UVs
 cs::math::vector2D<real32> uv1(v1.uv[0]);
 cs::math::vector2D<real32> uv2(v2.uv[0]);
 cs::math::vector2D<real32> uv3(v3.uv[0]);

 // face UVs (edges)
 cs::math::vector2D<real32> te1 = uv2 - uv1;
 cs::math::vector2D<real32> te2 = uv3 - uv1;

 // you can't bump map degenerate UVs
 real32 scale = (te1[0]*te2[1] - te2[0]*te1[1]);

 //
 // interpolated UVs
 //

 // interpolated UV coordinates
 cs::binary32 uv[2] = {
  u*v1.uv[0][0] + v*v2.uv[0][0] + w*v3.uv[0][0],
  u*v1.uv[0][1] + v*v2.uv[0][1] + w*v3.uv[0][1]
 };

 // diffuse sample
 auto DM_sample = SampleTexture(mesh.textures[0], uv[0], uv[1]);

 //
 // NORMAL MAPPING
 //

 binary32 N[3];
 binary32 T[4];
 binary32 B[3];

 // interpolated normal
 N[0] = u*v1.normal[0] + v*v2.normal[0] + w*v3.normal[0];
 N[1] = u*v1.normal[1] + v*v2.normal[1] + w*v3.normal[1];
 N[2] = u*v1.normal[2] + v*v2.normal[2] + w*v3.normal[2];
 cs::math::vector3D_normalize(N);

 if(enable_NM && !(std::abs(scale) < 1.0e-6f))
   {
    // interpolated tangent
    T[0] = u*v1.tangent[0][0] + v*v2.tangent[0][0] + w*v3.tangent[0][0];
    T[1] = u*v1.tangent[0][1] + v*v2.tangent[0][1] + w*v3.tangent[0][1];
    T[2] = u*v1.tangent[0][2] + v*v2.tangent[0][2] + w*v3.tangent[0][2];
    T[3] = u*v1.tangent[0][3] + v*v2.tangent[0][3] + w*v3.tangent[0][3];
    cs::math::vector3D_normalize(T);
    
    // N = T cross B (Z = X x Y)
    // T = B cross N (X = Y x Z)
    // B = N cross T (Y = Z x X)
    cs::math::vector3D_vector_product(B, N, T);
    B[0] *= T[3]; // for mirrored UVs
    B[1] *= T[3]; // for mirrored UVs
    B[2] *= T[3]; // for mirrored UVs
    cs::math::vector3D_normalize(B);
    
    // remap sample to [-1, +1]
    auto NM_sample = SampleTexture(mesh.textures[1], uv[0], uv[1]);
    real32 N_unperturbed[3] = {
     NM_sample.x*2.0f - 1.0f,
     NM_sample.y*2.0f - 1.0f,
     NM_sample.z*2.0f - 1.0f
    };
    cs::math::vector3D_normalize(N_unperturbed);

    // rotate normal
    N[0] = N_unperturbed[0]*N[0] + N_unperturbed[1]*B[0] + N_unperturbed[2]*T[0];
    N[1] = N_unperturbed[0]*N[1] + N_unperturbed[1]*B[1] + N_unperturbed[2]*T[1];
    N[2] = N_unperturbed[0]*N[2] + N_unperturbed[1]*B[2] + N_unperturbed[2]*T[2];
    cs::math::vector3D_normalize(N);
   }

 //
 // ENVIRONMENT MAPPING
 //

 // matrix transform
 float R[16] = {
  -cam_L[0], -cam_L[1], -cam_L[2], 0.0f, // map -L to X
   cam_U[0],  cam_U[1],  cam_U[2], 0.0f, // map +U to Y
  -cam_D[0], -cam_D[1], -cam_D[2], 0.0f, // map -D to Z
  0.0f, 0.0f, 0.0f, 1.0f
 };

 // view vector and normal in sphere map space
 float v_p[3];
 float n_p[3];
 cs::math::matrix4x4_mul_vector3D(v_p, R, cam_D);
 cs::math::matrix4x4_mul_vector3D(n_p, R, N);
 cs::math::vector3D_normalize(v_p);
 cs::math::vector3D_normalize(n_p);

 // compute reflection vector in sphere map space
 float dot2 = 2.0f*(n_p[0]*v_p[0] + n_p[1]*v_p[1] + n_p[2]*v_p[2]);
 float rx = v_p[0] - dot2 * n_p[0]; // use original view direction for sphere map
 float ry = v_p[1] - dot2 * n_p[1]; // use original view direction for sphere map
 float rz = v_p[2] - dot2 * n_p[2]; // use original view direction for sphere map

 // sphere map sample
 real32 m = 2.0f*std::sqrt(rx*rx + ry*ry + (rz + 1)*(rz + 1));
 real32 intpart;
 real32 tu = modff((rx/m + 0.5f), &intpart);
 real32 tv = modff((ry/m + 0.5f), &intpart);
 auto SEM_sample = SampleTexture(mesh.textures[2], tu, tv);

 // mix diffuse and environment map
 real32 w2 = mesh.params.params[0];
 real32 w1 = 1.0f - w2;
 DM_sample.x = Saturate(DM_sample.x*w1 + SEM_sample.x*w2);
 DM_sample.y = Saturate(DM_sample.y*w1 + SEM_sample.y*w2);
 DM_sample.z = Saturate(DM_sample.z*w1 + SEM_sample.z*w2);

 //
 // LIGHTING
 //

 real32 wo[3] = {
  -ray.direction[0],
  -ray.direction[1],
  -ray.direction[2],
 };

 vector3D<real32> L;
 L[0] = 0.0f;
 L[1] = 0.0f;
 L[2] = 0.0f;

 // factors
 binary32 kd = (k_conservation ? 1.0f - ka : 1.0f);

 // for each light
 for(uint32 i = 0; i < 3; i++)
    {
     // light disabled
     if(!lightlist[i].enabled) continue;

     // wi is normalized vector "from hit point to light"
     vector3D<real32> wi;
     wi[0] = -lightlist[i].direction.x;
     wi[1] = -lightlist[i].direction.y;
     wi[2] = -lightlist[i].direction.z;

     // if light intensity is not zero
     binary32 dot = cs::math::vector3D_scalar_product(N, &wi[0]);
     if(dot > 0.0f) {
        L[0] += kd*DM_sample.x*lightlist[i].intensity*lightlist[i].color.x*dot;
        L[1] += kd*DM_sample.y*lightlist[i].intensity*lightlist[i].color.y*dot;
        L[2] += kd*DM_sample.z*lightlist[i].intensity*lightlist[i].color.z*dot;
       }
    }

 // ambient + all light contributions
 DM_sample.x = Saturate(DM_sample.x*(ka*ambient_co[0]) + Saturate(L[0]));
 DM_sample.y = Saturate(DM_sample.y*(ka*ambient_co[1]) + Saturate(L[1]));
 DM_sample.z = Saturate(DM_sample.z*(ka*ambient_co[2]) + Saturate(L[2]));

 // final value
 vector4D<uint08> retval;
 retval.x = static_cast<uint08>(255.0f*DM_sample.x);
 retval.y = static_cast<uint08>(255.0f*DM_sample.y);
 retval.z = static_cast<uint08>(255.0f*DM_sample.z);
 retval.w = static_cast<uint08>(255.0f*DM_sample.w);
 return retval;
}
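And about the empty-space point in #2 above: the reason blank space is so cheap is the ray-vs-AABB slab test. A minimal sketch of the standard technique (illustration only, not my renderer's code; Ray, Box, and hit_aabb are made-up names):

Code: Select all

#include <algorithm>
#include <utility>

struct Ray { float org[3]; float inv_dir[3]; }; // inv_dir = 1/direction, precomputed per ray
struct Box { float lo[3]; float hi[3]; };

// true if the ray hits the box somewhere in [tmin, tmax]
bool hit_aabb(const Ray& r, const Box& b, float tmin, float tmax)
{
 for(int a = 0; a < 3; a++) {
  float t0 = (b.lo[a] - r.org[a])*r.inv_dir[a];
  float t1 = (b.hi[a] - r.org[a])*r.inv_dir[a];
  if(r.inv_dir[a] < 0.0f) std::swap(t0, t1);
  tmin = std::max(tmin, t0);
  tmax = std::min(tmax, t1);
  if(tmax < tmin) return false; // slabs do not overlap: skip this whole subtree
 }
 return true;
}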
User avatar
XNAaraL
XPS Author
Posts: 120
Joined: Fri Sep 21, 2012 9:01 am

Re: Xnalara and ray-tracing

Post by XNAaraL »

semory wrote:...
But he did that NVIDIA scene in 2006 and claimed 3-4 frames per second "in software".

...

Also regarding those pics I posted... yes 4 minutes, but as you mentioned there are some caveats:

...
#5 = The models were static ... no posing
https://graphics.stanford.edu/~boulos/papers/togbvh.pdf
the topology of the BVH is not changed over time so that only the bounding volumes need be re-fit from frame to frame.
...
A BVH-based ray tracing system using these techniques achieves performance for deformable models comparable to that previously available only for static models.
My opinion ... the first frame takes 4 minutes ... the following frames achieve 3 FPS, by refitting only the bounding volumes that changed since the last frame (see the sketch below).
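A rough sketch of that refit step (just an illustration, not the XPS code; it reuses the Node/AABB types from the build sketch earlier in the thread, and assumes tri_boxes holds each triangle's current AABB):

Code: Select all

#include <vector>

// keep the tree topology fixed; recompute only the boxes after the vertices
// move. 'indices' is the sorted index buffer produced by the build.
void refit(std::vector<Node>& nodes, const std::vector<int>& indices,
           const std::vector<AABB>& tri_boxes, int id = 0)
{
 Node& n = nodes[id];
 n.box = AABB{}; // reset to an empty box
 if(n.child[0] < 0) { // leaf: grow over its triangles
  for(int i = 0; i < n.count; i++) {
   const AABB& b = tri_boxes[indices[n.first + i]];
   n.box.grow(b.lo);
   n.box.grow(b.hi);
  }
 }
 else { // interior: union of the refitted children
  refit(nodes, indices, tri_boxes, n.child[0]);
  refit(nodes, indices, tri_boxes, n.child[1]);
  for(int c = 0; c < 2; c++) {
   n.box.grow(nodes[n.child[c]].box.lo);
   n.box.grow(nodes[n.child[c]].box.hi);
  }
 }
}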
User avatar
semory
Site Admin
Posts: 7755
Joined: Sat Aug 04, 2012 7:38 pm
Custom Rank: Kitty pu tu tu lay!
Location: Torrance, CA

Re: Xnalara and ray-tracing

Post by semory »

Yep, the concept behind the tree is really simple. I didn't count preprocessing/tree construction, and since there is no posing, there is no need to update the AABBs. It was still slow, which is why this topic has me going back thinking, "Where did I go wrong?" Hahahaha! I am definitely going to try again.
User avatar
semory
Site Admin
Posts: 7755
Joined: Sat Aug 04, 2012 7:38 pm
Custom Rank: Kitty pu tu tu lay!
Location: Torrance, CA

Re: Xnalara and ray-tracing

Post by semory »

Almost done with code refactoring... though got a question for galey!

In your Alcina D. model, the body uses RG22 on top and RG29 on bottom. Is the purpose of the RG29 to take advantage of XPS's mesh rendering order to put a reflection on the dress? Are the meshes exact duplicates, or are the RG29s scaled up a little bit?
User avatar
semory
Site Admin
Posts: 7755
Joined: Sat Aug 04, 2012 7:38 pm
Custom Rank: Kitty pu tu tu lay!
Location: Torrance, CA

Re: Xnalara and ray-tracing

Post by semory »

Ah things I need to fix... ray tracing requires resolving depth order as well, and when things are super close I tried putting alpha RGs ahead of non-alpha RGs. That works OK for RG29 over RG22 (like in Alcina's dress), but for alpha over alpha RGs like RG27 over RG25 (like in the claws) it doesn't work so well. Eh... sorting on model+mesh indices is a pain, but I guess what must be done must be done. Takes about 30 seconds to render the below... minus bump mapping (I'm missing a sign somewhere; my bump mapping got boinked in translation from the old code). The code isn't worth crap right now, but it's here for those interested in seeing how it workies.

When you guys take XNALara/XPS models into cycles or Daz to render, what do you do with these overlapping meshes? Do they render well with them?
test.png
User avatar
raidergale
Porter
Posts: 2632
Joined: Sun Nov 04, 2012 7:36 am
Custom Rank: Marie's my waifu kthx
Location: Italy

Re: Xnalara and ray-tracing

Post by raidergale »

semory wrote:Almost done with code refactoring... though got a question for galey!

In your Alcina D. model, the body uses RG22 on top and RG29 on bottom. Is the purpose of the RG29 to take advantage of XPS's mesh rendering order to put a reflection on the dress? Are the meshes exact duplicates, or are the RG29s scaled up a little bit?
Sorry for the late reply :runcry:
I use 22 below 29 (which is there to get a reflection that imitates a satin-like fabric) because I set the 29_ mesh opacity lower via XPS's material editor (you can see it's set to 55 in this case). This way you see a slight reflection effect AND you also get the 22_ mesh below it, which retains full diffuse/AO/normal/mini normal/specular support, while 29 doesn't have AO or specular. I usually keep the meshes the same size, since the way XPS orders meshes shouldn't give any problems, BUT I always check the result and, if needed, I'll scale the 29_ mesh up a tiny bit (like 0.01 or something) to see if that solves it.

As for renders, I usually completely remove duplicate meshes that are only used to achieve desired effects in XPS. Using Alcina as an example, for renders I'd remove all the 29_ meshes that are overlaid over the 22_ ones. Same goes for 27_ overlaid on top of 24/25, etc.
This is because with modern renderers you can achieve the same effect via shaders and proper textures on a single mesh, without needing to "cheat" like in XPS. The only overlapping meshes I keep are the backface culling ones, say if a dress has a different mesh for the outer and inner fabric. They'd obviously share the same position, thus being overlapped, but their normals are flipped the opposite way from each other and thus will only get rendered when facing the camera, with backface culling on.
User avatar
XNAaraL
XPS Author
Posts: 120
Joined: Fri Sep 21, 2012 9:01 am

Re: Xnalara and ray-tracing

Post by XNAaraL »

semory wrote:Ah things I need to fix...
...,
but for alpha over alpha RGs like RG27 over RG25 (like in the claws) it doesn't work so well.

Eh... sorting on model+mesh indices is a pain, but I guess what must be done must be done.
...
test.png
:high: Dual depth peeling :high:
[Bavoil] "Order Independent Transparency with Dual Depth Peeling", Louis Bavoil, Kevin Myers -- February 2008
[Myers07] "Stencil Routed A-Buffer", Kevin Myers, Louis Bavoil, ACM SIGGRAPH Technical Sketch Program, 2007.
I know .... Steve, you know it :idea:
Rendering semi-transparent textures
Blending -- depth testing gets a bit tricky when combined with blending.
When writing to the depth buffer, the depth test does not care whether the fragment has transparency or not, so the transparent parts are written to the depth buffer like any other value. The result is that the background windows are depth-tested like any other opaque object would be, ignoring transparency. Even though the transparent part should show the windows behind it, the depth test discards them.

So we cannot simply render the windows however we want and expect the depth buffer to solve all our issues for us; this is also where blending gets a little nasty. To make sure the windows show the windows behind them, we have to draw the windows in the background first. This means we have to manually sort the windows from furthest to nearest and draw them accordingly ourselves.
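A minimal sketch of that manual back-to-front sort (illustration only; Vec3, dist2 and sort_back_to_front are made-up names, and sorting by a mesh center is exactly the fragile sort key Steve Baker complains about below):

Code: Select all

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b)
{
 float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
 return dx*dx + dy*dy + dz*dz;
}

// draw order for translucent meshes: furthest from the camera first
void sort_back_to_front(std::vector<Vec3>& mesh_centers, const Vec3& camera)
{
 std::sort(mesh_centers.begin(), mesh_centers.end(),
  [&](const Vec3& a, const Vec3& b) { return dist2(a, camera) > dist2(b, camera); });
}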


Alpha over alpha ... old article ... but still the best:
Alpha-blending and the Z-buffer. - Steve Baker
Worse still, if you decide to split and sort polygons (or just to sort and hope that the pathological overlap case doesn't show up), what key do you sort on? The center of the polygon? The nearest vertex? The furthest? :ohn:
... Some sort algorithms never terminate when given that input because L>T, T>C but C>L !!! :roll:

True, game engines mostly use depth peeling, but even that cannot solve all the problems. It's just another workaround that doesn't give perfect results, but it is better than nothing.

XNALara "solution"
  1. make sure you render all your opaque polygons into the DepthBuffer, before you render all translucent ones with DepthBufferWrite enabled, to solve most of the problems (transparent over solid)
  2. render all transparent meshes again using DepthBufferWriteEnable=false, to "solve" the alpha over alpha issue.
The upshot of this is simply that you WILL get errors for any acceptable realtime algorithm. :roll:
It's just a matter of what you are prepared to tolerate. :ohn:

Code: Select all

public void RenderSceneFull(bool alphaPrepass) { // default alphaPrepass=false;

    graphicsDevice.RenderState.DepthBufferWriteEnable = true;
    graphicsDevice.RenderState.AlphaBlendEnable = false;
    graphicsDevice.RenderState.FillMode = FillMode.Solid;

    // opaque RG_4 -- first render pass -- DepthBufferWriteEnable = true
    effectsArmature.CurrentTechnique = effectsArmature.Techniques[shaderDict["DiffuseBump"]];
    foreach (Mesh mesh in model.GetMeshGroup(MeshGroupNames.MeshGroup4)) {
        effectsArmature.Parameters["BoneMatrices"].SetValue(armature.GetBoneMatrices(mesh));
        effectsArmature.Parameters["BumpSpecularAmount"].SetValue((float)mesh.RenderParams[0]);
        mesh.RenderSinglePass(effectsArmature, "DiffuseTexture", "BumpTexture");
    }

    if (!alphaPrepass) {
        graphicsDevice.RenderState.AlphaBlendEnable = true;
        graphicsDevice.RenderState.SourceBlend = Blend.SourceAlpha;
        graphicsDevice.RenderState.DestinationBlend = Blend.InverseSourceAlpha;

        graphicsDevice.RenderState.AlphaTestEnable = true;
        graphicsDevice.RenderState.AlphaFunction = CompareFunction.GreaterEqual;
        graphicsDevice.RenderState.ReferenceAlpha = 200;
    }

    // transparent RG_6 -- second render pass -- DepthBufferWriteEnable = true
    effectsArmature.CurrentTechnique = effectsArmature.Techniques[shaderDict["DiffuseBump"]];
    foreach (Mesh mesh in model.GetMeshGroup(MeshGroupNames.MeshGroup6)) {
        effectsArmature.Parameters["BoneMatrices"].SetValue(armature.GetBoneMatrices(mesh));
        effectsArmature.Parameters["BumpSpecularAmount"].SetValue((float)mesh.RenderParams[0]);
        mesh.RenderSinglePass(effectsArmature, "DiffuseTexture", "BumpTexture");
    }

    if (!alphaPrepass) {
        graphicsDevice.RenderState.AlphaBlendEnable = true;
        graphicsDevice.RenderState.SourceBlend = Blend.SourceAlpha;
        graphicsDevice.RenderState.DestinationBlend = Blend.InverseSourceAlpha;

        graphicsDevice.RenderState.AlphaTestEnable = true;
        graphicsDevice.RenderState.AlphaFunction = CompareFunction.Less;
        graphicsDevice.RenderState.ReferenceAlpha = 200;
    }

    // transparent RG_6 -- third render pass -- DepthBufferWriteEnable = false
    graphicsDevice.RenderState.DepthBufferWriteEnable = false;

    effectsArmature.CurrentTechnique = effectsArmature.Techniques[shaderDict["DiffuseBump"]];
    foreach (Mesh mesh in model.GetMeshGroup(MeshGroupNames.MeshGroup6)) {
        effectsArmature.Parameters["BoneMatrices"].SetValue(armature.GetBoneMatrices(mesh));
        effectsArmature.Parameters["BumpSpecularAmount"].SetValue((float)mesh.RenderParams[0]);
        mesh.RenderSinglePass(effectsArmature, "DiffuseTexture", "BumpTexture");
    }
}
User avatar
semory
Site Admin
Posts: 7755
Joined: Sat Aug 04, 2012 7:38 pm
Custom Rank: Kitty pu tu tu lay!
Location: Torrance, CA

Re: Xnalara and ray-tracing

Post by semory »

https://github.com/semory/xnalara-rt
Fixed the bump maps, got alpha sorting sorta figured out hahaha. I am going to integrate this with another old project I used to work on way back when hehehe, you shall soon see lmfao!

Galey, can I have dat pose file? It was not in the OptionalFiles.7z.

BTW, XNAaraL, yes, I experimented once with some techniques, though not depth peeling or stochastic rendering, because I felt they were too slow. I tried Weighted Blended Order-Independent Transparency once on XNALara models and it looked horrible, because even a small amount of error won't fool anybody! Not only do you have to render all opaque parts first, turn off z-writes, render the translucent parts into two separate accumulation buffers, and then blend the final result while tweaking all kinds of parameters they use to diffuse the error from the order-dependent parts around -- it also cut my frame rate by 75% lol. You read these journal articles and they have all these beautiful pictures, but you try it on XNALara models and it looks like crap, because everybody knows what these models are supposed to look like, so even a little bit of error gets noticed! You ever try depth peeling in XPS? I do believe the article was based on DX9 hardware?
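For reference, the final WBOIT composite looks roughly like this (a CPU-side sketch of the McGuire/Bavoil resolve, illustration only, not my actual code; the depth weight is just one plausible falloff, the paper proposes several):

Code: Select all

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// each translucent fragment adds into a weighted accumulation buffer; the
// depth-based weight makes nearer fragments dominate without any sorting
RGBA resolve_wboit(const std::vector<RGBA>& fragments,
                   const std::vector<float>& depths, // in [0,1]
                   RGBA background)
{
 float accum_r = 0.0f, accum_g = 0.0f, accum_b = 0.0f, accum_a = 0.0f;
 float revealage = 1.0f; // product of (1 - alpha): how much background shows
 for(std::size_t i = 0; i < fragments.size(); i++) {
  const RGBA& c = fragments[i];
  float w = c.a*std::clamp(0.03f/(1e-5f + std::pow(depths[i], 4.0f)), 1e-2f, 3e3f);
  accum_r += c.r*w;
  accum_g += c.g*w;
  accum_b += c.b*w;
  accum_a += w;
  revealage *= (1.0f - c.a);
 }
 float inv = 1.0f/std::max(accum_a, 1e-5f); // normalize the weighted sum
 RGBA out;
 out.r = accum_r*inv*(1.0f - revealage) + background.r*revealage;
 out.g = accum_g*inv*(1.0f - revealage) + background.g*revealage;
 out.b = accum_b*inv*(1.0f - revealage) + background.b*revealage;
 out.a = 1.0f;
 return out;
}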
test-1920x1080a.png
test-1920x1080b.png
User avatar
raidergale
Porter
Posts: 2632
Joined: Sun Nov 04, 2012 7:36 am
Custom Rank: Marie's my waifu kthx
Location: Italy

Re: Xnalara and ray-tracing

Post by raidergale »

Sure thing, the pose for Lady D is here :yes:
Post Reply