akhanubis-eng
Blog - Graphics programming
9 posts
A blog made by a student of Information Systems Engineering about graphics programming, music and books. Blog in Spanish: akhanubis.tumblr.com
akhanubis-eng · 13 years ago
Still alive
Although it has been a while since my last post on this blog, I'm still working with SlimDX and DirectX 11.
You can download the latest version of my work (which includes queries, normal mapping, geometry shaders and compute shaders) at the link on the right and, if you are quite familiar with Spanish, read the entries of my blog at akhanubis.tumblr.com. Anyway, I'll try to update this blog during this week :D
akhanubis-eng · 13 years ago
SlimDX - DirectX 11 - Frustum corners, sphere and shader
Source code (take a look at CicloBase.HandleInput() for the controls)
Just like in the previous post, I'll show you three different approaches for obtaining, in this case, the vertices of the frustum. Then, I'll briefly talk about its bounding sphere and vertex shader.
Corners
Taking into account that the volume has reflection symmetry around both the XZ and YZ planes in View Space, we only need to find one corner of the far plane and one of the near plane, and then derive the other six using the symmetry planes.
In case you don't remember, in View Space the X, Y and Z axes are the camera's right, up and backward directions (I'm working with right-handed matrices), and the camera sits at the origin.
The yellow dots in the following image show the corners we are going to find with each method.
Extracting the corners using trigonometry
The position P of the far right top corner can be decomposed into the three vectors/components drawn in red in the image.
The Z coordinate can be obtained straight away because it is equal to the distance from the camera to the far plane (far distance). Since we are using a right-handed coordinate system, its signed value will be -farDistance.
From the right triangle whose legs are farDistance and the Y vector, we can calculate the length of the latter. The angle between the top plane and the bottom plane is called the vertical field of view, and thus the angle between the top plane and the -Z axis will be FOV_Y / 2.
\[\large{tan(\frac{fov_{y}}{2})=\frac{y}{Z_{far}}\Rightarrow y=Z_{far} tan(\frac{fov_{y}}{2})}\]
Similarly, the X coordinate can be obtained as:
\[\large{tan(\frac{fov_{x}}{2})=\frac{x}{Z_{far}}\Rightarrow x=Z_{far} tan(\frac{fov_{x}}{2})}\]
Or even better, considering that \(aspectRatio = \frac {width} {height}\):
\[\large x=aspectRatio \times y \Rightarrow x= aspectRatio \times Z_{far} tan(\frac{fov_{y}}2)\]
For the near corner, the algorithm remains unchanged except that we replace \(Z_{far}\) with \(Z_{near}\).
It's been pretty easy so far, but how do we extract the values of FOV_X, FOV_Y, farDistance and nearDistance from the projection matrix? Normally, we will be using only one frustum in our app, corresponding to the active camera, so we could just save the values of the constants used when setting up our projection matrix. Anyway, knowing how to get them from the matrix can't hurt us.
As MSDN states, Matrix.PerspectiveFovRH() returns the following matrix:
\[\begin{bmatrix}\frac 1{aspectRatio \times tan\left (\frac{fov_y}2 \right )}& 0 & 0 & 0 \\\ 0 & \frac 1 {tan\left (\frac{fov_y}2  \right )} & 0 & 0 \\\ 0 & 0 & \frac {Z_{far}}{Z_{near}-Z_{far}} & -1 \\\ 0 & 0 &  \frac{Z_{near} Z_{far}}{Z_{near}-Z_{far}}  & 0 \end{bmatrix}\]
We can then get the values with:
float tanY = 1f / inProjection.M22; // tan(fovY / 2)
float tanX = 1f / inProjection.M11; // aspectRatio * tan(fovY / 2)
float fNear = inProjection.M43 / inProjection.M33;
float fFar = fNear * inProjection.M33 / (1f + inProjection.M33);
tanX equals \(aspectRatio \times tan\left (\frac{fov_y}2 \right )\) because:
\[\left (width = 2Z_{far}tan\left (\frac {fov_x} 2  \right )    \right ) \wedge \left (height= 2Z_{far}tan\left (\frac {fov_y} 2  \right )  \right ) \\\ \left (2Z_{far} = \frac{width}{tan\left (\frac {fov_x} 2  \right )}   \right ) \wedge \left (2Z_{far} = \frac{height}{tan\left (\frac {fov_y} 2  \right )}  \right ) \\\ \frac{width}{tan\left (\frac {fov_x} 2  \right )} = \frac{height}{tan\left (\frac {fov_y} 2  \right )} \]
\(\large{aspectRatio = \frac{width}{height} = \frac{tan\left (\frac {fov_x} 2  \right )}{tan\left (\frac {fov_y} 2  \right )} \\\ tan\left (\frac {fov_x} 2  \right ) = aspectRatio \times tan\left (\frac {fov_y} 2  \right )}\)
Finally, combining the 3 coordinates for each corner:
Vector3 extremoFar = new Vector3(tanX, tanY, -1) * fFar;
Vector3 extremoNear = new Vector3(tanX, tanY, -1) * fNear;
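The corner math above only needs the tangents, but if you ever want the angles themselves back, they follow directly from tanX and tanY (a small aside of mine, not part of the original snippet):
float fovY = 2f * (float)Math.Atan(tanY);
float fovX = 2f * (float)Math.Atan(tanX); // tanX = tan(fovX / 2), as derived above
float aspectRatio = tanX / tanY;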
Extracting the corners using plane intersections
A point can be defined as the intersection of 3 planes. Although solving the intersection of 3 planes may seem cumbersome (it involves a 3x3 linear system), the frustum planes are particularly simple: their normals have null components, and the side planes contain the origin (\(D=0\)).
For example, in order to get the far top right corner, we will be using the far, top and right planes:
\(\large{N_{far,z} z + D = 0 \\\ N_{top,y} y + N_{top,z} z = 0 \\\ N_{right,x} x + N_{right,z} z =0}\)
Z can be extracted directly from the first equation and the other coordinates from the second and third ones using the Z value obtained:
extremoFar.Z = -far.D / far.Normal.Z;
extremoFar.X = -right.Normal.Z * extremoFar.Z / right.Normal.X;
extremoFar.Y = -top.Normal.Z * extremoFar.Z / top.Normal.Y;
extremoNear.Z = -near.D / near.Normal.Z;
extremoNear.X = -right.Normal.Z * extremoNear.Z / right.Normal.X;
extremoNear.Y = -top.Normal.Z * extremoNear.Z / top.Normal.Y;
Extracting the corners by transforming them from Clip Space
The idea behind this method is the same as the one described in my previous post: generate the vector in Clip Space and then transform it back to View Space using the inverse of the projection matrix. Remembering that the viewing volume in Clip Space is an AABB, the corners can be found with:
Matrix inverseProjection = Matrix.Invert(inProjection);
Vector3 extremoFar = Vector3.TransformCoordinate(new Vector3(1f, 1f, 1f), inverseProjection);
Vector3 extremoNear = Vector3.TransformCoordinate(new Vector3(1f, 1f, 0f), inverseProjection);
Vertex Buffer and Vertex Shader
Using extremoFar, extremoNear and the reflection symmetry, we can then write the 8 vertices/corners to the vertex buffer:
d.Write(extremoFar); // Far Right Top
d.Write(new Vector3(extremoFar.X, -extremoFar.Y, extremoFar.Z)); // Far Right Bottom
d.Write(new Vector3(-extremoFar.X, -extremoFar.Y, extremoFar.Z)); // Far Left Bottom
d.Write(new Vector3(-extremoFar.X, extremoFar.Y, extremoFar.Z)); // Far Left Top
d.Write(extremoNear); // Near Right Top
d.Write(new Vector3(extremoNear.X, -extremoNear.Y, extremoNear.Z)); // Near Right Bottom
d.Write(new Vector3(-extremoNear.X, -extremoNear.Y, extremoNear.Z)); // Near Left Bottom
d.Write(new Vector3(-extremoNear.X, extremoNear.Y, extremoNear.Z)); // Near Left Top
d.Write(Vector3.Zero); // Eye
The vertex shader we will be using is the "hello world" shader which multiplies the input by the World, View and Projection matrices. Considering that the input vertices corresponding to the corners of the frustum are in View Space, the world matrix will be the inverse of the view matrix A related to the frustum we are rendering. On the other hand, the view and projection matrices will be the ones related to the active camera B:
inEffect.SetearParametro("MatricesBuffer", "matViewProjection", inViewProjection_cámaraB);
inEffect.SetearParametro("MatricesBuffer", "matWorld", inverseView_cámaraA);
Bounding Sphere
Although the bounding sphere could be generated using BoundingSphere.FromPoints(), its center and radius can also be found directly as:
centroVS = new Vector3(0, 0, (extremoFar.Z + extremoNear.Z) * 0.5f);
boundingSphereWS.Radius = (centroVS - extremoFar).Length();
At last!
That's all about frustum culling. The next topic will be normal mapping with multiple point lights and generation of tangents and bitangents (and maybe a post in the middle about a project I'm doing for the university involving QR code detection in an image).
A video showing the initial camera frustum and another frustum just lying there:
akhanubis-eng · 13 years ago
SlimDX - DirectX 11 - Extracting frustum planes
Extracting the planes from the columns of the projection matrix
My method is based on the implementation proposed in http://crazyjoke.free.fr/doc/3D/plane%20extraction.pdf. Looking at how a point in View Space is transformed to Clip Space, it's possible to obtain the equations of the planes from the columns of the projection matrix.
Something worth pointing out is that the coordinate space of the resulting planes depends on the matrix used for the extraction. Quoting the paper:
If the matrix used is the projection matrix P, then the algorithm gives the clipping planes in view space (i.e., camera space).
If the matrix used is equal to the combined view and projection matrices, then the algorithm gives the clipping planes in world space.
If the matrix used is equal to the combined world, view, and projection matrices, then the algorithm gives the clipping planes in object space.
I adapted the author's C example to C#:
private void GenerarPlanos_columnasmatriz(Matrix inProjection)
{
    // Based on http://crazyjoke.free.fr/doc/3D/plane%20extraction.pdf
    planesVS = new Plane[6];
    planesVS[(int)FrustumPlane.Near] = Plane.Normalize(new Plane(inProjection.get_Columns(2)));
    planesVS[(int)FrustumPlane.Far] = Plane.Normalize(new Plane(inProjection.get_Columns(3) - inProjection.get_Columns(2)));
    planesVS[(int)FrustumPlane.Left] = Plane.Normalize(new Plane(inProjection.get_Columns(3) + inProjection.get_Columns(0)));
    planesVS[(int)FrustumPlane.Right] = Plane.Normalize(new Plane(inProjection.get_Columns(3) - inProjection.get_Columns(0)));
    planesVS[(int)FrustumPlane.Top] = Plane.Normalize(new Plane(inProjection.get_Columns(3) - inProjection.get_Columns(1)));
    planesVS[(int)FrustumPlane.Bottom] = Plane.Normalize(new Plane(inProjection.get_Columns(3) + inProjection.get_Columns(1)));
}
Extracting the planes from the vertices of the frustum
I didn't write this algorithm, but it should be very simple. Each plane is constructed using 3 points (vertices of the frustum) that are contained in it. Each face of the frustum has 4 vertices, of which we choose 3. Then, it's just a matter of using the constructor:
new Plane(point1, point2, point3);
This method probably generates two vectors from the given points and computes their cross product to obtain the plane normal. Then, in order to obtain D, it should substitute the coordinates of any of the points into the X, Y, Z variables of the plane equation.
When using this method, we need to keep in mind that 3 points actually define 2 planes with opposite normals, depending on the order of the arguments of the constructor. The sphere-frustum collision test that I implemented requires the 6 planes to face inward, into the frustum, in order to work properly.
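To make that concrete, here's a minimal sketch of what the constructor presumably computes (my guess at the implementation, not SlimDX's actual source); note how swapping two arguments flips the normal:
Plane PlaneFromPoints(Vector3 p1, Vector3 p2, Vector3 p3)
{
    // two sides of the face sharing p1; their cross product is the normal
    Vector3 normal = Vector3.Normalize(Vector3.Cross(p2 - p1, p3 - p1));
    // substitute any of the points into N·P + D = 0 to get D
    float d = -Vector3.Dot(normal, p1);
    return new Plane(normal, d);
}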
Extracting the planes by transforming them from Clip Space
The following image (taken from MSDN) illustrates how the transformation from View Space to Clip Space works:
When transformed, the frustum becomes an AABB(axis aligned bounding box). This volume is much easier to work with so if we could define the planes in Clip Space and find a way of transforming them back to View Space, the problem would be solved.
Given an invertible transformation matrix \(M\) which goes from a coordinate system \(A\) to a coordinate system \(B\), its inverse (\(M^{-1}\)) will transform a vector backwards. Therefore, we will be using the inverse projection matrix, which transforms a vector from Clip Space to View Space.
Furthermore, the planes of an AABB are really simple to determine. Their normals have two null components, and D equals 1 for every plane except the near plane (D=0).
Finally, the planes in Clip Space are hardcoded and then transformed to View Space using inverseProjection:
private void GenerarPlanos_clipspace(Matrix inProjection)
{
    Matrix inverseProjection = Matrix.Invert(inProjection);
    planesVS = new Plane[6];
    // Z + 0 = 0
    planesVS[(int)FrustumPlane.Near] = Plane.Normalize(Plane.Transform(new Plane(0f, 0f, 1f, 0f), inverseProjection));
    // -Z + 1 = 0
    planesVS[(int)FrustumPlane.Far] = Plane.Normalize(Plane.Transform(new Plane(0f, 0f, -1f, 1f), inverseProjection));
    // X + 1 = 0
    planesVS[(int)FrustumPlane.Left] = Plane.Normalize(Plane.Transform(new Plane(1f, 0f, 0f, 1f), inverseProjection));
    // -X + 1 = 0
    planesVS[(int)FrustumPlane.Right] = Plane.Normalize(Plane.Transform(new Plane(-1f, 0f, 0f, 1f), inverseProjection));
    // -Y + 1 = 0
    planesVS[(int)FrustumPlane.Top] = Plane.Normalize(Plane.Transform(new Plane(0f, -1f, 0f, 1f), inverseProjection));
    // Y + 1 = 0
    planesVS[(int)FrustumPlane.Bottom] = Plane.Normalize(Plane.Transform(new Plane(0f, 1f, 0f, 1f), inverseProjection));
}
In terms of speed, the first method is the fastest (the title of the paper is "Fast Extraction..." :P). Nevertheless, I'm staying with the third one because it's more abstract and can be generalized (we will use it again in order to obtain the vertices of the frustum). Also, the viewing frustum should be generated only once at the beginning, so the time spent on it is irrelevant.
akhanubis-eng · 13 years ago
SlimDX - DirectX 11 - Frustum Culling
Frustum culling is the process of removing objects that lie completely outside the viewing volume of the camera. The shape of this volume is a frustum built from 6 planes. Enough with the theory, google it if you wanna know more :P
Collision
In order to select the objects that won't be sent to the rendering pipeline in each frame, we must do a collision test between the viewing frustum and the bounding sphere of each model. Considering that the 6 planes of the frustum are facing into the volume:
If the sphere is behind at least one of the planes, it's completely outside the frustum.
If the sphere intersects at least one plane, it's intersecting the frustum (i.e. part of it is inside and the rest outside).
If the sphere is ahead of (i.e. inside the positive half-space of) all 6 planes, it's completely inside the frustum.
foreach (Plane plane in PlanesWS)
{
    PlaneIntersectionType resultado = Plane.Intersects(plane, inSphere);
    if (resultado == PlaneIntersectionType.Back)
        return CollisionResult.IsOutside;
    else if (resultado == PlaneIntersectionType.Intersecting)
        return CollisionResult.Intersects;
}
return CollisionResult.IsInside;
To improve performance with little effort, the frustum can be bounded by a sphere, and a sphere-sphere collision test, which is faster than a sphere-frustum test, can be done before testing against each plane. This way, for most objects the method won't even get to the sphere-frustum test, discarding them at the beginning and saving CPU time:
if ((inSphere.Center - boundingSphereWS.Center).LengthSquared() > (inSphere.Radius + boundingSphereWS.Radius) * (inSphere.Radius + boundingSphereWS.Radius))
    return CollisionResult.IsOutside;
foreach (Plane plane in PlanesWS)
    ...
FPS comparison (top: without frustum culling, center: with frustum culling, bottom: with frustum culling and sphere-sphere test):
View Space
The collision test I presented uses objects whose coordinates are in World Space. Whenever the view or projection matrices change (e.g. movement or rotation of the camera), the viewing frustum and its bounding sphere should be recalculated. To avoid this, a copy of the frustum is stored in View Space coordinates, which remain constant no matter how the view matrix changes. Every time the camera updates its view matrix, you must make sure you update your World Space version of the frustum by transforming the data of the View Space one. I'm not sure, but it seems logical to update the values instead of re-extracting them (although that would also be quite simple).
As you can see, the frustum depends heavily on the camera, so its handling should rely on the latter. Because I'm not the author of the camera class, I'm avoiding modifying it unless it's strictly necessary, which is not the case. So, as my main class ended up handling the frustum, I just need to make sure that whenever the camera is updated, the frustum is updated too.
public void Update(float elapsed)
{
    UpdateInputStates();
    camara.Update(elapsed);
    camaraFrustum.UpdateWSBoundingVolumes(camara.ViewMatrix);
    HandleInput(elapsed);
}
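For reference, a minimal sketch of what UpdateWSBoundingVolumes() might do internally, assuming the same Plane.Transform pattern used in the Clip Space extraction post (planesWS, planesVS and centroVS are illustrative names):
public void UpdateWSBoundingVolumes(Matrix inView)
{
    Matrix inverseView = Matrix.Invert(inView); // takes points from View Space back to World Space
    for (int i = 0; i < 6; i++)
        planesWS[i] = Plane.Normalize(Plane.Transform(planesVS[i], inverseView));
    boundingSphereWS.Center = Vector3.TransformCoordinate(centroVS, inverseView);
    // the radius is unaffected: a view matrix contains no scaling
}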
In the next two posts, I'll be discussing a few methods for extracting the frustum data from the matrices. If you know some Spanish, you can peek at the Spanish version of the blog (it always gets updated first :P).
akhanubis-eng · 13 years ago
SlimDX - DirectX 11 - Bounding Sphere
I've been quite busy lately, but finally Diablo's evil forces have been subdued and Tyrael is once again an archangel (oops!), so I can get back to posting regularly.
I've added frustum culling to my project/framework and, in order to do that, I have added a Frustum class and some auxiliary methods for SlimDX.BoundingSphere. There is a lot to talk about, so it's going to take several posts.
Camera
There were some issues creating the matrices: the methods were using fixed constants instead of the parameters received. Although common sense dictates that the field of view related to the Y axis should be a parameter, dhpoware's Camera class was written around FOVx. I didn't want to tweak another person's code, so I decided to use FOVy to get FOVx and recalculate FOVy when calling Matrix.PerspectiveFovRH().
Mesh, Model, EsceneModel and Escene
I tried to make use of inheritance and interfaces to add polymorphism to those classes. For example, Model and EsceneModel share the drawing-related methods, and Model and Escene the movement-related ones.
EsceneModel:
protected override Matrix World
{
    get { return parent.World; }
}
protected override bool FrustumCulled(Frustum f)
{
    if (parent.dirtyBoundings)
        Extensions.TransformarBoundingSphere(ref boundingSphereTransformada, boundingSphereOriginal, World);
    return f.TestCollision(boundingSphereTransformada) == Frustum.CollisionResult.IsOutside;
}
Model:
protected override Matrix World
{
    get { return Matrix.Scaling(Scaling) * Matrix.RotationQuaternion(Rotation) * Matrix.Translation(Position); }
}
protected override bool FrustumCulled(Frustum f)
{
    if (dirtySphere)
        Extensions.TransformarBoundingSphere(ref boundingSphereTransformada, boundingSphereOriginal, World);
    return f.TestCollision(boundingSphereTransformada) == Frustum.CollisionResult.IsOutside;
}
Bounding Sphere
Transform
SlimDX.BoundingSphere lacks a Transform() method. Although transforming a sphere is just translating its center and scaling its radius, Transform() becomes really useful when, for example, you need to rotate the sphere around an arbitrary axis that doesn't contain the center of the sphere.
All the instances of EsceneModel which were generated from the same scene (i.e. share the attribute parent) have their vertex coordinates defined in the same Object Space and use the parent's World matrix at the moment of drawing. As a result, when rotating the scene, each sphere needs to be rotated around the axis of the scene. In order to accomplish that, I wrote the following method:
public static void TransformarBoundingSphere(ref BoundingSphere inSphere, BoundingSphere inSphereOriginal, Matrix inWorld)
{
    Vector3 scale;
    Quaternion rotation;
    Vector3 translation;
    inWorld.Decompose(out scale, out rotation, out translation);
    // If the scaling is not uniform, the result could end up being a non-minimal bounding volume.
    // AVOID NON-UNIFORM SCALING (or directly throw an exception)
    float maxScaling = Math.Max(Math.Max(scale.X, scale.Y), scale.Z);
    inSphere.Center = Vector3.Transform(inSphereOriginal.Center, inWorld).ToVector3();
    inSphere.Radius = inSphereOriginal.Radius * maxScaling;
}
inSphereOriginal stores the center and radius of the original sphere in Object Space, while inSphere returns updated with the transformed sphere in World Space. This extension method uses two spheres to avoid creating one on each call.
Draw
Another extension method for SlimDX.BoundingSphere which renders a sphere on the screen:
public static void Draw(this BoundingSphere inSphere, Effect inEffect, Matrix inViewProjection)
{
    var contexto = CicloBase.Instance.device.ImmediateContext;
    contexto.InputAssembler.InputLayout = LinedSphere.Instance.sphereLayout;
    contexto.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineStrip;
    contexto.InputAssembler.SetVertexBuffers(0, LinedSphere.Instance.sphereBinding);
    contexto.InputAssembler.SetIndexBuffer(LinedSphere.Instance.sphereIndex, Format.R16_UInt, 0);
    inEffect.SetearParametro("MatricesBuffer", "matViewProjection", inViewProjection);
    inEffect.SetearParametro("SpherePerRenderBuffer", "vSphereCenter", inSphere.Center);
    inEffect.SetearParametro("SpherePerRenderBuffer", "fSphereRadius", inSphere.Radius);
    int passes = inEffect.GetTechniqueByName("ShowBoundingSphere").Description.PassCount;
    for (int i = 0; i < passes; i++)
        inEffect.GetTechniqueByName("ShowBoundingSphere").GetPassByIndex(i).Apply(contexto);
    contexto.DrawIndexed(LinedSphere.Instance.sphereCantIndex, 0, 0);
    contexto.InputAssembler.SetIndexBuffer(null, Format.Unknown, 0);
}
VertexBuffer
The sphere is represented by its projections (just the circumferences) on the XY, YZ and ZX planes. Each circle is rendered as a set of vertices joined by lines. The amount of vertices used for each quarter of circumference can be tweaked with LinedSphere.sphereCantVertAuxPorTramo.
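As an illustration, one of the three circles could be generated like this (a sketch for the XY plane with unit radius; the YZ and ZX loops are analogous, and stream is an assumed DataStream):
int vertsPerCircle = 4 * (sphereCantVertAuxPorTramo + 1); // 4 quarters, each with its auxiliary vertices
for (int i = 0; i < vertsPerCircle; i++)
{
    float angle = (float)(2.0 * Math.PI * i / vertsPerCircle);
    stream.Write(new Vector3((float)Math.Cos(angle), (float)Math.Sin(angle), 0f)); // XY circle
}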
To avoid having one vertex buffer for each sphere in my app, I wrote a shader which, given a vertex buffer related to a sphere located at (0,0,0) and with radius = 1, translates its center by an amount defined by the parameter vSphereCenter and scales its radius by fSphereRadius:
float4 VShaderShowBoundingSphere(float4 Pos : POSITION) : SV_POSITION
{
    // The vertex buffer used should already have a normalized radius; I normalize anyway, just in case
    float4 posW = float4(vSphereCenter + normalize(Pos.xyz) * fSphereRadius, 1);
    return mul(posW, matViewProjection);
}
If you take a look at the method Draw that I mentioned above, you'll see that vSphereCenter equals the sphere center position in WS and fSphereRadius its radius.
LineStrip
Although the most logical thing to do is to generate the vertex buffer using three for loops (one for each plane) and draw the sphere using PrimitiveTopology.LineList, I was bored, so I decided to overkill this matter and use PrimitiveTopology.LineStrip, minimizing the amount of index entries required. If every line starts where the previous one ends, the model can be drawn "without lifting the pencil" (I don't know if that expression is as common in English as it is in Spanish), i.e., considering the model as a connected undirected graph, it should have an Eulerian path.
A connected undirected graph has an Eulerian trail if and only if at most two vertices have odd degree.
A connected undirected graph has an Eulerian cycle (a path that starts and ends at the same vertex) if and only if every vertex has even degree.
Although our model has 6 fixed vertices plus N extra ones in each quarter of circumference, each of these extras has degree = 2 and, considering that once you start travelling along a quarter of circumference there is no way to go but forward until you reach one of the fixed vertices, these extra vertices can be ignored.
Each fixed vertex has degree = 4, so our sphere has an Eulerian cycle. Now that we are sure that we won't be wasting our time, we can try to build one:
Finally, the amount of index entries will be:
sphereCantIndex = (quantityVertAuxPerQuarterCirc + 1) * 12 + 1;
while the amount when using PrimitiveTopology.LineList would be:
sphereCantIndex = (quantityVertAuxPerQuarterCirc + 1) * 12 * 2;
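A quick sanity check with an illustrative value:
int quantityVertAuxPerQuarterCirc = 4; // e.g. 4 auxiliary vertices per quarter
int stripIndices = (quantityVertAuxPerQuarterCirc + 1) * 12 + 1; // 61
int listIndices = (quantityVertAuxPerQuarterCirc + 1) * 12 * 2;  // 120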
Video
akhanubis-eng · 13 years ago
Visual Basic - My first game
Source code (run NAVE.EXE with admin rights after having installed the fonts (right click → Install))
A friend and I wrote this game back in 2006, while we were at secondary school (I was 16 :P). It was the final project of a programming-related subject.
All the drawing was done using BitBlt from the Win32 API and masks.
akhanubis-eng · 13 years ago
2701
I'm currently reading Cryptonomicon by Neal Stephenson. Almost at the beginning of the book, Alan Turing speaks about the number 2701, which is the product of the prime numbers 37 and 73. Later on, 2701 is used to identify a Special Marine Forces Division.
While I was looking at a paper about FXAA, something caught my attention:
NVIDIA Corporation
2701 San Tomas Expressway
What are the odds? :P
akhanubis-eng · 13 years ago
SlimDX & DX 11 - Wavefront .Obj Loader
Source code (VS 2010)
I've been struggling with this for a while, trying to make it as flexible as possible. I used 3 different models/scenes (each one with a different vertex data layout / .obj syntax) for testing: a Team Fortress 2 map converted with GCFScape, Crafty and Blender, a car model, and Altair from Assassin's Creed. All of them can be loaded properly in the app without having to tweak the .obj file, just the .mtl (Altair's model didn't even have one).
Syntax comparison (indentation is used for illustrative purposes :P):
gullywash.obj
mtllib $materialsFile
(o $nombreObjeto
    (v $coordX $coordY $coordZ)[0..N]
    (vt $coordU $coordV)[0..N]
    (vn $coordX $coordY $coordZ)[0..N]
    (usemtl $material
        (s off)?
        (f $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal)[1..N]
    )[1..N]
)[1..N]
rapide.obj
mtllib $materialsFile
(v $coordX $coordY $coordZ)[0..N]
(vn $coordX $coordY $coordZ)[0..N]
(vt $coordU $coordV)[0..N]
(usemtl $material
    g $nombreGrupo
    (f $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal)[1..N]
)[1..N]
altair.obj
mtllib $materialsFile
(g $nombreGrupo
    usemtl $material
    (v $coordX $coordY $coordZ)[0..N]
    (vt $coordU $coordV)[0..N]
    (f $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal $indicePosicion/$indiceTexCoords/$indiceNormal)[1..N]
)[1..N]
.mtl
The first useful line read by the parser contains the path of the materials file (.mtl). ParseMtllib() handles this file, returning a dictionary of (materialName, Texture2D). The texture path is saved in the file as the diffuse map of the material (map_Kd). The models of a scene are later divided into solid and translucent, using the file extension (.jpg or .png) as reference.
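A simplified sketch of what ParseMtllib() boils down to (the real method is in the source code above; the details here are assumptions): scan the .mtl for newmtl/map_Kd pairs and load each diffuse map as a Texture2D.
Dictionary<string, Texture2D> ParseMtllib(Device device, string mtlPath)
{
    var materials = new Dictionary<string, Texture2D>();
    string currentMaterial = null;
    foreach (string rawLine in File.ReadLines(mtlPath))
    {
        string line = rawLine.Trim();
        if (line.StartsWith("newmtl "))
            currentMaterial = line.Substring(7).Trim(); // material name
        else if (line.StartsWith("map_Kd ") && currentMaterial != null)
            materials[currentMaterial] = Texture2D.FromFile(device, line.Substring(7).Trim()); // diffuse map
    }
    return materials;
}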
.obj
The .obj parser iterates over the whole file using several while blocks. I tried to come up with a neat and simple algorithm, but it turned out to be quite the opposite. Depending on the normal loading behavior chosen to parse a file, the index data might end up being useless (vertex buffer size = triangle count * 3, index buffer = 0, 1, 2, ..., vertex buffer size - 1).
Generating Normals
If the model we want to load lacks vertex normals (as Altair's does), they can be generated. First, the surface normal is calculated as the cross product of two sides of the face sharing the origin (see image below). Then, we can set the surface normal as the normal of the three vertices. For a more accurate result, we could instead average the surface normals adjacent to a given vertex and set that as the vertex normal (as shown in the image below the image below :P).
You can control the way the parser gets the normals' data by passing a flag when invoking it:
public enum NormalLoad { Ignore, Load, GenerateSurface, GenerateVertex }
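As an illustration, a minimal sketch of the two generation modes (names and inputs are assumptions): GenerateSurface assigns the face normal to its three vertices, while GenerateVertex accumulates and averages the normals of adjacent faces.
Vector3 SurfaceNormal(Vector3 a, Vector3 b, Vector3 c)
{
    // cross product of the two sides sharing the first vertex
    return Vector3.Normalize(Vector3.Cross(b - a, c - a));
}
Vector3[] VertexNormals(Vector3[] positions, int[] indices)
{
    var normals = new Vector3[positions.Length];
    for (int i = 0; i < indices.Length; i += 3)
    {
        Vector3 n = SurfaceNormal(positions[indices[i]], positions[indices[i + 1]], positions[indices[i + 2]]);
        normals[indices[i]] += n;     // accumulate the face normal
        normals[indices[i + 1]] += n; // on each of its vertices
        normals[indices[i + 2]] += n;
    }
    for (int i = 0; i < normals.Length; i++)
        normals[i] = Vector3.Normalize(normals[i]); // average by renormalizing
    return normals;
}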
Reading normals from file (left), generating surface normals for every vertex of a face (center) and generating vertex normals (right):
Altair's model (this one comes without normals):
TF2 Map
It seems that some models which are rendered using a particular shader (reflection, perhaps?) have a transparent/translucent texture. However, the models are still being loaded:
In order to reduce the alpha blending problems caused by incorrect sorting, I disabled writes to the Z-buffer when rendering translucent models:
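The depth state that implies, sketched with the same DepthStencilStateDescription pattern shown in the framework post (depth testing stays enabled, Z writes are turned off):
DepthStencilStateDescription dsStateDesc = new DepthStencilStateDescription()
{
    IsDepthEnabled = true,                // still test against the Z-buffer
    IsStencilEnabled = false,
    DepthWriteMask = DepthWriteMask.Zero, // but never write to it
    DepthComparison = Comparison.Less,
};
depthEnabledWritesDisabled = DepthStencilState.FromDescription(device, dsStateDesc);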
Finally, I wrote an alpha testing shader (discard the pixel if alpha == 0) to improve the look of the grass:
float4 PShaderAlphaTest(VS_OUTPOSTXR Input) : SV_Target
{
    float4 color = texColorMap.Sample(samLinearWrap, Input.Txr);
    clip(color.a - 0.01f);
    return color;
}
Limitations
Only loads diffuse maps (I'm thinking of adding normal maps).
Duplicated vertex buffer entries, depending on the normal generation technique chosen.
Depending on the .obj syntax, there could be unused vertices in a vertex buffer.
If a texture is missing, the models using that texture's material are ignored.
Doesn't support N-sided polygons (import the .obj into Blender and export it back, checking the "triangulate faces" box).
akhanubis-eng · 13 years ago
SlimDX & DX 11 - Framework
Source code (VS 2010)
This post is just a bunch of thoughts and interesting things about a framework I've been working on for the last few days. It's supposed to be the next logical step after having read SlimDX Tutorials 1, 2 and 3.
Depth buffer
There is nothing really interesting to say about this, so I just want to show you the code needed to get our depth buffer working.
Creating a DB:
Format depthFormat = Format.D32_Float;
Texture2DDescription depthBufferDesc = new Texture2DDescription
{
    ArraySize = 1,
    BindFlags = BindFlags.DepthStencil,
    CpuAccessFlags = CpuAccessFlags.None,
    Format = depthFormat,
    Height = RenderForm.ClientSize.Height,
    Width = RenderForm.ClientSize.Width,
    MipLevels = 1,
    OptionFlags = ResourceOptionFlags.None,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default
};
var DepthBuffer = new Texture2D(device, depthBufferDesc);
DepthStencilViewDescription dsViewDesc = new DepthStencilViewDescription
{
    ArraySize = 0,
    Format = depthFormat,
    Dimension = DepthStencilViewDimension.Texture2D,
    MipSlice = 0,
    Flags = 0,
    FirstArraySlice = 0
};
renderTargetDepthStencil = new DepthStencilView(device, DepthBuffer, dsViewDesc);
Setting it as target of the OutputMerger:
contexto.OutputMerger.SetTargets(renderTargetDepthStencil, renderTargetBackBuffer);
Enabling depth testing:
DepthStencilStateDescription dsStateDesc = new DepthStencilStateDescription()
{
    IsDepthEnabled = true,
    IsStencilEnabled = false,
    DepthWriteMask = DepthWriteMask.All,
    DepthComparison = Comparison.Less,
};
depthEnabledStencilDisabled = DepthStencilState.FromDescription(device, dsStateDesc);
contexto.OutputMerger.DepthStencilState = depthEnabledStencilDisabled;
Alpha Blending
Every DX version handles alpha blending in a different way. DX 9 doesn't let you control blending per render target, so AB is quite simple there. DX 10, on the other hand, allows you to enable/disable AB for each RT you use simultaneously, but with a major drawback: BlendOp, SourceBlend, DestBlend, etc. must be the same for every RT.
Fortunately (?), in DX 11 you are able to set every AB flag for each RT. If you take a look at BlendStateDescription, you will notice that it has an array of eight RenderTargetBlendDescription.
BlendStateDescription blendStateDesc = new BlendStateDescription()
{
    IndependentBlendEnable = false,
    AlphaToCoverageEnable = false,
};
Using IndependentBlendEnable = false, we only need to set up the first RT description (BlendStateDescription.RenderTargets[0]) and it will be copied to the other seven. By default, Alpha Blending is disabled.
blendStateDesc.RenderTargets[0] = new RenderTargetBlendDescription();
blendStateDesc.RenderTargets[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
blendDisabled = BlendState.FromDescription(device, blendStateDesc);
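For comparison, a sketch of what one of the enabled states (the classic SourceAlpha/InverseSourceAlpha blend listed in States.cs below) could look like:
blendStateDesc.RenderTargets[0] = new RenderTargetBlendDescription
{
    BlendEnable = true,
    SourceBlend = BlendOption.SourceAlpha,
    DestinationBlend = BlendOption.InverseSourceAlpha,
    BlendOperation = BlendOperation.Add,
    SourceBlendAlpha = BlendOption.One,
    DestinationBlendAlpha = BlendOption.Zero,
    BlendOperationAlpha = BlendOperation.Add,
    RenderTargetWriteMask = ColorWriteMaskFlags.All
};
blendEnabledSourceAlphaInverseSourceAlpha = BlendState.FromDescription(device, blendStateDesc);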
On the other hand, if we enable IndependentBlendEnable, we must set the flags for every RT we'll be using, and the remaining ones will end up with default values.
Once the BlendState has been generated, we pass it to the OM:
contexto.OutputMerger.BlendState = States.Instance.blendDisabled;
States.cs
I like the way XNA gives you simple access to the most common depth, blend and rasterizer states without having to create them in between rendering frames, so I made a Singleton which offers the same behavior.
public DepthStencilState depthEnabledStencilDisabled { get; private set; }
public DepthStencilState depthDisabledStencilDisabled { get; private set; }
public RasterizerState cullNoneFillSolid { get; private set; }
public RasterizerState cullNoneFillWireframe { get; private set; }
public RasterizerState cullBackFillSolid { get; private set; }
public RasterizerState cullBackFillWireframe { get; private set; }
public RasterizerState cullFrontFillSolid { get; private set; }
public RasterizerState cullFrontFillWireframe { get; private set; }
public BlendState blendDisabled { get; private set; }
public BlendState blendEnabledSourceAlphaInverseSourceAlpha { get; private set; }
public BlendState blendEnabledSourceAlphaDestinationAlpha { get; private set; }
public BlendState blendEnabledOneOne { get; private set; }
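The Singleton itself is just the usual shape; a bare sketch (the actual state creation uses the FromDescription() calls shown earlier):
public sealed class States
{
    public static States Instance { get; private set; }
    public static void Create(Device device)
    {
        Instance = new States(device);
    }
    private States(Device device)
    {
        // build every state once here, e.g.:
        // depthEnabledStencilDisabled = DepthStencilState.FromDescription(device, dsStateDesc);
    }
    // ... the properties listed above ...
}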
Anyway, I suppose the right thing to do is to set the states in the technique11 definition inside the .fx file whenever possible.
Effect
If you take a look at Tutorial 3, you'll see that, in order to set up a pipeline, it compiles every stage separately. Although this approach should give you more flexibility and reusability of the shaders developed, I prefer using the Effect class.
var efectoCompilado = ShaderBytecode.CompileFromFile(effectName, null, "fx_5_0", ShaderFlags.None, EffectFlags.None);
effect = new Effect(device, efectoCompilado);
If you are familiar with XNA's version of Effect, understanding this one should be really easy.
Setting parameters values:
effect.GetVariableByName("texColorMap").AsResource().SetResource(textureView);
effect.GetConstantBufferByName("PerRenderBuffer").GetMemberByName("matWorldViewProjection").AsMatrix().SetMatrix(inWorld * inViewProjection);
Applying a pass:
effect.GetTechniqueByName("Textured").GetPassByIndex(0).Apply(contexto);
contexto.DrawIndexed(indexCount, 0, 0);
ElapsedTime
I'm not sure if this is worth mentioning, but I used QueryPerformanceCounter and QueryPerformanceFrequency for time measuring.
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceCounter(out long lPerformanceCount);
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceFrequency(out long lFrequency);
public float ElapsedTime()
{
    long tickActual;
    QueryPerformanceCounter(out tickActual);
    float elapsed = (float)(tickActual - tickAnterior) / frecuencia;
    tickAnterior = tickActual;
    return elapsed;
}
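tickAnterior and frecuencia are presumably initialized once before the first call, along these lines (sketch):
private long frecuencia;
private long tickAnterior;
// in the constructor / initialization:
QueryPerformanceFrequency(out frecuencia);
QueryPerformanceCounter(out tickAnterior);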
Camera & Input
I ported dhpoware's XNA camera to SlimDX & DX 11 (really nice work, BTW). There is only one major change, involving the handling of mouse and keyboard events. There is a nice sample about this at http://slimdx.googlecode.com/svn/trunk.
We need to "tell" the API how we will be using the input device. If you wanna know the meaning of each creation flag, take a look at the above-mentioned sample.
mouse = new Mouse(dInput);
mouse.SetCooperativeLevel(inForm, CooperativeLevel.Foreground | CooperativeLevel.Nonexclusive);
keyboard = new Keyboard(dInput);
keyboard.SetCooperativeLevel(inForm, CooperativeLevel.Foreground | CooperativeLevel.Nonexclusive | CooperativeLevel.NoWinKey);
Getting device's current state:
if (keyboard.Acquire().IsFailure) return;
if (keyboard.Poll().IsFailure) return;
currentKeyboardState = keyboard.GetCurrentState();
if (Result.Last.IsFailure) return;
// work with currentKeyboardState ...
Mesh.cs
It's just a DX 11 version of the MeshData, SmdLoader and SimpleModel classes of the samples. I'll probably add a Model class to manage transformations and any application/game related logic.
Escena.cs
It loads a list of meshes from a .obj and a .mtl file using ObjLoader.cs. There is a lot to say about this, so this is what I'll be covering on my next post. An example of a TF2 map converted to .obj and loaded in my framework with ObjLoader.cs can be found here. 