posts about gpu generated visuals
February 15th, 2013

3d wave velocity viewer

This shows the velocity of the vertices of a 128^3 lattice cube (2M points). The simulation is done on the GPU and takes only around 3 ms on my ATI HD5850. At 192^3 (7M points) the simulation still runs in < 10 ms (without visualization). The visualization takes a lot longer: between 7 ms for small points and a long time for larger ones, so the system is fill-rate limited. This is not very surprising, considering every vertex is drawn as a point sprite, and looking through the cube results in every pixel being drawn > 100 times.
The simulation is very straightforward: two 2D textures are used for data storage:
1: a vector4 containing the position and a “static” flag (used for the cube borders)
2: a vector4 containing the velocity and the pressure
The simulation is done in one pass and renders to both render targets at the same time.
Velocity is updated by sampling the positions of the 6 most direct neighbors. Pressure is stored in the 4th channel by summing the distances to the neighbors minus the “relaxed” distance. The pressure channel can be used in the visualization pass, but this is not shown in the videos. When sampling the neighbors, the required texture coordinates are converted from 3D to 2D, because the position and velocity textures are 2D. I am using XNA, so I can’t render to 3D render targets.
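To make this concrete, here is a minimal HLSL sketch of what the 3D-to-2D lookup and the neighbor sampling could look like. This is an illustration, not the original shader: the tile layout (128 slices packed as a 16x8 grid of 128x128 tiles in a 2048x1024 texture), the sampler names and the spring constants are all assumptions.

// Hypothetical helper: convert a 3D lattice coordinate to a UV in the 2D
// storage texture, assuming the 128 depth slices are packed as a 16x8 grid
// of 128x128 tiles in a 2048x1024 texture.
float2 LatticeToUV(float3 cell)
{
    float2 tile = float2(fmod(cell.z, 16.0f), floor(cell.z / 16.0f));
    float2 texel = tile * 128.0f + cell.xy;
    return (texel + 0.5f) / float2(2048.0f, 1024.0f);
}

// One update for a single lattice point: pull toward the 6 direct neighbors
// and accumulate pressure as the summed deviation from the relaxed distance.
// positionSampler, velocitySampler, stiffness, damping and restDistance are
// assumed names, not the original ones.
float4 UpdateVelocity(float3 cell)
{
    static const float3 offsets[6] =
    {
        float3( 1, 0, 0), float3(-1, 0, 0),
        float3( 0, 1, 0), float3( 0,-1, 0),
        float3( 0, 0, 1), float3( 0, 0,-1)
    };

    float3 position = tex2Dlod(positionSampler, float4(LatticeToUV(cell), 0, 0)).xyz;
    float4 velocity = tex2Dlod(velocitySampler, float4(LatticeToUV(cell), 0, 0));

    float3 force = 0;
    float pressure = 0;
    for (int i = 0; i < 6; i++)
    {
        float3 neighbor = tex2Dlod(positionSampler, float4(LatticeToUV(cell + offsets[i]), 0, 0)).xyz;
        float3 d = neighbor - position;
        float len = length(d);
        force += d / max(len, 1e-5f) * (len - restDistance); // spring force toward the relaxed distance
        pressure += len - restDistance;
    }

    velocity.xyz = (velocity.xyz + force * stiffness) * damping;
    velocity.w = pressure; // pressure stored in the 4th channel
    return velocity;
}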
The velocity is influenced by mouse-pointer position and velocity.
The videos show the same system with some parameters tweaked differently:

You won’t miss anything by skipping through them.



December 30th, 2012

2d panoramic shadow maps

The depth maps for the lights are generated in one pass and one draw call per light, without switching render targets. This is done by rendering a 1-pixel-high panorama from each light’s perspective. The drawback of this method is that point sprites have to be used. If triangle strips were used, the vertices of a primitive that spans the panorama seam would end up on opposite sides of the map, and its area would smear across and destroy the panorama. It would be possible to cut the geometry at the panorama ends, but that would have to happen every frame, and the vertex buffers would have to be updated.
A drawback of using point sprites is that they are square, while a height of 1 pixel would be sufficient. The sprites created a lot of overdraw, which became noticeable when drawing many sprites. This can be fixed by using a scissor rect to make sure only a 1-pixel-high strip gets rendered to the shadow/depth map.
This method will work with old technology; it’s all very basic and easy; I made it in a day. Performance is pretty good, even without any optimizations: 128 lights and 8192 points render in 10 ms on my 5850. To speed things up, the light radius could be clamped so that not all geometry has to be drawn for every light, and the light pass could be done in tiles so that a pixel does not need to loop through all lights.

This is the vertex-shader code to generate the depth-panorama:

VS_Output VS_RenderPanorama(VS_Input input)
{
    VS_Output output;

    // vector from the light to the particle (particlePos and lightPos are assumed to come from the vertex data / shader constants)
    float2 diff = particlePos - lightPos;

    // the angle around the light becomes the horizontal panorama coordinate, mapped to [-1, 1]
    float x = atan2(diff.y, diff.x);
    x /= PI;

    // each light gets its own 1-pixel-high row in the render target
    float y = lightIndex / numLights;

    output.Depth = length(diff.xy);
    output.Position = float4(x, y, output.Depth, 1);
    output.Size = max(pointSize / output.Depth, 1);

    return output;
}

The pixel shader needs to make sure the output ends up as 1-pixel-high strips. This is needed because point sprites are square, and point sprites near the light would be big and would cover multiple rows in the depth texture.
A depth-map for 4 lights would look something like this:

(image: depth panorama for 4 lights; black is close to the lights, white is far)

After the depth map has been rendered, a second shader (on a full-screen quad) loops through all the lights, gets the vector between the pixel and the light, uses atan2 again to look up into the depth map, and can then decide whether the pixel is shadowed or not. Just like regular shadow mapping, but with one single panoramic texture.
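For illustration, the lookup in that second pass could look roughly like the sketch below. It is not the original shader: the sampler and constant names (shadowMapSampler, lightPositions, lightRadius, depthBias, NUM_LIGHTS) are assumptions, and the u coordinate simply mirrors the atan2/PI mapping used in the panorama pass.

static const float PI = 3.14159265f;

// Minimal sketch of the per-pixel light loop. worldPos is the reconstructed
// 2D world position of the current screen pixel.
float ComputeLighting(float2 worldPos)
{
    float lightSum = 0;

    for (int i = 0; i < NUM_LIGHTS; i++)
    {
        float2 diff = worldPos - lightPositions[i];
        float dist = length(diff);

        // same mapping as the panorama pass: angle -> u, light row -> v
        float u = atan2(diff.y, diff.x) / PI * 0.5f + 0.5f;
        float v = (i + 0.5f) / NUM_LIGHTS;

        float storedDepth = tex2Dlod(shadowMapSampler, float4(u, v, 0, 0)).r;

        // the pixel is lit by this light if it is not further away than the
        // closest occluder stored in this light's panorama row
        if (dist <= storedDepth + depthBias)
            lightSum += saturate(1.0f - dist / lightRadius); // simple linear falloff
    }

    return lightSum;
}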

December 14th, 2011

Collected doodles

A collection of some tests, prototypes and failures that are not really interesting enough to deserve their own post.

I was attempting to make a heavy rubber ball, with needles pointing outward, deforming the surface. I encoded the mesh into a texture for simulation.
A mesh is encoded in 2 textures:

1: x: displacement, y: velocity, z: neighbor1, w: neighbor2
2: x: neighbor3, y: neighbor4, z: neighbor5, w: neighbor6

Every vertex has a minimum of 2 neighbors and a maximum of 6. If a vertex of a mesh has more than 6 connections, information (edges) is lost.

With this mesh/graph in a texture, a pixel shader can do anything it would normally do on surrounding pixels (like a wave, blur, Conway's Game of Life, etc.). In this example it drives a displacement map, with the benefit of seamlessness and the absence of stretching. This is because it works directly with the vertices, without the intermediate step of using texture coordinates to wrap a flat plane around a curved surface.
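As a rough sketch of the idea, a single wave step over this mesh-in-a-texture encoding could look like the code below. It is only an illustration under assumptions: the neighbor channels are assumed to store a packed texture coordinate, UnpackUV is a hypothetical helper that unpacks it, and the sampler names and wave constants are made up.

// One wave step over the mesh graph stored in the two textures described above.
// meshSampler0: x = displacement, y = velocity, z/w = neighbors 1-2
// meshSampler1: x/y/z/w = neighbors 3-6
float4 PS_MeshWave(float2 uv : TEXCOORD0) : COLOR
{
    float4 data0 = tex2D(meshSampler0, uv);
    float4 data1 = tex2D(meshSampler1, uv);

    // each neighbor channel holds a packed texture coordinate; unused slots are negative
    float neighbors[6] = { data0.z, data0.w, data1.x, data1.y, data1.z, data1.w };

    float sum = 0;
    float count = 0;
    for (int i = 0; i < 6; i++)
    {
        float valid = (neighbors[i] >= 0) ? 1.0f : 0.0f;
        float2 neighborUV = UnpackUV(abs(neighbors[i])); // hypothetical unpacking helper
        sum += tex2Dlod(meshSampler0, float4(neighborUV, 0, 0)).x * valid; // neighbor displacement
        count += valid;
    }

    // standard discrete wave update: accelerate toward the neighbor average
    float displacement = data0.x;
    float velocity = (data0.y + (sum / max(count, 1.0f) - displacement) * stiffness) * damping;

    return float4(displacement + velocity, velocity, data0.z, data0.w);
}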

More or less the same idea:

…And one that emphasizes that a pixel/vertex has 6 neighbors in a hexagonal layout (instead of 4 or 8 in a square grid):

The following 3 videos are also related: I think they are based on each screen pixel being represented by a particle, which moves around and also samples the screen, creating a feedback loop.



A particle system where each particle checks itself against all other particles. Using this technique, I can usually get 16384 particles (a 128 * 128 texture) moving around in real time:
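A minimal sketch of what such a brute-force pass could look like, assuming the particle state (position in xy, velocity in zw) lives in a single 128x128 texture and using a made-up inverse-square attraction; none of the names come from the original code, and it assumes a shader model that allows this many texture reads per pixel.

// Brute-force N-body step: every particle (one texel) loops over all 128x128
// particles. stateSampler, forceScale and damping are assumed names.
float4 PS_NBody(float2 uv : TEXCOORD0) : COLOR
{
    float4 state = tex2D(stateSampler, uv); // xy = position, zw = velocity
    float2 force = 0;

    for (int y = 0; y < 128; y++)
    {
        for (int x = 0; x < 128; x++)
        {
            float2 otherUV = (float2(x, y) + 0.5f) / 128.0f;
            float2 d = tex2Dlod(stateSampler, float4(otherUV, 0, 0)).xy - state.xy;
            float distSq = dot(d, d) + 0.0001f;   // avoid division by zero against itself
            force += d / (distSq * sqrt(distSq)); // inverse-square attraction
        }
    }

    float2 velocity = (state.zw + force * forceScale) * damping;
    return float4(state.xy + velocity, velocity); // new position and velocity
}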

And another one: magnets with poles; they form strings:

Audio-reactive cymatics shader:

Some organic weirdness: I think it was just particles driven by Perlin noise.

November 14th, 2011

Post process voxels filter

This is a pixel shader that takes depth as input and outputs a new depth and normals that represent the scene built out of voxels. This can be useful for giving bone-animated objects a voxel-style look (animating voxels directly is bothersome). It could also be applied to complete scenes. The “popping” that is visible at the silhouette of the object is hard to hide with a rotating perspective camera; it would be absent when seen through an orthographic, non-rotating camera. I still have to fix the black separation lines that are sometimes visible.

It roughly works like this:
-Every screen pixel will belong to a cube (a cube can contain multiple pixels).
-Each cube needs to be drawn completely (the tricky part, because the silhouette of the object(s) will change).
-A pixel samples a number of pixels in an area that should be about as big as the biggest cube you would expect.
-Since the screen area of a cube can be very large (say 64 * 64 = 4096 pixels), it is impossible to sample all pixels in that area to search for a cube that could be occluding the current pixel. So for a single pixel there is a high error rate (it misses cubes that would occlude it), but this is solved later.
-The sampling uses a grid with a spacing of 16 pixels (along one axis, 1 out of every 16 pixels is sampled).
-For every sample, a ray-cast is done against the cube that sample belongs to (see the sketch after this list). The front-most hit is used as the new depth.
-At this point a lot of cubes are only half drawn, or are represented by a few floating pixels.
-Then a pass is run a number of times (the number depends on the spacing of the sampling grid in the first pass) which grows the cubes.
-1 pixel can become 3*3 pixels in one pass, if it is part of the front-most cube along that pixel's ray.
-Because this step is repeated, a cube which was only represented by a few pixels can grow into a complete cube.
-After this pass, all holes are filled, and the silhouette of the object has been changed.
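The ray-cast per sample boils down to a ray-vs-axis-aligned-cube test. Below is a minimal sketch, assuming the cube is found by snapping the sample's reconstructed world position to a voxel grid; voxelSize and the function name are assumptions, not the original code.

// Slab test of the view ray against the voxel cube a sample belongs to.
// Returns the hit distance along the ray, or a very large value on a miss.
float RayVsVoxel(float3 rayOrigin, float3 rayDir, float3 sampleWorldPos)
{
    // snap the sampled surface position to the voxel grid to get "its" cube
    float3 cubeMin = floor(sampleWorldPos / voxelSize) * voxelSize;
    float3 cubeMax = cubeMin + voxelSize;

    // standard slab intersection test
    float3 invDir = 1.0f / rayDir;
    float3 t0 = (cubeMin - rayOrigin) * invDir;
    float3 t1 = (cubeMax - rayOrigin) * invDir;
    float3 tMin = min(t0, t1);
    float3 tMax = max(t0, t1);
    float tNear = max(max(tMin.x, tMin.y), tMin.z);
    float tFar  = min(min(tMax.x, tMax.y), tMax.z);

    return (tNear <= tFar && tFar > 0) ? tNear : 1e30f;
}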

October 10th, 2011

My new node based image generator

Node based image generator 1 from johan holwerda on Vimeo.

This is an application I am working on in my free time. I will be using it to create real-time 3D graphics with animated parameters.
The video shows the creation of a simple patch (animating one parameter of subblue’s distance estimator shader, plus some camera rotation). Patches can be nested and saved. The signals between the modules can be of a wide variety of datatypes (texture, float, matrix, etc.) and every parameter can be linked to every output (datatypes are automatically converted). I hope the main logic of the application will be done in a few months, so I can start writing modules and generating output.
Built with C# and XNA.

December 30th, 2010

Boolean mesh operations shader

This is a shader that mimics boolean mesh operations in the pixel shader. It is very fast (~200 subtractive meshes at 60+ fps on my system).
The result of the operations is not completely correct, but it looks believable enough. The best results are obtained when the operands are convex. Because it runs in the pixel shader, the system is not influenced by mesh topology, polygon density, etc.

It roughly works like this:
For each mesh do:

1st pass:
-render front-face depth to rendertarget texture y-channel
-render back-face depth to rendertarget texture x-channel
(with culling off and additive blending, this can be done in 1 pass)

2nd pass:
-send a boolean variable to the HLSL code, indicating whether the current object is additive or subtractive.
-render the object again (this time with culling on). This is done because the pixel shader doesn’t need a full-screen quad, only the coverage of the mesh, which is a big speed gain.
-compare xy to zw of the rendertarget texture (zw is empty in the first iteration of the loop)
-do the boolean operations, and write the result to the zw channels. See code below.

repeat.

When all meshes are processed, the w-channel represents the depth of the front face of the resulting object, and the z-channel represents the depth of the back face, which is only needed for the boolean operations.

The first pass of the HLSL code:


float4 PS_Depth(VSOut In, float faceValue : VFACE) : COLOR
{
    float depth = In.color.x; //the vertex-shader outputs depth in the color channel

    if (faceValue < 0){ //faceValue < 0 means back-facing, > 0 means front-facing
        return float4 (depth, 0, 0, 1);
    }else{
        return float4 (0, depth, 0, 1);
    }
}

The second pass of the HLSL code:


float4 PS_Boolean(VSOut In, float2 pixelPos : VPOS) : COLOR
{
    float2 ScreenUV = (pixelPos.xy * uvStep) + (uvStep * 0.5f); //get the screen-space texture-coordinates
    float4 currentState = tex2D(resultSampler, ScreenUV); //read from the output of the previous pass

    float2 sum = currentState.zw; //z = backFace, w = frontFace
    float2 operand = currentState.xy; //x = backFace, y = frontFace

    if (isSubstractive){
        //-----back face--------------
        if (operand.y > currentState.z){
            if (operand.x < currentState.z){
                sum.x = operand.y;
            }
        }
        //-----front face--------------
        if (operand.y > currentState.w){
            if (operand.x < currentState.w){
                if (operand.x < currentState.z){
                    sum.y = 0;
                }else{
                    sum.y = operand.x;
                }
            }
        }
    }else{
        sum.y = max (currentState.w, operand.y);
        if (currentState.z > 0){
            sum.x = min (currentState.z, operand.x);
        }else{
            sum.x = operand.x;
        }
    }
    return float4 (0, 0, sum);
}

The biggest caveat of this method is that the texture containing the resulting shape only stores the front face and the back face, so all information between those two is lost.

December 17th, 2010

Angular-extrude

angular extrude from johan holwerda on Vimeo.

December 17th, 2010

Sphere-renderer

shader based sphere renderer from johan holwerda on Vimeo.

This is a pixel shader that draws spheres on point sprites. This allows the spheres to be perfectly round, and a lot of them can be in view in real time.
They intersect as though they are spheres (and not like the flat planes they are actually drawn on). This is done by writing the sprites to the depth buffer as though they were spheres.

Here’s the pixel-shader code :
-first render the point sprites, and let the vertex shader pass their depth to the pixel shader.


void PS_ShowDepth(VS_OUTPUT input, out float4 color : COLOR0, out float depth : DEPTH)
{
    float dist = length (input.uv - float2 (0.5f, 0.5f)); //get the distance from the center of the point-sprite
    float alpha = saturate(sign (0.5f - dist)); //0 outside the sphere's circle, 1 inside
    float sphereDepth = cos (dist * 3.14159) * sphereThickness * particleSize; //calculate how thick the sphere should be; sphereThickness is a variable.

    depth = saturate (sphereDepth + input.color.w); //input.color.w represents the depth value of the pixel on the point-sprite
    color = float4 (depth.xxx, alpha); //or anything else you might need in future passes
}

December 17th, 2010

Particle glitch

particle glitch from johan holwerda on Vimeo.

Pretty error.

December 17th, 2010

Sphere-Renderer hooked up to position-noise

XNA3Editor 2010-12-10 11-02-28-92 from johan holwerda on Vimeo.
