Shaders

I started writing these demos in early 2009.

New: thumbnails below are stored in a generated texture atlas for your fast-browsing experience. Additionally, this page totals 262KB. Screenshots are dynamically loaded when clicked.

Non-linear Projections Sep, 2013

Render to a large cube map, then draw a full screen polygon and sample with whatever coordinate function looks cool.

Hemisphere projection with Z*2-1... ish.
shaders/peturbed-hemisphere-projection3.png
Grid-based smoke/fluid - Incomplete Jul, 2013

Largely based on this: http://prideout.net/blog/?p=58

In my opinion, much easier to implement than SPH.

False colouring of the temperature in the fluid.
shaders/smoke.png
Atmospheric Scattering - Incomplete Jun, 2013

Still needs a fair bit of work.

From within the atmosphere radius. No sun is drawn, the effect is from "Mie Scattering" only.
shaders/atmosphere1.png
Only fixed function lighting on the planet is used, so the halo effect looks a little odd.
shaders/atmosphere2.png
... with a proper planet surface.
shaders/atmosphere3.png
Recursive Stencil Reflections Aug, 2012
Recursive reflections using the stencil buffer.

This is quite similar to portal rendering, except the exit portal is simply the mirror of the entrance portal. The portal surfaces are rendered using recursion (depth-first). Scene geometry is rendered after each step of the recursion; otherwise portals cannot easily "cover" other objects in the scene.

For each portal, the area to render to is masked using the stencil buffer, taking the intersection of all portals currently in the recursion chain. In practice this intersection is computed simply by incrementing the stencil over all portal geometry (assuming the portals are planes with no overlap). The main scene can then be rendered using an if (val == recursion_depth) stencil test. Lastly, the current portal geometry is rendered into the depth buffer over the scene, creating a "cap", so that anything higher in the recursion tree can't "poke through" the portal. Note that any time geometry is rendered through a portal, a glClipPlane should be used to cull objects behind the portal exit - both when rendering the scene and when performing portal intersections!

Adding simple screen-space bounding box / backface culling tests increased performance 10×.

8 reflective quads, with max 7 reflections, 60fps.
shaders/stencil-reflect2.png
8 reflective quads, with max 3 reflections.
shaders/stencil-reflect.png
Tone Mapping Feb, 2012
Tone mapping using OpenGL Atomic Counters for a histogram

A histogram is built for the final image, counting luminance in 512 buckets. This is done in the fragment shader using OpenGL 4 atomic counters.

Once the histogram is built it is downloaded. Each luminance bucket is scaled according to its prefix sum and the total luminance. The values are smoothed to remove harsh changes before being uploaded and used to render the scene. Linear interpolation is used between mapped values.

With this method, specific ranges of luminance can be tone mapped without affecting the rest of the scene. The next step might be to introduce some sort of object localization to the algorithm.
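The bucket scaling described above can be sketched on the CPU. This is a minimal sketch of histogram equalisation - each bucket's output level is its prefix sum over the total count - with a tiny bucket count and without the demo's smoothing or interpolation; names are illustrative, not from the demo.

```c
#define BUCKETS 8 /* the demo uses 512; kept tiny here */

/* Build a tone mapping curve from a luminance histogram: each bucket maps to
 * its prefix sum divided by the total count (the histogram's CDF). */
void build_curve(const unsigned hist[BUCKETS], float curve[BUCKETS])
{
    unsigned total = 0, prefix = 0;
    for (int i = 0; i < BUCKETS; ++i)
        total += hist[i];
    for (int i = 0; i < BUCKETS; ++i) {
        prefix += hist[i];
        curve[i] = (float)prefix / (float)total;
    }
}
```

Luminance is then remapped through this curve (with linear interpolation between buckets), so ranges holding most of the pixels get most of the output range.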

A view of the bright skybox. Note the dark terrain.
shaders/tonemap1.png
As the view moves towards the darker terrain, brightness is increased.
shaders/tonemap2.png
Lens Flare 2 Feb, 2012
An array of dynamically generated lens flare quads

Generated lens effects. These effects are caused by light bouncing between lenses, instead of refracting directly through to the sensor.

The star texture is generated by randomly, additively splatting quads with one white vertex near the center and the other three black near a random angle/radius. The quad is interpolated as two triangles, split such that the mid-grey value is near the 3 black vertices.

The rest of the flares are stored in a list with the following attributes:

  • Colour
  • Position along the ray
  • 4 Radii giving intensities (off, on, still on, off) which use cosine interpolation.

There's other stuff in this scene but most I've already done before.

shaders/lensflare2.png
Irregular shadow mapping Nov, 2011
Shadow mapping using an irregular z-buffer.

Had to give this a go when I read about it. Fairly complex and currently slow. I'm hoping to speed it up at some point and add antialiasing and perhaps soft shadows.

What it does:

  1. Render scene to a G-buffer (positions, normals, colour).
  2. Create a per-pixel linked list buffer as a kind of shadow map.
  3. Add all positions from the G-buffer to the corresponding linked lists (per-pixel buckets). The depth, camera space fragment x/y coords and exact light space x/y coords are stored.
  4. Re-render the scene from the light's point of view. For every fragment, compare the depth to the positions in the appropriate linked list.
  5. For each position in shadow, set a 1 in a shadow mask (using the camera space fragment x/y coord).
  6. Do a post process pass, applying deferred shading with the shadow mask texture.
The "shadow map" is more of a uniform grid spatial data structure, holding camera-space fragments. When rendering the scene from the point of view of the light, three important things are done:
  • Expand the triangle so fragments are rasterized for ALL intersecting pixels (done in the geometry shader).
  • In the fragment shader, check each fragment position from the linked list against the triangle manually using the light space x/y coord (pass in 2D vectors from the geometry shader and check with barycentric coordinates).
  • If the check passes, go on to find the interpolated depth value (finally found a good source here; it's quite simple) and compare it to the bucketed fragment's depth.
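The manual coverage check in the second bullet is a standard 2D barycentric point-in-triangle test, which might look like this on the CPU (a sketch; names are illustrative):

```c
/* 2D point-in-triangle test via barycentric coordinates: p is inside
 * triangle abc when all three barycentric weights are non-negative. */
typedef struct { float x, y; } vec2;

int point_in_triangle(vec2 p, vec2 a, vec2 b, vec2 c)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    float u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    float v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    float w = 1.0f - u - v;
    return u >= 0.0f && v >= 0.0f && w >= 0.0f;
}
```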

The Stanford "Asian Dragon" model, 100k polys at a mere 10fps. The bunny runs at 80fps.
shaders/irregular.png
GPU Based Fluid Simulation (SPH) Jun, 2011
Fluid simulation computed using shaders.

In progress.

Videos are both ogg/theora and mp4/h264 and *should* work on the iPhone.
4.9MB, a more recent, faster implementation
464KB
3.6MB
9.5MB

Some older, less awesome videos. I can add all of these now because videos are embedded in a separate page :).

Shader Comments:
Helen03/07/11
Truly amazing! Very life-like, beautiful rather hypnotic to watch...
 
10k particles, 43fps. 128³ volume splat and raycast metaballs.
shaders/fluid11.png
4096 particles at 160fps.
shaders/fluid3.png
A screen-space metaball rendering (colour is surface normal). This stores all fragments from raytraced spheres per-pixel. Obvious artefacts, but proof of concept. Ultimately too slow - I ended up using volume raymarching.
shaders/fluidrender.png
Procedural Terrain May, 2011
Procedurally generated heightmap and texture on the GPU using a variety of techniques.
This demo procedurally generates terrain on the GPU using shaders. The following methods are implemented:
  • Fractal: Midpoint displacement/diamond-square algorithm
  • Perlin Noise
  • The Fault Algorithm
  • Smoothing and random noise

Source: terrain.frag

A texture and normal map are also generated on the GPU. Colour can be added to the terrain texture based on min/max height and steepness limits, with a perlin noise factor (thanks to terragen for the ideas).

Textures can also be generated for sections of the terrain for LOD.

I'd quite like to take a look at tessellation shaders and attempt this kind of thing dynamically.


Perlin Noise
For an overview of perlin noise, this (yes, almost first on google) is great. The main issue is implementing a 2D noise function in GLSL. Given Nvidia haven't implemented noise(), the easiest solution is to generate a noise texture (make sure to set GL_REPEAT) once at load time. This works well because a coordinate will of course always give the same value. Furthermore, the texture's linear interpolation can do the smoothing for you.

Fractal
The diamond-square algorithm produces more interesting terrain than perlin noise but is much more of a pain to code. The implementation here is intended to allow any dimensions, not just (2^n)+1. Instead of starting with a fixed n×n grid, start with a grid of 2×2, then use that to generate the mid-point of a 3×3, followed by the edges. Then increase to 5×5 etc. This implementation stores these grids in the corner of the same heightmap.
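For reference, one refinement pass of the textbook algorithm on a fixed (2^n)+1 grid (not the progressive 2×2 → 3×3 → 5×5 variant used here) might look like:

```c
#include <stdlib.h>

#define N 5 /* 2^2 + 1 */

/* random offset in [-rough, rough] */
static float frand(float rough)
{
    return rough * (2.0f * (float)rand() / (float)RAND_MAX - 1.0f);
}

/* One diamond-square refinement at the given step size. 'rough' scales the
 * random offsets and is halved each pass in a full implementation. */
void diamond_square_step(float h[N][N], int step, float rough)
{
    int half = step / 2;
    /* diamond: centre of each square = average of its 4 corners */
    for (int y = half; y < N; y += step)
        for (int x = half; x < N; x += step)
            h[y][x] = 0.25f * (h[y - half][x - half] + h[y - half][x + half] +
                               h[y + half][x - half] + h[y + half][x + half]) +
                      frand(rough);
    /* square: edge midpoints = average of the in-bounds axis neighbours */
    for (int y = 0; y < N; y += half)
        for (int x = (y / half % 2 ? 0 : half); x < N; x += step) {
            float sum = 0.0f; int n = 0;
            if (x - half >= 0) { sum += h[y][x - half]; ++n; }
            if (x + half < N)  { sum += h[y][x + half]; ++n; }
            if (y - half >= 0) { sum += h[y - half][x]; ++n; }
            if (y + half < N)  { sum += h[y + half][x]; ++n; }
            h[y][x] = sum / n + frand(rough);
        }
}
```

A full implementation calls this with step = N-1, then (N-1)/2 and so on, halving rough each pass.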

Fault
The fault algorithm is, in my experience, less well known, expensive and difficult to tile. The GPU implementation is of course much faster than CPU. After smoothing it can produce interesting results but should be combined with other techniques, such as Perlin noise, to hide the obvious fault lines.

Shader Comments:
Simon22/11/11
Can you post source codes for "Procedural Terrain" and "Approx n-body simulation"? It would be very intersting for me.
Simon31/12/11
Thanks for source. Interesting method.
 
shaders/terrain3.png
2049x2049 poly grid rendering a heightmap with normal and colour map.
shaders/terrain.png
1025x1025 poly grid.
shaders/terrain2.png
Variance Shadow Maps May, 2011
Point lights casting shadows using variance shadow maps.

This demo uses geometry shaders to render the variance shadow maps to cube maps. The cube maps are blurred, again with geometry shaders, using a separable gaussian blur. While this is not a correct spherical blur, artefacts are barely noticeable. Each light is additively blended to the scene using deferred shading. Rendering to the shadow maps takes approximately 2/3 of the time taken to blend the lights with shadows into the scene.

Multisampling

In previous demos, multisampling could easily be done using a multisample renderbuffer. I had a go at explicit multisampling here, however there was not much to go on in the way of examples.

The brief rundown of the code is as follows:

//create a multisample texture
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, format, x, y, GL_FALSE);

//... and to access the samples in a shader
vec4 pos = texelFetch(deferredPositionMS, pixCoord, sampleIndex);

There is also a GL_TEXTURE_RENDERBUFFER_NV texture which maps to a renderbuffer. This way a renderbuffer can be the render target, blitted and/or read in a shader using a uniform samplerRenderbuffer. However after trying both, it seems neither is faster.

Shader Comments:
Pyar19/05/11
Testing per-shader comments. Eventually I may need to do some dynamic loading of text as well as images  . Currently it's all just one big hidden chunk of repetitive generated html.
 
16 point lights, 512x512 blurred shadow maps with multisampling.
shaders/variance.png
200 point lights, 128x128, 16fps, multisampling, blur, GTX 460. Who needs ambient occlusion right?
shaders/variance2.png
200 point lights, 128x128, 30fps, simple blur, GTX 460.
shaders/variance3.png
More blur.
shaders/variance4.png
Tree Animation (Incomplete) Apr, 2011
Tree generation and animation using quaternions.

A preliminary test to generate, animate and render trees. Definitions of node types and child probabilities are stored in text files. This format supports recursion so it's relatively simple to design new trees, albeit without a nice gui and realtime sliders. Trees are generated CPU side and then uploaded to the GPU where they stay. Currently each node can have a maximum of 4 children.

A fragment shader updates the animation state and then the tree hierarchy, setting node positions relative to their parents etc. Then a bunch of points are passed through a geometry shader which will spit out either a stretched cube or a plane for a branch or leaf. The fragment shader then does raycasting to render a cylinder inside branch cubes. Texture coordinates are generated for this and a normal map is applied.

This demo also uses a multisampling FBO, however some form of LOD is definitely required.

The tree generation needs quite a few more features implemented to give a more accurate tree. At the moment, much of it is random and needs weighting. The most evident abnormality is that gravity plays no role yet. The animation speed could probably be optimized as I've implemented quaternion slerp operations on the GPU. Matrices may just be faster anyway. The ray-cylinder intersection tests are not quite right and I should use billboards anyway.

Over 8,000 nodes (branch/leaf) each with quaternion animation (22fps). This tree has a node depth of 20.
shaders/trees.png
Indirect Lighting (Incomplete) Mar, 2011
Far from complete.

Testing out a few methods to find a good balance between quality and performance. These images splat light (essentially deferred shading, spawning downsampled per-pixel point lights), which produces artefacts when scattering surfaces do not face the viewer. An edge finding algorithm could clear up the downsampling issues, but I think ultimately multiple viewing angles are needed along with depth peeling.

The light attenuation is not calculated correctly yet. For example the second bounce (three bounces are computed in these images) is strangely brighter than the first. I'll get round to fixing it some day.

Soft shadows would be a great addition. A coarse indirect illumination along with SSAO may give a decent performance/quality trade-off.

full-rez deferred point lights (5fps)
shaders/indirect4.png
1/4 size splatting (below 1fps)
shaders/indirect3.png
1/8 size splatting (realtime)
shaders/indirect.png
1/8 size splatting indirect light only
shaders/indirect2.png
Metaballs Mar, 2011
Brute force metaballs with geometry shaders.

Metaballs are essentially functions in a 3D space. They apply a radial influence which decreases over distance. This could be seen, for example, as a density volume. It is common to render this 3D space by defining a threshold to form an isosurface - the 3D form of a contour line. For any point in the volume, a value is given as the sum of all metaballs' functions. The points where this value equals the threshold define the isosurface.
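As a sketch, the field sum and threshold test described above (assuming a simple strength/r² falloff, which may differ from the demo's exact function):

```c
/* Sum of metaball influences at a point, and the isosurface test. */
typedef struct { float x, y, z, strength; } ball;

float field(const ball *balls, int count, float px, float py, float pz)
{
    float sum = 0.0f;
    for (int i = 0; i < count; ++i) {
        float dx = px - balls[i].x, dy = py - balls[i].y, dz = pz - balls[i].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        sum += balls[i].strength / r2; /* note: unguarded divide at a ball's centre */
    }
    return sum;
}

/* a point is inside the isosurface when the field exceeds the threshold */
int inside(const ball *balls, int count, float px, float py, float pz, float threshold)
{
    return field(balls, count, px, py, pz) > threshold;
}
```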

This demo renders the isosurface using marching cubes, computed using geometry shaders. It starts by drawing a grid of points which are run through a geometry shader. Each point marks the center of a cube and the isosurface is evaluated at each corner. Triangles are generated from a lookup table based on the isosurface values, assuming the cube is not completely inside or outside the surface. In two screenshots only face normals are generated for each triangle.

The biggest problem occurred when trying to pass the lookup tables to the geometry shader. Uniform memory is not big enough to support the table needed without compression, so texture buffers were used instead.

Normals can also be generated per-vertex from the gradient of the density function. While this requires an extra 4 function lookups (points not aligned with the grid) the results are clearly worth it given a simple function. This causes a performance hit of approx 50%.

The isosurface function should really be evaluated at each unique vertex and stored. For now, this demo simply calls the function in the geometry shader (shader source). GPU Gems 3 has a nice article on marching cubes although the geometry is static.

Update:
The textured screenshot includes caching the 3D data for both isosurface values and normals in an array. If the ratio of empty space is large, caching normals may actually be slower. This is common for higher res grids with the same simple geometry. A rough performance improvement estimation puts the caching method at double the speed.

Update:
Predicting whether cubes are empty or not in the geometry shader can almost triple the speed. If any isovalue for a vertex is greater than a large magnitude, assume all other vertices are too and don't generate any geometry. Using histopyramids as mentioned here is definitely worth a look.

Grid size: 64x64x64, preprocessing both positions and normals. Approx 30fps.
shaders/metaballs4.png
Grid size: 64x64x64, 32 balls using brute force vertex normals (surprisingly fast)
shaders/metaballs3.png
Grid size: 64x64x64, 32 balls (easily realtime)
shaders/metaballs.png
Grid size: 128x128x128, 32 balls (barely realtime)
shaders/metaballs2.png
Lens Flare Mar, 2011
A billboard lens flare using the occlusion query.
A very simple test of the OpenGL occlusion query. The number of pixels that pass the depth test when drawing the sun determines the size and opacity of the lens flare billboard.
Shader Comments:
 
Name:
Comment:
 
shaders/lensflare.png
Mandelbrot set fractal Mar, 2011
The Mandelbrot set rendered using the brute force method in GLSL with double precision.
This demo will run easily in realtime on a 460 with zooming and panning controls. There are many better and more accurate methods to do this but procrastination only motivates so far.
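The per-fragment work is the classic escape-time iteration; a CPU version in double precision (as the shader uses) might look like:

```c
/* Escape-time iteration for the Mandelbrot set at point c = (cr, ci).
 * Returns the iteration count at escape (|z| > 2), or max_iter if the
 * point appears to be in the set. */
int mandelbrot(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    int i;
    for (i = 0; i < max_iter && zr * zr + zi * zi <= 4.0; ++i) {
        double t = zr * zr - zi * zi + cr; /* z = z^2 + c */
        zi = 2.0 * zr * zi + ci;
        zr = t;
    }
    return i;
}
```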
Very basic supersampling is used for antialiasing.
shaders/mandelbrot.png
Order independent transparency Dec, 2010
Implements correct transparency sorting at the fragment level, not at the primitive/triangle level.

This demo uses the OpenGL 4 atomic operations to capture all fragments for each pixel. This is essentially depth peeling in a single pass. Currently the demo stores the fragments in multiple frame buffers which uses quite a lot of memory. A second pass sorts the fragments to be used for transparency.

AMD/ATI construct a linked list to store fragments, which can save a huge amount of memory. "Pages" can be used to reduce atomic operation collisions.
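Per pixel, the second pass boils down to a small sort and a back-to-front blend. A CPU sketch (single colour channel for brevity; larger depth = further away; names are illustrative):

```c
/* Resolve one pixel: sort captured fragments by depth and composite
 * back-to-front with standard alpha blending. */
typedef struct { float depth, r, a; } frag;

void resolve_pixel(frag *f, int n, float *out_r)
{
    /* insertion sort, deepest first */
    for (int i = 1; i < n; ++i) {
        frag key = f[i];
        int j = i - 1;
        while (j >= 0 && f[j].depth < key.depth) { f[j + 1] = f[j]; --j; }
        f[j + 1] = key;
    }
    float r = 0.0f; /* background colour */
    for (int i = 0; i < n; ++i) /* blend back-to-front */
        r = f[i].r * f[i].a + r * (1.0f - f[i].a);
    *out_r = r;
}
```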

The "linearized" fragment list buffer method, used to render the sponza atrium model by Frank Meinl.
shaders/lfb-atrium.png
Calculating transmittance through a solid with alternating blue/green stripes.
shaders/lfb-dragon.png
Volumetric false colouring using depth peeling.
shaders/depthpeeling.png
Order independent transparency. Normals are inverted if facing away from the light.
shaders/oit.png
Exploded Cube - Incomplete Oct, 2010
Procedural generation in the geometry shader - 'exploded cube' fractal technique.

This method takes a cube built from triangles and recursively subdivides triangles by splitting the longest edge in two. The split point is moved a set distance from the center of the cube and randomized.

I haven't got round to getting smoothed normals and it's quite slow. The geometry feedback extension should improve the speed. Subdivision is difficult as the longest edge on each triangle may not be the same for adjacent triangles.

A bunch of tessellated asteroids on a 2D plane colliding with each other.
shaders/geometry.png
Caustics Sep, 2010
Real-time surface and volumetric caustics using linear raycasting and point and line splatting.

This scene contains

  • Shadows
  • Surface caustics
  • Volumetric caustics
  • Refraction
The rest of the volumetric light (straight, without refracting through objects) is not shown.

Based on this technique.

The first step of this approach renders depth and normals for front and back faces of refractive geometry from the light's perspective. The scene's (non-refractive geometry) depth is also rendered.

Linear ray marching refracts through the front face and then the back face of refractive geometry before intersecting with the scene. The refractive exit positions and the scene intersection position are stored in a texture.

Lines are drawn in a separate eye space texture which is then blurred for volumetric caustics. Points are drawn to a light space texture to create a caustic map. The volumetric light texture is added to the scene and the caustic map is projected onto the scene.

This same ray marching method is repeated from the camera's perspective to render the refractive geometry after caustics have been added.

There are a few extras such as copying depth buffers around for correct occlusion. I'm sure some parts of the rendering pipeline for this could be reused and made more efficient but it runs in realtime on a 9600.

An attempt at using polygons instead of a caustics map was also made.

Including a caustics map and shadows.
shaders/caustics2.png
With a teapot refractor.
shaders/caustics3.png
Slightly refined version.
shaders/caustics4.png
With a sphere refractor.
shaders/caustics5.png
Approx n-body simulation Sep, 2010
A system of particles where each gravitates to all (but in this case not all) others.

This demo uses floating point position and velocity textures which are double buffered.

Computation is done by drawing a full screen quad (although I was recently informed a single triangle is better) with the second buffers bound to an FBO. Then the particles are numerically integrated in the fragment shader, reading from the current buffers bound to sampler2Ds before writing the results through gl_FragData[]. The buffers are swapped and the process repeats. This way all data remains entirely on the GPU.


uniform sampler2D positions;
uniform sampler2D velocities;

//opengl3, use glBindFragDataLocation
out vec4 outPosition;
out vec4 outVelocity;

void main()
{
    //textureSize needs an explicit LOD, and the ivec2 result must be cast
    vec2 coord = gl_FragCoord.xy / vec2(textureSize(positions, 0));
    vec3 pos = texture(positions, coord).rgb;
    vec3 vel = texture(velocities, coord).rgb;

    //sum gravity to N other particles
    vec3 acc = ...

    //integrate
    pos += ...
    vel += ...

    //pre-opengl3, use gl_FragData[0/1]
    outPosition = vec4(pos, 1.0);
    outVelocity = vec4(vel, 1.0);
}

One version uses geometry shaders to turn points into quads to render the particles. For pre-opengl3, a VBO of quads is stored where each vertex is given the texture coordinate of the particle in the position/velocity textures. Quads are rotated and stretched in their direction of travel to approximate motion blur.

Each particle gravitates towards 10 random particles each frame. To make it less boring an HDR and Bloom effect is added. Very realtime (>60fps) framerates can be achieved depending on the number of particles each gravitates towards.

For a rand() function, a noise texture can be generated and/or combinations of particle velocity and position used with floor() and mod() to produce fairly random numbers. Sometimes the less random numbers can lead to interesting results.

Running on an 8800GTS.

Computed on the GPU >60fps with 250k particles.
shaders/particles.png
24fps with 1M particles
shaders/particles2.png
Deferred Shading Apr, 2010
Just because everyone else was doing it.

Many point lights orbiting the origin.

The scene is rasterized, however positions, colours and normals are stored, unlike the traditional method of calculating lighting on the fly. This information (the G-buffer) is used in a second pass which draws quads over the point lights. For each pixel in a quad, the position and normal of the scene are sampled to calculate diffuse and specular light contributions. The result is additively blended into the final render.

Running on an 8800GTS.

8,000 point lights running at 30fps
shaders/deferredshading.png
And again with 1M small lights (1-2fps).
shaders/deferredshader2.png
Water, refraction & flocking Apr, 2010
Basic boid flocking with dynamic water ripples and distortion.

A double buffered texture stores height and velocity for the grid of water. Each frame the fragment shader applies a function to simulate wave movement over the surface (somewhat incorrect but it looks ok). This heightmap is used to render distortion (again, approximated) and lighting of the water surface.
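The update function is a common heightfield-ripple scheme: accelerate each cell's velocity towards the average of its neighbours, damp it, then integrate. A CPU sketch on a tiny grid (the demo's exact function may differ):

```c
#define W 4
#define H 4

/* One simulation step over a height/velocity grid with clamp-to-edge
 * neighbour sampling. damping in (0, 1] removes energy each step. */
void water_step(float height[H][W], float vel[H][W], float damping)
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float l = height[y][x > 0 ? x - 1 : x];
            float r = height[y][x < W - 1 ? x + 1 : x];
            float u = height[y > 0 ? y - 1 : y][x];
            float d = height[y < H - 1 ? y + 1 : y][x];
            /* accelerate towards the neighbour average, then damp */
            vel[y][x] += 0.25f * (l + r + u + d) - height[y][x];
            vel[y][x] *= damping;
        }
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            height[y][x] += vel[y][x];
}
```

On the GPU the two passes fall out naturally from the double buffering: velocities are computed from the previous frame's heights.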

The fish flock using a kd-tree, which runs 5000 boids/fish at 8fps or 2000 at 32fps.

shaders/boids2.png
shaders/boids.png
3D Wood Function Mar, 2010
A 3D wood shader which can take a seed value and be applied to any object independent of UVs/texture coordinates.
Uses a 3D perlin noise function passed to a sine function to generate the grain. Normal displacement is also generated. A seed value can be passed in per-object.
shaders/wood.png
shaders/wood2.png
A per-object noise seed is set for each object.
shaders/wood3.png
Interior Mapping Mar, 2010
A very basic raycasting method for rendering simple building interiors.

The idea came from here. Rooms have varying walls, depths and lights can be turned on and off. The wall textures are stored in a cube texture (shader source).

The method involves intersecting the room walls/ceiling/floor after the ray entrance position is found in tangent space. A random texture can then be chosen from the cube texture.

This same technique has been extended here ("instant animated grass").

shaders/interiormapping2.png
shaders/interiormapping.png
Screen Space Ambient Occlusion Mar, 2010
Ambient occlusion generated in screen space.
Need to add a blur to the AO to reduce the noise caused by jittered sample directions. (previous version)
shaders/ssao2.png
Motion Blur Feb, 2010
Per-object motion blur rendered in screen space.
Per-pixel velocity (generated by storing previous object transformations) is rendered to a temporary buffer. Both the current and last velocity textures are used to blur in the direction of velocity. This is incorrect but looks ok at high framerates. Ideally, depth peeling would be used so moving objects could be made transparent and blur could be seen from occluded objects.
shaders/motionblur.png
shaders/motionblur2.png
Volumetric lighting - Incomplete
Volumetric lighting is generated by raycasting through shadow maps.
Still need to limit trace distance based on depth (at the moment occluded volumetric light is still rendered).
shaders/volumetric3.png
A later attempt using a similar method to deferred rendering.
shaders/volumetric4.png
shaders/volumetric2.png
Depth of Field - Incomplete Jul, 2009
A screen space depth of field effect.
One of the more computationally complex demos. This is because objects in focus should not contribute or "bleed" blur colour to objects out of focus. There are many performance improvements that need to be made, for example downsampling the blur textures.
shaders/dof4.png
HDR Lighting with Bloom

Separable functions are awesome. This uses a simple Gaussian blur. Many people confuse High Dynamic Range (HDR) and bloom. Bloom refers to the glow effect while HDR generally refers to the use of floating point images which have a greater range of intensities than the regular 8-bits-per-channel images. HDR images require tone mapping so the screen can better display the range of information.

For example, after bloom is applied with floating point textures, colour intensities of 0 to 1 map to 0 to 0.8. Colours greater than 1 map to 0.8 approaching 1.0 but may never actually get there.

On a side note, the impressiveness of modern video game graphics is directly proportional, with positive correlation, to the amount of bloom. One can project this trend to approximate what games will look like in the future.
Here's a screenshot:

shaders/hdr.png
Shadow mapping
and not much else
...yep
shaders/shadows.png
Relief Mapping Apr, 2009
Parallax mapping by linearly raycasting through heightmaps followed by a binary search.
Geometry is faked by raycasting through a heightmap. Self shadows are also computed. This is done by raymarching in tangent space followed by a binary root finding search.
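Per ray, the tangent-space march reduces to a linear search followed by a binary search. A 1D toy version against a flat heightfield, purely illustrative (the ray enters at height 1 and descends as y(x) = 1 - x):

```c
/* Toy heightmap: a flat surface at height 0.25. */
static float heightmap(float x) { (void)x; return 0.25f; }

/* March until the ray drops below the surface, then binary search the
 * last interval. Returns the x of the intersection (true answer: 0.75). */
float relief_intersect(int linear_steps, int binary_steps)
{
    float step = 1.0f / linear_steps, x = 0.0f;
    /* linear search: advance while the ray is above the surface */
    while (x < 1.0f && 1.0f - x > heightmap(x))
        x += step;
    /* binary search between the last two samples */
    float lo = x - step, hi = x;
    for (int i = 0; i < binary_steps; ++i) {
        float mid = 0.5f * (lo + hi);
        if (1.0f - mid > heightmap(mid)) lo = mid; else hi = mid;
    }
    return 0.5f * (lo + hi);
}
```

In the real shader the same search runs along the view ray in tangent space, sampling the heightmap texture, with a second march towards the light for the self shadowing.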
12 triangles used to draw box.
shaders/relief.png
And again, writing to the depth buffer for geometry intersection.
shaders/relief2.png
Cubic and planar reflections Mar, 2009
Reflections using render to textures.
The scene is rendered to cube maps (and a 2D texture) using FBOs. These are then used to colour the reflective surfaces after the reflection vectors are modified by sine waves (for a ripple effect). Note that spheres reflect each other as the reflection textures from the previous frame are used.
shaders/reflect.png

General Comments:
Pyarelal Knowles18/05/11
Comments will be coming soon.
Michael Paton19/05/11
Epic... Awesome stuff m8  
:(19/05/11
my comment didn't work...
Pyar19/05/11
yeah it did
Jesse the Awesome of Awesomest!20/05/11
BAM!
Helen23/05/11
Fabulous!!!!!!!
Simon22/11/11
Can you post source codes for "Procedural Terrain" and "Approx n-body simulation"? It would be very intersting for me.
Pyar07/12/11
Apologies for the delay. I've set up php mail() for comments properly now. To my surprise I've had ads attempted to be posted here, of all places.
Daniel15/12/11
I would like to get a source code example for your Marching Cubes algorithms. I am a beginner at OpenGL for the most part, and would like to figure this out, but not many real examples out there.
Email: <email hidden>
xjli27/07/12
hi,I would like to get a source code example for your Irregular shadow mapping.My Email:<email hidden>,tks
Pyar27/07/12
(replied). I'll put some code up eventually. In the mean time, have a look at chapter 20 at openglinsights.com. Not a lot of changes need making for irregular shadow maps.
longchamp27/04/13
very nice article man
Pyar27/04/13
thanks very much!
 

I haven't released the source for most of these, mainly because since writing them I always think of better ways to code it. If you want one of the examples or even just the GLSL, feel free to email me (or add a comment).