HARDWARE PERLIN NOISE DEMONSTRATION

by Paul R. Dunn


      Perlin noise can be used to make some very impressive-looking cloud effects, but at a substantial cost in processing power. Here is some code that I wrote after experimenting with Perlin noise. I realized that by using the texture blending capabilities of today's 3D graphics hardware, you can generate similar effects on the graphics card and move the burden away from the CPU. And what a heavy burden that is!
      This function uses a pre-made noise texture and renders it in several passes to a render surface. The render surface ends up holding a texture that resembles Perlin noise, and it can be animated like Perlin noise (or rendered into a third dimension like Perlin noise). The algorithm, it turns out, is very simple and uses virtually no CPU, because all of the work is done on the graphics card. Many graphics engines could benefit from creating foggy or fiery textures this way rather than on the CPU.
      A sample of the code follows, written in C++ with the DirectX 9 SDK. The entire application that this function was extracted from can be downloaded in binary (executable) format here <HWPerlin.exe> (72KB). It will run on Windows systems with DirectX 9 and a 3D card installed. Run the app and press the 'N' key to cycle through the three render styles, or 'type's. It generates a 512x512-pixel texture and animates it. Compare that app to this one <ClassicPerlin.exe> (56KB), which generates a 256x256-pixel texture (stretched to 512x512), is not as smoothly animated, and is more taxing on the CPU.
      Compared with algorithms that generate Perlin noise on the CPU, this algorithm can generate higher-resolution textures and animate them faster, simply because a CPU isn't designed for the kind of high-volume parallel processing that a pixel processor offers.
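For comparison, the classic CPU approach sums several octaves of interpolated noise per pixel. Here is a minimal, self-contained sketch of that idea; the hash-based 'latticeNoise' helper and all the names here are mine for illustration (the apps above use a precomputed noise texture instead):

```cpp
#include <cmath>

// Hash-based pseudo-random value in [0,1) for integer lattice coordinates.
// (Illustrative helper -- the real apps use a precomputed noise texture.)
static float latticeNoise(int x, int y) {
    unsigned n = (unsigned)x * 73856093u ^ (unsigned)y * 19349663u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / 2147483648.0f;
}

// Bilinearly interpolated noise, mirroring what the GPU's linear
// magnification filter does when the base texture is stretched.
static float smoothNoise(float x, float y) {
    int xi = (int)floorf(x), yi = (int)floorf(y);
    float fx = x - xi, fy = y - yi;
    float a = latticeNoise(xi, yi),     b = latticeNoise(xi + 1, yi);
    float c = latticeNoise(xi, yi + 1), d = latticeNoise(xi + 1, yi + 1);
    float top = a + (b - a) * fx;
    float bot = c + (d - c) * fx;
    return top + (bot - top) * fy;
}

// Classic CPU fractal sum: each octave doubles the frequency and halves
// the amplitude (persistence 0.5). The GPU version does the same thing,
// but as alpha-blended render passes instead of a per-pixel loop.
float fractalNoise(float x, float y, int octaves) {
    float sum = 0.f, amplitude = 0.5f, frequency = 1.f;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * smoothNoise(x * frequency, y * frequency);
        frequency *= 2.f;
        amplitude *= 0.5f;
    }
    return sum;  // stays in [0, 1) because the amplitudes sum to < 1
}
```

On the CPU, this whole loop runs once per pixel; on the GPU, each octave becomes one full-screen pass, which is why the hardware version scales so much better.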


These images, taken from the app at 512x512, were reduced to 256x256 for web publication:

Base Noise: this noise pattern was generated once in the app.

Perlin Noise: this pattern was generated by multi texturing the base noise.

Perlin Noise: each layer was offset differently to produce this pattern.






The primary texture blending function:
/**************************************
 pOutputTex is an IDirect3DTexture9* that will be filled with the Perlin
 noise. It can later be mapped onto a mesh.

 pNoiseTex is a texture filled with a standard noise algorithm. In this
 program, I generated one pNoiseTex at the beginning of the application
 and used it throughout.

 GRIDSIZE is the width and height of the output texture pOutputTex
**************************************/
void NoiseClass::Advance(int type)
{
    HRESULT hr;
    IDirect3DDevice9* pD3DDev = pRasDev->GetDevice();

    DWORD FVF_pV = D3DFVF_XYZRHW | D3DFVF_TEX1;
    struct _pV { FLOAT x,y,z,rhw; FLOAT tu,tv; };

    float flim = GRIDSIZE;
    _pV pV[] = {
        {  0.f,  0.f, 0.f,1.f, 0.f,0.f },
        {  0.f, flim, 0.f,1.f, 0.f,1.f },
        { flim,  0.f, 0.f,1.f, 1.f,0.f },
        { flim, flim, 0.f,1.f, 1.f,1.f },
    };

    IDirect3DSurface9* pSurface;
    hr = pOutputTex->GetSurfaceLevel(0,&pSurface);
    printerr("GetSurfaceLevel",hr);
    hr = pD3DDev->SetRenderTarget(0,pSurface);
    printerr("SetRenderTarget",hr);
    pSurface->Release(); // SetRenderTarget holds its own reference;
                         // release the one GetSurfaceLevel gave us

    pD3DDev->SetFVF( FVF_pV );
    pD3DDev->Clear(0,NULL,D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER,0x00000000,1.f,0);
    pD3DDev->BeginScene();

    // pNoiseTex is just a basic random noise texture
    pD3DDev->SetTexture(0,pNoiseTex);

    switch (type) {
    default:
    case 0: // standard
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1 );
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        break;
    case 1: // deeper contrast
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_ADDSIGNED );
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_TEXTURE );
        break;
    case 2: // darker or less dense
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_SUBTRACT );
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        pD3DDev->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_TEXTURE | D3DTA_COMPLEMENT );
        break;
    }

    // use alpha channel to cut amplitude in half while frequency doubles
    pD3DDev->SetTextureStageState( 0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1 );
    pD3DDev->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_TFACTOR );

    // no more texture stages, 1 is enough!
    pD3DDev->SetTextureStageState( 1, D3DTSS_COLOROP, D3DTOP_DISABLE );
    pD3DDev->SetTextureStageState( 1, D3DTSS_ALPHAOP, D3DTOP_DISABLE );

    // standard alpha blending, nothing fancy
    pD3DDev->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
    pD3DDev->SetRenderState( D3DRS_BLENDOP,   D3DBLENDOP_ADD );
    pD3DDev->SetRenderState( D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA );
    pD3DDev->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

    // make the texture map properly
    pD3DDev->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
    pD3DDev->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR );
    pD3DDev->SetSamplerState( 0, D3DSAMP_ADDRESSU,  D3DTADDRESS_WRAP );
    pD3DDev->SetSamplerState( 0, D3DSAMP_ADDRESSV,  D3DTADDRESS_WRAP );

    static float time = 50.f;           // start with a non-zero value
    while (time>1024.f) time-=1024.f;   // keep time normalized

    // the main blending loop: makes pretty Perlin noise out of messy random noise
    int count = 0;
    float scf = 1.f/GRIDSIZE;   // scale factor (or frequency of noise)
    float shft;
    int txf = 0xff;             // texture factor (or amplitude of noise)
    while (scf<flim/GRIDSIZE && txf>0) {
        count++;
        shft = time*powf(scf,1.5f) * (GRIDSIZE/1024.f);
        pD3DDev->SetRenderState( D3DRS_TEXTUREFACTOR,
                                 (txf<<24)+(txf<<16)+(txf<<8)+(txf) );
        pV[0].tu = shft+0.f; pV[0].tv = 0.f;
        pV[1].tu = shft+0.f; pV[1].tv = scf;
        pV[2].tu = shft+scf; pV[2].tv = 0.f;
        pV[3].tu = shft+scf; pV[3].tv = scf;
        pD3DDev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP,2,&pV[0],sizeof(pV[0]));
        scf *= 2.f;     // frequency
        txf /= 2;       // amplitude (persistence)
    }
    time += 0.005f;

    pD3DDev->EndScene();
}

      Notice that the Perlin concepts of frequency, amplitude, and persistence are applicable here. Each rendering pass doubles the frequency. The scale factor, or frequency ('scf'), is used for the texture coordinates: a frequency of two stretches a two-by-two grid of the base noise pattern over the target area, a frequency of four stretches a four-by-four grid, and so on. Linear interpolation is used for the magnification filter to produce a softened look. This part is probably inferior to the classical algorithm, which uses a smoother cubic interpolation. I expect that future graphics hardware will support better interpolation algorithms.
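To make the pass schedule concrete, here is a small standalone sketch (the function names are mine, not from the app) that reproduces the frequency doubling implied by 'scf' in the loop above, for the article's 512x512 grid:

```cpp
// Pass schedule from NoiseClass::Advance, extracted for illustration.
// GRIDSIZE matches the article's 512x512 output texture.
const int GRIDSIZE = 512;

// Number of passes the frequency condition alone allows: 'scf' starts at
// 1/GRIDSIZE and doubles until it reaches 1.0, i.e. log2(GRIDSIZE) passes.
// (In the real loop the txf > 0 test can stop it earlier.)
int frequencyPassLimit() {
    int count = 0;
    float scf = 1.f / GRIDSIZE;
    while (scf < 1.f) { ++count; scf *= 2.f; }
    return count;
}

// Base-noise texels spanned across the target by pass 'pass' (0-based):
// scf * GRIDSIZE = 1, 2, 4, ... -- each pass doubles the frequency.
int texelsSpanned(int pass) {
    return 1 << pass;
}
```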
      Each consecutive pass also adjusts 'txf' (the texture factor, or amplitude) by dividing it in half. This effectively reduces the contrast (the brightness range between darkest and lightest) by fifty percent. Rendering enough passes would eventually lead to a texture factor of zero, which would produce a uniform pattern of black. The rendering loop, however, tests that the texture factor is larger than zero, so it only renders until no more detail would be produced. In my code, dividing the texture factor by two corresponds to Perlin's persistence of 0.5. Other values could be used, but keep in mind that 'txf' here is an integer value, and a factor of two works well.
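The amplitude side of the schedule can be sketched the same way (helper names are mine): the texture factor starts at 0xff and integer-halves each pass, and it is packed into all four channels of D3DRS_TEXTUREFACTOR exactly as the shifts in the code do:

```cpp
// Passes actually rendered before the amplitude cutoff: txf runs
// 255, 127, 63, 31, 15, 7, 3, 1, then 0 -- eight passes with txf > 0.
int amplitudePassLimit() {
    int txf = 0xff;
    int passes = 0;
    while (txf > 0) {
        ++passes;   // one pass is rendered at this amplitude
        txf /= 2;   // integer halving: persistence of 0.5
    }
    return passes;
}

// Replicates txf into all four ARGB channels, matching the code's
// (txf<<24)+(txf<<16)+(txf<<8)+(txf) for D3DRS_TEXTUREFACTOR.
unsigned packTextureFactor(int txf) {
    return ((unsigned)txf << 24) | (txf << 16) | (txf << 8) | txf;
}
```

So for a 512x512 grid the frequency condition would allow nine passes, but the amplitude cutoff stops the loop after eight.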
      The only other 'trick' used here is the shift amount ('shft') that animates the texture. Each layer's texture offset is calculated a little differently, producing slow movement for the low frequencies and faster movement for the higher frequencies. This gives a cloud-like motion. Clearly, this behaviour, as well as the way frequency and amplitude are scaled, could be altered to produce different kinds of noisy effects: start 'scf' at a higher frequency and render in red or orange to produce a fiery look.
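The shift formula from the loop can be isolated as follows (the function name is mine). Because it scales with scf^1.5, doubling the frequency multiplies a layer's scroll speed by 2^1.5, about 2.83x, so the fine detail drifts visibly faster than the broad shapes:

```cpp
#include <cmath>

// Per-layer texture shift from NoiseClass::Advance:
//   shft = time * scf^1.5 * (GRIDSIZE / 1024)
// Higher-frequency layers (larger scf) scroll faster, giving the
// cloud-like drift described in the article.
float layerShift(float time, float scf, float gridsize) {
    return time * powf(scf, 1.5f) * (gridsize / 1024.f);
}
```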
      Another way to alter the effect is to change how the texture stages are blended together, which is done inside the 'switch' statement in the function above. Case one, with the deeper contrast, is in my opinion the best looking of the three. It also happens to be the blending I used when I captured the pictures posted above. Pressing 'N' once after starting the program produces noise similar to those images.
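The fixed-function color ops in the switch reduce to simple per-channel arithmetic on values normalized to [0,1]. A small sketch of that arithmetic (helper names are mine) shows why case 1 deepens contrast and case 2 darkens:

```cpp
#include <algorithm>

// Case 1: D3DTOP_ADDSIGNED with the texture as both arguments computes
// t + t - 0.5, clamped to [0,1] -- a contrast curve with slope 2 centered
// on mid-gray, hence the 'deeper contrast'.
float addSignedSelf(float t) {
    return std::min(1.f, std::max(0.f, t + t - 0.5f));
}

// Case 2: D3DTOP_SUBTRACT with the complemented texture (D3DTA_COMPLEMENT)
// as the second argument computes t - (1 - t) = 2t - 1, clamped -- the same
// slope, but everything below mid-gray clips to black, hence 'darker'.
float subtractComplementSelf(float t) {
    return std::min(1.f, std::max(0.f, t - (1.f - t)));
}
```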
      Any questions or comments on the code or algorithm can be directed to the address in the footer of this document.


References:

http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
      The article that got me started. The pseudo code found here is what I used to make the ClassicPerlin app which is available for download above.

http://freespace.virgin.net/hugo.elias/models/m_clouds.htm
     Hugo Elias describes his cloud rendering algorithm in detail. Perlin noise is the basis of the technique. The section titled 'Creating Clouds' has a nice diagram that shows the way the base noise is layered and tiled, making it clear how the function posted here is operating.

http://www.noisemachine.com/talk1/
     A presentation from Ken Perlin himself.

http://mrl.nyu.edu/~perlin/doc/oscar.html
     Notes about Ken's academy award and some C code for the algorithm.

http://astronomy.swin.edu.au/~pbourke/texture/perlin/
     Another good explanation of Perlin noise and its applications.

http://www.robo-murito.net/code/perlin-noise-math-faq.html
     Matt Zucker discusses the mathematical complexity of the Perlin noise algorithm and the subsequent cost to process it.


yet another web page by Paul Dunn 12/19/2004 © 2004 (updated 08/17/06)