This document presents the NeL 3D library.
The NeL 3D library comprises world surface representations for content movement and collision management for both server and client applications. It also includes 3D audio and video rendering modules that are used exclusively on the client.
The objective of the 3D library is to provide an architecture framework for housing the code modules required for representing a virtual universe, and its contents, in 3D.
Nevrax developed the code modules required to represent the universe that their product is based around. This universe includes animated characters, objects and special effects in a variety of environments including undulating terrain, towns and the insides of buildings.
With time the list of code modules included in the NeL 3D library will swell to encompass all standard world and content representations.
NeL's 3D library is structured in layers built on top of a common driver layer. The Nevrax team worked with OpenGL and DirectX implementations of the driver layer.
NeL is currently tested on Linux and Windows platforms.
For rendering production purposes it is important to take into account the target machine specification.
For Nevrax' first product the client program must run at a consistent 30+ frames per second at 1024x768 resolution on a PC of the following specification:
- Intel Pentium3 1GHz
- Graphics card comprising Nvidia NV20 chipset
- 128MBytes RAM
- 4GBytes free space on hard disk
The current 3D technology implementation is based on this machine specification.
Statement of requirements
The basic problem of representing a 3D universe can be split up into the following major technical segments:
The scenery rendering system must be capable of rendering both confined spaces and open terrains. In both cases the key consideration is the minimisation of problems that endanger the 'suspension of disbelief'.
For the open terrains
- Smooth landscapes: The landscape must be smooth (i.e. non-polygonal), even at close distances. That said, it must be possible to generate sharp ridges and to 'roughen up' the surface where necessary.
- Overhanging scenery: The landscape must be able to include bridges, caves and overhangs
- Long view distance: View distance is very important - it should be possible to see at least 1 kilometer. The far distance should not fade to black or to fog to hide the end of the visible scene. Artefacts of the far clipping distance should be minimised - cliff faces should not appear out of nowhere and suddenly blot out the sky.
- Lighting and shadows: The quality of lighting and shadows is extremely important - more so than the possibility of moving light sources.
- Displaying large numbers of objects and characters: It must be possible to display large numbers of objects (both animated and inanimate) at any time. The nature of massively online products prohibits a tight control over the number of elements in view at any given time, which means that the rendering engine must be capable of adapting display quality as a function of the workload.
- Sky representation: There must be support for cloud formation and dispersion and for weather effects.
For the confined spaces
- Highly detailed scenery: The confined space scenery should have a level of detail and quality of lighting equivalent to that of video games such as Quake 3 or Half Life.
- Limited view distance: View distance is less important than for open terrain - it is reasonable to require the artists to comply with view distance constraints as they construct the indoor scenery.
- Displaying large numbers of objects and characters: It must be possible to display large numbers of objects (both animated and inanimate) at any time. Note that it is often the case, in confined spaces, that large numbers of objects and characters can be obscured by thin walls. It would be unacceptable to drop the level of quality of objects in view due to objects that are obscured.
- Given that confined spaces may have open roofs, it is important that they too cater for sky and weather representations and that they allow for effects such as rain to be confined to the spaces where there are holes.
- It must be possible to create windows in confined spaces that look out over open terrain.
Character and Object Animation and Rendering
The characters and objects in the virtual universe will be animated. Animation information can be divided into two broad categories:
Visual animation information
This is the information used by the renderer to display the animated model. It includes:
- A description of the skeleton that is used by the model
- A description of the skin used by the model and the relationship between the skin control points and the bones of the skeleton (i.e. which bones control which parts of the skin)
- Skeletal animation information (e.g. key-frames for the skeleton)
- Skin animation information (e.g. skin distortions for facial expressions)
Logical animation information
This is the information used by the application to manipulate an animated object in the world. It includes:
- Animation data relating to movement in the world (e.g. an animation of a character running would include a velocity curve, information relating to the moments in the animation when a foot should be in contact with the floor, etc).
- Animation data relating to logical events (e.g. the moment at which the knife leaves the hand of a knife thrower)
- Animation relating to special effects and sound effects (e.g. the triggers for the sound effect and dust effect when a dusty character claps his hands)
Nevrax have the following requirements for the animation system:
- Composite Skinning: Skinning for all animated objects and characters should be smooth. This needs to include objects and characters whose skin is composed of multiple separate parts. For example, it should be possible to replace a character's hand mesh with a gloved alternative without breaking the skinning.
- Skeleton scaling: It must be possible to apply the same animations to characters who share a common base skeleton but who differ in size.
- Multiple channels of animation: It must be possible to mix together different animations from different sources. For example, it should be possible to mix a run animation with a punch animation to give a running punch.
- Animation tweening: It must be possible to mix different animations that impact on the same part of the skeleton. For example, to mix a limp and a walk to give a slight limp.
- Binding to objects: Characters must be able to carry objects around and to mount other characters or objects (e.g. a man riding a motorbike carrying a gun).
- Basic Inverse Kinematics: There needs to be a basic inverse kinematics system capable of making minor adjustments to animations. This is of particular importance for scaled skeletons. For example, in the motorbike example above, the rider must sit on the saddle, must have his feet on the pedals and must have a hand on the handlebar. It is vital that the IK system is not too processor intensive and does not cripple performance.
- Blended Shapes: It must be possible to blend between different versions of certain skins for providing facial expressions and lip sync animations.
- API - Animation control: The application programmers need to have complete control over the animation. This means that they need to be able to specify the list of animations to mix with their relative weights, the time value for each animation, the list of IK targets and bind points.
- API - Animation interrogation: For the most part, the application program will interpret the logical animation data without reference to the 3D API. However, the application program does need to have an interface for interpreting the positions of dummy objects that are linked to the skeleton. Note that this information needs to be available for objects that are not necessarily in view and that the 3D animation engine would not normally need to process.
- Tools: Ideally, we need to be able to edit logical and physical data in a single animation tool and export the data into separate blocks for the 3D module and for the application program. Note: if, for some reason, more than one tool is required then it is imperative that work done in one tool does not invalidate work previously done in the other tools.
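The channel-mixing and weighting requirements above can be illustrated with a minimal sketch. All class and method names here are hypothetical illustrations, not the actual NeL animation API: the point is only that the application supplies, per channel, an animation track, a relative weight and a time value, and the mixer blends the results.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <vector>

// Hypothetical sketch only: a weighted mixer for scalar animation tracks
// (e.g. a single joint angle). Each channel carries its own time value
// and weight, as required by the animation-control API above.
struct Track {
    std::map<float, float> keys;  // time -> value keyframes

    // Piecewise-linear sampling between keyframes.
    float sample(float t) const {
        if (keys.empty()) return 0.f;
        auto hi = keys.lower_bound(t);
        if (hi == keys.begin()) return hi->second;
        if (hi == keys.end()) return std::prev(hi)->second;
        auto lo = std::prev(hi);
        float u = (t - lo->first) / (hi->first - lo->first);
        return lo->second + u * (hi->second - lo->second);
    }
};

struct Channel {
    Track track;
    float weight;  // relative blend weight chosen by the application
    float time;    // per-channel time value
};

// Blend every channel affecting one joint into a single value,
// normalising by the total weight.
float mixChannels(const std::vector<Channel> &channels) {
    float total = 0.f;
    for (const auto &c : channels) total += c.weight;
    if (total <= 0.f) return 0.f;
    float value = 0.f;
    for (const auto &c : channels)
        value += (c.weight / total) * c.track.sample(c.time);
    return value;
}
```

For instance, mixing a 'walk' track that holds a joint at 10 degrees with a 'limp' track that holds it at 30 degrees, at weights 0.75 and 0.25, yields 15 degrees: a slight limp.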
A third party NeL user has the following additional requirements for which they are extending the animation system:
- Powerful Inverse Kinematics: It must be possible to direct a certain part of a skeleton to a given location and for the rest of the skeleton to move naturally based on joint weights and constraints. If an object is on the ground, the IK system will know how to move the character's body, including bending the feet, to reach it.
- Physical Constraints: The engine will need to constrain the animation coming from the mixer. For example, if a character is running over rough terrain, then the engine will pull his feet down to the ground at the right place. If a character is running up a hill, the physics engine will take care of modifying the run cycle accordingly.
- Distortion Effects: All animation cycles will be created with 100% efficiency of a given motion. The game mechanics need to be able to provide distortion input to modify this. For example the physical status of a character will decide how fast he runs. If, for example, the character is drunk, the physics engine needs to apply distortion effects to the animation cycle. There should also be the option to put subtle random distortion into all cycles to give a more natural feel.
- Binding to objects: Carried objects must simulate their physics based on their weight and dimension and their relationship with the carrier. For example if a character is carrying a bag and running, the bag should jump up and down a little based on the physics of the character.
- Live motion capture: There needs to be support for live motion capture from different sources and transmission of live motion data across the internet. There needs to be optimised animation data compression and a mechanism for dealing with lags and missing data.
- Caching: As all these features require extensive calculations, there must be support for caching of pre-calculated animations wherever possible. For example, if a character is walking up a steep hill, it should be possible to perform most of the animation calculations only once, until the steepness of the hill changes.
- Standard Skeleton Templates: There needs to be a standard skeleton template that can be used through the animation pipeline, from the 3D authoring tool, into the game engine and back into the 3D authoring tools for tweaking before high quality TV/ Film rendering takes place.
- Tools: There need to be export plug-ins for 3D Studio MAX and Maya.
Nevrax require the following types of visual effect:
There needs to be a flexible and extensible particle system that will be used for smoke effects, spray effects, fire effects and a large variety of other visual special effects. The basic rule is that the more fully featured the particle system, the more impressive the effects will be.
- Emitters: Different forms of emitter must be supported including at least pipes, cones, spheres, faces and meshes. Emitters must have particle type, density and emission velocity parameters as a minimum.
- Particles: Particles of different types must exist, including both sprites and meshes, with a wide range of parameters including animation over time, movement (rise or fall, bounce, etc), lighting (are they self-illuminated), etc.
- External effects on particles: Particles should be affected by wind and by dynamic lighting.
- Tools: There needs to be a WYSIWYG tool for developing particle systems.
- API: The application programmers need to be able to launch 'pre-prepared' particle systems, designed by an artist, saved in a format known only to the particle system module. They also need to be able to create and control new particle systems via the API.
- Optimisation: The particle system must be optimised, taking into account whether or not emitters and their particles are in view, their distance from the camera, etc.
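To make the emitter/particle requirements concrete, here is a minimal sketch of a single emitter of the kind described above. The structure and names are hypothetical, not NeL's particle API: it spawns sprites at a fixed rate, integrates velocity plus an external wind acceleration, and retires particles whose lifetime has expired.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch only: one emitter with rate, emission velocity and
// particle lifetime parameters, per the requirements above.
struct Particle {
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
    float life;        // remaining lifetime in seconds
};

struct Emitter {
    float rate;      // particles spawned per update step
    float emitVz;    // initial upward emission velocity
    float lifetime;  // seconds each particle lives
    std::vector<Particle> particles;

    void update(float dt, float windX) {
        // Spawn new particles at the emitter origin.
        for (int i = 0; i < static_cast<int>(rate); ++i)
            particles.push_back({0, 0, 0, 0, 0, emitVz, lifetime});
        // Integrate motion and age; wind acts as an external acceleration.
        for (auto &p : particles) {
            p.vx += windX * dt;
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
            p.life -= dt;
        }
        // Retire expired particles.
        particles.erase(
            std::remove_if(particles.begin(), particles.end(),
                           [](const Particle &p) { return p.life <= 0.f; }),
            particles.end());
    }
};
```

A real system would add the emitter shapes, particle types and view-dependent optimisations listed above; this sketch shows only the core spawn/integrate/retire loop.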
There needs to be a general mechanism for controlling screen blur, screen colouring and camera shakes.
Nevrax requires water representations for the following cases:
- Perfectly flat transparent water surfaces: There needs to be light reflection off the surface and ripple effects when objects move at the water surface. Objects under water should be lightly distorted.
- Waterfalls into flat water: Waterfalls need to cause localised disturbances in the flat water that they drop into.
- Flat water seen from underneath: There need to be screen colouring and distortion effects. There need to be lighting caustic effects on the scenery and objects. The surface needs to be predominantly reflective with similar caustic effects. Only vague scenery outlines and very close objects need to be seen through the water surface.
- Water with waves: In some cases waves lap up on beaches, in others they meet rocks or cliff faces. This water does not need to be very transparent.
- Water with waves seen from underneath: There need to be the same screen distortion effects as for flat water. The water surface can be completely opaque, though the forms of the waves must be shown with a lighting effect.
- Transitions from waves to and from flat water: It is possible for flat water to become wavy and wavy water to become flat. There need to be transition effects to mask the change of modes.
- Entering and exiting water: The camera can never be semi-submerged. It is either in water or out of water.
There need to be transition effects as the camera plunges into or emerges from the water. It will be the application programmers' responsibility to make sure that the camera does not clip the tops of waves and switch in and out of water regularly. This means that there must be an API for determining the water and wave height.
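The water and wave height query described above might look like the following sketch. The names and the wave model are purely illustrative (not NeL's water API); the point is that the application can compare the camera height against the surface height to decide cleanly between the in-water and out-of-water states.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch only: a water surface that is either perfectly flat
// (amplitude 0) or carries a simple sinusoidal wave pattern.
struct WaterSurface {
    float baseLevel;   // height of the flat water plane
    float amplitude;   // wave amplitude (0 for perfectly flat water)
    float wavelength;

    float heightAt(float x, float z, float time) const {
        if (amplitude == 0.f) return baseLevel;
        const float k = 2.f * 3.14159265f / wavelength;
        return baseLevel +
               amplitude * std::sin(k * x + time) * std::cos(k * z);
    }
};

// The camera is never semi-submerged: it is strictly above or below the
// surface, and the transition effect fires on the state change.
bool cameraUnderWater(const WaterSurface &w, float camX, float camY,
                      float camZ, float time) {
    return camY < w.heightAt(camX, camZ, time);
}
```

With such a query, the application can keep the camera clear of wave tops and trigger the plunge/emerge transition effects exactly once per state change.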
Weather effects can be split into two problems:
- Sky effects: We need to be able to generate and manage believable clouds and manage their effect on general illumination as they blot out the sun.
- Precipitation effects: We need effects for rain (both light and heavy), hail and snow. We do not require puddle formation or snow build-up for now. Precipitation effects must take into account the absence or presence of roofs and other large objects.
Sound & Music
Audio treatment is extremely important to immersion in a virtual universe. It is therefore key that we implement the following features:
- Music playback: The music must be in a common format such as MP3 so that users can choose to use their own.
- Scene analysis: The scene analysis must take into account the occlusion effects of scene geometry in identifying the sound sources that are of interest, together with the reverb, echo and filtering effects to be applied. It should also identify the ambient sound set to use for generating atmosphere.
- Key sounds: There needs to be a mechanism for determining which sounds, in the space around the camera, are particularly loud, particularly close or of particular interest. These sounds need to be positioned in 3D space and need to be subject to effects relating to the scene geometry (echoes, reverberation, etc).
- Background noise management: There needs to be a general filtering of the noise generated by characters, creatures and objects that are not close to the camera in order to accentuate the key sounds.
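The key-sound selection described above can be sketched as a simple priority ranking. This is an illustrative model, not NeL's audio API: each candidate source gets a priority from its loudness and distance, damped when occluded by scene geometry, and only the top few are given full 3D positioning while the rest fall back to filtered background noise.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch only: rank sound sources around the listener.
struct SoundSource {
    float x, y, z;   // position in world space
    float loudness;  // intrinsic loudness of the source
    bool occluded;   // blocked by scene geometry?
};

float priority(const SoundSource &s, float lx, float ly, float lz) {
    float dx = s.x - lx, dy = s.y - ly, dz = s.z - lz;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float p = s.loudness / (1.f + dist);  // simple distance attenuation
    return s.occluded ? p * 0.25f : p;    // crude occlusion damping
}

// Return the indices of the n highest-priority sources.
std::vector<size_t> keySounds(const std::vector<SoundSource> &srcs,
                              float lx, float ly, float lz, size_t n) {
    std::vector<size_t> idx(srcs.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(), [&](size_t a, size_t b) {
        return priority(srcs[a], lx, ly, lz) >
               priority(srcs[b], lx, ly, lz);
    });
    if (idx.size() > n) idx.resize(n);
    return idx;
}
```

A full implementation would replace the crude occlusion factor with the geometry-aware scene analysis described above, and attach reverb and echo parameters to each selected source.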
Technical design overview
The 3D library is implemented in the following three layers. For more details on the technical design, refer to the technical documentation.
Layer 1: Driver Layer
Layer 1 implements the effective 3D operations. This is highly API and platform-dependent.
Modification of layer 1 should be very rare, as it is the most stable part of the NeL 3D library. New extensions to OpenGL or new drivers will require intervention in this layer.
Layer 2: Object Layer
Layer 2 implements the various objects (landscapes, meshes, lights and so on) and their rendering.
Modification of layer 2 occurs only when new object types are added, such as different landscape representations, different styles of animation, and so on.
Layer 3: High Level Layer
Layer 3 provides support for spatial manipulation and the necessary interfaces to create the whole scene and modify it according to the model of the universe. It manages the scene graph depending on the needs of the environment.
Modification of layer 3 is reserved for changes to the fundamental premise on which the NeL library is based, as it is where objects are created and removed and their properties altered according to the needs of the 3D universe.
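The separation between layer 1 and the layers above it can be sketched as follows. The interface and class names here are hypothetical, not the actual NeL driver API: the point is that layer 2 renders through a pure virtual interface so that the OpenGL and DirectX back ends can be swapped without touching the higher layers.

```cpp
#include <cassert>

// Hypothetical sketch only: the abstract driver interface of layer 1.
class IDriver {
public:
    virtual ~IDriver() {}
    virtual void clearBuffers() = 0;
    virtual void renderTriangles(const float *vertices, int triCount) = 0;
    virtual void swapBuffers() = 0;
};

// A do-nothing back end, useful for exercising layer 2 logic without a
// real OpenGL or DirectX context. It only counts what was submitted.
class NullDriver : public IDriver {
public:
    int trianglesRendered = 0;
    void clearBuffers() override {}
    void renderTriangles(const float *, int triCount) override {
        trianglesRendered += triCount;
    }
    void swapBuffers() override {}
};

// Layer 2 code sees only the abstract interface.
void renderFrame(IDriver &driver, const float *vertices, int triCount) {
    driver.clearBuffers();
    driver.renderTriangles(vertices, triCount);
    driver.swapBuffers();
}
```

Under this scheme, adding a new driver (e.g. for a new API extension) means providing another IDriver implementation, which is exactly the kind of rare layer-1 intervention described above.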
NeL Data exporters
Nevrax uses 3DSMax to develop their 3D layout data. The data is exported by a set of plug-ins provided in binary format.
Collision and Movement Surfaces
It is important that the collision model and path finding models used by the application code are coherent with the visual model of the world. It is therefore key that the two are generated from the same tool. The 3D scene exporters are therefore capable of generating the following information required by the application's collision code:
- A mesh of the world at the highest level of visual sub-division detail with material information for each face.
- Material property information, edited with the visual properties (slipperiness, etc.)
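The exported collision data described above might take a shape like the following sketch. The structure names are hypothetical, not NeL's exporter format: the triangle mesh shares the visual subdivision, and each face indexes a material table whose entries are edited alongside the visual material properties.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch only: per-face collision data coherent with the
// visual model, as required above.
struct CollisionMaterial {
    float slipperiness;  // 0 = full grip, 1 = frictionless
    bool walkable;       // can characters stand on this material?
};

struct CollisionFace {
    int v[3];      // indices into the vertex array
    int material;  // index into the material table
};

struct CollisionMesh {
    std::vector<float> vertices;  // packed x, y, z triples
    std::vector<CollisionFace> faces;
    std::vector<CollisionMaterial> materials;

    // Look up the material of the face a character is standing on.
    const CollisionMaterial &faceMaterial(int face) const {
        return materials[faces[face].material];
    }
};
```

Because both the visual mesh and this collision mesh come out of the same exporter, the application's path finding and collision code cannot drift out of sync with what the player sees.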
Current Feature List
The following is a summary of the features currently available in the NeL 3D engine:
Landscape
- Use of Bezier patches and the ROAM algorithm, to provide adaptive subdivision of the landscape based on distance from the camera, steepness, etc.
- Pre-calculated shadows, displacement map and dynamic lighting.
- Landscape texture mapping at a constant 1.5cm/texel, with the possibility of a second complete additive texture layer. Texture continuity breaks due to bilinear filtering have been eliminated.
- Area based audio effects (wind, etc)
Portal based interiors
- Use of portal algorithm with view casting into the landscape mesh
- Audio occlusion and resonance effects
Characters and objects
- Component based object construction, which allows the assembly of multiple "parts" of objects into a single mesh
- Blended skinned animation (mixing multiple animations) with inverse kinematics
- Blended Shapes, to provide morphing and lip-sync style animations
- Multi-Resolution Meshes, to provide a smooth reduction in polygon count as objects retreat into the distance.
- Character and object rendering supports bump mapping, environment mapping, multi-texturing and real time shadow casting.
- Animation and environment related sound effects
- Support for:
  - particle systems
  - full screen effects
  - volume effects
  - audio effects
- Exploitation of 3D graphics accelerator cards' T&L (transform and lighting), pixel shader and vertex shader capabilities.
- Adaptive memory management with background hard disk data streaming.
- Adaptive texture and polygon detail to manage CPU load, GPU load and video memory constraints.
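The adaptive detail feature above can be illustrated with a minimal sketch. This is an assumed feedback model, not NeL's actual implementation: a controller nudges a global detail factor so that the measured frame time converges on the roughly 33 ms budget implied by the 30 fps target, and the engine would drive texture resolution and mesh LOD from that factor.

```cpp
#include <cassert>

// Hypothetical sketch only: one step of a proportional controller for a
// global detail factor in [0.1, 1.0].
float adaptDetail(float detail, float frameMs, float targetMs = 33.3f) {
    const float gain = 0.05f;  // small gain avoids visible LOD oscillation
    detail *= 1.f - gain * (frameMs - targetMs) / targetMs;
    if (detail < 0.1f) detail = 0.1f;  // never degrade below a floor
    if (detail > 1.f) detail = 1.f;    // full detail is the ceiling
    return detail;
}
```

Called once per frame, this lowers detail when frames run long and restores it when there is headroom, which is the behaviour the massively-multiplayer requirement (no tight control over the number of elements in view) demands.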