Radioactive-Software
Forum to discuss Radioactive-Software Products
 

Are you using an impostor? and other questions...
 
rappen
Mini-Me


Joined: 22 Apr 2007
Posts: 6633
Location: Spijkenisse, Netherlands

Posted: Thu May 01, 2008 5:30 am

Glover Klien wrote:
Preprocessor directives are lines included in the code of our programs that are not program statements but directives for the preprocessor. These lines are always preceded by a hash sign. The preprocessor is executed before the actual compilation of code begins, therefore the preprocessor digests all these directives before any code is generated by the statements. Now Dan technical code has never been one I like to talk about but it is known. StoooFooo


agreed
_________________
Rappen Fo'shizzle my nizzle oh fo dizzle Razz

logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Sun May 04, 2008 4:26 am

dgreen:

it looks like the PhysX tutorial programs never really deal with real meshes... it looks like they just create NxActors with NxShapes and then render those shapes by passing the actual NxShape polys to their OpenGL renderer.

So i ask again: what's the correct way of updating my mesh's position/orientation/etc. according to physical changes? In the meantime i just manually update the mesh's position only (not rotation etc.) from the NxActor's getGlobalPosition()... i feel that this is a wicked ugly, unintended way of doing it
CrowBar
Pornstar


Joined: 05 Oct 2007
Posts: 244
Location: Netherlands

Posted: Sun May 04, 2008 6:59 am

Shocked

Last edited by CrowBar on Mon May 05, 2008 5:02 am; edited 1 time in total
logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Mon May 05, 2008 2:15 am

these Ageia tutorials are really off the wall. i mean, they barely explain half of what objects you're supposed to create and what functions to call, because the other half is hidden away in these "common code" header files, containing functionality none of the docs will explain... dude this is so painfully uphill
dgreen
The One


Joined: 24 Sep 2005
Posts: 6811
Location: Raleigh, NC

Posted: Mon May 05, 2008 5:54 pm

logic.cpp wrote:
these Ageia tutorials are really off the wall. i mean, they barely explain half of what objects you're supposed to create and what functions to call, because the other half is hidden away in these "common code" header files, containing functionality none of the docs will explain... dude this is so painfully uphill


It really ain't that hard, and Ageia is the best out there...their tutorials/docs were all I needed to get everything implemented in a few days, a week tops.

You need to cook a triangle mesh into a shape and associate that shape with your actor...then get the orientation/position of the actor and use that.

Ageia does sooo much for you. It will generate a convex mesh for ANY arbitrary set of points: you just send it a point cloud of all your vertices and it'll generate a convex mesh of < 256 triangles to use as the collision approximation. So you can give it your 20,000 triangle car model as input and get back a 160 triangle optimized mesh, which you then cook [ you can save it to file etc. ] and associate with the actor.

search for "mesh cooking ageia"...it's really not hard, but programming does require a lot of patience and you can't expect things to work perfectly, or at all, the first time...it takes years of work to get some things working well.

Any more Qs, lay them on me, or if you need clarification lemme know...I didn't reference the docs for this, I just typed it up, but everything should line up.

- Danny
_________________
I run this place.
"Some muckety-muck architecture magazine was interviewing Will Wright about SimCity, and they asked
him a question something like "which ontological urban paradigm most influenced your design of the simulator,
the Exo-Hamiltonian Pattern Language Movement, or the Intra-Urban Deconstructionist Sub-Culture Hypothesis?"
He replied, "I just kind of optimized for game play."

rappen
Mini-Me


Joined: 22 Apr 2007
Posts: 6633
Location: Spijkenisse, Netherlands

Posted: Tue May 06, 2008 6:09 am

Yea, totally...

logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Sun May 11, 2008 8:13 am

dgreen:

checking out raycast cars here...
[would you ever] / [is there any reason to ever] use non-raycast cars?
logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Tue May 13, 2008 3:52 pm

looks like ppl are busy around here. and rappen won't ya stfu for once XD
dgreen
The One


Joined: 24 Sep 2005
Posts: 6811
Location: Raleigh, NC

Posted: Tue May 13, 2008 8:24 pm

haha

logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Tue May 13, 2008 9:53 pm

hey, if you already had a moment to have a look here, how about answering my Qs:

1) it looks like the PhysX tutorial programs never really deal with real meshes... it looks like they just create NxActors with NxShapes and then render those shapes by passing the actual NxShape polys to their OpenGL renderer.

So i ask again: what's the correct way of updating my mesh's position/orientation/etc. according to physical changes? In the meantime i just manually update the mesh's position only (not rotation etc.) from the NxActor's getGlobalPosition()... i feel that this is a wicked ugly, unintended way of doing it

2) when/why would i use regular car physics instead of raycast cars?
dgreen
The One


Joined: 24 Sep 2005
Posts: 6811
Location: Raleigh, NC

Posted: Wed May 14, 2008 2:43 am

logic.cpp wrote:
hey, if you already had a moment to have a look here, how about answering my Qs:

1) it looks like the PhysX tutorial programs never really deal with real meshes... it looks like they just create NxActors with NxShapes and then render those shapes by passing the actual NxShape polys to their OpenGL renderer.

So i ask again: what's the correct way of updating my mesh's position/orientation/etc. according to physical changes? In the meantime i just manually update the mesh's position only (not rotation etc.) from the NxActor's getGlobalPosition()... i feel that this is a wicked ugly, unintended way of doing it

2) when/why would i use regular car physics instead of raycast cars?


1) a shape can be a convex mesh; you can't pass it a 10,000 triangle mesh, you cook the mesh into a convex shape of <= 255 triangles...as described in my previous post.

2) you would use raycast cars when exact physics aren't required, just 75% etc....it's an approximation, whereas using rigid bodies would be simulating every aspect.....I had some really great stuff going on in Newton, the vehicle physics looked really fantastic, it used rigid bodies for everything....turned out to be too slow for my game [ but if I was doing Gran Turismo or something it would be perfect ]. Look back for some old vids/screenshots or search "newton" in the forum...

Anyways, I currently use raycast cars through Ageia [ with a convex mesh collision body... ] I'll get around to posting some debug wireframe screenshots if you'd like...I'm still tweaking the physics though.

- Danny

logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Wed May 14, 2008 9:48 am

dgreen wrote:
logic.cpp wrote:
...
1) it looks like the PhysX tutorial programs never really deal with real meshes... it looks like they just create NxActors with NxShapes and then render those shapes by passing the actual NxShape polys to their OpenGL renderer.

So i ask again: what's the correct way of updating my mesh's position/orientation/etc. according to physical changes? In the meantime i just manually update the mesh's position only (not rotation etc.) from the NxActor's getGlobalPosition()... i feel that this is a wicked ugly, unintended way of doing it.


1) a shape can be a convex mesh, you can't pass it a 10000 triangle mesh, you cook the mesh into a convex shape <= 255 triangles...as described in my previous post.


i don't think you understood me; i'm asking about after i already have the collision volume ready (whether it's a basic shape or a cooked Ageia mesh). i need to "attach" the Ageia mesh to my actual graphical mesh so that every time the Ageia mesh (actor) changes position/rotation/etc., the graphical model will also move/rotate/etc. along with it. How is this "attachment" supposed to be done? i don't suppose Ageia actors automagically know which graphic model they are intended to manipulate...

what i did so far is that every frame i change my graphical model's position according to the Ageia actor's getGlobalPosition(). Is that how it's meant to be done?
dgreen
The One


Joined: 24 Sep 2005
Posts: 6811
Location: Raleigh, NC

Posted: Wed May 14, 2008 8:06 pm

logic.cpp wrote:
dgreen wrote:
logic.cpp wrote:
...
1) it looks like the PhysX tutorial programs never really deal with real meshes... it looks like they just create NxActors with NxShapes and then render those shapes by passing the actual NxShape polys to their OpenGL renderer.

So i ask again: what's the correct way of updating my mesh's position/orientation/etc. according to physical changes? In the meantime i just manually update the mesh's position only (not rotation etc.) from the NxActor's getGlobalPosition()... i feel that this is a wicked ugly, unintended way of doing it.


1) a shape can be a convex mesh, you can't pass it a 10000 triangle mesh, you cook the mesh into a convex shape <= 255 triangles...as described in my previous post.


i don't think you understood me; i'm asking about after i already have the collision volume ready (whether it's a basic shape or a cooked Ageia mesh). i need to "attach" the Ageia mesh to my actual graphical mesh so that every time the Ageia mesh (actor) changes position/rotation/etc., the graphical model will also move/rotate/etc. along with it. How is this "attachment" supposed to be done? i don't suppose Ageia actors automagically know which graphic model they are intended to manipulate...

what i did so far is that every frame i change my graphical model's position according to the Ageia actor's getGlobalPosition(). Is that how it's meant to be done?


Yea

logic.cpp
Super-Cool-Dude


Joined: 17 Feb 2008
Posts: 88

Posted: Wed May 14, 2008 9:36 pm

now that's just one example of something none of the docs/tutorials would teach me Sad
dgreen
The One


Joined: 24 Sep 2005
Posts: 6811
Location: Raleigh, NC

Posted: Sun May 18, 2008 2:26 pm

Just found this document in the PhysX SDK...it's like PhysX cliffs notes Wink

PhysX SDK Integration Guide
Introduction
This is the PhysX integration guide. It is designed to walk you through integrating the PhysX SDK into your game. Common issues that arise during integration are addressed at each step.

1. Familiarize Yourself with the SDK
2. Basic SDK Components
3. Level Geometry
4. Terrain Geometry
5. World Objects
6. Drawing your World Objects
7. Physics Parameters and Scale
8. Object Materials
9. Joints
10. Springs
11. Asynchronous API
12. Debug Rendering
13. Moving your Objects
14. Character Controller
15. Vehicle Controller
16. Triggers
17. Raycasts
18. Contacts
19. Particles and Special Effects
20. Exporters, Importers, and PML files

1 Familiarize Yourself with the SDK
Read the first lesson in the PhysX Training Programs, “Lesson 101: Box on a Plane”. From it, you will learn all the necessary code to build a complete PhysX application. Familiarize yourself with the code used to do the following:

1. Initializing the SDK and Creating the Scene
2. Setting Parameters for the SDK
3. Adding Materials to the SDK
4. Creating the Scene
5. Creating the Actors in the Scene
6. The Physics Loop
7. Drawing the Actors
8. The Debug Renderer
9. Resetting the Scene
10. Shutting Down the SDK

Peruse this lesson until you have gotten a feel for how a PhysX application is structured. Think about where in your game you would need to put the SDK initialization, main loop, and shut-down. Also think about how and when you want to construct your actors and joints within the context of your game.

Read over the “PhysX Math Primer” document to learn how basic 3D math is done in PhysX. Also take a look at the “User Defined Classes” document to learn about the user callback objects that PhysX employs.

2 Basic SDK Components
All physics SDKs are built upon a few fundamental objects: the physics SDK itself; a physics environment, or scene; parameters for the scene; rigid bodies; materials that define the surface properties of the rigid bodies; collision shapes the rigid bodies are built from; triangle meshes that certain collision shapes are constructed from; constraints that connect rigid bodies at a point; and springs that connect rigid bodies via forces over a distance. The final component of the SDK is a set of callback objects the user can employ to query the scene for information. This includes debug wireframes, contact information between designated contact pairs, joint breakage, and the results of raycasts through the scene.

If you have never used a physics SDK in your game before, this will be brand new to you. If you have implemented your own physics or collision systems, look at how these objects correspond to the objects you have implemented. If you have used a physics SDK before, or are currently using one for your game, make note of which PhysX objects and methods correspond to the objects and methods of your current SDK. For each major component of the SDK, look for its PhysX equivalent. Examine each object and note the similarities and differences in their implementation and use.

1. Physics SDK
class NxPhysicsSDK Physics SDK object
NxCreatePhysicsSDK() Creates the physics SDK

2. Scenes
class NxScene Physics environment
NxPhysicsSDK::createScene() Creates the physics environment

3. Scene Parameters
enum NxParameter SDK global parameter
NxPhysicsSDK::setParameter Sets a parameter to a non-default value

4. Rigid Bodies
class NxActor Rigid body
NxScene::createActor() Creates a rigid body

5. Materials
class NxMaterial Rigid body surface and interior material
NxPhysicsSDK::setMaterialAtIndex() Registers a material with the SDK

6. Collision Shapes
class NxShape Collision shape in a rigid body
NxActorDesc::shapes List of shapes that compose the rigid body

7. Triangle Meshes
class NxTriangleMesh Triangle mesh used to create a triangle mesh collision shape
NxPhysicsSDK::createTriangleMesh() Create a triangle mesh from a point cloud or a user-supplied convex hull

8. Constraints
class NxJoint Constraint connecting two actors at a point that allows different levels of rotational and translational freedom between the actors
NxScene::createJoint() Create a joint given a joint descriptor and two actors

9. Spring and Damper Effectors
class NxSpringAndDamperEffector Spring connecting two actors, attached to points on each actor
NxScene::createSpringAndDamperEffector() Create a spring given a spring descriptor and two actors

10. User Defined Classes
These are user callback classes whose methods the user fleshes out and attaches to the SDK to provide user-written functionality called from within the solver. They are outlined and explained in the “User Defined Classes” document that accompanies this guide.

These are the major objects of the SDK. In the following sections we will show you how to use these objects to build the physical components of your game.

3 Level Geometry
For BSP levels, you will want to build your level geometry into a static actor or group of static actors. Sample code to do this is in “Lesson 603: Game Level”.

// Physics SDK globals
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;

// Actor globals
NxActor* gameLevel = NULL;
...
NxActor* CreateLevelMesh()
{
gLevelMesh.convertLevel();

// Build physical model
NxTriangleMeshDesc levelDesc;
levelDesc.numVertices = g_Level.m_numOfVerts;
levelDesc.numTriangles = gLevelMesh.nbTriangles;
levelDesc.pointStrideBytes = sizeof(tBSPVertex);
levelDesc.triangleStrideBytes = 3*sizeof(NxU32);
levelDesc.points = (const NxPoint*)&(g_Level.m_pVerts[0].vPosition);
levelDesc.triangles = gLevelMesh.triangles;
levelDesc.flags = 0;

NxTriangleMeshShapeDesc levelShapeDesc;
NxInitCooking();
if (0)
{
// Cooking from file
bool status = NxCookTriangleMesh(levelDesc, UserStream("c:\\tmp.bin", false));
levelShapeDesc.meshData = gPhysicsSDK->createTriangleMesh(UserStream("c:\\tmp.bin", true));
}
else
{
// Cooking from memory
MemoryWriteBuffer buf;
bool status = NxCookTriangleMesh(levelDesc, buf);
levelShapeDesc.meshData = gPhysicsSDK->createTriangleMesh(MemoryReadBuffer(buf.data));
}

NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&levelShapeDesc);
NxActor* actor = gScene->createActor(actorDesc);
actor->userData = (void*)1;

return actor;
}

void InitNx()
{
...
// Initialize BSP Level Actor
LoadBSP();
gameLevel = CreateLevelMesh();
...
}

In this example we load a game level BSP. We create a triangle mesh descriptor and attach the BSP mesh to it. We then create a triangle mesh shape descriptor and pass our triangle mesh descriptor into NxPhysicsSDK::createTriangleMesh() to build the mesh data for the shape. This mesh shape gets pushed onto our actor. Since we want the actor to be static, we don’t give the actor a rigid body. This means the actor will be baked into the scene and will not move.

4 Terrain Geometry
As with a BSP, you want to build your terrain geometry into a static actor or group of static actors. Sample code to do this is in “Lesson 601: Heightfields”.

// Actor globals
...
NxActor* heightfield = NULL;

NxVec3* gHeightfieldVerts = NULL;
NxVec3* gHeightfieldNormals = NULL;
NxU32* gHeightfieldFaces = NULL;

NxActor* CreateHeightfield()
{

// Build physical model
NxTriangleMeshDesc heightfieldDesc;
heightfieldDesc.numVertices = HEIGHTFIELD_NB_VERTS;
heightfieldDesc.numTriangles = HEIGHTFIELD_NB_FACES;
heightfieldDesc.pointStrideBytes = sizeof(NxVec3);
heightfieldDesc.triangleStrideBytes = 3*sizeof(NxU32);
heightfieldDesc.points = gHeightfieldVerts;
heightfieldDesc.triangles = gHeightfieldFaces;
heightfieldDesc.flags = 0;

heightfieldDesc.heightFieldVerticalAxis = NX_Y;
heightfieldDesc.heightFieldVerticalExtent = -1000;

NxTriangleMeshShapeDesc heightfieldShapeDesc;

NxInitCooking();
if (0)
{
// Cooking from file
bool status = NxCookTriangleMesh(heightfieldDesc, UserStream("c:\\tmp.bin", false));
heightfieldShapeDesc.meshData = gPhysicsSDK->createTriangleMesh(UserStream("c:\\tmp.bin", true));
}
else
{
// Cooking from memory
MemoryWriteBuffer buf;
bool status = NxCookTriangleMesh(heightfieldDesc, buf);
heightfieldShapeDesc.meshData = gPhysicsSDK->createTriangleMesh(MemoryReadBuffer(buf.data));
}

NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&heightfieldShapeDesc);
NxActor* actor = gScene->createActor(actorDesc);

actor->userData = (void*)-1;

return actor;
}

void InitNx()
{

heightfield = CreateHeightfield();
}

So CreateHeightfield() is much the same as CreateLevelMesh(), only we attach a triangle mesh created from a heightmap rather than the triangle mesh from the game level BSP file. The other difference is that we need to set an “up” axis for the terrain.

heightfieldDesc.heightFieldVerticalAxis = NX_Y;
heightfieldDesc.heightFieldVerticalExtent = -1000;

For terrain, you need to give your triangle mesh descriptor a vertical axis (NX_Y, from above, is the y-axis) and a vertical extent (-1000 here; this is the depth of the heightfield). This defines the shape of the heightfield and ensures objects collide with the heightfield realistically.

5 World Objects
Now that you have your level geometry in place, you will want to populate your game world with objects that aren’t part of the main static scene. We call all physical objects in PhysX “actors” and use the words “actor” and “object” interchangeably. These actors can be crates, doors, trees, rocks, any physical object that resides in your game world.

Most actors will be dynamic. They are part of the physical scene and will move and collide with the world and other objects realistically. In general, you want to keep your dynamic object densities within a dynamic range of 10, that is, if your lowest density object has a density of 5, your highest density object should have a density of around 50.

Actors that you want total control over, you will want to make kinematic. These actors will not react to the world and other actors. They will move precisely where you want them to go, pushing all dynamic objects aside as if they had infinite mass. Kinematic actors should be used for objects you consider to be of such high density, they will push all other objects aside. They are good for moving objects that are effectively immune to the physical scene, like heavy moving platforms or large moving blast doors or gates. Kinematic objects can be turned into dynamic objects and vice-versa. If the objects are stationary and will never move, you want to make them static. Static objects cannot be made dynamic or kinematic.

Most of your objects will consist of a single shape. There are five shape types: plane, box, sphere, capsule, and triangle mesh. The plane shape type is primarily used to construct a ground plane at some level in your game but is otherwise rarely needed. The plane, box, sphere, and capsule shapes are geometric shapes that use their geometry to determine points of contact and collision normals. The triangle mesh shape uses the triangles it is constructed from to determine points of contact and collision normals. Some objects will be built from two or more shapes: something like a table needs to be built from five shapes, the tabletop and the four legs. With the tools we provide, you will be able to build the physical representation of your object in Max or Maya, then save it out to a PML file that your game can read in.

As an example, here’s how to make a box-shape actor:

NxActor* box = NULL;

NxActor* CreateBox(const NxVec3& pos, const NxVec3& boxDim, const NxReal density)
{
// Add a single-shape actor to the scene
NxActorDesc actorDesc;
NxBodyDesc bodyDesc;

// The actor has one shape, a box
NxBoxShapeDesc boxDesc;
boxDesc.dimensions.set(boxDim.x,boxDim.y,boxDim.z);
boxDesc.localPose.t = NxVec3(0,boxDim.y,0);
actorDesc.shapes.pushBack(&boxDesc);

if (density)
{
actorDesc.body = &bodyDesc;
actorDesc.density = density;
}
else
{
actorDesc.body = NULL;
}
actorDesc.globalPose.t = pos;
return gScene->createActor(actorDesc);
}

void InitNx()
{

box = CreateBox(NxVec3(5,0,5), NxVec3(0.5,1,0.5), 20);
}

Note we put the box shape at a local position of (0,boxDim.y,0) within the actor. That way, when we give the actor its global position, the global position will be located at the bottom of the box. It is generally a good idea to place the global position of an actor where it rests on the ground naturally, so you can easily place actors in your levels.

By giving the object a positive density, we indicate we want the object to be dynamic, and we give the actor a body descriptor. Once the actor is created, its center of mass and inertia tensor are calculated from the collision shapes and density that define it. Instead of a density, you can supply a total mass for your actor, or build your own inertia tensor if you wish to create an object with non-uniform density. If we set the actor's body descriptor to NULL with a zero or negative density, the box will be static. Once the dynamic box has been created, we can make it kinematic by calling the following on the box actor:

box->raiseBodyFlag(NX_BF_KINEMATIC);

Set it back to dynamic by clearing this flag.

box->clearBodyFlag(NX_BF_KINEMATIC);

When choosing shapes for your objects, the general rule is to build your objects from the simplest shapes possible. Whenever you can, use the basic geometric shapes: box, sphere, and capsule. If an object is more complicated than one shape alone, add more shapes to better define it. If your object doesn’t conform to one of these shapes or a moderate combination of the shapes, use a triangle mesh. With triangle meshes, whenever possible you want to use convex meshes or a union of convex meshes. The easiest way to create a convex mesh is to build a point cloud and let PhysX build a convex hull around it for you. The following is from “Lesson 104: Convex Shapes and Anisotropic Friction”.

NxActor* CreateConvexObjectComputeHull(const NxVec3& pos, const NxVec3& boxDim, const NxReal density)
{
NxActorDesc actorDesc;
NxBodyDesc bodyDesc;

// Compute hull
NxVec3 verts[8] =
{
NxVec3(-boxDim.x,-boxDim.y,-boxDim.z),
NxVec3(boxDim.x,-boxDim.y,-boxDim.z),
NxVec3(-boxDim.x,boxDim.y,-boxDim.z),
NxVec3(boxDim.x,boxDim.y,-boxDim.z),
NxVec3(-boxDim.x,-boxDim.y,boxDim.z),
NxVec3(boxDim.x,-boxDim.y,boxDim.z),
NxVec3(-boxDim.x,boxDim.y,boxDim.z),
NxVec3(boxDim.x,boxDim.y,boxDim.z)
};

// Create descriptor for convex mesh
NxConvexMeshDesc convexDesc;
convexDesc.numVertices = 8;
convexDesc.pointStrideBytes = sizeof(NxVec3);
convexDesc.points = verts;
convexDesc.flags = NX_CF_COMPUTE_CONVEX;

NxConvexShapeDesc convexShapeDesc;
convexShapeDesc.localPose.t = NxVec3(0,boxDim.y,0);

NxInitCooking();
if (0)
{
// Cooking from file
#ifndef LINUX
bool status = NxCookConvexMesh(convexDesc, UserStream("c:\\tmp.bin", false));
convexShapeDesc.meshData = gPhysicsSDK->createConvexMesh(UserStream("c:\\tmp.bin", true));
#else
printf("Linux does not behave well with UserStreams, use MemoryBuffers instead\n");
exit(1);
#endif
}
else
{
// Cooking from memory
MemoryWriteBuffer buf;
bool status = NxCookConvexMesh(convexDesc, buf);
convexShapeDesc.meshData = gPhysicsSDK->createConvexMesh(MemoryReadBuffer(buf.data));
}

if (convexShapeDesc.meshData)
{
NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&convexShapeDesc);
if (density)
{
actorDesc.body = &bodyDesc;
actorDesc.density = density;
}
else
{
actorDesc.body = NULL;
}
actorDesc.globalPose.t = pos;
return gScene->createActor(actorDesc);
// gPhysicsSDK->releaseTriangleMesh(*convexShapeDesc.meshData);
}

return NULL;
}

We create our convex object by giving the convex mesh descriptor eight points, here the corners of a box. We then pass our descriptor into NxPhysicsSDK::createConvexMesh(), flagging it with NX_CF_COMPUTE_CONVEX to let the mesh creation function know we want it to compute a convex hull for us.

You can also provide your own convex hull. The following code is from “Lesson 213: Convex Object Creation”.

NxActor* CreateConvexObjectSupplyHull(const NxVec3& pos, const NxVec3& boxDim, const NxReal density)
{
NxActorDesc actorDesc;
NxBodyDesc bodyDesc;

// Supply hull
NxVec3 verts[8] =
{
NxVec3(-boxDim.x,-boxDim.y,-boxDim.z),
NxVec3(boxDim.x,-boxDim.y,-boxDim.z),
NxVec3(-boxDim.x,boxDim.y,-boxDim.z),
NxVec3(boxDim.x,boxDim.y,-boxDim.z),
NxVec3(-boxDim.x,-boxDim.y,boxDim.z),
NxVec3(boxDim.x,-boxDim.y,boxDim.z),
NxVec3(-boxDim.x,boxDim.y,boxDim.z),
NxVec3(boxDim.x,boxDim.y,boxDim.z)
};

NxU32 indices[12*3] =
{
1,2,3,
0,2,1,
5,7,6,
4,5,6,
5,4,1,
1,4,0,
1,3,5,
3,7,5,
3,2,7,
2,6,7,
2,0,6,
4,6,0
};

// Create descriptor for convex mesh
NxConvexMeshDesc convexDesc;
convexDesc.numVertices = 8;
convexDesc.pointStrideBytes = sizeof(NxVec3);
convexDesc.points = verts;
convexDesc.numTriangles = 12;
convexDesc.triangles = indices;
convexDesc.triangleStrideBytes = 3 * sizeof(NxU32);
convexDesc.flags = 0;

NxConvexShapeDesc convexShapeDesc;
convexShapeDesc.localPose.t = NxVec3(0,boxDim.y,0);

NxInitCooking();
if (0)
{
// Cooking from file
#ifndef LINUX
bool status = NxCookConvexMesh(convexDesc, UserStream("c:\\tmp.bin", false));
convexShapeDesc.meshData = gPhysicsSDK->createConvexMesh(UserStream("c:\\tmp.bin", true));
#else
printf("Linux does not behave well with UserStreams, use MemoryBuffers instead\n");
exit(1);
#endif
}
else
{
// Cooking from memory
MemoryWriteBuffer buf;
bool status = NxCookConvexMesh(convexDesc, buf);
convexShapeDesc.meshData = gPhysicsSDK->createConvexMesh(MemoryReadBuffer(buf.data));
}

if (convexShapeDesc.meshData)
{
NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&convexShapeDesc);
if (density)
{
actorDesc.body = &bodyDesc;
actorDesc.density = density;
}
else
{
actorDesc.body = NULL;
}
actorDesc.globalPose.t = pos;
return gScene->createActor(actorDesc);
// gPhysicsSDK->releaseTriangleMesh(*convexShapeDesc.meshData);
}

return NULL;
}

Providing your own convex hull means providing the convex mesh descriptor with the vertices of your object as well as the indices of those vertices to show how your triangles are constructed. Ordering of the indices is important, as PhysX uses the winding to determine the triangle’s facing. Indices of (v0,v1,v2) mean the triangle normal is

(v1 − v0) × (v2 − v0)

convexDesc.flags |= NX_MF_FLIPNORMALS;

You can build static objects from convex or concave triangle meshes. If you want to build a dynamic object with concave geometry, decompose the concave triangle mesh into a group of convex meshes, then add each convex mesh shape to the actor. Together, the convex shapes make up your concave dynamic actor.

It is usually a good idea to call isValid() on your triangle mesh descriptor, especially if you are supplying your own hull.

if (!convexDesc.isValid()) return NULL; // or error message

Some notes about meshes:

• As mentioned above, be sure that you define face normals as facing in the direction you intend. Collision detection will only work correctly between shapes approaching the mesh from the “outside”, i.e., from the direction in which the face normals point.
• Do not duplicate identical vertices. If you have two triangles sharing a vertex, this vertex should only occur once in the vertex list, and both triangles should index it in the list. If you create two copies of the vertex, the collision detection code won’t know that it is actually the same vertex, which leads to decreased performance and unreliable results.
• Also avoid t-joints and non-manifold edges for the same reason. A t-joint is a vertex of one triangle that is placed right on top of an edge of another triangle, but this second triangle is not split into two triangles at the vertex, as it should be. A non-manifold edge is an edge (a pair of vertices) that is referenced by more than two triangles.
• To precompute the inertia tensor, mass, and center of mass of a mesh, you can use the NxComputeVolumeIntegrals() function in the Foundation SDK. This is the same function that also gets used internally if you do not provide an inertia tensor. Note that the inertia tensor computation uses triangles’ winding to tell which side of a triangle is ‘solid’. For this reason, improper winding may lead to a negative mass.
• It is possible to assign a different material to each triangle in a mesh. Object materials are explained in Section 8 below.

6 Drawing your World Objects
You will need to make sure you are drawing your actors in their proper position and orientation. To draw an actor, cycle through its shapes and have a routine to draw each shape. Both PhysX and OpenGL store transformation matrices in column-major order, so converting a PhysX shape's global transformation to an OpenGL transform is straightforward. Use the following code, found in and around setupGLMatrix() in the tutorials:

// Get an OpenGL transform (float[16]) from a PhysX shape’s global pose
// (NxMat34)
NxShape* shape = ...; // any valid shape from one of your actors

NxMat34 pose = shape->getGlobalPose(); // 3x4 column major PhysX matrix

NxMat33 orient = pose.M;
NxVec3 pos = pose.t;

float glmat[16]; // 4x4 column major OpenGL matrix
orient.getColumnMajorStride4(&(glmat[0]));
pos.get(&(glmat[12]));

// clear the elements we don't need:
glmat[3] = glmat[7] = glmat[11] = 0.0f;
glmat[15] = 1.0f;

Direct3D matrices are stored row-major but operate on row vectors, so the resulting float[16] memory layout is identical to OpenGL's; you can use the same method to fill a D3D transformation matrix.

Look at DrawShapes.cpp in the tutorials for an example of how to draw your shapes. You can use these methods to draw a shape that closely matches its physical representation. Alternatively, if the graphical representation differs substantially from the physical one, e.g., it is more detailed, you may want to draw that different representation at the position and orientation of the PhysX shape.

Use the debug renderer to check the positions and orientations of the collision shapes within your actor against your graphical representations of them.

void InitNx()
{

// Set the debug visualization parameters
gPhysicsSDK->setParameter(NX_VISUALIZE_COLLISION_SHAPES, 1);

}

7 Physics Parameters and Scale
Once you have your level geometry and world objects in place, you may want to adjust any scale-dependent SDK parameters. PhysX parameters are set up by default assuming 1 unit = 1 meter. If your units are substantially larger or smaller than 1 meter, you will want to adjust these parameters to scale. The main scale-dependent parameters are gravity, min separation for penalty, sleep linear velocity, bounce threshold, and visualization scale. These parameters are set in InitNx() in the tutorials, right after the SDK and before the scene is initialized.

// Physics SDK globals
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);

void InitNx()
{
// Create the physics SDK
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK) return;

// Set the physics parameters
gPhysicsSDK->setParameter(NX_MIN_SEPARATION_FOR_PENALTY, -0.05);
gPhysicsSDK->setParameter(NX_DEFAULT_SLEEP_LIN_VEL_SQUARED, 0.15*0.15);
gPhysicsSDK->setParameter(NX_BOUNCE_TRESHOLD, -2);

// Set the debug visualization parameters
gPhysicsSDK->setParameter(NX_VISUALIZATION_SCALE, 0.5);

// Create the scene
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
sceneDesc.broadPhase = NX_BROADPHASE_COHERENT;
sceneDesc.collisionDetection = true;
gScene = gPhysicsSDK->createScene(sceneDesc);

}

1. Gravity
sceneDesc.gravity = gDefaultGravity;

So by default, gravity is -9.8 m/sec^2. If your scale is s meters/unit, you should divide this value by s, so

gDefaultGravity = (1/s)*NxVec3(0,-9.8,0);

2. Skin Width
PhysX uses an iterative solver, so it allows object surfaces to interpenetrate to within a certain distance of each other before penalty forces push them apart. This distance is the NX_SKIN_WIDTH parameter you set when initializing PhysX.

gPhysicsSDK->setParameter(NX_SKIN_WIDTH, 0.05);

The default skin width is 0.05, so 5 cm at the default scale of 1 meter per unit: objects are allowed to interpenetrate each other by up to 5 cm.

Again, depending on scale s meters/unit you want to adjust this parameter to something like this:

gPhysicsSDK->setParameter(NX_SKIN_WIDTH, 0.05*(1/s));

3. Sleep Linear Velocity
PhysX rigid bodies are put to sleep when they fall below a certain linear velocity threshold: PhysX considers them non-moving for all practical purposes, so it stops their motion and pays no attention to them until they are awakened by a sudden force, impact, etc.

gPhysicsSDK->setParameter(NX_DEFAULT_SLEEP_LIN_VEL_SQUARED, 0.15*0.15);

The parameter is a squared velocity, so if your scale is s meters/unit, set your default sleep parameter to something like this:

gPhysicsSDK->setParameter(NX_DEFAULT_SLEEP_LIN_VEL_SQUARED, 0.15*0.15*(1/s)*(1/s));

Note: There is no need to adjust the sleep angular velocity to scale as this velocity is in radians and hence scale independent.

4. Bounce Threshold
PhysX rigid bodies will not bounce if moving under a certain speed. Objects closing with each other that slowly are considered to gain no energy from the collision, so the SDK stops them both.

gPhysicsSDK->setParameter(NX_BOUNCE_TRESHOLD, -2);

-2 is a closing speed, so with s meters/unit, make this parameter:

gPhysicsSDK->setParameter(NX_BOUNCE_TRESHOLD, -2*(1/s));

5. Visualization Scale
Not important to the simulation itself, but important for debugging purposes. The visualization scale is the length of the debug wireframe normals (and other vectors) displayed by the debug renderer.

gPhysicsSDK->setParameter(NX_VISUALIZATION_SCALE, 0.5);

To keep the wireframe normals at 0.5 units when your scale is s meters/unit:

gPhysicsSDK->setParameter(NX_VISUALIZATION_SCALE, 0.5*(1/s));

These are the five main scale-dependent parameters you need to consider when using a scale that varies considerably from 1 meter per unit. Remember to scale these when you make adjustments to these parameters on a per-object basis as well (this is possible with sleep linear velocity and bounce threshold).

8 Object Materials
Once you have your level geometry and world objects in place, you will want to adjust the materials on your objects to make sure they slide off each other and collide with each other in the manner you want. Actor materials are discussed in “Lesson 108: Materials”.

void InitNx()
{
// Create the physics SDK
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK) return;

// ... create the scene (gScene) first, as in Section 7 ...

// Adjust the default material at index 0
NxMaterial* defaultMaterial = gScene->getMaterialFromIndex(0);
defaultMaterial->setRestitution(0.5);
defaultMaterial->setStaticFriction(0.5);
defaultMaterial->setDynamicFriction(0.5);

}

Materials have three properties: restitution, static friction, and dynamic friction.
Restitution defines the “bounciness” of the object: how much energy the object retains after a collision. A restitution of 1 indicates perfect elasticity: the object loses no energy in a collision. A restitution of 0 indicates perfect inelasticity: the object loses all of its energy. Here we have set the default restitution of all objects to 0.5, somewhere between the two extremes.

Static and dynamic friction are the two surface properties defined by the material. Static friction determines how difficult it is to start an object sliding from a stop. Dynamic friction determines how difficult it is to keep an object sliding. We have set both static and dynamic friction to 0.5, so all objects in the scene will have average resistance to sliding.

We have only defined the default, universal material here, set at material index 0. You can add more materials to your scene as needed. For instance, you can add a completely frictionless material to index 1.

// Create a frictionless material
NxMaterial* frictionlessMaterial = gScene->getMaterialFromIndex(1);
frictionlessMaterial->setRestitution(0.5);
frictionlessMaterial->setStaticFriction(0);
frictionlessMaterial->setDynamicFriction(0);

Materials are defined on a per-shape basis. To apply the frictionless material to an actor, do the following.

NxActor* actor;

NxShape** shapes = actor->getShapes();
NxU32 nShapes = actor->getNbShapes();
while (nShapes--)
{
shapes[nShapes]->setMaterial(1);
}

For triangle mesh shapes, you can apply materials to the entire shape or on a per-triangle basis.

As a general rule, think about the actual materials your objects are made out of when determining what material parameters to apply to them. If the object is a wooden crate, it most likely has average restitution and friction. If the object is a metal ball, like a pinball, it will have a high restitution and low friction. If the object is a wet cloth, it will have a low restitution and high friction.

Certain objects will have anisotropic friction, that is, they will slide better along one axis than another. The restitution and friction of object materials can be combined in different ways as two objects bounce off and slide along each other. By default, restitutions and frictions are averaged, though you may wish to switch to a different combine mode to achieve a desired result. Both anisotropic friction and material combine modes are explained in detail in “Lesson 105: Materials”.

9 Joints
Many of your objects may be jointed together. A door swinging on a hinge is connected to a wall by a revolute joint. A rope hanging from the ceiling is a series of rigid bodies connected end to end by spherical joints, with one end attached to an absolute position in the level by a spherical (or other) joint. Some objects you will want to join with a fixed joint to create a new rigid body, either because you want them to break apart into the two jointed pieces at some point, or because it is simply convenient to weld two objects together rather than build a new one from the same shapes.

Joints are discussed in detail in “Lesson 201: Joints”. The majority of the intermediate lessons cover all the different types of joints and how to use them.

Joints are relatively new to games and their use has been infrequent. Possibly the most prevalent use of joints in games has been to construct ragdolls. Basic ragdoll construction is discussed in “Lesson 404: Ragdolls”. This lesson builds a ragdoll in code.

One of the simplest joints is the revolute or “hinge” joint detailed in the first lesson on joints.

// Actor globals
NxActor* box1 = NULL;
NxActor* box2 = NULL;

// Joint globals
NxRevoluteJoint* revJoint = NULL;

NxRevoluteJoint* CreateRevoluteJoint(NxActor* a0, NxActor* a1, NxVec3 globalAnchor, NxVec3 globalAxis)
{
NxRevoluteJointDesc revDesc;

revDesc.actor[0] = a0;
revDesc.actor[1] = a1;
revDesc.setGlobalAnchor(globalAnchor);
revDesc.setGlobalAxis(globalAxis);

revDesc.jointFlags |= NX_JF_COLLISION_ENABLED;

return (NxRevoluteJoint*)gScene->createJoint(revDesc);
}

void InitNx()
{

box1 = CreateBox(NxVec3(0,5,0), NxVec3(0.5,2,1), 10);
box1->raiseBodyFlag(NX_BF_KINEMATIC);
box2 = CreateBox(NxVec3(0,1,0), NxVec3(0.5,2,1), 10);

NxVec3 globalAnchor = NxVec3(0.5,5,0);
NxVec3 globalAxis = NxVec3(0,0,1);

revJoint = CreateRevoluteJoint(box1, box2, globalAnchor, globalAxis);
}

This connects two box-shaped actors with a revolute joint located at (0.5,5,0) in world space. The lower box swings about the upper box around the axis (0,0,1).

This joint connects the dynamic actor box2 to the kinematic actor box1. If you want to connect box2 to a position in world space, pass in NULL instead of box1.

revJoint = CreateRevoluteJoint(NULL, box2, globalAnchor, globalAxis);

With this, box2 will be jointed to the world instead of box1. This is useful for jointing objects to fixed positions in your game world.

There are many different types of joints: revolute, spherical, prismatic, cylindrical, point-on-line, point-in-plane, etc. Each type corresponds to certain axes being set free for rotation or translation. You can define limits, springs, motors, and breaking forces on joints to further refine their behaviour. There is a Six Degree of Freedom joint which will eventually replace all the different types as a single all-encompassing joint. Details on this joint are available in “Lesson 215: Six Degree of Freedom Joints”.

10 Spring and Damper Effectors
Objects can be connected by spring and damper effectors as well as joints. This is a “looser” connection between actors: joints are generally hard constraints that completely stop rotational or translational motion along certain axes, while springs generate forces that act to pull the actors together or push them apart.

Spring and damper effectors are covered in “Lesson 214: Spring and Damper Effectors”. In this lesson, two spheres are connected at their centers by a spring.

// Actor globals
NxActor* sphere1 = NULL;
NxActor* sphere2 = NULL;

// Spring and Damper Effector globals
NxSpringAndDamperEffector* spring = NULL;

NxSpringAndDamperEffector* CreateSpring()
{
NxSpringAndDamperEffectorDesc sadeDesc;
return gScene->createSpringAndDamperEffector(sadeDesc);
}

void InitNx()
{

sphere1 = CreateSphere(NxVec3(3,0,0), 1, 25);
sphere1->raiseBodyFlag(NX_BF_FROZEN_ROT);
sphere1->raiseBodyFlag(NX_BF_DISABLE_GRAVITY);
sphere1->setLinearDamping(0.25);

sphere2 = CreateSphere(NxVec3(-3,0,0), 1, 25);
sphere2->raiseBodyFlag(NX_BF_FROZEN_ROT);
sphere2->raiseBodyFlag(NX_BF_DISABLE_GRAVITY);
sphere2->setLinearDamping(0.25);

spring = CreateSpring();

NxVec3 pos1 = sphere1->getCMassGlobalPosition();
NxVec3 pos2 = sphere2->getCMassGlobalPosition();
NxVec3 springVec = pos2 - pos1;

spring->setBodies(sphere1, pos1, sphere2, pos2);

NxReal distSpringRelaxed = springVec.magnitude();
NxReal maxSpringCompressForce = 2000;
NxReal distSpringCompressSaturate = 0.25*distSpringRelaxed;
NxReal maxSpringStretchForce = 2000;
NxReal distSpringStretchSaturate = 2.0*distSpringRelaxed;

spring->setLinearSpring(distSpringCompressSaturate, distSpringRelaxed, distSpringStretchSaturate,maxSpringCompressForce, maxSpringStretchForce);

NxReal maxDamperCompressForce = 100;
NxReal velDamperCompressSaturate = 200;
NxReal maxDamperStretchForce = 100;
NxReal velDamperStretchSaturate = 200;

spring->setLinearDamper(velDamperCompressSaturate, velDamperStretchSaturate, maxDamperCompressForce, maxDamperStretchForce);

}

The spring and damper effector applies a force to each sphere along the line linking their centers of mass. When the distance between the spheres is less than the relaxed distance, a repulsive force is applied, increasing to a maximum as the spheres approach each other. When it is greater than the relaxed distance, an attractive force is applied, increasing to a maximum as the spheres move apart. Damping forces work to counteract the relative velocity between the spheres once it exceeds a certain “compress velocity” or “stretch velocity”.

11 Asynchronous API, the Main Physics Loop, and ProcessInputs()
You should now have a complete game level filled with actors built from various shapes and materials, some jointed to each other or to the world, some connected by springs. As long as no new data or input is introduced into the world, you only need to call NxScene::simulate(), ::flushStream(), and ::fetchResults() each frame with the frame's time delta, and the simulation runs on its own. This is shown in the following basic physics loop.

void RenderCallback()
{

if (gScene && !bPause) RunPhysics();

}

void RunPhysics()
{
// Update the time step
NxReal deltaTime = UpdateTime();

// Run collision and dynamics for delta time since the last frame
gScene->simulate(deltaTime);
gScene->flushStream();
gScene->fetchResults(NX_RIGID_BODY_FINISHED, true);
}

This physics loop is inefficient as it stalls the game while it waits for the physics simulation to update. This is fine for applications with few objects where the speed hit is negligible, but for large game applications with potentially thousands of objects, we want to take advantage of our ability to run the physics simulation concurrently with the rest of the code. For a full discussion of concurrency and the asynchronous API, see “Lesson 801: Running on Hardware”.

In order to take advantage of the asynchronous API, we will reverse the ordering of the main physics loop so that we call fetchResults() before simulate() as shown in the following updated physics loop.

void RenderCallback()
{

if (gScene && !bPause)
{
GetPhysicsResults();
ProcessInputs();
StartPhysics();
}

}

void GetPhysicsResults()
{
// Get results from gScene->simulate(deltaTime)
while (!gScene->fetchResults(NX_RIGID_BODY_FINISHED, false));
}

void ProcessInputs()
{
// Gather results of previous inputs and issue new inputs
ProcessKeys();

// Show debug wireframes
if (bDebugWireframeMode)
{
if (gScene) gDebugRenderer.renderData(*gScene->getDebugRenderable());
}
}

void StartPhysics()
{
// Update the time step
NxReal deltaTime = UpdateTime();

// Start collision and dynamics for delta time since the last frame
gScene->simulate(deltaTime);
gScene->flushStream();
}

For this order-reversal to work, we make an initial call to StartPhysics() in InitNx() to get the ball rolling.

void InitNx()
{
...
// Initialize the physics loop
UpdateTime();
if (gScene && !bPause) StartPhysics();
}

In these next sections, we will be introducing inputs to the scene. In between fetchResults() and simulate() we will put a ProcessInputs() function which gathers results from previous inputs and issues new inputs to the scene.

12 Debug Rendering
Note the contents of our example ProcessInputs() function.

void ProcessInputs()
{

// Show debug wireframes
if (bDebugWireframeMode)
{
if (gScene) gDebugRenderer.renderData(*gScene->getDebugRenderable());
}
}

If you haven’t already, familiarize yourself with the Debug Renderer in the “User Defined Classes” document. We put the SDK and scene visualize() calls here and pass in our Debug Renderer report. The functions in the Debug Renderer report are called by the solver within the NxScene::simulate(), ::fetchResults() loop to display debug wireframe information.

Adjustments or inputs to the user defined reports must be done in ProcessInputs() after data for the reports from the previous scene is retrieved by NxScene::fetchResults() and before the next scene is kicked off with the NxScene::simulate() function.

13 Moving your Objects
The other function within our sample ProcessInputs() is ProcessKeys():

void ProcessInputs()
{
ProcessKeys();

}

void ProcessKeys()
{
// Process keys
for (int i = 0; i < MAX_KEYS; i++)
{
if (!gKeys[i]) { continue; }

switch (i)
{
// Force controls
case 'i': {gForceVec = ApplyForceToActor(gSelectedActor,NxVec3(0,0,1), gForceStrength,bForceMode); break; }

}
}
}

NxVec3 ApplyForceToActor(NxActor* actor, const NxVec3& forceDir, const NxReal forceStrength, bool forceMode)
{
NxVec3 forceVec = forceStrength*forceDir;

if (forceMode)
actor->addForce(forceVec);
else
actor->addTorque(forceVec);

return forceVec;
}

Given a particular keypress, a force or torque will be applied to gSelectedActor using NxActor::addForce() or ::addTorque().

In your game code, you will need to determine what forces or torques you want to apply to your actors, batch them up, and then fire them off in your ProcessInputs() function for them to influence the next iteration of the simulation.

If you wish to move an actor using moveGlobalPos*() or setGlobalPos*(), you will want to do this in ProcessInputs() as well. Note that, in general, you should use these calls only on kinematic actors. If you need to move a dynamic actor this way, the safest approach is to set the actor to kinematic, move it, then set it back to dynamic. It is possible to call moveGlobalPos*() or setGlobalPos*() on dynamic actors, but realize these objects are being influenced by physical forces from the scene as you move them, and the results can be unpredictable.

14 Character Controller
You now have a game level populated with objects that are, with the right materials, behaving in a manner consistent with your game world. Appropriate objects are connected by joints and/or springs. You can apply forces to dynamic objects or move kinematic objects by adding inputs to the scene in a ProcessInputs() or similar function that resides between the last NxScene::fetchResults() and the next NxScene::simulate().

Now you want to add a controller for your player. For complete information on the character controller look at “Lesson 605: Character Controller”. The first lesson puts the character controller on a terrain and the second in a game level.

The character controller consists of a Controller Hit Report to define how the character affects dynamic objects it collides with.

class ControllerHitReport : public NxUserControllerHitReport
{
public:
virtual NxControllerAction onShapeHit(const NxControllerShapeHit& hit)
{
//RenderTerrainTriangle(hit.faceID);

if (hit.shape)
{
NxCollisionGroup group = hit.shape->getGroup();
if (group!=GROUP_COLLIDABLE_NON_PUSHABLE)
{
NxActor& actor = hit.shape->getActor();
if (actor.isDynamic())
{
if (gPts[gNbPts]!=hit.worldPos)
{
gPts[gNbPts++] = hit.worldPos;
if (gNbPts==MAX_NB_PTS) gNbPts=0;
}

// We only allow horizontal pushes. Vertical pushes when we stand on dynamic objects creates
// useless stress on the solver. It would be possible to enable/disable vertical pushes on
// particular objects, if the gameplay requires it.
if (hit.dir.y==0.0f)
{
NxF32 coeff = actor.getMass() * hit.length * 10.0f;
actor.addForceAtLocalPos(hit.dir*coeff, NxVec3(0,0,0), NX_IMPULSE);
...
}
}
}
}

return NX_ACTION_NONE;
}

virtual NxControllerAction onControllerHit(const NxControllersHit& hit)
{
return NX_ACTION_NONE;
}

} gControllerHitReport;

void InitCharacterControllers(NxU32 nbCharacters, NxScene& scene)
{
...
// Create all characters
for(NxU32 i=0;i<nbCharacters;i++)
{
...
desc.callback = &gControllerHitReport;
...
}
}

Set the character controller parameters to define the character’s vertical axis and other properties.

void InitNx()
{
...
// Create the character controllers
InitCharacterControllers(gNbCharacters, *gScene);
...
}
...
//#define USE_SPHERE_CONTROLLER
#define SKINWIDTH 0.1f
...
static NxVec3 gStartPos(0,2,0);
static NxVec3 gInitialExtents(0.5,1,0.5);
#ifdef USE_SPHERE_CONTROLLER
static NxF32 gInitialRadius = 1;
#endif
...
static NxU32 gNbCharacters = 0;
#ifdef USE_SPHERE_CONTROLLER
static NxSphereController** gControllers = NULL;
#else
static NxBoxController** gControllers = NULL;
#endif

#include "ControllerManager.h"
static ControllerManager gCM;
...
void InitCharacterControllers(NxU32 nbCharacters, NxScene& scene)
{
...
gControllers = (NxBoxController**)NX_ALLOC(sizeof(NxBoxController*)*nbCharacters);
...
gNbCharacters = nbCharacters;

// Create all characters
for(NxU32 i=0;i<nbCharacters;i++)
{
...
gControllers[i] = (NxBoxController*)gCM.createController(&scene, desc);
...
}
}

The character controller performs intersection tests to determine where the character should be placed in the scene each update. From the character's origin, an AABB (or sphere) is swept in the direction of motion to detect the first geometry the shape hits. Using the hit geometry and the way the shape intersects it, the controller makes the character slide along the level geometry.

BoxController::BoxController(const NxControllerDesc& desc, NxScene* s) : Controller(desc, s)
{
Controller* ctrl = this;
appData = ctrl;

const NxBoxControllerDesc& bc = (const NxBoxControllerDesc&)desc;

extents = bc.extents;

// Create kinematic actor under the hood
if(1)
{
NxBodyDesc bodyDesc;
bodyDesc.flags |= NX_BF_KINEMATIC;

NxBoxShapeDesc boxDesc;
...
boxDesc.dimensions = extents*0.8f;
...
NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&boxDesc);
actorDesc.body = &bodyDesc;
actorDesc.density = 10.0f; // ### expose this ?
actorDesc.globalPose.t = position;

kineActor = scene->createActor(actorDesc);
}
}

15 Vehicle Controller
The vehicle controller makes use of swept shapes as well. A swept sphere shape is used to simulate the tires on the vehicle. See “Lesson 701: Wheel Shapes” for a full explanation.


class ContactReport : public NxUserContactReport
{
public:
virtual void onContactNotify(NxContactPair& pair, NxU32 events)
{
// Iterate through contact points
NxContactStreamIterator i(pair.stream);
//user can call getNumPairs() here
while(i.goNextPair())
{
//user can also call getShape() and getNumPatches() here
NxShape * s = i.getShape(carIndex);
while(i.goNextPatch())
{
//user can also call getPatchNormal() and getNumPoints() here
const NxVec3& contactNormal = i.getPatchNormal();
while(i.goNextPoint())
{
//user can also call getPoint() and getSeparation() here
const NxVec3& contactPoint = i.getPoint();

//add forces:

//assuming front wheel drive we need to apply a force at the wheels.
if (s->is(NX_SHAPE_CAPSULE)) //assuming only the wheels of the car are capsules, otherwise we need more checks.
//this branch can't be pulled out of loops because we have to do a full iteration through the stream
{
CarWheelContact cwc;
cwc.car = pair.actors[carIndex];
cwc.wheel = s;
cwc.contactPoint = contactPoint;
wheelContactPoints.pushBack(cwc);
}
}
}
}
}
} carContactReportObj;

NxUserContactReport* carContactReport = &carContactReportObj;

void tickCar()
{
NxReal steeringAngle = gSteeringValue * gMaxSteeringAngle;

NxArray<CarWheelContact>::iterator i = wheelContactPoints.begin();
while(i != wheelContactPoints.end())
{
CarWheelContact & cwc = *i;

WheelShapeUserData * wheelData = (WheelShapeUserData *)(cwc.wheel->userData);

//apply to powered wheels only.
if (wheelData->frontWheel)
{
//steering:
NxMat33 wheelOrientation = cwc.wheel->getLocalOrientation();
wheelOrientation.setColumn(0, NxVec3(NxMath::cos(steeringAngle), 0, NxMath::sin(steeringAngle) ));
wheelOrientation.setColumn(2, NxVec3(NxMath::sin(steeringAngle), 0, -NxMath::cos(steeringAngle) ));
cwc.wheel->setLocalOrientation(wheelOrientation);

if (frontWheelIsPowered)
{
//get the world space orientation:
wheelOrientation = cwc.wheel->getGlobalOrientation();
NxVec3 steeringDirection;
wheelOrientation.getColumn(0, steeringDirection);

//the power direction of the front wheel is the wheel's axis as it is steered.
if (gMotorForce)
cwc.car->addForceAtPos(steeringDirection * gMotorForce, cwc.contactPoint);
}
}
if (!wheelData->frontWheel && rearWheelIsPowered)
{
//get the orientation of this car:
NxMat33 m = cwc.car->getGlobalOrientation();
NxVec3 carForwardAxis;
m.getColumn(0, carForwardAxis);
//the power direction of the rear wheel is always the car's length axis.
cwc.car->addForceAtPos(carForwardAxis * gMotorForce,cwc.contactPoint);
}
i++;
}

wheelContactPoints.clear();

}

Essentially, the car we have created has four tires, represented by swept spheres that cast to the ground.

16 Triggers
You now have a fully walkable and/or drivable level replete with physical objects and a controllable camera.

At this point, you want to populate your level with trigger shapes. To read more on trigger shapes, go to “Lesson 304: Trigger Report”.

A trigger is a shape with the NX_TRIGGER_ENABLE flag set. Each trigger calls onTrigger(), from the user’s Trigger Report, for every shape that intersects the trigger area.

static NxI32 gNbTouchedBodies = 0;

class TriggerReport : public NxUserTriggerReport
{
public:
virtual void onTrigger(NxShape& triggerShape, NxShape& otherShape, NxTriggerFlag status)
{
if (status & NX_TRIGGER_ON_ENTER)
{
// A body just entered the trigger area
gNbTouchedBodies++;
}
if (status & NX_TRIGGER_ON_LEAVE)
{
// A body just left the trigger area
gNbTouchedBodies--;
}
NX_ASSERT(gNbTouchedBodies>=0);
}
} gTriggerReport;


NxActor* triggerBox = NULL;

// Create a static trigger
NxActor* CreateTriggerBox(const NxVec3& pos, const NxVec3& boxDim)
{
NxActorDesc actorDesc;

NxBoxShapeDesc boxDesc;
boxDesc.dimensions = boxDim;
boxDesc.shapeFlags |= NX_TRIGGER_ENABLE;

actorDesc.shapes.pushBack(&boxDesc);
actorDesc.globalPose.t = pos + NxVec3(0, boxDim.y, 0);

return gScene->createActor(actorDesc);
}

void InitNx()
{

triggerBox = CreateTriggerBox(NxVec3(0,0,0), NxVec3(2,2,2));
triggerBox->userData = (void*)-1;
gScene->setUserTriggerReport(&gTriggerReport);
}

Triggers are used to detect when the player or other game objects enter or leave certain areas of the game world.

17 Raycasts
Your game now has trigger shapes in place to determine when the character arrives at different points in the level. Now you want the ability to perform raycasts to determine things like where actor shadows fall onto other actors or line-of-sight from A.I. characters to the player.

Raycasting is discussed in detail in “Lesson 305: Raycast Report”.

To cast rays through your scene, you will need a Raycast Report.

class RaycastReport : public NxUserRaycastReport
{
public:
virtual bool onHit(const NxRaycastHit& hit)
{
int userData = (int)hit.shape->getActor().userData;
userData |= 1; // Mark as hit
hit.shape->getActor().userData = (void*)userData;

const NxVec3& worldImpact = hit.worldImpact;

// Light up the hit polygon on a triangle mesh shape
NxTriangleMeshShape *tmShape = hit.shape->isTriangleMesh();
if (tmShape)
{

}

return true;
}

} gRaycastReport;

You will want to do your raycasts in ProcessInputs().

NxActor* emitter = NULL;

void ProcessInputs()
{

// Cast a ray out of the emitter along its negative x-axis
if (bRaycastClosestShape)
RaycastClosestShapeFromActor(emitter, groupFlagA | groupFlagB);
else
RaycastAllShapesFromActor(emitter, groupFlagA | groupFlagB);

}

void RaycastClosestShapeFromActor(NxActor* actor, NxU32 groupFlag)
{
// Get ray origin
NxVec3 orig = actor->getCMassGlobalPosition();

// Get ray direction
NxVec3 dir;
NxMat33 m;
actor->getGlobalOrientation(m);
m.getColumn(0, dir);
dir = -dir;

NxRay ray(orig, dir);
NxRaycastHit hit;
NxReal dist;

// Get the closest shape
NxShape* closestShape = gScene->raycastClosestShape(ray, NX_ALL_SHAPES, hit, groupFlag);
if (closestShape)
{

}
}

void RaycastAllShapesFromActor(NxActor* actor, NxU32 groupFlag)
{
    // Get ray origin
    NxVec3 orig = actor->getCMassGlobalPosition();

    // Get ray direction
    NxVec3 dir;
    NxMat33 m;
    actor->getGlobalOrientation(m);
    m.getColumn(0, dir);
    dir = -dir;

    NxRay ray(orig, dir);
    NxReal dist = 10000;

    RaycastLine rl(ray.orig, ray.orig + dist*dir, NxVec3(0,0,1));
    rlArray.pushBack(rl);

    // Get all shapes
    NxU32 nbShapes = gScene->raycastAllShapes(ray, gRaycastReport, NX_ALL_SHAPES, groupFlag);
}

The raycast report will be filled with the results of the raycasts you send out; those results become available at the end of the next NxScene::fetchResults().

18 Contacts
You now have a complete game level with physical objects, character controller, vehicles, chase camera, trigger areas, and the ability to cast rays through the scene.

You now want to be able to find points of contact between the objects in your scene. This is covered in “Lesson 303: Contact Report”. You will need a Contact Report object.

class ContactReport : public NxUserContactReport
{
public:
    virtual void onContactNotify(NxContactPair& pair, NxU32 events)
    {
        // Iterate through contact points
        NxContactStreamIterator i(pair.stream);
        // user can call getNumPairs() here
        while (i.goNextPair())
        {
            // user can also call getShape() and getNumPatches() here
            while (i.goNextPatch())
            {
                // user can also call getPatchNormal() and getNumPoints() here
                const NxVec3& contactNormal = i.getPatchNormal();
                while (i.goNextPoint())
                {
                    // user can also call getPoint() and getSeparation() here
                    const NxVec3& contactPoint = i.getPoint();

                    // Get the normal force vector
                    normalForceVec = pair.sumNormalForce;

                    // Get the friction force vector
                    frictionForceVec = pair.sumFrictionForce;

                    // Get the penetration vector
                    penetrationVec = -contactNormal * i.getSeparation();
                }
            }
        }
    }
} gContactReport;

The contact report is attached to the scene like so:

void InitNx()
{

    // Create the scene
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = gDefaultGravity;
    sceneDesc.broadPhase = NX_BROADPHASE_COHERENT;
    sceneDesc.collisionDetection = true;
    sceneDesc.userContactReport = &gContactReport;
    gScene = gPhysicsSDK->createScene(sceneDesc);

}

Every actor pair you add as a contact pair will be passed to onContactNotify(), in the User Contact Report, if the pair is touching. To ensure a contact pair is enabled for the next iteration of the simulation, add the pair to the contact list before or in ProcessInputs(). The contact results available to you in your game code will be the ones from the current state of the simulation, i.e., the physics “back buffer”, renewed with each NxScene::fetchResults().
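The listing below stores a GameObject* in each actor's userData, but the guide never shows the type itself. A minimal definition consistent with the two fields the code actually touches (id and events) might look like this; the comments describe my assumed intent:

```cpp
#include <cassert>

// Assumed game-side record hung off NxActor::userData; the guide's code
// uses only these two fields.
struct GameObject
{
    int          id;     // which game entity this actor represents
    unsigned int events; // bitmask of contact events recorded this frame
};
```

Keeping a pointer to your own game object in userData is the usual way to get from a PhysX actor back to gameplay state inside a callback without any lookup table.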

void InitNx()
{

    box1 = CreateBox(NxVec3(-3,5,0), NxVec3(0.75,0.75,0.75), 5);
    box2 = CreateBox(NxVec3(3,0,0), NxVec3(1,1,1), 5);

    GameObject* Object1 = new GameObject;
    Object1->id = 1;
    Object1->events = 0;
    box1->userData = Object1;

    GameObject* Object2 = new GameObject;
    Object2->id = 2;
    Object2->events = 0;
    box2->userData = Object2;

}

void ProcessInputs()
{

    gScene->setActorPairFlags(*box1, *box2, NX_NOTIFY_ON_START_TOUCH | NX_NOTIFY_ON_TOUCH | NX_NOTIFY_ON_END_TOUCH);

}

19 Particle Systems and Special Effects
Look at the “PhysX FX” document for more information on how to implement particle systems and other special effects within an application using a generic physics SDK.

The SDK has a specialized fluid and smart particle implementation. “Lesson 410: Particle Systems” details how to implement your own particle system using rigid bodies as particles.
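A rigid-body particle system of the kind Lesson 410 describes comes down to a fixed pool of small bodies with finite lifetimes that get recycled rather than reallocated. The stand-alone sketch below shows only that bookkeeping; the physics step is stubbed with a simple gravity integration, since the real SDK would own the bodies' motion.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One particle = one (hypothetical) small rigid body plus a lifetime.
struct Particle
{
    float life;  // seconds remaining; <= 0 means the slot is free
    float y, vy; // stand-in for the body state the physics SDK would own
};

struct ParticlePool
{
    std::vector<Particle> pool;

    explicit ParticlePool(std::size_t n) : pool(n, Particle{0.0f, 0.0f, 0.0f}) {}

    // Recycle a dead slot instead of creating a new actor each emission.
    bool spawn(float life, float y, float vy)
    {
        for (Particle& p : pool)
            if (p.life <= 0.0f) { p = Particle{life, y, vy}; return true; }
        return false; // pool exhausted; drop the particle
    }

    void update(float dt)
    {
        for (Particle& p : pool)
            if (p.life > 0.0f)
            {
                p.life -= dt;
                p.vy   -= 9.8f * dt; // gravity; the SDK would do this step
                p.y    += p.vy * dt;
            }
    }

    std::size_t alive() const
    {
        std::size_t n = 0;
        for (const Particle& p : pool)
            if (p.life > 0.0f) ++n;
        return n;
    }
};
```

Capping the pool size keeps the actor count, and therefore the simulation cost, bounded no matter how aggressively effects are emitted.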

20 Exporters, Importers, and PML Files
Once you have a basic level populated with objects, character controllers, and vehicles, you will want to write an importer to be able to load PML files into your game and an exporter to write them out from your game object editor.

Details on importing and exporting PML files can be found in “Lesson 502: Loading PML Files”.

21 Conclusion
This integration guide is meant to be a step-by-step instruction manual on how to integrate the PhysX SDK into a game application.

If you have any questions or comments about this guide, please forward them to Bob Schade at bschade@ageia.com.
_________________
I run this place.
"Some muckety-muck architecture magazine was interviewing Will Wright about SimCity, and they asked
him a question something like "which ontological urban paradigm most influenced your design of the simulator,
the Exo-Hamiltonian Pattern Language Movement, or the Intra-Urban Deconstructionist Sub-Culture Hypothesis?"
He replied, "I just kind of optimized for game play."
