Inside Unreal Physics
This post documents additional information about using physics in Unreal. It is meant to supplement the official Epic documentation.
The main Unreal Physics Docs are at: https://docs.unrealengine.com/latest/INT/Engine/Physics/
PhysX is the physics engine embedded in Unreal. The history of PhysX can be found here: https://en.wikipedia.org/wiki/PhysX. It was originally video game middleware called NovodeX and is currently maintained by NVIDIA. It is used in a variety of game engines.
The source code is free and available at https://github.com/NVIDIAGameWorks/PhysX-3.4, but you must be a registered NVIDIA developer to access it (registration is also free).
PhysX is multi-threaded and supports a large set of rigid-body simulation features.
The default implementation embedded in Unreal utilizes the CPU for all computation. However, NVIDIA also provides a custom branch of Unreal that utilizes the GPU for some cosmetic physical simulations like particles, cloth, and fluid dynamics. See more on this at: https://github.com/NvPhysX/UnrealEngine.
The main NVIDIA PhysX SDK guide is at: http://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/guide/Index.html
Finally, the PhysX API documentation itself can be accessed from the help file included with the PhysX source, in the /Documentation directory.
(You can use other physics engines with UE, but this post is focused on PhysX.)
PhysX runs a real-time simulation of rigid bodies. Rigid-body actors are added to a PhysX scene, and the scene is simulated every frame. These simulated bodies are associated with Unreal Actors and used to update the Actors' positions. The physics bodies are typically simple collision primitive approximations like spheres, boxes, and capsules, because it is faster to calculate collision using simple primitives. The physics simulation runs every frame (possibly several times per frame), updating the positions of these primitive bodies.
Then this information is used to update the transforms of the relevant SceneComponents. In this manner, Unreal can render the SkeletalMeshes, StaticMeshes, and any other Actors at the positions of their physics body proxies. Thus, the rendered scene is really a "visualization" of a simpler PhysX-simulated scene. You can use the console commands "Show Collision" or "pxvis collision" to draw a wireframe visualization of the underlying scene.
The simulation itself consists of updating rigid-body positions based on their velocities and the forces acting on them, determining which bodies are in contact, and then applying depenetration forces to resolve those contacts. In addition, callbacks can be registered to notify your game code of collisions when they occur. (These trigger the OnOverlap and OnHit events you often use in Blueprints.)
In addition to rigid bodies, joints can be used to connect bodies. These joints are configured using constraints and motors, and the solver integrates their effects during the physics tick. For example, a spring joint may be configured whose corrective force increases with the distance to the joint. Constraints define the allowed positions a body may occupy in relation to the joint. Motors are optionally used to generate a proportional acceleration instead of a force.
Scene Queries: Raycasts and LineTraces
A very important capability is the ability to query the current PhysX scene after it has been simulated. For example, in UE when you execute a LineTrace, you are generating a raycast query against the underlying PhysX scene. Shooting a gun, selecting the object under the mouse cursor, and checking for actors within a given range all require querying the underlying PhysX scene.
Querying is not free, but it is often faster than you might think, since PhysX stores all collision geometry inside an optimized hierarchical spatial structure designed for querying. Each query is broken down into 3 phases:
The broad phase traverses the global scene spatial partitioning structure to find candidates for the mid and narrow phases.
The mid phase traverses the triangle mesh and heightfield internal culling structures to find a smaller subset of the triangles in a mesh reported by the broad phase.
The narrow phase performs exact intersection tests (ray tests for raycast() queries, and exact sweep shape tests or overlap tests for sweep() and overlap() queries).
There are a couple of variations to be aware of:
LineTraceByChannel - Traces against objects that have a Block setting on a specific Trace Channel.
LineTraceByProfile - Traces against objects that have a specific collision Profile.
LineTraceForObjects - Applies a filter to trace against specific types of objects.
In addition to the regular LineTrace, there is a MultiLineTrace
Multi will return multiple hits for the trace, and the target can be configured to Block or Overlap. A Block stops the trace, so no hits beyond it are reported.
Other built-in scene query functions are BoxOverlapActors and SphereOverlapActors. Both of these return an array of actors (there are also component versions).
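A minimal C++ sketch of a line trace, assuming it runs inside an actor (the start/end points and the channel choice are illustrative):

```cpp
// Trace 10 m straight ahead of this actor on the Visibility channel.
FHitResult Hit;
FVector Start = GetActorLocation();
FVector End = Start + GetActorForwardVector() * 1000.f;
FCollisionQueryParams Params;
Params.AddIgnoredActor(this);  // don't hit ourselves

if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
{
    // Hit.GetActor() is the first blocking object along the ray
}
```

The Blueprint LineTrace nodes described above bottom out in calls like this one.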
Each physics tick, the simulation is advanced by a certain elapsed time delta. Due to the nature of the underlying simulation, it is better to run each physics tick with the same size time delta (non-uniform timesteps increase instability). Since the game runs at a variable frame rate, it is common to break up the physics tick into multiple smaller "sub-steps". Every frame, multiple sub-steps of roughly the same size are executed, in essence chopping the frame delta time into uniform slices. This results in a more accurate and stable simulation. Sub-stepping typically comes into play at lower framerates, where the elapsed time per frame is greater and physics must be simulated more times per frame.
Sub-stepping can be turned on in the project's Physics settings. You can also register a C++ callback, via AddCustomPhysics, to run your own custom physics code during each sub-step.
Add a custom delegate to your header
Then bind the delegate in the constructor:
And set the callback during every tick:
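A minimal sketch of those three steps, assuming an actor class AMyActor with a simulating UStaticMeshComponent named Mesh (the class and member names are illustrative):

```cpp
// --- In your actor's header: declare the delegate and handler ---
FCalculateCustomPhysics OnCalculateCustomPhysics;
void CustomPhysics(float DeltaTime, FBodyInstance* BodyInstance);

// --- In the constructor: bind the delegate ---
OnCalculateCustomPhysics.BindUObject(this, &AMyActor::CustomPhysics);

// --- In Tick(): re-register the callback for this frame's sub-steps ---
Mesh->GetBodyInstance()->AddCustomPhysics(OnCalculateCustomPhysics);

// --- Called once per sub-step, with the sub-step's delta time ---
void AMyActor::CustomPhysics(float DeltaTime, FBodyInstance* BodyInstance)
{
    // Example: apply a constant upward force; bAllowSubstepping is false
    // because we are already inside a sub-step.
    BodyInstance->AddForce(FVector(0.f, 0.f, 1000.f), false);
}
```

Note that AddCustomPhysics must be called every tick; the registration only lasts for the upcoming frame's sub-steps.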
UE Physics Source
UE wraps most of the PhysX API in an abstraction layer, located in Engine/Source/Runtime/Engine/Classes/PhysicsEngine. Many of the features in this wrapper are also exposed to Blueprints; however, certain parts are only accessible from C++.
There are 3 kinds of bodies in the simulation:
Dynamic (PxRigidDynamic) - These are the standard rigid bodies, affected by forces and collision.
Kinematic (PxRigidDynamic) - These are dynamic bodies that have infinite mass and inertia. If moved with setKinematicTarget, they will affect dynamic bodies. These are good for dynamic objects that are moved by your code instead of by the physics simulation, e.g. a moving platform. If a body is not simulating physics and is not set to Static mobility, it will be kinematic.
Static (PxRigidStatic) - This is for geometry that doesn't move, e.g. level geometry. If a static body is moved, the spatial partitioning structure must be rebuilt (unlike for kinematic bodies), which has a cost. In addition, a moved static body does NOT affect dynamic bodies. This type of body is created in UE when the Mobility is set to Static. Most of the WorldStatic geometry in the scene is typically Static.
UE uses slightly different abstraction names, and a few settings determine the underlying PhysX body type:
If Simulate Physics is on == Dynamic
If Mobility is Movable == Kinematic
If Mobility is Static == Static
If one or both bodies are Dynamic, then collisions occur. If both are Static, then no collision occurs. If one is Kinematic and the other is Kinematic or Static, then you must move the Kinematic body with Sweep checked in order for collisions to occur.
Again - one very common source of confusion surrounding collisions is between Kinematic and Kinematic, as well as Kinematic and Static. When moving a Kinematic body, remember to set sweep to true if you want collisions.
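In C++, the sweep flag is the second parameter of SetActorLocation. A sketch of moving a kinematic platform (the member names and speed value are illustrative):

```cpp
// Moving a kinematic platform: bSweep=true makes the move test for and
// report collisions along the way instead of teleporting through objects.
FVector NewLocation = GetActorLocation() + FVector(0.f, 0.f, Speed * DeltaTime);
FHitResult Hit;
SetActorLocation(NewLocation, /*bSweep=*/true, &Hit);
if (Hit.bBlockingHit)
{
    // The platform hit something partway along its sweep
}
```

With bSweep left false, the actor is simply teleported to NewLocation, which is why kinematic-vs-kinematic and kinematic-vs-static collisions appear to "not work".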
A rigid body is created in PhysX with createRigidDynamic or createRigidStatic. (These calls are wrapped by FBodyInstance in CreateActor_PhysX_AssumesLocked.)
Dynamic bodies move, while static bodies have implicit infinite mass/inertia, i.e. they are stationary.
Shapes are attached to rigid bodies and used to generate collisions with the scene. Example shapes are spheres, boxes, and capsules (http://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/guide/Manual/Geometry.html).
The UE wrapper for these rigid bodies is FBodyInstance.
FBodyInstance::InitBody is responsible for creating the underlying PxRigidActor - either Dynamic or Static.
When your level loads, static and dynamic rigid bodies are created for all of the level geometry, skeletal meshes, and static meshes. Upon startup, InitBody may be called hundreds of times.
This is typically done through the PrimitiveComponent in OnCreatePhysicsState.
Channels, Collisions and Traces
In UE, a collision occurs when physics bodies come into contact during simulation. Configuring collision is another common source of confusion and frustration. See https://docs.unrealengine.com/latest/INT/Engine/Physics/Collision/Overview/index.html
The basic approach is to assign geometry a category (called an ObjectType) and then configure how the object responds to collisions with each of these types. In addition to configuring how an object responds to collisions with other objects, you can also define how it responds to the various trace channels.
Collisions With Objects During Simulation (Object Response Section)
There are 3 options for handling collisions: ignoring, blocking and overlapping.
Blocking causes the simulation to apply corrective forces that prevent penetration.
You can optionally handle this block in an OnHitEvent.
Overlapping occurs when bodies are interpenetrating each other.
You can optionally handle the collision in an OnOverlapEvent.
NOTE: Both objects have settings to enable/disable generation of these events.
Typically there are many different types of objects in your game - depending on what type of objects are colliding, you may want them to block, overlap, or ignore. For example, the arrows of a hero should not hit the hero but only hit enemies.
Rather than define these interactions on a per-object level, you can define an ObjectType for each type of object.
e.g. Hero, Enemy, Bullet, Wall
You can specify this object type on each instance or class of an object.
In addition to specifying an ObjectType, you can also specify a Collision Response to control how the object reacts to every other object type: Ignore, Overlap, or Block.
There are presets defined that store these settings and can be configured in Project Settings.
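The same settings can also be configured per component in C++. A hypothetical setup for a projectile's collision component (the function name and channel choices are illustrative, not a UE preset):

```cpp
// Hypothetical helper: make a projectile overlap pawns but stop on walls.
void ConfigureProjectileCollision(UPrimitiveComponent* Comp)
{
    Comp->SetCollisionEnabled(ECollisionEnabled::QueryAndPhysics);
    Comp->SetCollisionObjectType(ECC_WorldDynamic);     // this object's category
    Comp->SetCollisionResponseToAllChannels(ECR_Ignore);
    Comp->SetCollisionResponseToChannel(ECC_Pawn, ECR_Overlap);       // damage pawns
    Comp->SetCollisionResponseToChannel(ECC_WorldStatic, ECR_Block);  // stop on walls
}
```

Starting from ECR_Ignore and opting in to specific channels keeps the response matrix easy to reason about.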
Trace Response To LineTrace Querying (Trace Response Section)
A very similar setup is used to configure LineTraces.
Since you can query the physics scene after each simulation step using LineTraces, you can define whether these traces Ignore, Overlap, or Block.
In this context, the Overlap setting lets the trace register a hit and continue through to hit further objects. See MultiLineTrace above.
To summarize, the Trace Channels are used to define how an object responds to various types of LineTraces while the Object Collision Response specifies how the physics simulation behaves during integration.
For example, if you only want certain objects to respond to your hero attacks, then you could create a HeroAttack trace channel and use this when you test your attacks. This way, you have granular control over what responds to being attacked.
SetCollisionEnabled allows you to configure whether a given object responds to Physics Collision, Traces, or both.
Joints (Constraints)
A joint consists of two bodies and their reference frames (position and orientation) relative to the joint. Each joint defines a relationship between these positions, and the constraint itself will be enforced by the PhysX constraint solver.
Joints can be used in many scenarios - door hinges, ragdoll limbs, and car wheels are all examples of joints.
The solver attempts to enforce the constraint by applying corrective forces. In addition, the joint can optionally be configured to utilize motors, which are implicitly integrated. Motors are more expensive, but are also more stable.
A joint is created in PhysX with a call to PxD6JointCreate. (All joints in UE are D6 joints)
UE wraps the PhysX PxD6Joint constraint in FConstraintInstance.
A joint defines which axes are locked, limited, or free.
Joint Drives (Motors)
Some PhysX joints may be actuated by a motor or a spring implicitly integrated by the PhysX solver.
If the bodies of a joint drift out of compliance with the constraint, projection will teleport them back to valid positions.
UE also includes support for PhysicsAssets. These enable you to easily create bodies and constraints for a skeleton asset. (Both the Skeleton asset and the Physics Asset are created by default when importing a skeletal mesh.)
The Physics Asset can be edited using the PhAT tool. (This tool has been updated in 4.18!)
A ragdoll is simply a group of physics bodies tied together with constraints. For example, when a character is killed, it typically stops animating and becomes a ragdoll, crumpling and tumbling to the ground. In Blueprints, you can simply use the SetSimulatePhysics node, and your character will go full ragdoll. PhysX simulates the rigid-body positions of the ragdoll, solving for the constraints every frame.
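The C++ equivalent is a couple of calls on the character's skeletal mesh component. A sketch, assuming this runs inside an ACharacter ("Ragdoll" is one of the engine's built-in collision profiles):

```cpp
// Turn a character into a ragdoll: switch the mesh to the Ragdoll
// collision profile, then hand its bones over to the physics simulation.
USkeletalMeshComponent* MeshComp = GetMesh();
MeshComp->SetCollisionProfileName(TEXT("Ragdoll"));
MeshComp->SetSimulatePhysics(true);
```

From this point on, the PhysicsAsset's bodies and constraints drive the skeleton instead of the animation system.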
It is frequently useful to refine the default constraints to make your ragdoll more realistic.
Hopefully this has given you additional information and context on using physics in Unreal for your game development!
NOTES ON KINEMATIC FROM PHYSX
Sometimes controlling an actor using forces or constraints is not sufficiently robust, precise or flexible. For example moving platforms or character controllers often need to manipulate an actor's position or have it exactly follow a specific path. Such a control scheme is provided by kinematic actors.
A kinematic actor is controlled using the PxRigidDynamic::setKinematicTarget() function. Each simulation step PhysX moves the actor to its target position, regardless of external forces, gravity, collision, etc. Thus one must continually call setKinematicTarget(), every time step, for each kinematic actor, to make them move along their desired paths. The movement of a kinematic actor affects dynamic actors with which it collides or to which it is constrained with a joint. The actor will appear to have infinite mass and will push regular dynamic actors out of the way.
To create a kinematic actor, simply create a regular dynamic actor then set its kinematic flag:
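A sketch of that flag using the PhysX 3.x API (here physics is assumed to be an existing PxPhysics instance, and pose/nextPose existing PxTransforms):

```cpp
// Create a regular dynamic actor, then mark it kinematic.
PxRigidDynamic* actor = physics->createRigidDynamic(pose);
actor->setRigidBodyFlag(PxRigidBodyFlag::eKINEMATIC, true);

// Every simulation step, drive it toward the next point on its path:
actor->setKinematicTarget(nextPose);
```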
Use the same function to transform a kinematic actor back to a regular dynamic actor. While you do need to provide a mass for the kinematic actor as for all dynamic actors, this mass will not actually be used for anything while the actor is in kinematic mode.
It is important to understand the difference between PxRigidDynamic::setKinematicTarget() and PxRigidActor::setGlobalPose() here. While setGlobalPose() would also move the actor to the desired position, it would not make that actor properly interact with other objects. In particular, with setGlobalPose() the kinematic actor would not push away other dynamic actors in its path, instead it would go right through them. The setGlobalPose() function can still be used though, if one simply wants to teleport a kinematic actor to a new position.
A kinematic actor can push away dynamic objects, but nothing pushes it back. As a result, a kinematic actor can easily squish a dynamic actor against a static actor or against another kinematic actor, and the squished dynamic object can deeply penetrate the geometry it has been pushed into.
There is no interaction or collision between kinematic actors and static actors. However, it is possible to request contact information for these cases if PxSceneFlag::eENABLE_KINEMATIC_PAIRS or ::eENABLE_KINEMATIC_STATIC_PAIRS gets set.