1 - Critical Notes
Did you really debug your data? ;)
There are some common and tempting artistic approaches that do not work well with TP.
Sometimes there are work-arounds, sometimes not.
Please keep the following in mind when working with TP:
Never Use ObjectXYZ mapping coordinates
ObjectXYZ coordinates may cause errors with TP.
We recommend using only Explicit Mapping or Vertex Color coordinates.
ObjectXYZ is based on the original object size, and TP's size is based on the extent of ALL particle groups!
Sim with final UVs
Make sure your UVs and geometry are final before simming.
UVs cannot be changed after caching, and replacing geometry procedurally after caching can be extremely difficult (or in some cases impossible).
Memory and Reference values are NOT cached
Memory values and References are NOT cached.
Only DataChannels (DC) are cached.
SOLUTION: Create a rule at the end of the TP tree that assigns the Memory/Reference value to a DC (if applicable).
CAMERA-MAP-PER-PIXEL or CAMERA-MAP texture maps in materials will likely cause problems.
These maps often cause issues: if a particle moves, the texture will not stick. This also applies to objects that Fragment or VolumeBreak.
SOLUTION: apply a Camera Map modifier (or Camera Map Animated modifier) to the source object before importing into TP, and optionally collapse that to the base object. Then remove all CameraMapPerPixel textures and set up the texture map coordinates as Explicit Mapping.
If you are using the awesome CameraMapGemini modifier and have multiple cameras & UVs, then you might want to bake out a new texture map of the combined textures and collapse each Camera UV channel.
If applying camera-based UVs there is also the option of using TP's CameraMap operator.
Applying Camera Mapping on an existing cache is not carried through topology changes (Fragmenter or VolumeBreaker)
In other words: if you apply a Camera Map to an existing cache at a certain frame, and those particles later go through Fragmenter or VolumeBreaker (or any topology change, whether or not new faces are created), the original Camera Map will not be retained.
SOLUTION: apply UV maps to the geometry before the sim (be aware this adds some extra mesh data and will slightly increase cache sizes),
or apply a Camera Map through the shot camera at the starting frame of your original sims.
New VolumeBreaker faces use UV 1
VolumeBreaker object material MUST use UV channel 1 for inner faces
Only UV channel 1 works for inner faces of VB objects.
UV 2 or above will not apply the texture correctly.
VolumeBreaker 'Max Chunks'
VolumeBreaker's 'Max Chunks' is NOT per particle.
It is the total number of fragments that can be created by this operator (per timestep?)
Avoid Heavy Meshes
Dense meshes can be very slow to simulate in TP, and even slower to display in the viewport through TP.
Avoid using Turbosmooth, Tessellate, Subdivide and MeshSmooth (or at least turn them off).
Remember, we can add Turbosmooth after TP has been cached.
Displacement texture maps MUST be disabled before importing objects via Obj2Particle or GeomInstance.
Avoid crazy raytracing renders with super-far particles
Kill particles that are far outside the camera view or otherwise not visible.
Too many unused/invisible particles create massively huge bounding boxes, which cause all sorts of problems with rendering and dust emissions.
Never cache sims to the network
NEVER sim directly to the network – HUGE speed penalty
Sim to a local drive and copy to the network.
Simming directly onto the network can sometimes be 6x longer.
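The sim-locally-then-copy workflow above can be scripted so artists don't forget the copy step. Here is a minimal Python sketch; the `publish_cache` helper and both paths are our own invention (not part of TP or any studio pipeline), so adapt them to your scratch-drive and network-share layout:

```python
# Sketch: cache the sim to a fast local drive, then publish the finished
# cache folder to the network share in one copy pass afterwards.
import shutil
from pathlib import Path

def publish_cache(local_dir: str, network_dir: str) -> int:
    """Copy a finished local TP cache folder to the network share.

    Hypothetical helper -- returns the number of files copied so the
    wrapper script can log/verify the publish step.
    """
    src, dst = Path(local_dir), Path(network_dir)
    copied = 0
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Usage would be something like `publish_cache("D:/tp_scratch/shot010", "//server/fx/shot010")` after the sim finishes, keeping all cache writes during the sim itself on the local drive.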
Never sim with GaO enabled (Groups as Objects)
Sim with Groups As Objects (GaO) turned OFF, or suffer a ~22% speed penalty
Tested with a 200 frame VB-fragging sim, results: 27 min 38 sec with GaO ON versus 22 min 40 sec with GaO OFF
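The ~22% figure can be sanity-checked from the two timings quoted above; a quick back-of-envelope calculation (variable names are ours):

```python
# Verify the ~22% Groups-As-Objects (GaO) speed penalty from the
# 200-frame VB-fragging test timings quoted above.
gao_on_s = 27 * 60 + 38   # 27 min 38 sec -> 1658 s
gao_off_s = 22 * 60 + 40  # 22 min 40 sec -> 1360 s

penalty = (gao_on_s - gao_off_s) / gao_off_s
print(f"GaO ON is {penalty:.1%} slower")  # GaO ON is 21.9% slower
```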
SC 'Delayed Frames'
SC delayed frames are particle-age dependent
If you are switching from a neutron group to an active group that has SC delayed frames, you must reset the particle age to 0 for the delayed frames to work correctly.
Enable 'Render Instance'
Enable 'finalRender Instance' in GeomInstance and StdShape to reduce cache sizes. Despite the name, this reduces cache size regardless of your renderer. (This is now ON by default in 2016.)
However, if you need custom vertex color data per particle stored in GeomInstance or StdShape, then DO NOT use 'Render Instance'.
Don't Shift-Drag TP's to Duplicate
Do NOT shift-drag TPs to duplicate them. This results in the material being shared by both TPs, so one of them will not render correctly. Resolve this by assigning the TP a new material or by using the "UVW remove" utility (Material), and then scrub the timeline.
Duplicating ScriptOps and Custom Plugin Operators
Duplicating script operators OR TP nodes may result in script operators being broken (they do not retain their 'self' values?)
do not duplicate script operators – create them new each time
do not duplicate TPs – create a new TP, save the old blackbox, and then import that blackbox
if you do duplicate, debug those operators to confirm data is flowing as expected
if you do duplicate, be prepared to recreate all script operators
Don't use dashes "-" in dynset names
Do NOT use dashes in your cache names. This can lead to incorrect nesting of caches, and MXS doesn't like dashes. (Underscores_are_ok)
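A pre-sim naming check can catch dashes before they ever reach a cache path. This is a hypothetical helper of our own (not a TP API), sketching one way to enforce the underscores-only rule:

```python
import re

def sanitize_cache_name(name: str) -> str:
    """Replace dashes and whitespace runs with underscores so cache
    names nest correctly and stay MXS-friendly.

    Hypothetical pipeline helper, not part of TP -- adapt the pattern
    to your studio's naming rules.
    """
    return re.sub(r"[-\s]+", "_", name.strip())

print(sanitize_cache_name("shot010-bldg-frag v2"))  # shot010_bldg_frag_v2
```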
VRayPhysicalCamera Motion Blur Settings
VRayPhysicalCamera + TP fragmenting may require changing VRay camera properties:
Shutter Angle (deg) should be set to 178
Shutter Offset (deg) should be set to 2
Without these settings you may get this error when objects fragment in TP: "An unexpected exception occurred while rendering" and the motion blur may appear out of sync for fragging AND non-fragging particles.
Also, without this setting, TP will sometimes not render the first frame of the cache. (Try setting <<>>)
Link Constraints cause huge slow-downs, so better to bake the objects out if possible.
CHECK IF THIS IS STILL HAPPENING
Why aren't there per-particle values for Smoke or Fluid?
In TP some of the data structures are very complex (e.g. Smoke and Fluid), so their values must be optimized and cannot be stored per particle; otherwise the memory requirements would be enormous.