

But my feeling is it’s probably a big job to bring everything across. I suspect I should just stick with Unity and HDRP for rendering, but it gives me a lot of grief. Some questions:

- Hair has standard bone names. Can I write a script so whatever Creator needs gets applied on each sync from Unity? Or do I have to reapply these each time a change is made? Or are they merged into the USD file in separate layers, etc.?
- If I carefully name assets, can scripts notice “oh, it has ‘tree’ in the name, so add wind animation support” after import?
- For things not brought across, is it possible for me to write a script to add them on import, to avoid redoing it all by hand?
- Are HDRP volumetric clouds supported in some way? (HDRI Sky I could use instead.)
- Are camera settings brought across, or do they all need redoing?
- Are animation clip (*.anim) files brought across, or do they need to go into FBX files?
- Are Cinemachine cameras supported? You can define camera settings then blend between them, or have a camera track a target (follow and/or look at).
- Unity has a Sequences package designed to support a hierarchy of Timeline objects for animation sequencing. It adds complexity because it enables/disables objects based on the currently selected sequence, and a sequence may add extra objects to the scene just for that shot. If an object is disabled, will it be brought across?
- Are Timelines brought across? (Do I have to redo all the animation sequencing again?)
- I was also trying to understand whether hair bones would come across (I am using fairly low quality models).
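The naming-rule idea above could be sketched as a small post-import script. This is a minimal sketch assuming you can run Python after each sync and iterate over the imported asset names; the handlers (`add_wind_support`, `apply_hair_bone_fixups`) are hypothetical placeholders, not real Creator or USD APIs.

```python
# Hypothetical post-import rule table: match imported asset names
# against glob patterns and run a fix-up handler for each match.
import fnmatch

def add_wind_support(name):
    # Placeholder: a real script would attach wind animation here.
    return f"wind -> {name}"

def apply_hair_bone_fixups(name):
    # Placeholder: reapply whatever hair setup Creator needs.
    return f"hair -> {name}"

# Each rule: (glob pattern matched against the lowercased name, handler).
RULES = [
    ("*tree*", add_wind_support),
    ("*hair*", apply_hair_bone_fixups),
]

def apply_rules(asset_names):
    """Return the list of actions triggered for each imported asset."""
    actions = []
    for name in asset_names:
        for pattern, handler in RULES:
            if fnmatch.fnmatch(name.lower(), pattern):
                actions.append(handler(name))
    return actions
```

For example, `apply_rules(["OakTree_01", "HairLong", "Rock"])` would trigger the wind handler for the tree and the hair handler for the hair mesh, and skip the rock. The same table could be re-run on every sync, so fix-ups don’t have to be reapplied by hand.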

- Would cloth simulation come across? I am using the MagicaCloth 2 extension in Unity.
- Is texture tiling supported? I use tiles for the face texture for blush etc. If these don’t come across, can I instead define a mapping rule?
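A mapping rule for the tiled face textures could be as simple as converting a tile index into a UV scale and offset. A minimal sketch, assuming a square atlas with `tiles_per_row` tiles per side and row 0 at the bottom; the function name is my own invention, not part of any importer:

```python
def tile_uv_transform(tile_index, tiles_per_row):
    """Return ((scale_u, scale_v), (offset_u, offset_v)) for one tile
    in a square texture atlas, indexed left-to-right, bottom-to-top."""
    scale = 1.0 / tiles_per_row
    col = tile_index % tiles_per_row
    row = tile_index // tiles_per_row
    return (scale, scale), (col * scale, row * scale)
```

For example, in a 4x4 atlas, tile 5 maps to a scale of (0.25, 0.25) and an offset of (0.25, 0.25). If the target supports a 2D UV transform on the texture reader (USD has `UsdTransform2d` for this), a post-import script could write these values instead of redoing the tiling by hand.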

I have shader graphs for skin, for eyes, for hair etc.
