I was inspired by how Fear’s eyes in Inside Out could move, and wanted to try recreating that inside Maya using lattices and surface deformers.
Houdini content made following these tutorials
The mesh starts as a six-sided tube, with slightly randomised scales on the height and width of each, copied to scattered points. A Boolean is then applied to each mesh, with a slight rotation variance on the cutter to give the crystals different angles on their top caps.
A VDBfromPolygons node converts the entire poly set into a VDB volume, which is then converted back to polygons as a single mesh.
Finally, a PolyReduce node takes the mesh down from 1.3 million polygons to 20 thousand.
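The per-crystal randomisation can be sketched in plain Python. This is just an illustration of the jitter idea, not actual Houdini code — in the real network it's randomised point attributes driving the copy — and the ranges and seed here are invented:

```python
import random

def crystal_jitter(num_crystals, seed=7):
    """Per-crystal variation: slight scale jitter on the tube's height and
    width, plus a rotation for the Boolean cutter on the top cap.
    All ranges and the seed are made up for illustration."""
    rng = random.Random(seed)  # seeded so the scatter is repeatable
    crystals = []
    for _ in range(num_crystals):
        crystals.append({
            "height_scale": rng.uniform(0.8, 1.4),
            "width_scale": rng.uniform(0.85, 1.15),
            "cap_rotation": rng.uniform(0.0, 360.0),  # degrees
        })
    return crystals
```

In Houdini these values would live as attributes on the scattered points before the Copy to Points, but the effect — no two crystals sharing the same proportions or cap angle — is the same.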
I’ve been messing with Houdini over the past couple of weeks, mostly for Pyro sims (explosions are cool, sue me), and have been trying to export the volumes to Maya for rendering with Arnold. I’ve finally got it working to a point I’m mostly happy with, but it’ll definitely need changes in the future.
Adding collisions to the simulation gave me over an hour of issues, with the smoke sim clipping through the collision object. Initially I tried adding more substeps to the PyroSolver node and making the collision object thicker, but neither worked. Luckily everything was ‘quickly’ and completely fixed by adding a GasEnforceBoundary node to the Advection input of the PyroSolver.
I also changed the StaticSolver to an RBDSolver, but I’m not sure that helped fix the issue.
(problem screencaps shown below)
Currently it’s still not a perfect process between Houdini and MtoA.
First up is the emission colour. I have absolutely no idea how to drive a gradient ramp from the intensity of the Heat channel, so the emission has to be a single flat colour.
(BlackBody doesn’t seem to work either, and I have no idea how to export that)
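What I'm after is essentially a ramp lookup keyed on the heat value. Here's a plain-Python sketch of that mapping — not an MtoA or Arnold API, just the maths, and the stop positions and colours are made up:

```python
def heat_ramp(heat, stops):
    """Look up an RGB colour from a list of (position, colour) stops by
    linear interpolation -- the gradient-by-heat behaviour I couldn't get.
    `stops` must be sorted by position; heat outside the range clamps."""
    if heat <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if heat <= p1:
            t = (heat - p0) / (p1 - p0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return stops[-1][1]

# A made-up fire-ish ramp: black -> red -> orange -> white
fire = [(0.0, (0.0, 0.0, 0.0)),
        (0.4, (1.0, 0.1, 0.0)),
        (0.8, (1.0, 0.6, 0.1)),
        (1.0, (1.0, 1.0, 1.0))]
```

For example, `heat_ramp(0.2, fire)` lands halfway between black and red, roughly `(0.5, 0.05, 0.0)`.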
Arnold GPU Beta also doesn’t support OpenVDB volumes yet, so rendering with that just straight up doesn’t show the volumes.
Having to render volumes with CPU feels like slow pain.
Luckily, none of these test frames took more than a minute each at [3,2,2,1,1,1 // 10,2,2,4,0,10].
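Assuming those bracketed numbers are Maya's Arnold settings — sample counts (camera AA, diffuse, specular, transmission, SSS, volume indirect) before the //, ray depths after — the Arnold docs give camera rays per pixel as AA² and each secondary type as AA² × n². A quick sketch of that arithmetic:

```python
def rays_per_pixel(aa_samples, secondary):
    """Arnold's documented sampling relationship (as I understand it):
    camera rays per pixel = AA^2, and each secondary sample type fires
    AA^2 * n^2 rays per pixel."""
    camera = aa_samples ** 2
    return camera, {name: camera * n ** 2 for name, n in secondary.items()}

# The sample counts from the settings above (assumed ordering)
camera, rays = rays_per_pixel(3, {
    "diffuse": 2, "specular": 2, "transmission": 1, "sss": 1, "volume": 1,
})
print(camera)           # 9 camera rays per pixel
print(rays["diffuse"])  # 36 diffuse rays per pixel
```

So even at these fairly low settings the CPU is tracing a decent pile of rays through the volume per pixel, which is why a minute a frame didn't feel too bad.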
Another current issue is that turning on motion blur while rendering gives the aiVolume a blue tinge through the entire bounding box, making the renders basically unusable.
I have no idea whether this is an aiVolume container issue, an aiStandardVolume issue, or a Houdini issue from exporting the Heat or Vel.x/Vel.y/Vel.z channels.
Unfortunately, rendering out motion vectors and compositing the blur in Fusion doesn’t work either, as nothing is rendered out in the MV pass, so I can’t do that 😦
Overall, it’s definitely fun as hell to do simulations in Houdini and export them to Maya for rendering, but I’m going to need a load more practice before I feel comfortable.
TLDR: Maya 2019, Arnold 3.2.0, ACEScg 1.0.3, Fusion 9
A couple of months ago I started trying to figure out a colour management workflow that was simple enough to understand and worked well.
Luckily, the first thing I found was the ACEScg OCIO workflow.
Unluckily, even Natron (which supports OCIO configs) didn’t produce an image that looked the same as Maya/Arnold did.
None of the other programs I had supported OCIO configs at all (Photoshop, After Effects, DarkTable, Krita, Sketchbook, etc.).
After months of thinking that maybe I just had it set up wrong between Maya and Natron, I tried out Fusion and realised that, no, I did not have it wrong.
Fusion worked perfectly right out of the box, with the Read node connected to an OCIOcolorprofile node set to interpret the footage as ACEScg (the space the file was exported in) and transform it to sRGB (ACES).
I wish I’d known earlier that this was just a program issue rather than something I could fix on my end, but I’m just happy it’s working in time for a big new project I’m about to start 🙂
done over the past two (2) days
Started off with Arvid Schneider’s XGen fur tutorial and went from there, messing with some maps for clumping and hair emission.
all went surprisingly well ^^ definitely want to try this out on a rigged character