We’re producing a film at work in Maya, rather than our normal 3ds Max. We’re trying to get a *ton* of assets from Max VRay over to Maya VRay, and it’s proving to be difficult! More so than you’d think.
We’re using .vrscenes to get the shaders over, but the mesh doesn’t come in as a “normal” mesh (I asked one of our Maya chaps about this – I have very little experience with Maya myself!) and has some limitations. So we’re shunting the mesh over via FBX and the shaders over via .vrscene.
HOWEVER, something funky was happening to the shader names, which meant we couldn’t script it to automatically apply the appropriate shaders from the .vrscene to the mesh from the FBX. As such, we now have a custom exporter for Max which renames all the materials with a specific prefix (after smashing apart any mesh with multi-subs and applying individual materials to each object), exports the .vrscene, then applies Standard materials with the same names to the meshes and exports the FBX. With this, we can take the shaders from the .vrscene (with the specific prefixes) and link them up with the Blinns on the FBX (with the same prefixes).
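Once the names line up, the matching step on the Maya side boils down to very little. Here’s a hedged, standalone sketch of the idea (the `XFER_` prefix and the function name are made up for illustration – not our actual exporter’s convention), which also ignores any namespace Maya might prepend on FBX import:

```python
PREFIX = "XFER_"  # illustrative prefix; the real exporter uses its own tag

def match_shaders(vrscene_shaders, fbx_materials):
    """Map each FBX placeholder material to the .vrscene shader of the same
    prefixed name, ignoring any namespace Maya added on import."""
    wanted = {s for s in vrscene_shaders if s.startswith(PREFIX)}
    matches = {}
    for mat in fbx_materials:
        base = mat.rsplit(":", 1)[-1]  # drop e.g. an "fbxScene:" namespace
        if base in wanted:
            matches[mat] = base
    return matches
```

In production you’d then loop over the matches and reassign shading groups, but the name-pairing above is the part the prefix trick makes trivial.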
But the fun doesn’t end there! We had to write a few special exceptions into the prefix-naming function to allow for VRayBlend materials, 2-Sided materials etc., because they don’t convert properly in the .vrscene converter, so the Maya script that matches it all up sorts that out too.
Then you have some maps that don’t work with the .vrscene converter, even though they *do* work with VRay, such as the Composite map. Its functionality can’t be duplicated with daisy-chained VRayCompTex maps due to the lack of per-layer opacity, so now the custom exporter has to move those maps over manually (because the .vrscene converter doesn’t) and write out a text file to the directory so a human can at least see how it was all put together. We could probably automate this into a Maya Layered Texture, but we haven’t got there yet.
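The text file is nothing fancy – just a readable breakdown of each layer of the Composite map. As a rough sketch (the field names and layout here are assumptions for illustration, not the real exporter’s format):

```python
def write_composite_manifest(path, map_name, layers):
    """Write a human-readable breakdown of a Composite map.
    layers: list of (texture_name, blend_mode, opacity_percent) tuples.
    The layout is illustrative, not a fixed format."""
    lines = ["Composite map: %s" % map_name]
    for i, (tex, blend, opacity) in enumerate(layers, 1):
        lines.append("  layer %d: %s  blend=%s  opacity=%s%%" % (i, tex, blend, opacity))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```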
I could go on! The upshot is that this is way harder than I thought (because I thought “export as .vrscene” was the extent of it!)
Are we missing an obvious trick here? Or is it actually just a slog?
I’ve just updated the Point Cloud to Max script based on some feedback (i.e. a bug!).
Nuke Pointcloud to 3ds Max – Python
Please download and enjoy 🙂
Here it is:
Click here to download!
It should be pretty self-explanatory – select the relevant BakedPointCloud node, run the script, tell it where to save the .csv file and you’re good to go. You’ll then have a .csv file that can be loaded into Max using Thinkbox Krakatoa’s PRT Loader, and it’ll store all the colour information as well as the positions. The only thing you’ll need to do is rotate the PRT Loader 90 degrees, since Max is Z-up (and Nuke is Y-up). When you do that, any cameras or geometry you move between Nuke and Max will align perfectly (since the FBX exporter – as well as the great MaxScript Nuke’em – automatically re-orients).
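Under the hood, the bulk of the work is just flattening the baked point and colour arrays into rows. In Nuke the data would come off the BakedPointCloud node’s knobs; here’s a standalone sketch of just the CSV-writing step (the header names are illustrative – match them to whatever your PRT Loader channel mapping expects):

```python
import csv

def write_point_csv(path, points, colours):
    """points: flat [x0, y0, z0, x1, ...]; colours: flat [r0, g0, b0, ...].
    Writes one x,y,z,r,g,b row per point. Header names are illustrative."""
    assert len(points) == len(colours), "point/colour counts must match"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z", "r", "g", "b"])
        for i in range(0, len(points), 3):
            writer.writerow(list(points[i:i + 3]) + list(colours[i:i + 3]))
```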
Let me know if you have any questions, comments, or otherwise enjoy it!
Well, I’ve made some progress of sorts.
In my last post I mentioned wanting to get a point cloud from NukeX (generated from a camera track) into Max’s new(ish) point cloud system. Well, the good news is that I’ve now got the point cloud – including colour data – from Nuke and into Max. The bad news is that it’s not using Max’s own point cloud system, as this requires a very particular format, the exact structure of which eludes me.
What I have done, courtesy of a smart idea from Dave Wortley, was to bring it into Max using Thinkbox’s PRT Loader, which can be grabbed as part of the Krakatoa demo. Krakatoa’s great and I advise you all take a look, but if you don’t want to make the investment in buying it just yet, the demo supports all the PRT Loading your body (and RAM) can handle, including what we need.
The actual process involves running a Python script inside NukeX with the required Point Cloud node selected. It’ll spit out a .csv file which can be loaded into the PRT Loader (and if you create the PRT Loader at the origin, its location will match up perfectly with any cameras you WriteGeo out from Nuke using .fbx format, as long as you ensure the scale is 1.0 when you import it in). At that point, you have the camera and a great point cloud from which to build up a proxy model of the scene, safe in the knowledge that the point cloud was generated using the same camera you’ll be projecting from.
Once I tidy up the code, I’ll release it on here – at the moment it has no UI and it just spits files out to your desktop (Y-up, no less), but I’ll try and clean it up ASAP.
In the meantime, please take a quick look at the video below showing how it’s working so far:
NukeX Pointcloud in 3ds Max from Dan Grover on Vimeo.
My Python journey is just beginning, but I’ve got to the point where I’m up to speed with the basic syntax, which makes thinking through problems a lot easier! Ifs, loops, all that jazz.
But the two biggest changes compared to MaxScript that I’ve found so far are as follows:
1 – You have to “import” modules to get additional functions. This is both good and bad: on the one hand, there are a bunch built in and you can download or make more, which means the pool of available functions is not only larger but also expandable (again, compared to MaxScript). On the other hand, for company-wide distributed scripts it means I need to make sure everyone has the correct modules installed – but that’s easy enough.
2 – You can do ifs, elifs and elses – this might not be anything unusual to other coders, but MaxScript only has the first and last of those – there’s no else-if. Basically it allows you to completely control the flow of a set of if conditionals in a way that you can’t with MaxScript. It doesn’t strictly let you do anything new, because you can chain sequential if statements with flagged variables and such, but sod all that – this makes it so much easier. I likey!
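For anyone who hasn’t seen it, a minimal example of what I mean – one chain, evaluated top to bottom, first true branch wins (the frame-rate thresholds are just for illustration):

```python
def classify(fps):
    # In MaxScript you'd nest if/else blocks or juggle flag variables
    # to get this; in Python it's a single flat chain.
    if fps >= 60:
        return "realtime"
    elif fps >= 24:
        return "playable"
    elif fps > 0:
        return "slideshow"
    else:
        return "stalled"
```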
My Raspberry Pi arrives tomorrow, and I have my book all about coding in Python which I’ve started to read through. I also started looking at the Max SDK documentation for the Python stuff – It’s going to be a long road, but I’m excited about it!
I currently have a MaxScript that remaps assets from wherever they are to a single folder by copying all the assets there and then remapping, including XRefs (and nested XRefs). This is a key part of our cloud-based rendering system, but the problem is that some of the XRef Max files are very large, and whilst they zip up nice and tiny, it’s not possible to remotely request a local unzip on the server they get uploaded to. So what I’m hoping to do with Python (the server also runs a WAMP stack) is to have a standalone Python script up there that listens on a port and unzips files on request. Or perhaps I can do it by dropping a small text file into a given folder that the Python script will check, unzipping the file named in that text file, then deleting the text file? I’ll need to experiment…
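As a first sketch of that drop-a-text-file idea (all names here are hypothetical, and this is one polling pass – a real script would run it in a loop with a sleep, and want some error handling):

```python
import os
import zipfile

def process_requests(watch_dir):
    """One polling pass: for each .txt request file in watch_dir, unzip the
    archive whose path it contains (alongside the archive), then delete the
    request file. Purely a sketch of the folder-watching approach."""
    for name in os.listdir(watch_dir):
        if not name.endswith(".txt"):
            continue
        request = os.path.join(watch_dir, name)
        with open(request) as f:
            zip_path = f.read().strip()
        if os.path.isfile(zip_path):
            with zipfile.ZipFile(zip_path) as z:
                z.extractall(os.path.dirname(zip_path))
        os.remove(request)
```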
I’ve been so, so lazy about learning Python. I’ve long wanted to write standalone or web apps (and we have a WAMP stack running in the cloud courtesy of Amazon, so there’s definitely somewhere to use it – in fact, I know exactly what I want to do with it!), but I’m starting with something else: a Raspberry Pi and a book teaching Python for beginners with the Raspberry Pi in mind! So I’m adding a new tag, and I hope to update this blog on how I do.