Converting Assets from Max VRay to Maya VRay

We're producing a film at work in Maya, rather than our normal 3ds Max. We're trying to get a *ton* of assets from Max VRay over to Maya VRay and it's proving to be difficult – more so than you'd think!

We're using .vrscene files to get the shaders over, but the mesh doesn't come in as "normal" mesh (I asked one of our Maya chaps about this – I have very little experience with it myself!) and has some limitations. So we're shunting the mesh over via FBX and the shaders over via .vrscene.

HOWEVER, something funky was happening to the shader names, which meant we couldn't script automatically applying the appropriate shaders from the .vrscene to the mesh from the FBX. As such, we now have a custom exporter for Max which renames all the materials with a specific prefix (after smashing apart meshes with Multi/Sub-Object materials and applying individual materials to each object), exports the .vrscene, then applies Standard materials with the same names to the meshes and exports the FBX. With this, we can take the shaders from the .vrscene (with the specific prefixes) and link them up with the Blinns on the FBX (with the same prefixes).

But the fun doesn't end there! We had to write a few special exceptions into the prefix-naming function to allow for VRayBlend materials, 2-Sided materials and so on, because they don't convert properly through the .vrscene converter – so the Maya script that matches it all up sorts those out too (a minimal sketch of that matching step is below).
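For the curious, the guts of that Maya-side matching script are pretty small. Here's a minimal sketch using maya.cmds – the 'VRX_' prefix, the VRayMtl node names and the name-sharing convention are all assumptions for illustration, not our actual naming scheme:

```python
# Hypothetical sketch of the Maya-side re-linking. Assumes V-Ray for Maya
# is loaded, that the exporter stamped a 'VRX_' prefix onto every material
# name, and that the .vrscene import created VRayMtl nodes sharing names
# with the FBX placeholder Blinns (give or take Maya's numeric suffixes).
import maya.cmds as cmds

PREFIX = 'VRX_'

def shading_group(material):
    """Return the shading engine driven by a material, if any."""
    sgs = cmds.listConnections(material, type='shadingEngine') or []
    return sgs[0] if sgs else None

# Map each prefixed VRayMtl to its shading group.
vray_sgs = dict((m, shading_group(m)) for m in cmds.ls(type='VRayMtl')
                if m.startswith(PREFIX))

for blinn in cmds.ls(type='blinn'):
    if not blinn.startswith(PREFIX):
        continue
    # Match the placeholder Blinn to a VRayMtl by shared name prefix.
    match = None
    for mtl, sg in vray_sgs.items():
        if sg and (blinn.startswith(mtl) or mtl.startswith(blinn)):
            match = sg
            break
    if match is None:
        continue
    cmds.hyperShade(objects=blinn)       # select everything using the Blinn
    assigned = cmds.ls(selection=True)
    if assigned:
        cmds.sets(assigned, edit=True, forceElement=match)
```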

Then there are some maps that don't work with the .vrscene converter, even though they *do* work with VRay, such as the Composite map. Its functionality cannot be duplicated with daisy-chained VRayCompTex maps, due to the lack of per-layer opacity, so the custom exporter now has to move those maps over manually (because the .vrscene converter doesn't) and writes out a text file to the directory so a human can at least see how it was all put together. We could probably automate this into a Maya Layered Texture, but we haven't got there yet.

I could go on! The upshot is that this is way harder than I thought (because I thought "export as .vrscene" would be the extent of it!).

Are we missing an obvious trick here? Or is it actually just a slog?

Nuke PointCloud to Max Python Script

Here it is:

Click here to download!

It should be pretty self-explanatory – select a relevant BakedPointCloud node, run the script, tell it where to save the .csv file and you're good to go. You'll then have a .csv file that can be loaded into Max using Thinkbox/Krakatoa's PRT Loader, and it'll store all the colour information as well as location. The only thing you'll need to do is rotate the PRT Loader 90 degrees, since Max is Z-up. When you do that, any cameras or geometry you move between Nuke and Max will align perfectly (since the FBX exporter – as well as the great MaxScript Nuke'em – automatically re-orients).
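If you're curious what's under the hood, the core of it is only a dozen or so lines. A stripped-down sketch – the BakedPointCloud knob name ('serializePoints') and its "x y z r g b per point" layout are from memory, so treat them as assumptions and check node.knobs() in your Nuke version:

```python
# Stripped-down sketch of the exporter. The knob name and data layout are
# assumptions -- inspect your BakedPointCloud's knobs if they differ.
import nuke

node = nuke.selectedNode()
if node.Class() != 'BakedPointCloud':
    nuke.message('Select a BakedPointCloud node first.')
else:
    path = nuke.getFilename('Save point cloud as...', '*.csv')
    # Assumed layout: a flat run of floats, six per point (x y z r g b).
    values = [float(v) for v in node['serializePoints'].getText().split()]
    with open(path, 'w') as out:
        out.write('x,y,z,r,g,b\n')
        for i in range(0, len(values) - 5, 6):
            out.write(','.join(str(v) for v in values[i:i + 6]) + '\n')
```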

Let me know if you have any questions, comments, or otherwise enjoy it!

Thanks,
Dan

Some Progress!

Well, I’ve made some progress of sorts.

In my last post I mentioned wanting to get a point cloud from NukeX (generated from a camera track) into Max's new(ish) point cloud system. Well, the good news is that I've now got the point cloud – including colour data – out of Nuke and into Max. The bad news is that it's not using Max's own point cloud system, as that requires a very particular format, the exact structure of which eludes me.

What I have done, courtesy of a smart idea from Dave Wortley, is to bring it into Max using Thinkbox's PRT Loader, which can be grabbed as part of the Krakatoa demo. Krakatoa's great and I advise you all to take a look, but if you don't want to make the investment in buying it just yet, the demo supports all the PRT loading your body (and RAM) can handle, including what we need.

The actual process involves running a Python script inside NukeX with the required point cloud node selected. It'll spit out a .csv file which can be loaded into the PRT Loader (and if you create the PRT Loader at the origin, its location will match up perfectly with any cameras you WriteGeo out of Nuke in .fbx format, as long as you ensure the scale is 1.0 when you import). At that point you have the camera and a great point cloud from which to build up a proxy model of the scene, safe in the knowledge that the point cloud was generated using the same camera you'll be projecting from.

Once I tidy up the code, I'll release it on here – at the moment it has no UI and it just spits files out to your desktop (Y-Up, no less), but I'll try and clean it up ASAP.

In the meantime, please take a quick look at the video below showing how it's working so far:

NukeX Pointcloud in 3ds Max from Dan Grover on Vimeo.

Thanks,

Dan

Quick Bit of Python Stuff

My Python journey is just beginning, but I've gotten to the point where I'm beginning to get up to speed with the basic syntax, which makes thinking through a lot of problems much easier! Ifs, loops, all that jazz.

But the two biggest differences compared to MaxScript that I've found so far are as follows:

1 – You have to "import" modules to get additional functions. This is good and bad: on the one hand, there are a bunch built in and you can download or make more, which means the pool of available functions is not only larger but also expandable (again, compared to MaxScript). On the other, for company-wide distributed scripts it means I need to make sure everyone has the correct modules installed – but that's easy enough.

2 – You can do Ifs, Elifs and Elses – this might not be anything unusual to other coders, but MaxScript only had the first and last of those – there's no else-if. It lets you completely control the flow of a set of if conditionals in a way that you can't with MaxScript. Strictly speaking it doesn't let you do anything new, because you can fake it with sequential if statements and flag variables, but sod all that – this makes it so much easier (quick example below). I likey!
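For anyone else coming from MaxScript, it looks like this – exactly one branch wins, checked top to bottom:

```python
# if/elif/else: the chain stops at the first true condition.
frame_count = 250

if frame_count == 0:
    print('Nothing to render')
elif frame_count < 100:
    print('Quick job')
elif frame_count < 1000:
    print('Overnight job')
else:
    print('Book the farm for the weekend')
```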

Python Adventure

My Raspberry Pi arrives tomorrow, and I have my book all about coding in Python, which I've started to read through. I've also started looking at the Max SDK documentation for the Python stuff – it's going to be a long road, but I'm excited about it!

I currently have a MaxScript that remaps assets from wherever they are to another, single folder, by copying all the assets there and then remapping the paths, including XRefs (and nested XRefs). This is a key part of our cloud-based rendering system, but the problem is that some of the XRef Max files are very large, and whilst they zip up nice and tiny, it's not possible to remotely request a local unzip on the server they get uploaded to. So what I'm hoping to do with Python (the server also runs a WAMP stack) is to have a standalone Python script up there that listens on a port and unzips files on request. Or perhaps I can do it by dropping a small text file into a given folder that the Python script checks, unzipping the file named in the text file, then deleting the text file – something like the sketch below? I'll need to experiment…
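That second idea would only take a few lines of standard-library Python – here's a rough sketch, with the drop-folder path and the one-archive-path-per-request format entirely hypothetical:

```python
# Poll a folder for small "request" text files, each naming a .zip already
# uploaded to the server; extract the archive in place, then delete the
# request file. Paths and the request format are hypothetical.
import os
import time
import zipfile

WATCH_DIR = '/var/render/unzip_requests'

while True:
    for name in os.listdir(WATCH_DIR):
        if not name.endswith('.txt'):
            continue
        request = os.path.join(WATCH_DIR, name)
        with open(request) as f:
            archive = f.read().strip()          # path to the .zip
        if os.path.isfile(archive):
            with zipfile.ZipFile(archive) as z:
                z.extractall(os.path.dirname(archive))
        os.remove(request)                      # request handled
    time.sleep(5)                               # cheap polling interval
```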

Dan

Raspberry Pi and Python fun!

I've been so, so lazy about learning Python. I've long wanted to write standalone or web apps (and we have a WAMP stack running in the cloud courtesy of Amazon, so there's definitely somewhere to use it – in fact, I know exactly what I want to do with it!), but I'm starting with something else: a Raspberry Pi and a book teaching Python for beginners with the Raspberry Pi in mind! So I'm adding a new tag, and I hope to update this blog on how I do.

Dan

Backburner fun!

Hi All,

So, now I've got a handle on fiddling with Backburner via MaxScript (whilst trying to avoid its… subtleties, as described here and here), I've been having some fun using Backburner for some interesting tasks.

One of the most useful, but obvious – insomuch as it's right there in the cmdjob help entry – is submitting After Effects renders to the farm. It's very easy to write a little bit of code, wrapped up in a UI, that makes the tasklist file described there (a comma-separated file detailing the name and frame range of each task). Submit this along with the location of the After Effects .aep file and the comp name, and you're good to go – something like the sketch below.
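In outline it's something like this (Python for brevity; the manager hostname, paths, and the exact cmdjob/aerender flag spellings vary by version, so treat them as assumptions and check cmdjob's own help):

```python
# Build a tasklist (one "name,start,end" line per chunk), then hand cmdjob
# an aerender command, with %tp tokens pulling frames from each row.
import subprocess

aep = r'X:\projects\promo\promo.aep'      # hypothetical project
comp = 'Main Comp'

chunks = [('part1', 0, 99), ('part2', 100, 199), ('part3', 200, 299)]
tasklist = r'X:\projects\promo\tasks.txt'
with open(tasklist, 'w') as f:
    for name, start, end in chunks:
        f.write('%s,%d,%d\n' % (name, start, end))

subprocess.call([
    'cmdjob.exe',
    '-jobName', 'AE_promo',
    '-manager', 'bb-manager',             # hypothetical manager hostname
    '-taskList', tasklist,
    '-taskName', '1',                     # column 1 holds the task names
    'aerender.exe',
    '-project', aep,
    '-comp', comp,
    '-s', '%tp2',                         # start frame from column 2
    '-e', '%tp3',                         # end frame from column 3
])
```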

But it got me thinking… all you're really doing when you do this is submit a command via cmd.exe to the machines in question. So… why not go further? The first thing I thought of was a response to a problem we had at work, where we received a file from an off-site colleague that contained a plugin none of us had installed – nor had the farm. We had the choice of either stripping out the offending objects (if we could find the damn things) and replicating their functionality without using the (free) plugin, or going through the usually arduous process of installing the plugin on all the workstations and all the render nodes. It's just copying some .dlo files into Max's /plugins/ directory, but still – if only there were some way of issuing a universal copy command across the network…

The basic code was very simple – it offered the user the ability to pick a file, a destination, and one of the groups (mentioned in the first link up there) on the farm. It then creates a small .bat file which is, effectively, just a command to copy the file to the destination (i.e. COPY "X:\network\file.dlo" "C:\plugins\", then exit 0) and sends a separate job to Backburner for each node in the group, with only that node offered as a server for that job. Once the underlying code was done, I added a few tweaks, such as the ability to add entire folders to the mix, but it's really just an expansion of this pretty basic concept – in outline, something like the sketch below.
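A Python stand-in for the real (MaxScript + UI) tool – node names, the manager hostname and the group source are assumptions:

```python
# Write a one-line .bat that copies the plugin, then submit one Backburner
# job per node, with only that node offered as a server.
import subprocess

source = r'X:\network\file.dlo'
dest = r'C:\Program Files\Autodesk\3ds Max 2013\plugins'
nodes = ['render01', 'render02', 'render03']   # normally read from a group

bat = r'X:\network\copy_plugin.bat'
with open(bat, 'w') as f:
    f.write('COPY "%s" "%s"\r\n' % (source, dest))
    f.write('exit 0\r\n')

for node in nodes:
    subprocess.call([
        'cmdjob.exe',
        '-jobName', 'copy_plugin_' + node,
        '-manager', 'bb-manager',              # hypothetical manager
        '-servers', node,                      # this node, and only this node
        bat,
    ])
```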

Putting these two examples together – After Effects jobs and copying files – gives me some fancy ideas for a bit of fun, but the most obvious one to me was… fonts! Unfortunately, when you install a font it isn't just a matter of copying it to the Windows /fonts/ folder – there's also a registry entry that gets added. Otherwise the above script would be enough to install fonts network-wide – very handy if your AE job uses a non-standard font. However, it shouldn't be hard to add a checkbox to the above script which adds a line to the generated .bat file to perform the appropriate registry tweak – I just haven't done it yet! The extra line would be something like the snippet below.
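The Fonts key here is the standard registry location, but the "Name (TrueType)" value convention is worth double-checking before trusting it farm-wide; the font names are hypothetical:

```python
# Append a registry line to the generated .bat so Windows actually
# registers the copied font rather than just storing the file.
font_file = 'MyFont.ttf'
font_name = 'My Font (TrueType)'
reg_line = ('reg add "HKLM\\SOFTWARE\\Microsoft\\Windows NT'
            '\\CurrentVersion\\Fonts" /v "%s" /t REG_SZ /d "%s" /f'
            % (font_name, font_file))
with open(r'X:\network\copy_plugin.bat', 'a') as f:
    f.write(reg_line + '\r\n')
```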

But if anyone has any other cool ideas on how Backburner could be leveraged for useful or fun tasks, please let me know! If anyone wants any more information on how to do the above in detail, feel free to email me at dan-grover@dan-grover.com.

A note about Maxscript and Backburner

Another thing that I have discovered during my trials and tribulations with getting MaxScript and Backburner to play nicely is that groups in Backburner don't behave as they should under 3ds Max 2013. Before Product Update 6, you couldn't get any information about groups at all – all requests to GetGroupName returned an empty string.

With Product Update 6 come some steps forward – it now returns the correct group name! – but little else. You still can't reliably return a list of the servers in a group. You can't create groups, nor can you edit or delete them. This is true whether or not your connection to the manager has queue control. I have thus come up with a solution that started out as a temporary fix until a new product update or 2014 came along, but it has turned out to work so robustly that I see no need to change it, even should groups get fixed!

The process basically involves defining groups outside of Backburner. In my case, I have a folder full of text files (never seen by the user): each file's name is the desired name of the "group", and the file itself contains a comma-separated list of server names. These are created, edited and deleted using a simple script that reads and writes the text files. The same files are read by my Backburner submission script, which supplies the server list from the file to Backburner (for netrender, via the "job.submit servers:server_array" parameter; for cmdjobs, via the "-servers server1,server2,server3" flag). This has the advantage of being very quick and simple, as well as robust – so long as no bugs creep in that mess with the (currently functioning) use of servers in Backburner submission.
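Reading a group back is trivial – something like this, with the folder location hypothetical:

```python
# Each "group" is just GroupName.txt containing 'server1,server2,server3'.
import os

GROUPS_DIR = r'X:\pipeline\bb_groups'    # hypothetical group folder

def read_group(group_name):
    path = os.path.join(GROUPS_DIR, group_name + '.txt')
    with open(path) as f:
        return [s.strip() for s in f.read().split(',') if s.strip()]

servers = read_group('comp_nodes')
# Feed this to Backburner: the array goes to job.submit servers: on the
# MaxScript side, or gets joined up for cmdjob's -servers flag.
print('-servers ' + ','.join(servers))
```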

The main downside is that this is all set at submission time: though you can change the contents of the text files defining the groups whenever you like, this doesn't change jobs that have already been submitted. Of course, you can always change which servers are assigned to a job in the Backburner Monitor, so it's not like you need to resubmit if you realise there was a problem with the group.

Anyway, that’s my solution, and hopefully it could help someone if they find themselves in a similar position.

MaxScript, Backburner and Dependencies!

Hi All,

I have today finally made progress with a problem I've been contending with, on and off, for a few weeks now: using MaxScript and Backburner with dependencies. Just submitting Max renders to Backburner using MaxScript isn't a problem, though for some unutterable reason dependencies are not supported. There are a handful of options out there to try and solve this, namely…

– Sending the job to Backburner suspended, without any dependencies, then setting the dependencies by editing the (unused) <DependsOn> XML tag in the Backburner job folder, before archiving and unarchiving the job (so that the XML file gets re-read). This is problematic, as the only way to archive and unarchive a job (also not a function available via MaxScript!) is via Telnet, either through Python or dotNET. I managed to get this working, very roughly, using dotNET, but Telnet is not the most elegant of things, especially in an automated system.

– Setting post-render scripts that perform a certain task when a render is complete. The problem here is that the post-render script is called every time a render server finishes its part of the job – which might not necessarily be the end of the actual job, of course. Whilst this is potentially surmountable by having the script check which frame was rendered, it also meant that each render node would need a licensed copy of Max if it were to open and close files.

– Using cmdjob.exe, the command-line backburner queue magician. This is what I have ended up using.

The solution is actually relatively elegant now. I can't post actual code as this is for a paid job, but the basic process is this: cmdjob.exe can send jobs to Backburner which have dependencies. So you have a cmdjob task launch your actual render tasks. Instead of submitting scenes directly from MaxScript to Backburner, have cmdjob.exe run on Backburner, call up a copy of Max and run a script in it, with that script containing the instructions to send a render job to Backburner. Because the render job isn't sent until the cmdjob executes, and because cmdjobs can have dependencies, you effectively get dependencies through MaxScript.

Which is easier said than done! So here is a more detailed approach to the process:

– The user loads a scene they want to render.

– When they run the script, a series of other scripts are generated and saved alongside the .max file. These scripts contain all the instructions needed to alter the scene as per the user's wishes (for example, there may be three scripts – one simulates and saves out a particle sequence, one loads this sequence and pre-calculates a GI solution, and the third loads both the generated particles and the pre-calculated GI solution and renders the final frames) and then submit it to Backburner as a netrender job, before closing the instance of Max.

– Instead of simply sending the first job to the render farm via Backburner in MaxScript, a command-line call is made to cmdjob.exe (more info here) to load a copy of Max and run a certain script – in this case, the first one we just generated. The crucial thing to know here is that cmdjob.exe jobs CAN be set to be dependent on other jobs (see the sketch after this list).

– So the cmdjob.exe job is sent to Backburner, and is picked up by a render node. This machine needs a licensed copy of Max, and for that reason I recommend a special machine dedicated to these sorts of tasks. It doesn't need a fast processor or a fancy graphics card, but it does need a lot of RAM, as it will be opening all your Max scenes.

– This machine opens Max and runs the script that was generated. This script basically contains all the options needed (such as turning on any particle generators, setting output paths for precalculated GI, etc.), then submits the scene to Backburner via netrender (Job A). Crucially, it also submits the NEXT cmdjob.exe task to Backburner, dependent on the job it's just sent (Job B, dependent on Job A). It then closes that instance of Max, and Backburner marks the cmdjob task as finished.

– Next, Backburner finds itself with two new tasks – Job A and Job B from above. Job B is dependent on Job A. So Job A is set off to render, and when it's finished, Job B begins – and the same process as the previous step starts all over again. This time it loads up the next script, submits that render (Job C), and submits another cmdjob (Job D), again dependent on the one it just sent.

– This process continues ad infinitum.
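The cmdjob call made at each step looks roughly like this – a hedged sketch only: the -dependencies flag spelling, the manager hostname and the Max batch flags should all be checked against your own Backburner and Max versions:

```python
# Launch 3ds Max on the dedicated node to run the next generated script,
# with the job held until the render we just submitted completes.
import subprocess

subprocess.call([
    'cmdjob.exe',
    '-jobName', 'chain_step_02',
    '-manager', 'bb-manager',           # hypothetical manager hostname
    '-dependencies', 'render_step_01',  # wait for the render job (Job A)
    '-servers', 'maxbox01',             # the licensed-Max machine
    r'C:\Program Files\Autodesk\3ds Max 2013\3dsmax.exe',
    '-U', 'MAXScript', r'X:\jobs\shotA\02_precalc_gi.ms',
])
```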

There are a lot of complications here. Do you want to hard-code the whole process of what submits what? My solution was to have the very first script, run by the user, generate all the scripts (.ms files prefixed with numbers indicating their order) to be used by the cmdjobs, and one of the last lines of each of those scripts is a "fileIn" of another script ("fileIn" being the scripting equivalent of XRefing). That script deletes the .ms which called it and looks in the folder to see if there are any more. If there are, it launches a new cmdjob running the first script alphabetically. Thus, when that script runs and submits its next job, it will again delete the script which called it and look for the next. This way, I can daisy-chain an almost unlimited number of scripts off one another – the logic is sketched below.
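The real implementation is MaxScript (hence fileIn), but the control flow is simple enough to illustrate in Python – the folder layout and the launch_cmdjob helper are stand-ins, not the actual code:

```python
# Each generated script ends by handing control here: delete the script
# that just ran, then launch the alphabetically-next one via cmdjob.
import os

JOB_DIR = r'X:\jobs\shotA'              # hypothetical per-job script folder

def launch_cmdjob(script):
    """Stand-in for the actual cmdjob submission shown earlier."""
    print('would submit a cmdjob running ' + script)

def advance_chain(current_script):
    os.remove(current_script)           # remove the step that called us
    remaining = sorted(f for f in os.listdir(JOB_DIR) if f.endswith('.ms'))
    if remaining:                       # numbered prefixes keep the order
        launch_cmdjob(os.path.join(JOB_DIR, remaining[0]))
```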

This took me a while to work out (thanks in no small part to a few frustrating Max bugs!), but it's working quite well, with the added benefit of letting users submit jobs to Backburner that previously couldn't have been sent there. If you have any questions about the process, please feel free to email me at dan-grover@dan-grover.com and I'll try and help as best I can!

Thanks,

Dan