Multiple Jobs on one computer
With other render managers you may start multiple clients/workers on one machine.
One client does not know anything about the other clients.
With Royal Render, only one rrClient runs per machine (by default; you can create exceptions).
One rrClient can run up to 8 job threads (slots).
Each job thread is like a new rrClient.
Each job thread can take any job not related to the other jobs.
Using one rrClient application instead of multiple apps saves network connections and allows RR to coordinate multiple jobs on a machine.
A typical setup for companies with larger machines and many different jobs with different requirements is to run, for example, 3 job threads.
(Few companies use all 8 job threads. Exception: if you want to split your GPUs.)
The first 2 job threads are used for 3D jobs, as they require a lot of memory but, once rendering, little network traffic.
The 3rd is used for comp jobs, as they require little CPU but a lot of network traffic.
Multi-instance jobs
In addition to job threads there are multi-instance jobs.
If you have a very small job with many frames you can tell RR that one job should start up to 10 instances (frames) at the same time.
So if you use for example 8 job threads and each job thread starts a 10-instance job, you end up with 80 instances of render applications.
Memory (and/or core) requirement
If you have smaller and larger jobs, then you should set a memory requirement.
Note: This is a requirement, not a limitation!
Example:
Your machine has 100 GB of available RAM.
You have submitted some jobs with a memory requirement of 40 GB and some with a requirement of 80 GB.
RR takes care that these jobs fit onto the machine.
This means the machine can either run 2x 40 GB jobs or 1x 80 GB job.
Note: System/OS memory is taken into account as well.
E.g. if you render on an artist's workstation overnight and the artist forgot to close an app that takes 30 GB, then this client only takes jobs with a requirement of up to 70 GB.
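The check described above can be sketched as follows (a minimal sketch with hypothetical names, not RR's actual implementation):

```python
# Hypothetical sketch of the memory *requirement* check: a client accepts
# a new job only if its requirement fits into the RAM that is not already
# accountedted for by the OS/user apps and by already-running jobs.

def can_take(job_req_gb, total_ram_gb, os_and_apps_gb, running_reqs_gb):
    free_gb = total_ram_gb - os_and_apps_gb - sum(running_reqs_gb)
    return job_req_gb <= free_gb

# 100 GB machine, artist left an app using 30 GB:
print(can_take(80, 100, 30, []))    # an 80 GB job no longer fits
print(can_take(40, 100, 30, []))    # a 40 GB job does
print(can_take(40, 100, 30, [40]))  # but not a second one next to it
```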
Unexpected RAM usage
If a job with a requirement of 40 GB takes 70 GB while rendering, the rrClient recognizes this and will only take another job with a requirement of at most 30 GB.
Limitation of this feature:
Of course, this works only if the machine does not start 2 jobs at the same time.
Example:
If 2x 40 GB jobs are started at the same time, RR "thinks" both will use 40 GB while rendering.
If one of them then ends up wanting to use 70 GB, this job will crash, as the machine does not have (70+40=) 110 GB of RAM.
This is because the job setting is a memory requirement, not a memory limitation.
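The adjustment can be sketched like this (hypothetical names, not RR's code): the client counts each running job at whichever is larger, its declared requirement or its measured usage.

```python
# Each running job is accounted at max(requirement, measured usage), so a
# 40 GB job that actually uses 70 GB leaves room for at most a 30 GB job
# on a 100 GB machine.

def remaining_gb(total_ram_gb, running_jobs):
    # running_jobs: list of (requirement_gb, measured_usage_gb) pairs
    return total_ram_gb - sum(max(req, used) for req, used in running_jobs)

print(remaining_gb(100, [(40, 40)]))  # 60: job behaves as declared
print(remaining_gb(100, [(40, 70)]))  # 30: job overshot its requirement
```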
The core requirement works the same way.
Example:
A 3D job requires at least 14 cores; the comp jobs require at least 4 cores.
A machine with 16 cores may take 1x 3D job or 4x comp jobs.
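The core requirement follows the same accounting as the memory requirement; a minimal sketch (hypothetical names):

```python
# Hypothetical core-requirement check, same principle as for memory.
def fits(core_req, total_cores, used_cores):
    return core_req <= total_cores - used_cores

print(fits(14, 16, 0))      # the 3D job fits on the idle 16-core machine
print(fits(4, 16, 14))      # no comp job next to it (only 2 cores left)
print(fits(4, 16, 3 * 4))   # a 4th comp job next to three others fits
```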
Memory (and/or core) limitation (restriction)
In addition, each job has a memory and core limitation setting.
This setting is very rarely used by our customers.
Most customers do not limit the cores or memory.
Example:
Your machine has 20 cores.
You want to render 2 jobs.
While they are rendering, the OS gives each process the same amount of cores (which is 10 in this case).
But there are times in which the second job slot does not use all cores,
e.g. if a new job was received and is loading, if a frame is pre-processing data, or if a frame re-loads textures.
In this case, the other job thread can use all cores during that time.
Core limitation
The core limitation works by telling the render application how many cores it should use.
RR does not tell the OS which cores should be used for which render process.
The OS distributes all running processes evenly on all cores, which is usually better as it offers a better energy and heat distribution.
Side-note / exception on Windows: if you have told the rrClient to reserve cores for a logged-in artist, then RR keeps these cores completely free of any render process.
Memory limitation
RR tells the OS that the process must not take more than this amount of memory.
So far this works on Windows only!
If the render application requests more memory from the OS, then the OS denies the request.
The render application then either frees unused memory blocks to continue (e.g. textures of objects that are no longer rendered, or previous frames in comp apps)
or aborts the render.
Important Linux note
We have tested several ways to implement this feature on Linux, and none works as desired.
We want the render application to know when it has to free memory in order to continue.
The OS should deny a request from the app for more memory.
So far, Linux has not been able to provide such functionality for a process group.
Linux does not even deny requests for a memory block larger than the total memory of the computer.
The render app always ended up crashing.
And if it crashes anyway, we see no reason to let it crash earlier.
One issue is the Linux memory overcommit.
Another issue is that a process limit like the one used by tools such as ulimit works for one process only.
A renderer might start multiple processes, and if we limit each of these processes to e.g. 40 GB, this does not act as a total limit.
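The per-process nature of such limits can be illustrated with Python's resource module (a sketch of a ulimit-style limit on Linux, not RR's code; each process a renderer spawns would get its own independent copy of this limit):

```python
import resource

# Set a ulimit-style address-space limit of 2 GB for THIS process only.
# A child process spawned by a renderer would inherit its own independent
# 2 GB limit, so this does not act as a total for the process group.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (2 * 1024**3, hard))

try:
    buf = bytearray(4 * 1024**3)  # request ~4 GB in one block
    result = "allocation succeeded"
except MemoryError:
    result = "allocation denied"
finally:
    # Restore the original soft limit (raising it again is allowed
    # as long as it stays at or below the hard limit).
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

print(result)
```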
Note:
We may review this issue again on Linux if there is interest in such a feature.
It may also require changing an OS setting to work.
Important:
Please see Memory Management on this help page.