What is Sequence Divide?


Most RR jobs have Sequence Divide enabled.

With Sequence Divide, the server sends the frame range to the clients in segments.

The size of each segment (not the number of segments) depends on the Min/Max Frames settings for Sequence Divide.

Frames that have been sent to a client are shown blue in the framebar.


An example:

Your scene has a frame range of 200 frames.

Sequence Divide Min/Max Frames is set to 10/10.

The server will send the job to clients 20 times; each time, a client gets 10 frames to render.

There will be at most 20 clients rendering the job, never more. 
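
For illustration, here is this arithmetic as a minimal Python sketch (the variable names are invented for the example; they are not RR settings):

  import math

  frame_range  = 200   # total frames in the scene
  segment_size = 10    # Sequence Divide Min/Max Frames = 10/10

  # Number of segments = number of times the job is sent to clients.
  segments = math.ceil(frame_range / segment_size)
  print(segments)   # 20 -> at most 20 clients render this job at once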


Why min and max?


The min/max range gives the server a bit of latitude.

For example, if you have only a few free clients, the server will send larger segments.

If you have a slower client, it will get a smaller segment.
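
As a rough illustration of this idea, here is a hypothetical Python sketch; RR's actual scheduling logic is more involved, and all names and thresholds here are invented:

  def pick_segment_size(min_frames, max_frames, free_clients, client_is_slow):
      # Few free clients -> larger segments; a slow client -> smaller segment.
      size = max_frames if free_clients <= 2 else min_frames
      if client_is_slow:
          size = min_frames
      # The result always stays inside the configured min/max range.
      return max(min_frames, min(size, max_frames))

  print(pick_segment_size(10, 40, free_clients=2, client_is_slow=False))  # 40
  print(pick_segment_size(10, 40, free_clients=9, client_is_slow=True))   # 10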


What should I set as min/max?


The min/max values depend on your frame render time and your scene load time.

They also depend on your company: how many machines you have and what your total render load is.

If your scene takes a long time to load, you should probably use higher values, as the clients have to load the scene file for every render segment they get.


The min value is well chosen if a client renders at least 15-30 minutes on a segment.

The max value should not let a client render for more than 3 hours, because while a client holds a render segment, the server cannot balance clients between users or projects. 
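
This rule of thumb as a small Python sketch (the 120s frame time is an assumed example value, not something RR reports):

  import math

  frame_time_s = 120  # assumed: measured render time per frame

  # Min: a client should render at least ~15 minutes on a segment.
  min_frames = math.ceil(15 * 60 / frame_time_s)   # -> 8
  # Max: a client should not render more than 3 hours on a segment.
  max_frames = (3 * 3600) // frame_time_s          # -> 90

  print(min_frames, max_frames)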


Auto-adjusted by the server?


If the min/max settings are too low, the server tries to increase them.

A client should render at least 2-3 minutes on a segment; otherwise the overhead will be too high, and it can even happen that not all of your clients are rendering.


The server will adjust the segment size if the clients render for less than 60 seconds on a segment.

The reasons for this adjustment are:

  • The load time of the scene file adds to your effective render time per frame.
  • There is a small delay in sending jobs to clients, caused by technical limitations or by explicitly implemented delays that reduce fileserver usage.
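
A hypothetical sketch of such an adjustment check; the doubling step is an assumption for illustration, not RR's actual rule:

  def auto_adjust(segment_size, last_segment_time_s, max_frames):
      # Segments that finish in under 60s carry too much overhead:
      # grow the segment size, but never beyond the configured max.
      if last_segment_time_s < 60:
          segment_size = min(segment_size * 2, max_frames)
      return segment_size

  print(auto_adjust(2, last_segment_time_s=45, max_frames=50))    # 4
  print(auto_adjust(10, last_segment_time_s=260, max_frames=50))  # 10 (unchanged)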




Comparison: render time

The pure frame render time is 20 seconds.

Then it takes 60 seconds for the render application to start up, load the scene and all textures.

Your scene has 200 frames.


Two results of the example with different min/max settings:


min/max = 2 

You send 2 frames to each client, so the total time per segment is (60s + 2*20s) = 100s.
Your effective render time per frame is now 100s/2 = 50s. You have just doubled your frame time.
The whole scene takes 200*50s = 10,000s ≈ 167 minutes ≈ 2.8 hours.

min/max = 10

Your total time per segment will be (60s + 10*20s) = 260s.
Your effective render time per frame is 260s/10 = 26s. Almost no loss.

The whole scene takes 200*26s = 5,200s ≈ 87 minutes ≈ 1.4 hours.
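
The same arithmetic as plain Python, so you can plug in your own numbers:

  load_s, frame_s, total_frames = 60, 20, 200

  for seg in (2, 10):
      segment_s   = load_s + seg * frame_s           # time per segment
      per_frame_s = segment_s / seg                  # effective time per frame
      total_min   = total_frames * per_frame_s / 60  # whole scene, one client
      print(seg, segment_s, per_frame_s, round(total_min))

  # Output: 2 100 50.0 167   and   10 260 26.0 87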





Comparison: render farm efficiency

The pure frame render time is 20 seconds.

This time we assume a shorter startup time of 30s for the render application to start up, load the scene, and load all textures.

In the worst case, RR needs 6 seconds until the job is started on the rrClient.


Two results of the example with different min/max settings:




min/max = 2 

Let's assume the server sends a segment every 5 seconds.
A client takes 6s + 30s + 2*20s = 76s to finish a segment. In these 76s, the server can send jobs to 76s/5s = 15 clients.

After 15 clients have received their jobs, the first one is idle again, so it gets a new segment.

This way, the 16th client never gets a job. 
You are stuck at a maximum of 15 clients rendering. 



min/max = 10

Let's assume the server sends a segment every 5 seconds.
A client takes 6s + 30s + 10*20s = 236s to finish a segment. In these 236s, the server can send jobs to 236s/5s = 47 clients.

You get 47 clients rendering. 
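
The dispatch arithmetic of both cases as plain Python, assuming one segment dispatch every 5 seconds as above:

  startup_s, frame_s, rr_delay_s, dispatch_s = 30, 20, 6, 5

  for seg in (2, 10):
      # Time a client is busy: RR delay + app startup + rendering.
      segment_s   = rr_delay_s + startup_s + seg * frame_s
      # Clients the server can feed before the first one is idle again.
      max_clients = segment_s // dispatch_s
      print(seg, segment_s, max_clients)

  # Output: 2 76 15   and   10 236 47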





"Disable" Sequence Divide


If you set Sequence Divide Min to 0, then the sequence is not divided.

All frames are sent at once to one client.

This is useful, for example, for simulation pre-passes.