Überware™


> Forum Home > Feature Requests > Automated job configuration and distribution

  Sun, 21/Nov/2010 12:36 PM
Marc
11 Posts
Hi,

After using Smedge in production at our site for a while, here's a suggestion for a possible new feature I'd like to see in the future.

This is a bit complex to explain, but I'll try my best (non-native English speaker here).

We have a mixed environment of 32- and 64-bit Linux and Windows machines with various CPU/RAM configurations. One problem we face on a daily basis is that some engines get blocked for a long time because the job requires more RAM than the engine can handle (and thus swaps to disk). Another is finding the optimal packet size for work units across jobs. Some jobs render really quickly and could be submitted with a large packet size; others take hours per frame and should be submitted frame by frame.

We would like Smedge to automate the whole job configuration process, leaving the user to just submit the job, set a priority, and be done with it.

In order to do that, Smedge must have some additional information about the engines/jobs:

- how much RAM the current job requires (not known until rendered)
- how long one frame takes to render (not known until rendered)
- how much RAM the current engine provides (known)

In order to gain the missing information, Smedge could follow this workflow:

- always start the job in a sample mode with a packet size of one to determine the missing information about RAM consumption and render time
- compare the sample frames' RAM consumption with the amount of RAM available on the engines, and disable those engines that would otherwise swap to disk
- instead of setting the packet size statically for the whole job, calculate it on the fly based on the render times from the sample pass and the job's priority
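The workflow above could be sketched roughly as follows. This is a hypothetical illustration only; the names (`Engine`, `eligible_engines`, `packet_size`) and the 10-minute packet target are my own assumptions, not anything from Smedge itself.

```python
# Hypothetical sketch of the proposed sampling workflow.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from math import floor

@dataclass
class Engine:
    name: str
    ram_gb: float

def eligible_engines(engines, sample_ram_gb):
    """Keep only engines with enough RAM for the sampled per-frame usage,
    so the rest never pick up the job and swap to disk."""
    return [e for e in engines if e.ram_gb >= sample_ram_gb]

def packet_size(sample_seconds, target_packet_seconds=600.0, max_packet=20):
    """Pick a packet size so one packet takes roughly target_packet_seconds.

    Fast frames get large packets; frames that take hours go one at a time.
    """
    if sample_seconds <= 0:
        return 1
    return max(1, min(max_packet, floor(target_packet_seconds / sample_seconds)))

engines = [Engine("node01", 4), Engine("node02", 16), Engine("node03", 32)]
print([e.name for e in eligible_engines(engines, 8.0)])  # ['node02', 'node03']
print(packet_size(30.0))    # fast frames -> 20 frames per packet
print(packet_size(7200.0))  # 2-hour frames -> packet size 1
```

In practice the scheduler would re-run `packet_size` as more frames complete, so the estimate improves beyond the single sample pass.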

If something like this were possible, users would not have to worry about manually excluding engines or determining the "correct" packet size. This would be a huge time saver in our daily workflow.

Hope the basic idea is understandable. I realize this feature is probably far more complex than I've described it here; the intention was just to share the rough idea that has been floating around in my mind for some time. If, for some reason, this is totally out of the question or simply a bad idea, feel free to shoot me down. :)
   
  Mon, 22/Nov/2010 1:47 AM
Robin
1138 Posts
Hi

Dynamic packet sizing is already in progress, but completely automatic memory detection is not always possible.

However, I have just added manual memory distribution as a new means of determining the work load for a machine, rather than using the core count. This means that if you look at your memory usage ahead of time, you can use it to specify the number of workers an Engine can take. For example, it will start up to 2 processes set to use 8 GB of RAM on a 16 GB Engine, and won't start it on a machine with only 4 GB.

Note that Smedge does not actually limit the process's memory to the number set (at this time), nor does it currently adjust the worker count based on actual running memory usage. But it's a step in that direction.
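The arithmetic behind this memory-based distribution is simple: the worker count is the engine's RAM divided by the job's declared RAM, rounded down. A minimal sketch, assuming that rule (the function name is mine, not Smedge's API):

```python
# Minimal sketch of memory-based worker counts as described above:
# floor(engine RAM / job RAM) workers, and zero when the job needs
# more RAM than the machine has. Illustrative only.
def max_workers(engine_ram_gb: float, job_ram_gb: float) -> int:
    if job_ram_gb <= 0:
        raise ValueError("job RAM requirement must be positive")
    return int(engine_ram_gb // job_ram_gb)

print(max_workers(16, 8))  # 2 workers on a 16 GB Engine for an 8 GB job
print(max_workers(4, 8))   # 0 -> job won't start on a 4 GB machine
```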

The new build will be ready shortly. I'll post a message on the Announcements forum with links for the download.

-robin
   
  Mon, 22/Nov/2010 2:56 PM
Marc
11 Posts
This is great news!

Thank you for the clarification. We're looking forward to the next build then.
   
  Thu, 25/Nov/2010 8:44 AM
Jamie
102 Posts
That does sound like a good feature. Right now we're using a similar approach, but with historical performance data and in-app submit tools to tune the submission the next time the job arrives on the farm.
   
©2000 - 2013 Überware. All rights reserved