
Smedge Memory Distribution

  Thu, 02/Dec/2010 7:21 PM
1136 Posts
Smedge 2011 adds the ability to distribute work to the Engines by dividing up the machine's memory instead of, or in addition to, dividing up its CPUs (cores). This feature is in "alpha" testing, meaning that basic tests have been done, but it hasn't been put through its paces in a large variety of environments and hardware.
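As a rough illustration of what dividing by memory could mean (a hypothetical sketch, not Smedge's actual scheduling code; `work_slots` and its parameters are made up for this example), an Engine's capacity becomes the smaller of the CPU-based and memory-based limits:

```python
# Hypothetical sketch of per-Engine capacity when work can be divided
# by cores, by memory, or by both. Not Smedge's real implementation.

def work_slots(total_cores, total_ram_gb, cores_per_job=None, ram_per_job_gb=None):
    """How many jobs an engine could run at once.

    Either constraint may be None, meaning "don't divide by this resource".
    """
    limits = []
    if cores_per_job:
        limits.append(total_cores // cores_per_job)
    if ram_per_job_gb:
        limits.append(int(total_ram_gb // ram_per_job_gb))
    return min(limits) if limits else 1

# An 8-core, 16 GB node running a job that wants 1 core and 4 GB per frame:
print(work_slots(8, 16, cores_per_job=1, ram_per_job_gb=4))  # 4, not 8
```

Dividing only by cores would give this node 8 slots; adding the memory constraint caps it at 4, which is the point of the feature.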

That's where you come in. If you are interested in this feature, the alpha build of Smedge 2011 is available on all platforms for trying it out. Everything else about this build is nearly identical to the current release of Smedge. This means that, in general, testing the "alpha" build is not any more likely to cause problems than running the release build (for any features other than this new system).

If you are interested in testing out memory distribution, download the build for your platform here:




Smedge 2011 will not communicate with Smedge 2010 (at this point). This means that to test 2011 properly, you need to run it on every machine you want to use for testing, and you should not run the older version at the same time, to avoid conflicts.
  Mon, 06/Dec/2010 4:50 AM
177 Posts
What is the advantage of the memory feature?
  Mon, 06/Dec/2010 9:57 AM
102 Posts
I'm imagining it'd solve a problem we often face here: the artist selects 2 CPUs per frame on a pool averaging 2 GB per CPU, but the job actually needs much more RAM than that 2 GB per CPU.

Certainly here, when the threads on a node collectively start using more RAM than is available, the node pages heavily to the drives and slows right down, almost to the point of failure.

As soon as you get a 4 GB-per-frame job running with one core per frame on an 8-core node with 16 GB of RAM, it'll grind that node to a complete standstill, regardless of how widely you stagger the jobs.

Note: SmedgeEngine on such nodes does sometimes get disconnected or fall off if this happens, or if the job crashes (poor submit presets, etc.).

If the artist selected 4 CPUs per frame, the job would run fine. We often find we need to allocate CPUs based on the expected memory usage of that packet.
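The arithmetic behind that scenario can be checked in a few lines (a sketch using the figures from the posts above; `ram_demand` is a made-up helper, not part of Smedge):

```python
# Peak RAM demand on a node when frames are packed by CPU count alone.
def ram_demand(cores, cpus_per_frame, ram_per_frame_gb):
    # Frames run concurrently up to cores // cpus_per_frame.
    return (cores // cpus_per_frame) * ram_per_frame_gb

# 8-core, 16 GB node; the job really needs 4 GB per frame:
for cpus in (1, 2, 4):
    gb = ram_demand(8, cpus, 4)
    print(f"{cpus} CPU(s)/frame -> {gb} GB needed", "(pages!)" if gb > 16 else "(fits)")
```

At 1 CPU per frame the node would need 32 GB and pages; at 4 CPUs per frame it needs only 8 GB, matching the observation that the job then runs fine.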

Robin's new memory-required-per-frame option would presumably offer a new means of allocating the number of cores needed per frame, based on feedback from each node's specs.

This is mostly speculation on my part, as I've yet to play with the latest builds :D

Now, if it altered the job's allocated RAM/CPU in real time as render results start coming in, self-tweaking them to best suit the job, that'd be an awesome feature.
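That self-tweaking idea could be as simple as raising a job's per-frame RAM estimate whenever completed frames report higher actual peaks (purely speculative, like the post itself; `revised_estimate` and the headroom factor are invented for this sketch):

```python
# Speculative sketch: revise a job's per-frame RAM estimate upward
# from the observed peak usage of frames that have already finished.
def revised_estimate(current_gb, observed_peaks_gb, headroom=1.25):
    """Never lower the estimate; pad the worst observed peak by `headroom`."""
    if not observed_peaks_gb:
        return current_gb
    return max(current_gb, max(observed_peaks_gb) * headroom)

# Job submitted at 2 GB/frame, but finished frames peaked at up to 3.5 GB:
print(revised_estimate(2.0, [1.8, 3.5, 3.1]))  # 4.375
```

With an estimate like that feeding back into the dispatcher, later packets would get fewer concurrent frames per node instead of paging it into the ground.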

(I'm kind of planning on using Herald to read the GB used from mantra's output, to revise poorly submitted jobs here.)


Edited by author on Mon, 06/Dec/2010 10:04 AM

©2000 - 2013 Überware. All rights reserved