
 

Personal Render Farm 1


In a personal or firm-bound Render Farm you can start off very simply, with one server and one client machine, and add more client machines as you go along. An advantage is that the farm can grow with demand: just add one or more machines to the existing ones and a faster render setup with more horsepower is available. But it comes with a twofold price: the needed hardware and the needed software licenses.

'Network rendering' no longer peaks as a search term in Google's search engine.

 

Seven years ago distributed computing was out of reach for home users. Now, with (sometimes) older machines or machines that are really cheap, you can build a powerful Render Farm without a massive financial outlay or a degree in computer science, although a sound knowledge of computer hardware and networks is advisable.

ButterflyNetRender setup

Multi-core computers reduce rendering times and therefore make network rendering less of an issue for those doing fairly simple projects. For those doing projects with huge amounts of compositing, vectors, radiosity and so on, render times can still be enormously long, and in those cases a true Render Farm becomes interesting. My guess, from reading the render forums over the years, is that not many users fall into this category.

One way of looking at the distributed render model is the Beowulf cluster (designed by Donald Becker): in principle a network of computers that acts like one big single machine. The operating system must be tricked into thinking that all components belong to a single device. The big advantage is that any piece of software designed for a multi-core CPU will work, because the render software thinks it is running on an ordinary operating system with standard hardware. Scientific use is prominent.

The downside of this fantastic idea is that it is limited to the use of an open-source OS, and almost all commercial render software will not run in such an environment, reliable as it is. What a pity!

'Home Made' Render Farm

 

Some fine render programs on Linux:

http://www.techdrivein.com/2011/08/5-open-source-3d-modelling-rendering.html

Art of Illusion (free and open-source 3D modelling and rendering application)

Blender, the most popular of all renderers

http://blogs.computerworld.com/pixars_rendering_software_big_on_linux_servers_not_mac

Pixar's rendering software: big on Linux servers.

POV-Ray Full-featured ray tracer. http://www.povray.org/

Radiance Suite of tools for performing lighting simulation

Sunflow Rendering system for photo-realistic image synthesis

YafRay One of the best-known and most powerful ray tracers.

 

http://www.textureszone.com/tutorials/4-applications-reviews/69-12-free-3d-high-quality-rendering-softwares.html

Kerkythea Another well known render program

LuxRender open source rendering system

Ogre

PIXIE, RenderMan renderer

Render Principles book 

Render Farm in a Cluster

Then the other option: a Render Farm with several computers in a cluster, each of them with its own operating system and resources. Sometimes these are quite ordinary standard machines with CPU, RAM, graphics card, Ethernet and so on. These machines must be told what to do when they start up. That is the job of the queue manager. We leave everything to that managing software and we are up and running. Or do we?

Before we can start, we need render software on the machines, with the corresponding licenses. And before queuing jobs there must be some way to tell which machine has to do what, and when.

It would be nice to have something like a batch-render software system that could break up a standard rendering process into several smaller separate jobs, for instance by dividing the picture into four different pieces that are put back together afterwards, or by redirecting single frames to different client machines, as in the sketch below.
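As a rough illustration of that last idea, here is a minimal Python sketch that divides an animation's frame range over a handful of clients, round-robin. The machine names and the frame range are made up; a real queue manager does essentially this, plus error handling and reporting.

    # Minimal sketch: divide a frame range over the client machines, round-robin.
    # Client names and frame numbers are placeholders for your own farm.
    clients = ["render01", "render02", "render03", "render04"]
    first_frame, last_frame = 1, 240

    jobs = {name: [] for name in clients}
    for frame in range(first_frame, last_frame + 1):
        jobs[clients[frame % len(clients)]].append(frame)   # frame i -> client i mod N

    for name, frames in jobs.items():
        print(name, "renders", len(frames), "frames:", frames[0], "...", frames[-1])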

The server machine is sometimes called the master and the client machines slaves.

The easiest method is to work with standard render software on each machine (except the queuing server) and to queue different render files to each client machine.

Some queuing managers support bucket-based distribution: splitting a single frame into smaller tiles and sending different sections of the same image to different machines. Of course this feature is limited to one, or sometimes a few, render brands and version numbers, and it is hardly ever available.

Afterwards the queue manager will reconstruct the complete picture. Job done.
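For that reassembly step, a minimal Python sketch could look like the one below. It assumes the Pillow imaging library is installed and that four clients have each rendered one quarter of a 1920x1080 frame; the file names and the resolution are purely illustrative.

    # Stitch four rendered tiles back into one frame (requires Pillow).
    from PIL import Image

    width, height = 1920, 1080
    tiles = {
        (0, 0): "tile_topleft.png",
        (width // 2, 0): "tile_topright.png",
        (0, height // 2): "tile_bottomleft.png",
        (width // 2, height // 2): "tile_bottomright.png",
    }

    canvas = Image.new("RGB", (width, height))
    for (x, y), path in tiles.items():
        canvas.paste(Image.open(path), (x, y))   # drop each quarter into place
    canvas.save("frame_complete.png")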

All the completed pictures, and before that the project data, are stored on a separate (most of the time) FireWire or Gigabit network disk. This is the drive shared with all the client machines. The clients get their jobs from the queue server and report back when they are finished. The project data goes straight to the shared drive.
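One very simple way to implement this hand-out-and-report-back cycle, without any commercial queue manager, is to let the server drop one small job file per task on the shared drive and let each client claim a file by renaming it. The sketch below assumes such a layout; the paths and the render command ("my_renderer") are placeholders for whatever your own farm uses.

    # Minimal client loop: claim a job file from the shared drive, render it,
    # then move it to done/ as the "report back". Paths and command are placeholders.
    import os, shutil, socket, subprocess, time

    SHARE = "/mnt/renderfarm"                      # the shared network disk
    JOBS = os.path.join(SHARE, "jobs")
    DONE = os.path.join(SHARE, "done")

    while True:
        pending = sorted(f for f in os.listdir(JOBS) if f.endswith(".job"))
        if not pending:
            time.sleep(30)                         # nothing queued, check again later
            continue
        job = os.path.join(JOBS, pending[0])
        claimed = job + "." + socket.gethostname() # no longer ends in .job, so others skip it
        try:
            os.rename(job, claimed)                # atomic rename: this client owns the job
        except OSError:
            continue                               # another client grabbed it first
        with open(claimed) as f:
            scene = f.read().strip()               # the job file just names the scene to render
        subprocess.run(["my_renderer", scene])     # placeholder render command
        shutil.move(claimed, DONE)                 # report back by moving the job to done/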

In choosing the right hardware, with CPU and RAM, it is advisable to figure out which CPU speed and number of cores are optimal against the selling price. RAM should be sufficient to hold the whole picture and the accompanying files and data: 16 GB should be enough for most applications, and for smaller resolutions and simpler 3D models 8 GB will also do. Sometimes lower-clocked cores are cheaper to buy, so more of them can be purchased.
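To get a feel for the "whole picture in RAM" requirement, a back-of-the-envelope estimate like the one below can help. It assumes an uncompressed 32-bit float RGBA frame buffer at UHD resolution; scene geometry and textures come on top of that.

    # Rough size of one uncompressed frame buffer (assumed: float RGBA, UHD).
    width, height = 3840, 2160
    channels, bytes_per_channel = 4, 4             # RGBA, 32-bit float
    frame_bytes = width * height * channels * bytes_per_channel
    print(f"{frame_bytes / 2**20:.0f} MiB per frame buffer")   # about 127 MiB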

A client machine does not need its own keyboard, mouse and monitor; those are only needed to install the operating system and the render software. A suitable form of remote control is enough for maintenance.

 

Sharing

With a tool such as Remote Desktop or a free VNC app you can control each client machine in your network remotely. Each machine appears in a separate window on your desktop, and your mouse and keyboard take over its controls.

 
