What is Reality?
You have rendered an image with radiosity, but what exactly have you made?
The output of a radiosity renderer is an image in which each pixel consists of 3 floating point values.
When the image is displayed, each of those channels is quantized to 256 steps (0 - 255).
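The step from floating-point renderer output to display values can be sketched as a simple clamp-and-quantize. The 0.0 - 1.0 channel range here is an assumption about how the renderer normalises its output, not something any particular renderer guarantees:

```python
def to_display_value(channel: float) -> int:
    """Quantize one floating-point channel (assumed 0.0-1.0) to an 8-bit display step."""
    clamped = max(0.0, min(1.0, channel))  # out-of-range light simply clips
    return round(clamped * 255)

# Hypothetical floating-point radiosity output for one pixel (R, G, B);
# note the blue channel is brighter than the display can show and clips.
pixel = (0.25, 0.5, 1.7)
display_pixel = tuple(to_display_value(c) for c in pixel)
print(display_pixel)  # (64, 128, 255)
```

The clip at 1.0 is exactly the problem the rest of this article is about: everything brighter than the display's white ends up at the same value.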
CIE RGB color gamut: CIE 1931 xy chromaticity diagram showing the gamut of the wide-gamut RGB color space and the location of the primaries. The D50 white point is shown in the center.
This is the 'normal' RGB color scheme used for computers and displays. We leave aside spectral rendering, HDRI and RAW photos here.
The range of image brightness in a natural scene is very wide, because of the big differences between the light sources (the sun, or artificial lights in the scene) and the surfaces they illuminate. If you measure the brightness in a natural scene, you find the range is nearly unmeasurable: it is far too large. Computer screens and colour prints cannot mimic that wide range of brightness in any way.
Yet people (and the marketing departments of render brands) never argue about this kind of approximation to realism. Confusion all over.
So what is the solution to this confusion about realism?
What can we, or must we do, to accomplish that?
Nothing, really: we accept that photographs act as a kind of presentation of 'reality'. In the same way, the RGB output of a computer screen is accepted as a 'physical model' of the light in our virtual 3D scene.
The screen, the image file and the image printed on paper are all representatives of a rough approximation of reality. We faced the same problems with the introduction of the photo camera. Exposure is the main issue: the diaphragm is not going to bridge the wide gap between the sun out there and its representation on film, or now on a digital sensor that shuffles bits around to make a digital file.
Strangely enough, the now historic slide film is in fact very good. With slide transparencies we can better resemble the wide range of brightness out there in nature, because the film's colour response closely parallels nature's response to light and colour, and because the making process (camera with lens and film) is reversed in viewing (projector, light source and screen), which brings back a lot of quality. Arguably it is more physically right (a claim of render brands) than most, if not any, digitally rendered images.
Slides and transparencies with unbeaten sharpness and brilliance.
Slides which are indistinguishable from analogue slides.
Duplicates worthy of the name.
Leave the path of light unchanged, use only those rays that work for you.
Reversal films are the preferred film choice of professional photographers for images intended for reproduction in print media. This is because of the films' high contrast and high image resolution compared to negative (print) films.
Leitz prado slide projector from the past
July 1, 2009
Color reversal film produces positive transparencies, also known as diapositives.
Over the active dynamic range of most films, the density of the developed film is proportional to the logarithm of the total amount of light to which the film was exposed, so the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure. The plot of the density of the film image against the log of the exposure is known as an H&D curve.
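The H&D relationship above can be put into a small numeric sketch. The gamma and base (fog) density values below are illustrative assumptions, not figures for any particular film:

```python
import math

def density(exposure: float, gamma: float = 0.7, base: float = 0.1) -> float:
    """Density over the linear part of the H&D curve: D = base + gamma * log10(E).

    gamma (the slope) and base (fog density) are made-up illustrative values.
    """
    return base + gamma * math.log10(exposure)

def transmission(exposure: float, gamma: float = 0.7, base: float = 0.1) -> float:
    """Transmission T = 10**(-D), i.e. proportional to E**(-gamma)."""
    return 10 ** (-density(exposure, gamma, base))

# Doubling the exposure raises density by a fixed gamma * log10(2),
# regardless of the starting exposure -- that is the logarithmic response.
d1, d2 = density(100.0), density(200.0)
print(round(d2 - d1, 4))  # 0.2107
```

This is why film compresses a huge exposure range into a modest density range: each doubling of light adds only a constant amount of density.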
Light path through the opto-lens system of a 35 mm slide projector.
Fundamentals of Film Exposure
Density, the H&D curve and gamma
Interesting 'laws' of the way we look at images
Calibration of computer screen
They assume perfection and controlled conditions, which we know is not always realistic. Your monitor works in RGB: just three colors, red, green and blue. It is rear-illuminated; white is the absence of density, and black is the absence of light.
Prints are created in CMYK: four colors, cyan, magenta, yellow and black. White comes from the substrate, and black is added because cyan, magenta and yellow together make an ugly brown. The CMYK gamut is much smaller than the RGB one. On top of that, even with very accurate profiles, the nature of your paper will affect how the image looks and how the tonal range records.
You can see that the two outputs are very different, so I don't think it's realistic to assume that they will match. Look for the "feeling" of the image. Density should match, but there will be variables.
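The mismatch between the two outputs can be illustrated with the textbook naive RGB-to-CMYK conversion. Real conversions go through ICC profiles and account for ink and paper behaviour, so this is only a sketch of the idea that black (K) replaces the common component of C, M and Y:

```python
def rgb_to_cmyk(r: float, g: float, b: float):
    """Naive RGB (0-1) to CMYK conversion with black generation.

    Real print workflows use ICC profiles; this only shows the arithmetic.
    """
    k = 1.0 - max(r, g, b)          # black: the density shared by all channels
    if k == 1.0:
        return (0.0, 0.0, 0.0, 1.0)  # pure black: no CMY ink needed
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0, 0.0) -- pure red
```

Even when the arithmetic is exact like this, the printed result still depends on ink and paper, which is exactly why the two outputs will never truly match.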
SENSITOMETRIC AND IMAGE-STRUCTURE DATA
ISO technical exploration
Mar 11, 2013
Small abstract of complete article.
Most people understand the practical use of ISO, but what is it, where does it come from, and what's the difference between ISO in film and digital? I'm going to explore the history and technical underpinnings of the system. If you've ever wondered what ISO means or how it works, this one's for you!
For large format printing purposes, though, you can see why it's vital to shoot with lots of light and at base ISO. Hence also the maxim "expose to the right," meaning get the image as bright as possible on the histogram without clipping highlights. Not only does that maximise the amount of light signal compared to the reasonably fixed noise level of the imaging electronics, but the way the data is digitised means that more information can be stored in the highlights than in the shadows.
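The "more information in the highlights" point follows directly from linear digitisation: each stop down halves the number of available code values. A small sketch, assuming an idealised linear sensor (12-bit here, purely for illustration):

```python
def levels_per_stop(bit_depth: int = 12):
    """Count the code values that fall in each stop of a linear encoding.

    Index 0 is the brightest stop (the top half of the numeric range).
    """
    total = 2 ** bit_depth
    counts = []
    hi = total
    for _ in range(bit_depth):
        lo = hi // 2
        counts.append(hi - lo)  # values in this one-stop interval
        hi = lo
    return counts

# The brightest stop holds half of all code values; the deepest shadows
# get only a handful, which is why "expose to the right" pays off.
print(levels_per_stop(12)[:4])  # [2048, 1024, 512, 256]
```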
Reality, or just another rendered image?
RELATIONSHIPS BETWEEN PHYSICAL MEASUREMENTS AND USER EVALUATION OF IMAGE QUALITY IN MEDICAL RADIOLOGY – A REVIEW
The term image quality refers mainly to the technical aspects of the image: primarily contrast, sharpness and noise. Even when one speaks of clinical image quality, …
If the opinion is just based on an impression of quality, the usefulness of the assessment may be questionable (Vucich 1979, Barrett and Myers 2004, Månsson 2000).
HISTORY AND EVOLUTION
Working constraints in lens design
From the very beginning, lens makers were constrained by the properties of light and its behaviour in glass, the availability of suitable glass, and the practicalities and limitations of the manufacturing process. The physical properties of glass cause it to bend or refract light as it passes through a lens, but in the process it will also separate or diffract light into its component colors. A perfect photographic lens would bring all light, of all colors, from all portions of the lens into focus on a flat plane without distortion. Aberrations are problems of focus caused by the inability of a lens to bring light from all portions of the lens into focus at the same point, or the inability to bring light of all colors to focus at the same point. Distortions are a problem of geometry.
Misunderstanding and Confusion:
(What to do with the image once you've rendered it)
The output of a radiosity renderer is an image where each pixel consists of three floating point values, one for each of red, green and blue. The range of brightness values in this image may well be vast. As I have said before, the brightness of the sky is very much greater than the brightness of an average surface indoors. And the sun is thousands of times brighter than that. What do you do with such an image?
Your average monitor can at best produce only dim light, not a lot brighter than a surface indoors. Clearly you cannot display your image directly on a monitor. To do this would require a monitor that could produce light as bright as the sun, and a graphics card with 32 bits per channel. These things don't exist, for technical, not to mention safety, reasons. So what can you do?
Most people seem to be happy to look at photographs and accept them as faithful representations of reality. They are wrong. Photographs are no better than monitors for displaying real-life bright images. Photographs cannot give off light as bright as the sun, but people never question their realism. Now this is where confusion sets in.
Our vision is just about the most important sense we have. Every day I trust my life to it, and so far it hasn't got me killed. Frequently it has saved my life and limb. This was an important sense for our ancestors too, right back to the very first fish or whatever we evolved from. Our eyeballs have had a long time to evolve and have been critical to our survival, and so they have become very good indeed. They are sensitive to very low light levels (the dimmest flash you can see is as dim as 5 photons), and yet can cope with looking at the very bright sky. Our eyeballs are not the only parts of our vision; perhaps even more important is the brain behind them. An incredibly sophisticated piece of circuitry, poorly understood and consisting of many layers of processing, takes the output of our eyeballs and converts it to a sense of what actually exists in front of us. The brain has to be able to recognise the same objects no matter how they are lit, and actually does an amazing job of compensating for the different types of lighting we encounter. We don't even notice a difference when we walk from the outdoors lit by a bright blue sky to the indoors lit by dim yellow lightbulbs. If you've ever tried to take photos in these two conditions, you may have had to change to a different type of film to stop your pictures coming out yellow and dark.
Try this: go out on a totally overcast day. Stand in front of something white. If you look at the clouds, you will see them as being grey, but look at the white object and it appears to be white. So what? Well, the white thing is lit by the grey clouds and so can't possibly be any brighter than them (in fact it will be darker), and yet we still perceive it to be white. If you don't believe me, take a photo showing the white thing with the sky in the background. You will see that the white thing looks darker than the clouds.
Don't trust your eyes: They are a hell of a lot smarter than you are.
So what can you do? Well, since people are so willing to accept photographs as representations of reality we can take the output of the renderer, which is a physical model of the light in a scene, and process this with a rough approximation of a camera film. I have already written an article on this: Exposure, so I will say no more about it here.