"Rose"


Index

Rose

Columns

Skysphere

Clouds

Sunglare

Radiosity



Rose

Modelling

In this shot, the CSG on the rose is much more visible. What I actually did was take two prisms, align them perpendicularly to each other and add spheres and cylinders along the edges to round them off. But instead of just building one petal and scaling it, I scripted a macro that takes the petal's measurements and builds a new petal, keeping the size of the rounding spheres/cylinders the same no matter how large the petal is.
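
To make that more concrete, here is a heavily simplified sketch of the idea (not the actual macro from the scene); the name Petal_Sketch, the rounding radius Round_R and all measurements are made up for illustration:

#declare Round_R = 0.02;                 // rounding radius stays constant for every petal

#macro Petal_Sketch(P_Len, P_Wid)
  merge {
    // flat petal body: a thin triangular prism swept along y
    prism {
      linear_sweep linear_spline
      -Round_R, Round_R, 4,
      <0, 0>, <-P_Wid/2, P_Len>, <P_Wid/2, P_Len>, <0, 0>
    }
    // spheres/cylinders along the edges round them off,
    // always with the same radius regardless of the petal size
    cylinder { <0, 0, 0>, <-P_Wid/2, 0, P_Len>, Round_R }
    cylinder { <0, 0, 0>, < P_Wid/2, 0, P_Len>, Round_R }
    sphere   { <0, 0, 0>, Round_R }
  }
#end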

Using a while-loop, the petals get broader and shorter whilst rotating around the center. On close inspection you'd notice that the petals overlap, but I feel it is more important that the petals at least don't stick out of each other's sides; a little overlapping is fine. In the end, the petals look much more closely wrapped that way.
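
A hedged sketch of such a loop, assuming the Petal_Sketch macro above; the petal count, scaling factors and angles here are invented, not the values used in the image:

#declare N_Petals = 12;
#declare I = 0;
#while (I < N_Petals)
  object {
    Petal_Sketch(1.00 - 0.03*I,     // each petal a little shorter ...
                 0.30 + 0.02*I)     // ... and a little broader than the last
    rotate x*(55 - 2*I)             // tilt the outer petals further open
    rotate y*137.5*I                // step around the center of the rose
    pigment { rgb <0.75, 0.05, 0.10> }
  }
  #declare I = I + 1;
#end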

For the curved green leaves, I created a macro which would create dozens of prisms, add spheres/cylinders along the outer edges and then apply a chain of translation/rotation combos. So, the second section receives its own rotation, but then gets translated to fit onto a non-rotated former section. Then both get rotated again, translated to attach to the former section, rotated, translated, and so on. In effect, the 25th element thus gets rotated 25 times and translated 24 times. The result is that the elements are properly attached to each other. Experimenting with how many elements to use and how much to rotate them (using a cosine wave) finally resulted in these nice leaves.
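
Here is a sketch of that chained rotate/translate idea, with a simple box standing in for each rounded prism segment; the macro name, the segment shape and the cosine-driven bend values are all assumptions for illustration:

#macro Leaf_Sketch(N_Seg, Seg_Len, Max_Bend)
  union {
    #declare I = 0;
    #while (I < N_Seg)
      object {
        // stand-in for one short prism segment with rounded edges
        box { <-0.10, -0.01, 0>, <0.10, 0.01, Seg_Len> }
        // chain the transforms: own rotation first, then a translation onto
        // the previous segment, then that segment's rotation, and so on;
        // segment number i ends up rotated i+1 times and translated i times
        #declare J = I;
        #while (J >= 0)
          rotate x * Max_Bend * cos(pi*J/N_Seg)
          #if (J > 0)
            translate z * Seg_Len
          #end
          #declare J = J - 1;
        #end
      }
      #declare I = I + 1;
    #end
  }
#end

object { Leaf_Sketch(25, 0.08, 9) pigment { rgb <0.10, 0.45, 0.10> } }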

The base onto which the leaves and petals are stuck is also a prism. As it isn't seen in the image, I didn't bother much with rounding the edges and such; still, the prism tapers like a pyramid and ends in a tip, so that there's at least a subtle effect of the base shrinking to match the stem.

Finally, the stem is made with a bunch of spheres and cylinders and was modelled using my own BSpline-Macros. The thorns (not visible in the close-up above) are just a set of spheres in a blob that gets scaled down and translated a little to add the curvature.
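
A minimal sketch of such a thorn; the component sizes and offsets below are placeholders:

blob {
  threshold 0.5
  sphere { <0, 0.00, 0.00>, 0.10, 1 }   // fat base component
  sphere { <0, 0.05, 0.02>, 0.07, 1 }   // scaled down and nudged sideways ...
  sphere { <0, 0.09, 0.05>, 0.04, 1 }   // ... to bend the thorn into a curve
  pigment { rgb <0.20, 0.40, 0.10> }
}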

Textures

The textures are simple RGB values; the real trick lies in the finishes. Using specular highlighting, exponential and metallic reflection and a somewhat higher brilliance value (around 4), the texture looks much like ceramic or painted metal, a good fit for such a "technical" rose.
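
A finish roughly along those lines; the exact values below are guesses, not the ones from the scene file:

finish {
  specular 0.9                       // bright specular highlight
  roughness 0.015
  brilliance 4                       // tighter, more metallic diffuse falloff
  reflection { 0.05, 0.5 exponent 1.5 metallic }   // exponential, metallic reflection
}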



Columns

Modelling

The columns are rather simple CSG objects, just a bunch of cylinders, tori and spheres. They are placed with a while-loop to encircle the rose. Instead of using a CSG difference to create the top brim, I've used two open and hollow cylinders, so as to save the extra calculation required to determine the difference. To close the gaps between the inner and outer ring, I just placed two discs with a hole in the middle, fitting the radius of the inner cylinder.
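
A sketch of that construction and of the placement loop; the name Column_Sketch and all measurements are placeholders:

#declare Column_Sketch = union {
  cylinder { <0, 0, 0>, <0, 2, 0>, 0.40 }             // the shaft
  cylinder { <0, 2.0, 0>, <0, 2.2, 0>, 0.55 open }    // outer brim wall
  cylinder { <0, 2.0, 0>, <0, 2.2, 0>, 0.45 open }    // inner brim wall
  disc { <0, 2.2, 0>, y, 0.55, 0.45 }                 // ring closing the top gap
  disc { <0, 2.0, 0>, y, 0.55, 0.45 }                 // ring closing the bottom gap
}

#declare A = 0;
#while (A < 360)
  object { Column_Sketch translate 8*x rotate A*y }   // encircle the rose
  #declare A = A + 45;
#end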

The same applies to the base: instead of cutting the step out of a cylinder, I used a few open cylinders and discs. I haven't tested whether this approach actually renders faster than the CSG difference, because there are more objects involved, though less CSG.

Textures

Especially on the floor I made use of a trick to get the reflection just the way I wanted. On many objects, you can see reflections when looking at them from the right angle, e.g. water surfaces reflect light when viewed at shallow angles.

In this case, I didn't want the rose to get reflected, especially considering that the sky would have been reflected as well. To avoid that, I faded the texture from a non-reflective one at the center to a reflective one at the edge using the cylindrical pattern.
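
A hedged sketch of that fade; T_Dull, T_Shiny and the scale are placeholders, not the actual floor textures:

#declare T_Dull  = texture { pigment { rgb 0.5 } finish { reflection 0.0 } }
#declare T_Shiny = texture { pigment { rgb 0.5 } finish { reflection 0.4 } }

plane {
  y, 0
  texture {
    cylindrical                   // 1 at the center axis, falling off to 0
    texture_map {
      [0 T_Shiny]                 // far from the rose: reflective
      [1 T_Dull]                  // right at the center: matte
    }
    scale 10                      // how far out the fade reaches
  }
}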

Also note that the floor has a more detailed texture than the columns. I wanted the columns to appear rather simplistic, and they shouldn't distract from the main element, the rose. Then again, absence of detail would have been plainly visible near the rose, hence the turbulent texture there.



Skysphere

POV-Ray Skysphere versus Digital Photo of a Sunset in Moscow, Idaho

Modelling

There's not much modelling to a sky_sphere, just texturing it properly...

Textures

To get a realistic color gradient from the sun towards the blue sky, I resorted to using colors based on a photo. The main problem with color gradients is that the real-life spectrum of light has a different "gradient" than the gradients created by the RGB model. Just look at the image of the light's spectrum and check what you get when going halfway between red and blue: it ain't purple. So, what I did was to pick the colors from the photo and map those onto a color_map for my skysphere. The result is pretty convincing, as it is hard to exaggerate or create unrealistic gradients when using photos as reference.
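
A minimal sketch of such a gradient-based sky_sphere; the color stops below are invented stand-ins for colors sampled from a photo, not the actual values:

sky_sphere {
  pigment {
    gradient y
    color_map {
      [0.00 rgb <1.00, 0.75, 0.30>]   // near the horizon / sun
      [0.05 rgb <0.95, 0.55, 0.35>]
      [0.15 rgb <0.60, 0.45, 0.55>]
      [0.40 rgb <0.25, 0.35, 0.65>]
      [1.00 rgb <0.10, 0.20, 0.55>]   // zenith blue
    }
  }
}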



Clouds

Modelling

Modelling the clouds is less an actual CSG job and more a matter of fiddling with the density of the media statement. The object in which the media is placed was, in this case, a huge cylinder.

Textures

To create clouds like these, there are two proven methods that I know of: either use the famous "stacked planes technique" (described here) or use media. Especially since POV-Ray 3.5, media has been enhanced with several features that speed up the processing, so it's actually worthwhile to use it.

So, to create clouds, one needs to understand the concept of media, which I won't explain here. It's all in the documentation that ships with POV-Ray, so there's no need to reiterate it. Still, for clouds, we need to create a proper density statement which will, in the end, look like, obviously, clouds.

What I did in this case was to use the bozo pattern in the density and assign a color_map to it, so that certain parts of the pattern don't scatter the light whilst others do. I often begin with a statement much like
density { bozo color_map { [0 rgb 0] [.4 rgb 0] [.6 rgb 1] [1 rgb 1] } }
and thus get a nice initial density to start off with. Slap some scaling and turbulence onto it, tweak the color_map, and I get a density which looks like the one seen above.
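
One possible way to elaborate on that starting point (the numbers are illustrative, not the ones from the scene):

density {
  bozo
  turbulence 0.65
  color_map {
    [0.00 rgb 0]
    [0.45 rgb 0]
    [0.60 rgb 0.8]
    [1.00 rgb 1.2]                // values above 1 strengthen the dense cores
  }
  scale <3000, 400, 3000>         // squash the pattern into flat cloud banks
}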

Then again, coloring the clouds is a problem in itself. Every light source would interact with the clouds, so to avoid that, I always place the clouds and their light source into a single light_group which doesn't get affected by global lights. The lighting of the clouds is then independent of the rest of the scene and thus highly flexible.
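
A sketch of that isolation; the container shape, the light position and the colors are assumptions:

#declare Cloud_Container = cylinder {
  <0, 800, 0>, <0, 1200, 0>, 20000   // the huge cylinder holding the media
  hollow
  pigment { rgbt 1 }                 // invisible surface, only the media shows
  interior {
    media {
      scattering { 1, rgb 1 }
      // density much like the one sketched above
    }
  }
}

light_group {
  light_source { <-20000, 4000, 50000> rgb <1, 0.55, 0.15> * 20 }
  object { Cloud_Container }
  global_lights off                  // the scene's lights leave the clouds alone
}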

First of all, I assign a color to the light source, which in this case was a yellow-orange for the sunset. A tiny nudge of blue in the RGB statement ensures that where the light is scattered heavily, it shows up as white rather than yellow: <1, .5, 0> multiplied by 20 will later get clipped to <1, 1, 0>, whereas a small blue value results in <1, 1, 1> when clipped.

And then I go off and tweak the settings for the clouds' density and coloring until it looks right. I often use a combination of scattering and emission. Why? If I'd just use scattering, some parts of the clouds would end up VERY dark, so I add some emission to add that soft glow clouds seem to have. If the clouds are heavily colored themselves (instead of white, I sometimes go with a more artistic color), I often use extinction 0 on the scattering, but then add absorption with the *inverted* color (1 - Color_Of_Scattering). The problem is that when scattering media filters the red component because the color is red, the shadow of the object will be cyan (<1,1,1> - <1,0,0> = <0,1,1>). So I absorb that (cleverly using absorption) and thus get a red object with a dark red shadow. I often end up tweaking the extinction value to something other than 0, but it's a good starting point.
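
A hedged media sketch along those lines; Cloud_Col and all the numbers are illustrative:

#declare Cloud_Col = <1.0, 0.4, 0.3>;
media {
  scattering { 1, Cloud_Col extinction 0 }   // no extinction from the scattering itself
  absorption (<1,1,1> - Cloud_Col)           // absorb the complementary color instead
  emission Cloud_Col * 0.15                  // a little glow so nothing goes pitch black
  density {
    bozo turbulence 0.65
    color_map { [0.45 rgb 0] [1 rgb 1] }
  }
}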

If you just use color ranges between <0,0,0> and <1,1,1>, clouds won't look like they do above. If you multiply the colors of scattering/absorption/emission by values above 1, or just add a density statement like this to your media:
density { rgb Some_Value }
the effect of the media gets amplified. Bright parts get brighter, dark parts get darker. I often have to tweak the settings over several renders until the density of the clouds (not the media density, the visual density) looks right and the colors aren't too exaggerated, but still artistic.

Once that is done, I crank up the samples as high as required to avoid banding or artifacts in the clouds. For this scene, I ended up using method 2 with 75 intervals and only 1 for the min- and max-samples. It was sufficient for the print, but when scrutinizing the image, the banding still shows up. That technical part of the media is just a quality-versus-time trade-off: if you've got the time and the processing power, just crank up the quality and get better results.
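
Inside the media block, those sampling settings would look like this (a sketch; the rest of the media goes in alongside them):

media {
  method 2
  intervals 75
  samples 1, 1     // min and max samples per interval
  // scattering / absorption / emission and the density statement go here
}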



Sunglare

Modelling

The sunglare is actually just a cone, with the tip pointing towards the light source and the base pointing towards the camera. The main idea is to have the axis of the cone align with the direction from the camera's position to the light source's position. The base of the cone should lie in front of every other object, whereas the tip may be anywhere, as long as it's farther away than the base. Thus, the media is layered on top of everything and adds the glaring effect.

Varying the cap sizes on both ends of the cone affects how large the sun appears to be, and how large the falloff from blinding brightness to "normal" is. Additionally, to avoid interference with other objects and lighting, the cone is placed in its own light_group, along with the "no_shadow" and "no_reflection" keywords, to make it only visible to the camera and not to the rest of the scene. The cone is placed about two POV-units away from the camera's position and is fifty units long in this image. Varying the length has an effect on how bright the center is and thus also affects the falloff from bright to normal.
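
A sketch of the cone placement, assuming Cam_Pos and Sun_Pos hold the camera and light positions already defined elsewhere in the scene; the radii are placeholders:

#declare Glare_Dir  = vnormalize(Sun_Pos - Cam_Pos);
#declare Glare_Cone = cone {
  Cam_Pos + Glare_Dir*2,  1.50,    // base: two units in front of the camera
  Cam_Pos + Glare_Dir*52, 0.05     // tip: fifty units further, towards the sun
  hollow                           // so the scattering media inside shows up
  no_shadow
  no_reflection
}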

Textures

The cone is filled with scattering media which only interacts with the light source inside the cone's light_group. White scattering media shows the sunlight's color, but you could also use a white light source and color the media. I've used a lowered extinction value here to amplify the effect. Since the density isn't detailed (it's just a plain density, no patterns needed here), the quality of the media doesn't have to be very high: I've used method 2 with 20 intervals in the final render, but I think less would have sufficed.
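
A hedged sketch of the interior and the light_group isolation, assuming Glare_Cone and Sun_Pos from the sketch above; all the numbers are examples:

light_group {
  light_source { Sun_Pos rgb <1, 0.7, 0.4> * 3 }
  object {
    Glare_Cone
    interior {
      media {
        method 2  intervals 20
        scattering { 1, rgb 1 extinction 0.3 }   // white media takes on the light's color
        density { rgb 0.05 }                     // plain density, no pattern needed
      }
    }
  }
  global_lights off     // the glare light and cone stay out of the rest of the scene
}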



Radiosity

The image seen above is a post-processed version to make the radiosity more visible. There's no light_source in the image; it's purely the loaded radiosity data on white textures.

Radiosity can add a lot to a scene. In this case, all ambient values were set to 0, which would result in utterly black shadows where there is no light present. In real life though, light gets scattered by objects, and you almost never see totally black shadows. POV-Ray's radiosity approximates this by creating samples that are spread throughout the image (depending on the quality settings of the radiosity block in the global_settings). What I do in almost all my images when applying radiosity is a technique which was thought up during discussions in the newsgroups.

I first render the image with a low error_bound, like .1, and save the results. In a second pass, the radiosity is loaded and no new samples are taken by using always_sample off. The error_bound is raised to .4, which in turn blurs the samples somewhat more (this is an internal side-effect in POV-Ray 3.5 which will probably carry over to 3.6, but, as radiosity is an experimental feature, it may change in future versions). Using even higher values for error_bound will blur the samples even more, which may or may not be what you want. The idea is mainly to blur out the small-scale artifacts which often appear on edges or corners, but leave the general radiosity lighting in place. In the image above you can clearly see dark freckles near the tops and bases of the columns. With the normal lighting they won't be visible, as they are very small and in a normally dark part of the image anyway, but for other spots, the blurring helps. Just try it! Another handy side-effect is that, thanks to the blurring, the count values don't need to be as high, since many of the artifacts you get when using low values get smoothed away.

Also, don't forget to use pretrace_start 1 and pretrace_end 1 when loading the radiosity data, as the pretracing is mainly needed for the radiosity calculations to figure out where more samples are needed. When you're loading, no new samples should be shot (they might have unwanted side-effects on the existing data; especially in animations, the random nature of new samples would show up as flickering brighter/darker spots), so the pretrace can effectively be switched off by using 1 as the value.
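
Put together, the two passes might look like this; the file name and the count are examples, everything else follows the description above:

// pass 1: shoot samples with a low error_bound and save them
global_settings {
  radiosity {
    count 200
    error_bound 0.1
    save_file "rose.rad"
  }
}

// pass 2: load the samples, take no new ones, blur them with a higher error_bound
global_settings {
  radiosity {
    error_bound 0.4
    always_sample off
    pretrace_start 1
    pretrace_end 1
    load_file "rose.rad"
  }
}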

