Section 7.4.2
Focal Blur

POV-Ray simulates focal depth-of-field by shooting a number of sample rays from jittered points within each pixel and averaging the results.

The aperture keyword determines the depth of the sharpness zone. Large apertures give a lot of blurring, while narrow apertures give a wide zone of sharpness. Note that, while this behaves as a real camera does, the values for aperture are purely arbitrary and are not related to f-stops.

The center of the zone of sharpness is the focal_point vector (the default focal_point is <0,0,0>).

The blur_samples value controls the maximum number of rays to use for each pixel. More rays give a smoother appearance but render more slowly, although this is controlled somewhat by an adaptive mechanism that stops shooting rays once there is a certain degree of confidence that shooting more rays would not result in a significant change.

The confidence and variance keywords control the adaptive function. The confidence value is used to determine when the samples seem to be close enough to the correct color. The variance value specifies an acceptable tolerance on the variance of the samples taken so far. In other words, the process of shooting sample rays is terminated when the estimated color value is very likely (as controlled by the confidence probability) near the real color value.

Since the confidence is a probability its values can range from 0 to 1 (the default is 0.9, i.e. 90%). The value for the variance should be in the range of the smallest displayable color difference (the default is 1/128).

Larger confidence values will lead to more samples, slower traces and better images. The same holds for smaller variance thresholds.

By default no focal blur is used, i.e. the default aperture is 0 and the default number of samples is 0.
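
Putting these keywords together, a focal blur camera might look like the following sketch (the specific values are illustrative only, not recommendations):

  camera {
    location <0, 1, -6>
    aperture 0.4          // large aperture: narrow zone of sharpness
    focal_point <0, 1, 0> // center of the zone of sharpness
    blur_samples 20       // maximum number of rays per pixel
    confidence 0.95       // higher than the 0.9 default: more samples
    variance 1/128        // the default tolerance
    look_at <0, 1, 0>
  }

Note that look_at is kept as the last item in the camera statement, as recommended in the section "Location and Look_At".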


Section 7.4.3
Camera Ray Perturbation

The optional keyword normal may be used to assign a normal pattern to the camera. All camera rays will be perturbed using this pattern. This lets you create special effects. See the animated scene camera2.pov for an example.
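
For instance, a bumps pattern applied to the camera gives a rippled, shot-through-uneven-glass look (the pattern and values here are just one illustration):

  camera {
    location <0, 2, -5>
    normal { bumps 0.3 scale 0.5 }  // perturb all camera rays with a bump pattern
    look_at <0, 1, 0>
  }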

Section 7.4.4
Placing the Camera

The following sections explain how to place the camera in the scene.

Section 7.4.4.1
Location and Look_At

Under many circumstances just two vectors in the camera statement are all you need to position the camera: location and look_at. For example:

camera { location <3,5,-10> look_at <0,2,1> }

The location is simply the x, y, z coordinates of the camera. The camera can be located anywhere in the ray-tracing universe. The default location is <0, 0, 0>. The look_at vector tells POV-Ray to pan and tilt the camera until it is looking at the specified x, y, z coordinates. By default the camera looks at a point one unit in the z-direction from the location.

The look_at specification should almost always be the last item in the camera statement. If other camera items are placed after the look_at vector then the camera may not continue to look at the specified point.


Section 7.4.4.2
The Sky Vector

Normally POV-Ray pans left or right by rotating about the y-axis until it lines up with the look_at point and then tilts straight up or down until the point is met exactly. However you may want to slant the camera sideways like an airplane making a banked turn. You may change the tilt of the camera using the sky vector. For example:

camera { location <3,5,-10> sky <1,1,0> look_at <0,2,1> }

This tells POV-Ray to roll the camera until the top of the camera is in line with the sky vector. Imagine that the sky vector is an antenna pointing out of the top of the camera. POV-Ray then pans and tilts the camera toward the look_at point while keeping the camera's top aligned with the sky vector. In effect you're telling POV-Ray to assume that the sky isn't straight up. Note that the sky vector must appear before the look_at vector.

The sky vector does nothing on its own. It only modifies the way the look_at vector turns the camera. The default value for sky is <0, 1, 0>.


Section 7.4.4.3
The Direction Vector

The direction vector tells POV-Ray the initial direction to point the camera before moving it with the look_at or rotate vectors (the default is direction <0, 0, 1>). It may also be used to control the (horizontal) field of view with some types of projection, although the angle keyword is an easier way to do this.

If you are using the ultra wide angle, panoramic or cylindrical projection you should use a unit length direction vector to avoid strange results.

The length of the direction vector doesn't matter if one of the following projection types is used: orthographic, fisheye or omnimax.
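
For example, with the perspective camera a longer direction vector narrows the field of view, acting like a telephoto lens (a sketch, not a recommendation):

  camera {
    location <0, 1, -10>
    direction <0, 0, 3>  // three times the default length: narrower field of view
    look_at <0, 1, 0>
  }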


Section 7.4.4.4
Angle

The angle keyword specifies the (horizontal) viewing angle in degrees of the camera used. Even though it is possible to use the direction vector to determine the viewing angle for the perspective camera it is much easier to use the angle keyword.

The necessary calculations to convert from one method to the other are described below. These calculations are used to determine the length of the direction vector whenever the angle keyword is encountered.

The viewing angle is converted to a direction vector length and vice versa using the formulas below. The viewing angle is given by

  angle = 2 * arctan(0.5 * right_length / direction_length)

where right_length and direction_length are the lengths of the right and direction vectors respectively and arctan is the inverse tangent function.

From this the length of the direction vector can be calculated for a given viewing angle and right vector.

  direction_length = 0.5 * right_length / tan(angle / 2)
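
As a worked example: with the default right 4/3*x (right_length = 4/3) and a direction length of 1.33333, the formula gives angle = 2 * arctan(0.5 * (4/3) / 1.33333) = 2 * arctan(0.5), which is about 53.13 degrees. The two cameras below therefore have (nearly) the same field of view:

  camera { location <0,0,-5> direction <0,0,1.33333> look_at <0,0,0> }
  camera { location <0,0,-5> angle 53.13 look_at <0,0,0> }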

Section 7.4.4.5
Up and Right Vectors

The direction of the up and right vectors (together with the direction vector) determine the orientation of the camera in the scene. They are set implicitly by their default values of

right 4/3*x up y

or the look_at parameter (in combination with location). The directions of an explicitly specified right and up vector will be overridden by any following look_at parameter.

While some camera types ignore the length of these vectors others use it to extract valuable information about the camera settings. The following list explains the meaning of the right and up vectors for each camera type. Since the direction of these vectors always describes the orientation of the camera, it is not explained again for each type.

Perspective projection: The lengths of the up and right vectors are used to set the size of the viewing window and the aspect ratio as described in detail in section "Aspect Ratio". Since the field of view depends on the length of the direction vector (implicitly set by the angle keyword or explicitly set by the direction keyword) and the lengths of the right and up vectors you should carefully choose them in order to get the desired results.

Orthographic projection: The lengths of the right and up vector set the size of the viewing window regardless of the direction vector length, which is not used by the orthographic camera. Again the relation of the lengths is used to set the aspect ratio.
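
For the orthographic camera the up and right lengths thus directly give the size of the viewing window in scene units. This sketch shows a window 8 units wide and 6 units high (a 4/3 aspect ratio); the values are arbitrary:

  camera {
    orthographic
    location <0, 5, -10>
    right 8*x   // viewing window is 8 units wide
    up 6*y      // and 6 units high
    look_at <0, 0, 0>
  }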

Fisheye projection: The right and up vectors are used to set the aspect ratio.

Ultra wide angle projection: The up and right vectors work in a similar way as for the perspective camera.

Omnimax projection: The omnimax projection is a 180 degrees fisheye that has a reduced viewing angle in the vertical direction. In reality this projection is used to make movies that can be viewed in the dome-like Omnimax theaters. The image will look somewhat elliptical. The angle keyword isn't used with this projection.

Panoramic projection: The up and right vectors work in a similar way as for the perspective camera.

Cylindrical projection: In cylinder types 1 and 3 the axis of the cylinder lies along the up vector and the width is determined by the length of the right vector, or it may be overridden with the angle keyword. In type 3 the up vector determines how many units high the image is. For example, with up 4*y on a camera at the origin, only points from y=-2 to y=2 are visible. All viewing rays are perpendicular to the y-axis. For types 2 and 4, the cylinder lies along the right vector. Viewing rays for type 4 are perpendicular to the right vector.

Note that the up, right and direction vectors should always remain perpendicular to each other or the image will be distorted. If this is not the case a warning message will be printed. The vista buffer will not work for non-perpendicular camera vectors.


Section 7.4.4.5.1
Aspect Ratio

Together the right and up vectors define the aspect ratio (height to width ratio) of the resulting image. The default values up <0, 1, 0> and right <1.33, 0, 0> result in an aspect ratio of 4 to 3. This is the aspect ratio of a typical computer monitor. If you wanted a tall skinny image or a short wide panoramic image or a perfectly square image you should adjust the up and right vectors to the appropriate proportions.

Most computer video modes and graphics printers use perfectly square pixels. For example Macintosh displays and IBM SVGA modes 640x480, 800x600 and 1024x768 all use square pixels. When your intended viewing method uses square pixels then the width and height you set with the +W and +H switches should also have the same ratio as the right and up vectors. Note that 640/480 = 4/3 so the ratio is proper for this square pixel mode.

Not all display modes use square pixels however. For example IBM VGA mode 320x200 and Amiga 320x400 modes do not use square pixels. These two modes still produce a 4/3 aspect ratio image. Therefore images intended to be viewed on such hardware should still use 4/3 ratio on their up and right vectors but the +W and +H settings will not be 4/3.

For example:

camera { location <3,5,-10> up <0,1,0> right <1,0,0> look_at <0,2,1> }

This specifies a perfectly square image. On a square pixel display like SVGA you would use +W and +H settings such as +W480 +H480 or +W600 +H600. However on the non-square pixel Amiga 320x400 mode you would want to use values of +W240 +H400 to render a square image.


Section 7.4.4.5.2
Handedness

The right vector also describes the direction to the right of the camera. It tells POV-Ray where the right side of your screen is. The sign of the right vector can be used to determine the handedness of the coordinate system in use. The default right statement is:

right <1.33, 0, 0>

This means that the +x-direction is to the right. It is called a left-handed system because you can use your left hand to keep track of the axes. Hold out your left hand with your palm facing to your right. Stick your thumb up. Point straight ahead with your index finger. Point your other fingers to the right. Your bent fingers are pointing to the +x-direction. Your thumb now points into +y-direction. Your index finger points into the +z-direction.

To use a right-handed coordinate system, as is popular in some CAD programs and other ray-tracers, make the same shape using your right hand. Your thumb still points up in the +y-direction and your index finger still points forward in the +z-direction but your other fingers now say the +x-direction is to the left. That means that the right side of your screen is now in the -x-direction. To tell POV-Ray to act like this you can use a negative x value in the right vector like this:

right <-1.33, 0, 0>

Since x increasing to the left doesn't make much sense on a 2D screen you now rotate the whole thing 180 degrees around by using a positive z value in your camera's location. You end up with something like this.

camera { location <0,0,10> up <0,1,0> right <-1.33,0,0> look_at <0,0,0> }

Now when you do your ray-tracer's aerobics, as explained in the section "Understanding POV-Ray's Coordinate System", you use your right hand to determine the direction of rotations.

In a two dimensional grid, x is always to the right and y is up. The two versions of handedness arise from the question of whether z points into the screen or out of it and which axis in your computer model relates to up in the real world.

Architectural CAD systems, like AutoCAD, tend to use the God's Eye orientation that the z-axis is the elevation and is the model's up direction. This approach makes sense if you're an architect looking at a building blueprint on a computer screen. z means up, and it increases towards you, with x and y still across and up the screen. This is the basic right handed system.

Stand alone rendering systems, like POV-Ray, tend to consider you as a participant. You're looking at the screen as if you were a photographer standing in the scene. Up in the model is now y, the same as up in the real world and x is still to the right, so z must be depth, which increases away from you into the screen. This is the basic left handed system.


Section 7.4.4.6
Transforming the Camera

The translate and rotate commands can re-position the camera once you've defined it. For example:

camera { location < 0, 0, 0> direction < 0, 0, 1> up < 0, 1, 0> right < 1, 0, 0> rotate <30, 60, 30> translate < 5, 3, 4> }

In this example, the camera is created, then rotated by 30 degrees about the x-axis, 60 degrees about the y-axis and 30 degrees about the z-axis, then translated to another point in space.
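
Because transformations are applied after the camera has been aimed, they can deliberately be placed after look_at to move the whole aimed camera, for example to orbit the scene in an animation (the use of the clock variable here is just an illustration):

  camera {
    location <0, 3, -10>
    look_at <0, 1, 0>
    rotate <0, 360*clock, 0>  // orbit the y-axis as clock goes from 0 to 1
  }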


Section 7.4.5
Camera Identifiers

You may declare several camera identifiers if you wish. This makes it easy to quickly change cameras. For example:

#declare Long_Lens =
  camera {
    location -z*100
    angle 3
  }

#declare Short_Lens =
  camera {
    location -z*50
    angle 15
  }

camera {
  Long_Lens   // edit this line to change lenses
  look_at Here
}

Section 7.5
Objects

Objects are the building blocks of your scene. There are a lot of different types of objects supported by POV-Ray: finite solid primitives, finite patch primitives, infinite solid polynomial primitives and light sources. Constructive Solid Geometry (CSG) is also supported.

The basic syntax of an object is a keyword describing its type, some floats, vectors or other parameters which further define its location and/or shape and some optional object modifiers such as texture, pigment, normal, finish, bounding, clipping or transformations.

The texture describes what the object looks like, i.e. its material. Textures are combinations of pigments, normals, finishes and halos. Pigment is the color or pattern of colors inherent in the material. Normal is a method of simulating various patterns of bumps, dents, ripples or waves by modifying the surface normal vector. Finish describes the reflective and refractive properties of a material. The halo is used to describe the interior of the object.

Bounding shapes are finite, invisible shapes which wrap around complex, slow rendering shapes in order to speed up rendering time. Clipping shapes are used to cut away parts of shapes to expose a hollow interior. Transformations tell the ray-tracer how to move, size or rotate the shape and/or the texture in the scene.
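
A simple object following this syntax pattern: the type keyword, the floats and vectors defining its shape, then optional texture and transformation modifiers (the particular values are arbitrary):

  sphere {
    <0, 1, 0>, 2                       // center and radius
    texture {
      pigment { color rgb <1, 0, 0> }  // a red material
      normal { bumps 0.3 scale 0.2 }   // simulated surface bumps
      finish { phong 0.8 }             // shiny highlight
    }
    translate <0, 0, 3>                // move the finished sphere
  }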

