The SurveyTransfer team would like to say a special thank you to Mr. József Borján, who volunteered to share his knowledge and experience with our followers. We hope you will enjoy his article.
If you also have GIS, 3D modeling, land surveying or 3D surveying technical knowledge that you would like to share with others, then write to us via our page or our e-mail address [email protected].
Be our next guest blogger, whom the professional audience will get to know! 🙂
3D stereo visualization can be useful for sketches, figures, animations, photos, and videos. Three-dimensional vision can make structures and processes that are hard to grasp from a flat image easier to understand. These techniques also have much in common with the procedures of aerial photography and the processing of its results. All of them exploit the human brain's ability to evaluate the differences between the images reaching the light-sensitive surfaces of the two eyes, which we experience as a sense of depth. Geospatial procedures likewise process the correlations between the same points on the surface.
The following figure shows a case where the true shape of the depicted element cannot be determined from a single flat image. Viewed through anaglyph glasses, the image reveals the element's real shape.
Looking at the flat picture alone, we cannot decide the true shape of the geometry. If we put on color-filtering (e.g., red/cyan) glasses, after a few seconds we realize that it is actually a spiral-shaped line. If we close either eye, the spatial experience disappears.
So, the anaglyph image lets us interpret reality. This is, of course, the case for all unknown surfaces.
The following is a picture of an old machine. It looks completely different when viewed through the glasses. Here, too, try viewing it with one eye and observe the difference.
In a publication for children on aviation and space navigation, this stereo diagram shows the relationship between spaceships and the Earth.
Of course, anaglyph is not the only method of stereo display, but it is perhaps the simplest. First, let's look at the other display options.
Stereo viewing using parallel image pairs has recently undergone a revival, even though it is the oldest of all these techniques. By placing the images taken from the left-eye and right-eye positions side by side, and viewing them through magnifying lenses of the appropriate focal length, we get images and videos that come as close to reality as possible.
Google charms the world with its stereo viewer that folds out of a small cardboard box. The image is displayed on a smartphone placed in front of the lenses. This is Google Cardboard.
More advanced devices than the previous one also exist: virtual reality headsets.
As an example, consider a pair of pictures of the complicated pipe system of an airplane engine.
The difference between the two images is striking. The pictures are in recording order: left, then right. This pair of images is meant to be viewed with Google's stereo viewer or a virtual reality headset.
Crossed image pair
The image taken from the position of the left eye shows the object slightly from the left, and the image taken from the position of the right eye shows it slightly from the right. If we swap the two images, placing the image taken from the left-eye position on the right side and vice versa, we force our eyes into a crossed position, and after some practice the stereo image appears. We can help ourselves by focusing on a finger moved back and forth in front of the screen until the stereo image forms. This solution requires no auxiliary equipment and works at any image size.
About the anaglyph technique
Basically, we consider the anaglyph technique the simplest feasible solution. Apart from the glasses, we only need a display. Phones, tablets, laptops and desktop computers, TVs and projection screens are all suitable. Anaglyph glasses can be purchased very cheaply, and the screen size is not constrained by any optical system. Of course, a paper print of any size, made with a high-quality color printer, also provides a spatial experience; A3- and A2-sized pictures can be seen at exhibitions. The following picture shows the anaglyph version of the previously presented recording.
About 3D vision
Most living things have two eyes. This not only provides redundancy in perception but also improves its quality: the fields of view of the two eyes differ slightly, so the scope of perception is expanded.
The common (overlapping) part of the field of vision, on the other hand, serves spatial vision. Since the two eyes do not see the same side of objects, two slightly different projections are formed on the retinas. Distant objects (effectively at infinity) are imaged at the same place in both projections, while the projections of corresponding points of nearby objects differ. This discrepancy is continuously processed and analyzed by the brain, and we experience the result as a sense of space. Aerial photos are processed in a similar way; this is how 3D surveys, or Google Earth imagery rendered with height coordinates, are created.
There are other phenomena that help in judging depth. One of these is perspective foreshortening: the images of more distant objects arrive at a smaller angle, so smaller-looking objects are usually farther away. Even the thickness of the air layer helps; the colors of more distant objects, e.g., mountains, fade into blue. And of course occlusion helps too: an object closer to us covers those behind it.
About 3D image recording
Experiments with recording images through two lenses date back to the early days of photography. The two lenses of the camera were placed at the distance of the human eyes, and the images were recorded on a glass plate wide enough for both images to fit on it at the same time. Data were written in the empty field between the two images.
At the end of the 19th and the beginning of the 20th century, stereo photography grew into a fashionable industry. The footage of aircraft taken during World War I is particularly interesting.
Digital image recording gained ground and caused a significant change in photography in general.
The two images can be displayed side by side on a single screen. With the help of free software, a parallel, crossed, or anaglyph image can be edited from a recording pair, and such software can even serve other display systems.
In the following, we present the anaglyph technique in more detail. Its basis is as follows. Every color monitor handles three color channels, R, G and B: red, green and blue light points make up the color reaching the eye according to the rules of additive color mixing. The brightness of each channel can usually be set in 256 levels, from 0 to 255, so the three channels together can produce 256 × 256 × 256 = 16,777,216 shades.
Sufficiently small points are perceived as one. On cathode-ray monitors the light points are arranged in triangles; on LCD monitors the subpixels are adjacent vertical stripes. In memory, the R, G and B color codes are stored sequentially, one byte per channel per pixel. In an anaglyph image, the red channel of one image is combined in software with the cyan layer, i.e. the sum of the green and blue channels, of the other image. With small, usually free and easy-to-use software (such as StereoPhoto Maker), the two image contents can be moved relative to each other using the arrow keys, so that chosen points of the two images are brought to a neutral position, i.e., the corresponding points of both images fall on the same image coordinate.
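The channel mixing described above can be sketched in a few lines of Python. This is an illustrative sketch only, representing images as nested lists of (R, G, B) tuples; real tools such as StereoPhoto Maker operate on actual image files, but the core operation is the same.

```python
def make_anaglyph(left, right):
    """Combine two equal-size RGB images into a red/cyan anaglyph.

    The red channel comes from the left image; the green and blue
    channels (together: cyan) come from the right image. Images are
    nested lists of (R, G, B) tuples with values 0-255.
    """
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Tiny 1x2 example: the left image contributes only red,
# the right image contributes green and blue.
left  = [[(200, 10, 10), (50, 50, 50)]]
right = [[(10, 150, 250), (60, 60, 60)]]
print(make_anaglyph(left, right))  # [[(200, 150, 250), (50, 60, 60)]]
```

Viewed through red/cyan glasses, the red filter passes only the left image's contribution to each eye and the cyan filter only the right image's, which is what recreates the two separate views.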
On the software interface, you open the left and right images to be edited; they appear side by side. Select the appropriate stereo image type from the tool icons. The two images are then merged, but they can still be shifted left and right with the arrow keys, so the neutral point can be set as the image content requires. Finally, the combined anaglyph image can be saved in any image format of your choice. The software also includes various image-adjustment procedures.
It is advisable to choose the location of the neutral point depending on the image content. By default, the farthest points are brought into coincidence, so the disparity is greatest in the foreground; this gives the strongest sense of depth. For human figures, however, this can be disturbing. Good solutions are to place the neutral point in the back third, in the middle, or perhaps in the front third of the depth range. In the middle case, the image feels as if we were looking through a window (the image plane) with the objects behind it, and the foreground objects are less distorted. If we place the neutral point in the front third, the whole picture is like looking into an aquarium. So the location of the neutral point should be chosen separately for each image. Wherever we pick it, a horizontal head movement in front of the screen makes the scene appear to rotate. The picture below clearly illustrates the choice: the neutral point is on the near edge of the middle plate, so the first plate sticks out of the picture and the other two appear to be behind a window.
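Choosing the neutral point amounts to shifting one image horizontally relative to the other before merging, so that the chosen feature lands on the same column in both. A minimal sketch, again with images as nested lists of pixel rows (the function name and black padding are illustrative assumptions, not part of any particular tool):

```python
BLACK = (0, 0, 0)

def shift_right(image, d):
    """Shift every row of an image d columns to the right, padding the
    vacated edge with black pixels. A negative d shifts left. The image
    width is preserved; pixels pushed past the edge are cropped."""
    w = len(image[0])
    out = []
    for row in image:
        if d >= 0:
            out.append([BLACK] * d + row[:w - d])
        else:
            out.append(row[-d:] + [BLACK] * (-d))
    return out

row = [[(1, 1, 1), (2, 2, 2), (3, 3, 3)]]
print(shift_right(row, 1))   # [[(0, 0, 0), (1, 1, 1), (2, 2, 2)]]
```

Shifting one image by the disparity of a chosen feature makes that feature the neutral point: features with larger disparity then appear in front of the screen plane, features with smaller disparity behind it.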
The significance of the base
In the basic case, the axis distance of the two lenses roughly equals the distance between the human eyes. This is usually not enough to perceive depth in more distant features. If we place two independent cameras farther apart than the eye distance, i.e., we increase the base, we can reconstruct more distant objects in 3D as well. A good example is taking two consecutive shots from an airplane and editing them together. As another example, by capturing the height-rendered Google Earth view from two positions, we also obtain a three-dimensional terrain.
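The effect of the base can be quantified with the standard pinhole stereo model (a textbook relation, not taken from this article): for parallel cameras with base B and focal length f, a point at depth Z produces a disparity d = f · B / Z, so doubling the base doubles the disparity of every point. The numbers below are purely illustrative.

```python
def disparity(focal_px, base_m, depth_m):
    """Disparity in pixels for the idealized parallel-camera model:
    d = f * B / Z, with f in pixels and B, Z in meters."""
    return focal_px * base_m / depth_m

# Assumed 1000 px focal length. With an eye-like 6.5 cm base, a point
# 100 m away shifts by well under a pixel, so its depth is lost...
d_eye = disparity(1000, 0.065, 100)   # 0.65 px
# ...while a 10 m base gives a clearly measurable shift.
d_wide = disparity(1000, 10, 100)     # 100.0 px
print(d_eye, d_wide)
```

This is why a wider base, such as two consecutive aerial shots, restores the sense of depth for distant terrain that a normal eye-base pair would flatten.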
Creating stereo images
Still pictures can be taken with a single camera by moving it between the two positions. If possible, mount some movable structure on a stand; even a wooden board with a guide rail and end stops will do. The following picture shows a professional stereo stand.
A pole stand can also be used: the camera, mounted on a metal rod that reaches eye level, is moved between the two viewpoints by tilting the rod. The resulting rotation can be corrected in software.
Many manufacturers have produced two-channel cameras, or single-body cameras with two front lenses. Attaching two identical, synchronously triggered cameras to one mount is also an accepted solution.
Fuji's FinePix 3D digital camera, shown in the following picture, is an affordable model.
In anaglyph mode, the two images are stitched together, and software must split them into left and right images. The two-sided image can be converted into several stereo formats with free software, and the neutral point (the feature placed at the same position in both images) can be chosen freely by changing the relative position of the two images.
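Splitting such a side-by-side frame is a simple crop down the middle. A sketch under the same nested-list image representation as before (real converters do this on actual image files):

```python
def split_side_by_side(image):
    """Split a side-by-side stereo frame into (left, right) halves.
    The image is a list of pixel rows; an even width is assumed."""
    half = len(image[0]) // 2
    left = [row[:half] for row in image]
    right = [row[half:] for row in image]
    return left, right

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
print(split_side_by_side(frame))
# ([[1, 2], [5, 6]], [[3, 4], [7, 8]])
```

Once split, the pair can be recombined into parallel, crossed, or anaglyph form as described earlier.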
Examples for practical use
Presenting small objects
Pieces that are difficult to interpret
Terrain models from Google Earth
Google Earth imagery also contains height coordinates, so it is suitable for 3D display in both top view and oblique view. Save the left image, then rotate the bottom of the view slightly to the left and, in this position, save the right image. Then use a stereo editor to create the anaglyph image.
Although in stereo technique the end result is easily shared as an image, animation or video, and a display or printed image plus anaglyph glasses are enough for viewing, the world of 3D models and maps is not so simple. SurveyTransfer brings easy sharing and visualization of maps and 3D files!
If you really liked what you read, you can share it with your friends.
Did you like what you read? Do you want to read similar ones?