The SurveyTransfer Team would like to express special thanks to Mr. József Borján, who volunteered to share his knowledge and experience with our followers. We hope you will enjoy Mr. Borján's article.
If you also have technical knowledge in GIS, 3D modeling, land surveying, or 3D surveying that you would like to share with others, write to us via our page or our e-mail address firstname.lastname@example.org.
Be our next guest blogger, whom the professional audience will get to know! 🙂
Color vision, color coding
Four groups of elements are worth discussing (by elements we mean everything required to solve tasks with a computer).
1. To solve any task, an instrument is required. This is called hardware.
Hardware elements related to image processing include the displays (monitors) and printers of the base machine, as well as image digitizers (scanners, cameras, tablets) and special output devices (dedicated displays, film recorders, photo printers).
2. To operate the hardware, a program, i.e., software, is needed.
Software elements related to image processing are: the control software (drivers) of displays and digitizing devices, image display and presentation software, and image editing software.
3. The software creates or receives data, and the result of the processing is data too.
In the case of images, the data is the code of the image files: encoded by digitizers, transcoded by image processors, and decoded by viewers.
4. The user, who needs to be familiar with the possibilities of all three groups.
Our particular task is computer image processing, so we will learn about the necessary hardware, software, and data types in order to become confident users.
You need light to see and to take pictures. Light is an electromagnetic wave. The two most important characteristics of a wave are amplitude and wavelength. The amplitude expresses the strength (intensity) of the light, while the wavelength determines its essential attribute: its color.
A nanometer (nm) is 10⁻⁹ m, i.e., a billion times smaller than a meter (a mm is a thousandth of a m, a micron is a thousandth of a mm, and a nm is a thousandth of a micron). The image above can be produced with a prism, but we can also see rainbows in nature.
The ultraviolet range is not visible. However, cameras "see" it, so this must be taken into account, e.g., by applying a UV filter. Counterfeit money detectors also illuminate banknotes with UV light. Infrared is not visible either; it is used, for example, in remote controls.
Light can be natural (e.g., sunshine) or artificial (e.g., a light bulb). Vision and photography share the same pattern: the light source illuminates the object, and the reflected light passes through the lens of the eye onto the retina, or through the camera lens onto the film or the digital sensor. Digital devices were developed based on how the human eye works.
Let’s take a look at how vision works.
The mixed-color light entering our eyes reaches the retina through the optical system of the eye. There, cells called rods and cones generate electrical impulses that travel to the visual center of the brain.
Some of the cones are sensitive to long waves (L). When such light arrives, these cones send information to the brain that we perceive as red and orange.
The medium waves (M) stimulate only the corresponding cones; the signals coming from them are perceived as green.
Shorter waves (S) stimulate the third type of cones, whose signals we perceive as blue.
There are millions of cones and rods on the retina and, as we have seen, they are sensitive to different wavelength ranges.
They sense the light and transmit the data to the brain in the form of electrical impulses, where it is assembled into an image. If mixed-color light reaches the cones, two or even all three types produce impulses. The brain evaluates these according to the rules of additive color mixing; when all three types are stimulated equally, we see white.
Technical light perception
The structure of a CCD:
The sensors of cameras and scanners work the same way. A sensor cell only detects light intensity: the more light it receives, the more electrons are released. Color pixels are detected by dividing the surface of each pixel into four parts covered by filters: one red quarter, one blue quarter, and two green quarters (the Bayer pattern). The electrons collected in a cell are read out and evaluated. One color component usually takes up one byte, on some machines even more. The machine stores the three (RGB) color components.
To prove this, let’s look at the codes of images stored in Windows BMP format!
Although the codes could be written in the binary numeral system, for practical reasons they are displayed in hexadecimal. In the hexadecimal system, in addition to the digits 0-9, the letters A, B, C, D, E, and F serve as digits with the values 10, 11, 12, 13, 14, and 15. The value of a byte is written with two digits: the first digit is worth 16 times its own value, the second 1 time. If a color code is, say, A9, its decimal value is 10×16+9, i.e., 169, which is a fairly light color component.
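The A9 example above can be verified in code, both with the built-in conversion and digit by digit:

```python
# A two-digit hex code: the first digit is worth 16x its value, the second 1x.
code = "A9"
value = int(code, 16)            # built-in base-16 conversion
print(value)                     # 169

# The same conversion done by hand, digit by digit:
digits = "0123456789ABCDEF"
manual = digits.index(code[0]) * 16 + digits.index(code[1]) * 1
print(manual)                    # 169
```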
There are code numbers for the individual color samples: the first row is in binary, the second in hexadecimal, and the third in decimal. The last pattern was created from an element of a specific image (see below).
Let’s see how the computer stores data.
A black rectangle and its code:
A red rectangle and its code:
A green rectangle and its code:
A white rectangle and its code:
A part of an image and its code:
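Such codes can also be produced from scratch. The sketch below builds a one-pixel 24-bit Windows BMP in memory (using the standard 54-byte header pair; note that BMP stores pixel bytes in blue-green-red order and pads each row to a multiple of four bytes) and then prints the pixel bytes in hexadecimal:

```python
import struct

def tiny_bmp(rgb):
    """Build a 1x1 24-bit Windows BMP in memory for one (R, G, B) pixel."""
    r, g, b = rgb
    # BMP stores B, G, R; each row is padded to a multiple of 4 bytes.
    row = bytes([b, g, r]) + b"\x00"
    # BITMAPFILEHEADER (14 bytes): magic, file size, reserved, pixel offset.
    header = struct.pack("<2sIHHI", b"BM", 54 + len(row), 0, 0, 54)
    # BITMAPINFOHEADER (40 bytes): size, width, height, planes, bpp, ...
    info = struct.pack("<IiiHHIIiiII", 40, 1, 1, 1, 24, 0, len(row), 0, 0, 0, 0)
    return header + info + row

red = tiny_bmp((255, 0, 0))
print(red[54:].hex(" ").upper())   # pixel bytes: blue=00, green=00, red=FF, padding
```

Writing `red` to a file with a `.bmp` extension produces an image any viewer can open: a single red pixel.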
It is easy to see that the essence of computer image processing is the manipulation of these codes.
Dark colors have low codes; light parts have higher codes. A dark image can therefore be made brighter by adding something to each code value. In practice, of course, this is done by dragging a slider.
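What the slider does can be sketched as a simple code manipulation (a minimal model: one list of byte values, with clamping so no code exceeds FF):

```python
def brighten(codes, amount):
    """Add `amount` to every color code, clamping at the 0-255 byte range."""
    return [min(255, value + amount) for value in codes]

dark = [0x20, 0x35, 0x10, 0xF0]     # mostly dark codes plus one bright one
print(brighten(dark, 0x30))          # [80, 101, 64, 255]; 0xF0 clamps at 0xFF
```

Without the clamp, a bright code would overflow one byte and wrap around to a dark value, which is exactly the artifact real editors must avoid.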
Additive color mixing
Mixing the three basic colors gives new colors. Red and green together make yellow, red and blue together make magenta, and green and blue together make cyan. The three components together give white. If the values of the three components are equal, we get gray. The darkest gray is black (00 00 00), a medium gray is e.g. C0 C0 C0, and the lightest gray is white: FF FF FF.
Additive color mixing is illustrated in the following figure. Where the circles of two pure color components overlap, they give the mixed colors, and where the circles of all three components overlap, we get white.
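The figure can be reproduced numerically. A minimal sketch, treating each light as an RGB triple and summing channel by channel (clamped to FF so values stay within one byte):

```python
def add_light(a, b):
    """Additive mixing of two lights: sum each RGB channel, clamped to 255."""
    return tuple(min(255, x + y) for x, y in zip(a, b))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))                   # (255, 255, 0)   yellow
print(add_light(RED, BLUE))                    # (255, 0, 255)   magenta
print(add_light(GREEN, BLUE))                  # (0, 255, 255)   cyan
print(add_light(add_light(RED, GREEN), BLUE))  # (255, 255, 255) white
```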
In the following figure, I present an interesting analysis: I read the color codes along the axis of a printed part of the rainbow, then plotted the changes of the individual components with a statistical program.
Moving from left to right, the image starts as white: all three components have high values. Where red begins, the other two components fade. Towards yellow, green strengthens. At green, red and blue have low values. At cyan, green and blue are strong. At purple, blue and red dominate; then, towards white, all three components strengthen again.
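The scanned rainbow is not available here, but the same left-to-right pattern can be reproduced with synthetic colors. A sketch using Python's standard colorsys module, sampling one hue per named band and printing its R, G, B codes in hexadecimal:

```python
import colorsys

# Sample hues across a synthetic "rainbow" and inspect the R, G, B codes.
# Pure primaries and secondaries sit at multiples of 1/6 on the hue circle.
for name, hue in [("red", 0.0), ("yellow", 1 / 6), ("green", 1 / 3),
                  ("cyan", 1 / 2), ("blue", 2 / 3), ("magenta", 5 / 6)]:
    r, g, b = (round(c * 255) for c in colorsys.hsv_to_rgb(hue, 1.0, 1.0))
    print(f"{name:8s} {r:02X} {g:02X} {b:02X}")
```

Reading the three columns top to bottom shows the same rise and fall of each component that the statistical plot revealed.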
Displaying colors on screen
In a cathode ray tube monitor there are three electron guns, each aiming through a mask only at its own phosphor dots. A pixel is formed by three small phosphor dots. The dots are so close together that we see them as one dot, and additive color mixing takes effect.
Let’s see how the colors look on the monitor! The following are enlarged views of the screen:
In LCD monitors, three narrow subpixel columns (red, green, and blue) next to each other make up one pixel.
If you really liked what you read, you can share it with your friends.
Did you like what you read? Do you want to read similar ones?