Latitude (dynamic range) of CCD sensors. What are the prospects?


03.11.2002 18:26:00
So, today one of the factors limiting digital photography is the small range of brightness that can be rendered without distortion. This parameter is called dynamic range, IMO. So, a few questions for you.
1. Are there technologies that can extend the dynamic range of a CCD sensor?
2. Could some technical trick, for example two or three separate exposures with different charge-readout periods, give a kind of "bracketing", so that images covering different brightness ranges that will not fit into a single exposure can be stitched together?
3. Is partial, selective removal of charge from the CCD possible?
4. Which way will digital camera manufacturers go to increase the latitude of their cameras?
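The "bracketing" idea in question 2 can be sketched in a few lines of Python. This is a hypothetical illustration only, not any camera's actual algorithm; the 8-bit scale and the 2-stop exposure ratio are assumptions:

```python
# Hypothetical illustration of question 2: merge a short and a long
# exposure of the same scene into one image with extended range.
# Sensor values are 8-bit (0..255); 255 means the cell is saturated.

FULL_SCALE = 255        # clipping level of a single exposure
RATIO = 4               # long exposure is 4x the short one (2 stops)

def merge_bracketed(short_exp, long_exp):
    """Prefer the long exposure (better shadows); where it clipped,
    fall back to the short exposure scaled by the exposure ratio."""
    merged = []
    for s, l in zip(short_exp, long_exp):
        if l < FULL_SCALE:          # long exposure not clipped: use it
            merged.append(l)
        else:                       # clipped highlight: reconstruct
            merged.append(s * RATIO)
    return merged

# Scene radiances that a single exposure cannot hold:
short = [1, 10, 60, 200]            # deep shadow lost, highlight kept
long_ = [4, 40, 240, 255]           # shadow kept, highlight clipped
print(merge_bracketed(short, long_))  # [4, 40, 240, 800]
```

The merged values span a range four times wider than either single exposure could record.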
 

03.11.2002 19:07:00
"You cannot grasp the immensity" (c). You have slightly mistaken the address; this should have gone to SONY.COM. It is also a fact that all your questions have positive answers, but you forgot one thing: to ask the cost of a CCD that satisfies these requirements. After all, you probably want to stay within $500? Or $1000?

03.11.2002 19:25:00
To be perfectly honest, I would like it for free...
The point is that chips get cheaper rapidly, and what yesterday went into $20,000 cameras may turn up in a $200 camera today. The question is how digital camera manufacturers in the semi-consumer segment of the market will deal with the lack of latitude in their sensors.

03.11.2002 20:14:00

1. D = lg (S / N)
You can increase the saturation signal (determined by the capacity Q of the cell), that is, increase the capacity or use materials that can hold more electrons per unit area; or you can reduce the noise N, at least with noise reduction.
2. High-speed films contain two or more layers of different sensitivity; that is how both high sensitivity and an acceptable grain size (in our case, cell size) are achieved. Double (or multiple) shooting with different exposures has long been used successfully, as a rule by hand, with a good tripod, and only for static subjects.
3. Not with a CCD. And what would it give? A CMOS cell can be read out in various ways.
4. For compacts, IMHO, nobody particularly needs more latitude. For high-quality compacts (let's call them rangefinder analogues) and DSLRs: increasing the cell size, even going beyond the 35mm frame, because it is hard to make the cells large when high resolution is wanted too. Not long ago I proposed using the nonlinear current-voltage characteristic of the protection-diode branch to produce a nonlinear, and therefore wider, characteristic curve in the highlights. I have also written about a sensor with micron-sized cells that could be binned programmatically into larger ones: that would (by analogy with film) give both great latitude (from the cells pooled into large groups) and high resolution (from the individual small cells).
5. And where did you get the idea that latitude in the semi-professional segment (with a half-frame sensor) is insufficient? One can speak of a lack of latitude in the highlights, due to the linearity of the CCD and hence its saturation, but in the shadows its latitude often exceeds film. IMHO, the amateur compact class needs latitude not much greater than the latitude of a print, plus flash exposure metering, plus good denoising, so that such an amateur can stick a flash card and money into one of the (ubiquitous) digital printing kiosks and get his prints. Alas.
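The D = lg(S/N) relation in point 1 can be made concrete with a small sketch. The full-well and noise figures below are illustrative assumptions, not measurements of any real sensor:

```python
import math

# Sketch of the D = lg(S/N) relation: dynamic range grows either by
# raising the saturation (full-well) signal S or by cutting the noise
# floor N. The numbers below are invented for illustration.

def dynamic_range(full_well_e, noise_e):
    """Return (density D, stops) for a cell with the given full-well
    capacity and noise floor, both in electrons."""
    ratio = full_well_e / noise_e
    return math.log10(ratio), math.log2(ratio)

d, stops = dynamic_range(40_000, 20)    # hypothetical cell
print(round(d, 2), round(stops, 1))     # 3.3 11.0 : a 2000:1 ratio
```

Doubling the capacity or halving the noise each buys exactly one more stop.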

05.11.2002 4:11:00
A 16-bit CCD can quite well have a 12-stop range:

PhaseOne H20 digital back

05.11.2002 11:22:00

1. And how far can DR reasonably be increased by increasing cell capacity? As I understand it, the limiting factor there is sensor production technology?
2. That part is clear.
3. IMO, partial charge readout from the sensor would allow recording the normally exposed "light" areas first, then, as charge accumulates, the midtones, and then the shadows.
4. I am interested in the consumer segment of the market, meaning small sensors.
Also, I have seen tests with contrasty images from digital cameras pitted against negatives and slides.
Progress is inevitable, so the further we go, the more advanced the products that become affordable.

In fact, in that PDF I see only one line with the word Range.
But I prefer not to believe those figures.
Now, if we shot a "wedge" with the camera, or at least a scene with measurable brightnesses, IMO we would certainly see:
a brightness difference of 3-4 stops rendered without distortion;
near the boundary of the claimed 12 stops, completely nonlinear rendering of brightness.
Frankly, 12 stops is no joke; it is a very large brightness difference, and I doubt there is any device today that can render such a range without distortion (a straight characteristic curve).
Think about it yourself: say we have a scene spanning 2.8, 4, 5.6, 8, 11, 16, 22, 32, 64, 128, i.e. 11 stops. We set the lens to 11 and get a picture in which the sun at its zenith (128) and the deep shadows are equally well exposed. That is unrealistic.
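The stop arithmetic in that example can be checked mechanically: each full stop multiplies the f-number by the square root of 2, so the stop difference between two apertures is twice the base-2 logarithm of their ratio:

```python
import math

# Checking the f-number arithmetic above: one full stop multiplies the
# f-number by sqrt(2), so stops between two apertures = 2 * log2(f2/f1).

def stops_between(f1, f2):
    return 2 * math.log2(f2 / f1)

print(round(stops_between(2.8, 128), 1))   # 11.0, as claimed in the post
```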

05.11.2002 14:10:00

1. Cell capacity is determined by area; technology matters to a much lesser extent.
3. Readout with zeroing can be implemented with double frame transfer (a second layer of cells into which charge is moved from the exposed layer and then read out).
4. A small sensor (while keeping adequate resolution) means a small cell size, hence a low signal-to-noise ratio, hence small latitude.
quote:
Now, if we shot a "wedge" with the camera, or at least a scene with measurable brightnesses, IMO we would certainly see:
a brightness difference of 3-4 stops rendered without distortion
In almost all my comparisons I use the Kodak IT-8 test slide, which contains a 24-patch gray scale (about 2.5D, or 8 stops), no worse than your "wedge". And many compacts pass 20 of the 24 patches. Besides, any of my comparisons includes the cameras' characteristic curves, which in the case of RAW are quite linear over 7-8-9 stops for compacts. For example, I can say that the S2Pro sensor renders 10.2 stops, and linearly at that (a CCD cannot do otherwise, unlike film).
The brightness difference between the sun at its zenith and deep shadows can exceed 1:1,000,000, i.e. 6D, or 20 stops.
By the way, what do you think of this picture: http://digicam.narod.ru/photo/s2pro03/j0449.jpg
And it was shot in JPEG, where the S2 renders no more than 7.5 stops.
With larger cells on a large sensor it is possible to render even 12 stops; there are no physical limitations here.
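Since the thread keeps switching between density units (D, a base-10 logarithm) and stops (base-2), a small conversion sketch may help; one stop equals log10(2), about 0.3D:

```python
import math

# Converting between optical density D (base-10 log) and stops (base-2).
# One stop = log10(2) ~ 0.301 D, so the figures quoted above check out.

D_PER_STOP = math.log10(2)

def density_to_stops(d):
    return d / D_PER_STOP

print(round(density_to_stops(2.5), 1))  # IT-8 gray scale: 8.3 stops
print(round(density_to_stops(6.0), 1))  # sun vs deep shadow: 19.9 stops
```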

05.11.2002 19:14:00
IMHO the best way to increase dynamic range is to reduce noise.
Since dark current can be suppressed effectively by cooling the sensor and is not much of a problem at short exposures,
the main enemy is readout noise.
And that depends strongly on production technology (crystal lattice defects), so there are reserves there, especially if you do not chase readout speed for the whole frame.
The 16 MP Kodak KAF-16802CE CCD, with a nearly optimal pixel size of 9 microns, currently has a dynamic range of 72 dB, or 3.6D (at the maximum readout speed of 20 MHz),
with total noise of 21 electrons (a saturation signal of 94K electrons, respectively). At readout speeds of 50 kHz - 1.5 MHz and with cooling it seems possible to get down to 10 electrons. Sensors that fly on telescopes like Hubble have noise of 1-2 electrons.
There is your future 4.6D, and ISO will be pushed to soaring heights.
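The quoted KAF-16802CE figures can be re-derived from the saturation signal and the noise floor; this is a check of the post's own numbers, not additional data:

```python
import math

# Re-deriving the KAF-16802CE figures quoted above from the saturation
# signal (94K electrons) and total noise (21 electrons).

def dr_db(sat_e, noise_e):
    return 20 * math.log10(sat_e / noise_e)

def dr_density(sat_e, noise_e):
    return math.log10(sat_e / noise_e)

print(round(dr_db(94_000, 21)))          # 73 dB (the post quotes 72 dB)
print(round(dr_density(94_000, 21), 1))  # 3.7D (the post rounds to 3.6D)
print(round(dr_density(94_000, 10), 1))  # cooled, 10 electrons: 4.0D
```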

05.11.2002 22:38:00
Give us a noiseless stationary digital camera (Hubble) with a refrigeration unit!!!

06.11.2002 9:44:00



Give us a noiseless stationary digital camera (Hubble) with a refrigeration unit!!!

Heh, it's just that a personal booster rocket for it will be hard to get, I'm afraid.

By the way, there are new rumors about the Pentax SLR. It now seems certain to appear at PMA2003 and go on retail sale right after the show. Built on a modified MZ-S, a 6 MP sensor (there is a suspicion it will be a Foveon sensor, the new model?), a crop factor of 1.5, a price around $1500-1600. If so, it would be very interesting to compare its noise against the Foveon sensor in the Sigma. It may well be that this is the prospect for sensor development in the near future: multi-layer CMOS, without a Bayer filter.



06.11.2002 11:33:00

IMHO the best way to increase dynamic range is to reduce noise.
Since dark current can be suppressed effectively by cooling the sensor and is not much of a problem at short exposures,

Above a certain level, some backs (Sinar among them, I think) already use active cooling of the sensor (at least that is what the makers themselves write) and easily reach 12 stops in the final image (not just at the sensor output) with the same 9-micron cell size. That is, it is the whole output chain: CCD/CMOS - ADC - algorithms...

06.11.2002 17:47:00

By the way, the new camera on Hubble weighs only 400 kg.


A popular article on increasing the dynamic range of an image using bracketing and Photoshop was published in E-PHOTO No. 9 for 2002 (pp. 48-49).


07.11.2002 8:55:00
quote:

In almost all my comparisons I use the Kodak IT-8 test slide, which contains a 24-patch gray scale (about 2.5D, or 8 stops), no worse than your "wedge". And many compacts pass 20 of the 24 patches. Besides, any of my comparisons includes the cameras' characteristic curves, which in the case of RAW are quite linear over 7-8-9 stops for compacts. For example, I can say that the S2Pro sensor renders 10.2 stops, and linearly at that (a CCD cannot do otherwise, unlike film).
The brightness difference between the sun at its zenith and deep shadows can exceed 1:1,000,000, i.e. 6D, or 20 stops.
By the way, what do you think of this picture: http://digicam.narod.ru/photo/s2pro03/j0449.jpg
And it was shot in JPEG, where the S2 renders no more than 7.5 stops.
With larger cells on a large sensor it is possible to render even 12 stops; there are no physical limitations here.

OK.
Totally agree.
Is it possible to see the curves for the S2Pro?
I saw characteristic curves for CCDs in Foto&Video magazine, with the typical nonlinear sections in the highlights and shadows. Do you measure differently? Or what is going on?
I liked the picture, but here is the question: where is the sky?
Did it not come out? Or does JPEG not see it? And can it be seen in RAW?


Will this appear in an online magazine?


07.11.2002 17:23:00

In F&V you saw characteristic curves plotted for JPEGs, that is, after gamma correction. That no longer has anything to do with the CCD; either they do not know how, or they do not want, to plot curves from RAW. Look at the graphs in the magazine: on the X axis, the logarithm of exposure; on the Y axis, a linear scale of JPEG levels. Why? Look at
the characteristic curves for the E-10 or E-20 on my site. There is a difference, isn't there?
Because, IMHO, to plot graphs, you must understand what you are plotting.
I think there are enough graphs there to grasp the principle. The S2 graphs will be in the article; I do not want to spread them prematurely.

If you cannot wait but really want to... see http://www.dpreview.com/forums/read.asp?forum=1014&message=3444595
and the description: http://www.dpreview.com/forums/read.asp?forum=1014&message=3452171
But shh...


07.11.2002 19:28:00


Does it not seem to you, my dear, that plotting the linear characteristic curve from RAW is useful only for determining a particular camera's middle gray more precisely (evaluating its metering)? After all,
in the highlights saturation occurs at almost the same exposure for any CCD (the nominal sensitivity is usually one and the same), and in the shadows everything is determined by the sensor's own noise, where gamma correction plays no part whatsoever.

07.11.2002 20:00:00

I do not think so, my respected colleague.
Just as I see no sense in plotting the characteristic curve of film from scanned JPEG frames.
Actually, things rarely "seem" to me.

07.11.2002 22:50:00
IMHO... the characteristic curve
is interesting mainly because it shows both the full latitude and the levels in dB (in your case) or in D, optical density (in the case of film)... whereas in F&V magazine it is latitude and L levels (albeit on a linear scale) at different exposures!!!
That is certainly good... and gives some idea of what the camera (or film) renders, and how, at different exposures. But that is not one frame! (It is more than one exposure.)

Still, latitude relates more to exposure: how many stops to the left and right can still give an acceptable result (in the case of useful latitude) or at least some result (in the case of full latitude). And with latitude much depends on the processing; after all, when people talk about latitude, a standard process (E-6, C-41) is assumed... And what if the process is not standard?
As you develop, so you get...!! That is where film latitude ends and another, no less important parameter begins: optical density, the degree of blackening, or rather the range of optical densities...
Actually that is the more interesting one, because it shows what kind of scene can be captured in one exposure.
Latitude is of interest when you are likely to make a mistake in exposure.

And of course there are different groups of people. Those for whom latitude matters most are mainly reporters and novice photographers... and those interested in a large range of optical densities are mainly in commercial photography (advertising, art...), i.e. where it is important to "capture" the maximum... the sun and glare... and the deep shadows (and they can certainly set the exposure correctly).
That is why reporters shoot on negative (large latitude, small range of optical densities), and commercial photography shoots mostly on slide (large range of optical densities, small latitude).

If we are talking about what fits into one exposure, that is not latitude but the range of optical densities; in the digital case, the range of usefully distinguishable levels in one image (one exposure).

Ideally, to evaluate the DR of a digital camera, one would still need to shoot a wedge with a large DR in one exposure and look at the
levels. I am not sure, and it seems to me it is by no means a given, that the value D = lg(Smax/N) - lg(Smin/N) obtained at different exposures (shooting a white sheet) will equal the value D = lg(Smax/N) - lg(Smin/N) obtained with a single exposure of a wide scene.

The position of the middle-gray point on the characteristic curve is also an interesting characteristic.

Mich... does MatLab let you work with all 4096 RAW levels? Or is it like Photoshop: it lets you work with them but displays only 256?
And another thing: do you also convert to JPG in MatLab?

SVAM
(it was time to use another nickname)

08.11.2002 0:26:00

Latitude is the range of exposures that produces a linear (within some error) change in blackening density (or in signal level, for "digital"). The concept of full latitude also takes in the nonlinear parts of the characteristic curve, i.e. it is the whole range of exposures over which the density (level) changes at all.
The everyday notion of latitude, as the opportunity to err by a few stops toward under- or overexposure when determining the correct exposure for a "standard" scene, is a special case of the basic definition, because nobody can say what a "standard scene" is. Is it 5 stops? More? Knowing
the latitude (in the general sense of the word), you can predict for any scene what error the medium will "forgive". For example, if a film or sensor has 9 stops of latitude and I shoot a scene with a 7-stop brightness range, the "everyday" latitude, the allowable error, will be much smaller than when shooting a subject with a 5-stop brightness range. So I try as much as possible to avoid that interpretation of latitude.
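The "everyday latitude" arithmetic above can be put in one line; a trivial sketch of the argument, using the post's own example numbers:

```python
# Sketch of the "everyday latitude" argument above: the allowable
# metering error is whatever latitude is left over after the scene's
# own brightness range has been accommodated.

def exposure_headroom(latitude_stops, scene_range_stops):
    return latitude_stops - scene_range_stops

print(exposure_headroom(9, 7))   # 2 stops to spare for a 7-stop scene
print(exposure_headroom(9, 5))   # 4 stops to spare for a 5-stop scene
```

The same 9-stop medium "forgives" twice the error on the flatter scene, which is exactly the point being made.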

I wrote above: what is wrong with the gray scale of the IT-8 slide? I include it in all my tests. Moreover, for comparing latitude with film it has become my main comparison tool (test shots plus correction), because two such different technologies can only be compared in the same coordinates. By the way, what surprised me: regardless of the exposure compensation used when shooting, when scanning film on a CoolScan 4000ED not a single frame (neither negative nor, a fortiori, slide) resolved a patch darker than the 20th (probably because in the deep shadows the film's S/N ratio drops sharply; it is hard to blame the 4000ED, since on negative it is the areas of lowest density that are not resolved, and on positive the areas of maximum density, or more precisely, continuous grain is resolved). Digital rendered all 24 patches. Of course, in the highlights the latitude of negative film is incomparably higher than that of slide or of digital. I even concluded that to get more saturated colors and lower granularity, Superia Reala is better shot with an overexposure of one to one and a half stops. IMHO, since negative and positive film are very close to each other (the emulsions are practically the same; the processing differs), dense shadows are rendered by highly sparse grain, an analogy with the rendering of light gradients on an inkjet printer and its similar noise pattern.

MatLab allows working with a full 16-bit image. Display, though, depends on the video card, OS and drivers, and, IMHO, a video card does not allow more than 8 bits per R, G, B channel at its output. There is Luma (whence 32-bit color), but, IMHO, neither Photoshop nor any other program can pack 16-bit RGB into an RGBL output signal (this is akin to the problem of converting CMY to CMYK). And I do not think monitors can render the analog level gradations corresponding to the LSB of a signal deeper than 8 bits.

And another thing... Slava, how would you evaluate latitude on a linear scale, especially for a signal of more than 8 bits (say, 12)? IMHO, only if the curve graph is at least half a meter tall.
A second remark on linear graphs (since that is where it all started): is the eye's characteristic curve linear? As I recall, it is much closer to logarithmic.

08.11.2002 0:32:00


I read the entire thread on dpreview: very serious work.

Two questions:

1) Why not include, alongside the MatLab analysis of RAW and the JPEG analysis, an analysis of the 16-bit TIFF produced by the standard RAW converter at standard contrast and so on, without sharpening and with appropriate white balance? To me that is, as it were, the starting point of the image. After all, 12-bit TIFFs instead of 8-bit JPEGs with their artifacts should make a difference.

2) Why not an English version? IMHO this analysis is quite at the level of the best examples, and not only the country but the world should know its heroes.

08.11.2002 0:47:00

1. I promised on dpreview to do it, so it seems I must (though time, as always, is what is missing). And converters have multiplied... The native LE I did not want (results at JPEG level), the full-fledged native EX I still do not have, and as for choosing among what SharpRaw, Bibble and QimagePro produce... I am afraid even their scales differ. Unless I convert to linear TIFF without gamma correction; but then the result will be very close to what I obtained directly from RAW using MatLab, and frankly I do not see much point in that.
Perhaps this analogy is appropriate:
pure RAW is the analogue of a negative; RAW processed by a converter into TIFF is a scan of that negative, with the inevitable losses, subtraction of the negative's mask, white balance (such as it is), and so on. Therefore, as a rule, the best result (where technique is concerned) is obtained without further conversions.
On the practical side it is clear: of course, nobody will work directly with RAW, only with TIFF converted from it. Let us just say I presented results corresponding to an ideal converter: what can be extracted from RAW, without knowing whether such a converter exists now or will appear in a year.
Comparable to analyzing test targets under a microscope when testing film resolution, simply because I do not know a minilab that prints with such quality.

2. An English version of what? The S2 Pro characteristic curves, in the sense of the captions on the graphs? That is just a small part of a future S2 review and comparison with film technology.

08.11.2002 4:45:00

An English version of what? The S2 Pro characteristic curves, in the sense of the captions on the graphs? That is just a small part of a future S2 review and comparison with film technology.

Ah, so this is a work in progress!

It is just that sometimes, in the pursuit of knowledge, you find, say, a German site with very notable information, but it is intelligible to me only if there is an English version (my German is very weak).

A kind of global pool of information.


IMHO, a video card does not allow more than 8 bits per R, G, B channel at its output. There is Luma (whence 32-bit color), but, IMHO, neither Photoshop nor any other program can pack 16-bit RGB into an RGBL output signal (this is akin to the problem of converting CMY to CMYK). And I do not think monitors can render the analog level gradations corresponding to the LSB of a signal deeper than 8 bits.

In 32-bit color the 4th byte is something like an alpha channel (pixel transparency),
usually unused.

But that is slowly changing:

Matrox Parhelia-512 GigaColor

10-bit GigaColor technology
10-bit GigaColor Plugin for Photoshop (an Adobe Photoshop plug-in for 16-bit TIFFs)

It seems ATI and nVidia are slowly moving the same way; maybe it will catch on.

08.11.2002 9:27:00

Actually, things rarely "seem" to me.

And has nothing seemed to you here?
How would you comment on these (try to guess where the values come from):
E10: 8.6 EV
E20: 8.2 EV
Dimage7: 8 EV
G2: 9.3 EV
I understand that, JPEG or RAW, they came out about the same.
And then suddenly such a huge difference on the S2 (tenfold, i.e. more than 3 stops).
A different methodology, a poor S2 converter, or something else (say, the different structure of the SuperCCD)?
I really do not understand.
Regards.

08.11.2002 12:38:00
quote:

In F&V you saw characteristic curves plotted for JPEGs, that is, after gamma correction. That no longer has anything to do with the CCD; either they do not know how, or they do not want, to plot curves from RAW. Look at the graphs in the magazine: on the X axis, the logarithm of exposure; on the Y axis, a linear scale of JPEG levels. Why? Look at
the characteristic curves for the E-10 or E-20 on my site. There is a difference, isn't there?
Because, IMHO, to plot graphs, you must understand what you are plotting.
I think there are enough graphs there to grasp the principle. The S2 graphs will be in the article; I do not want to spread them prematurely.

Added 07.11.2002 17:30:

If you cannot wait but really want to... see http://www.dpreview.com/forums/read.asp?forum=1014&message=3444595
and the description: http://www.dpreview.com/forums/read.asp?forum=1014&message=3452171
But shh...

Clear. That is, the comrades at Foto&Video measure (translating into film terminology more familiar to me) a "dynamic range" in which a specific point on the graph is an index not of the negative's optical density on a densitometer, but of the tonal ratio of brightnesses on prints made from that negative. As for exposure, the correct scale is logarithmic exposure, by definition. So I have no questions here. As for
the characteristic curves, thanks for the great work, but IMO you should not apologize so often for that forum.

In general, the term latitude came from sensitometry. That is the science that studies the sensitivity of photosensitive materials.
The standard latitude test is simple. Take a strip of film 30-40 cm long and expose sections of it to measured amounts of light: first no exposure at all (a clear negative), then a little light, then a little more, and so on.
The amount of light falling on the film is dosed so that each lit patch differs from the previous one by half a stop, a third of a stop, or a full stop. That is, you do not change the aperture; instead you make a special translucent wedge of varying density and shine light through it onto the film.
Then this strip of film is developed, the development process is noted on it, and the optical density of each patch is measured on a special device called a densitometer. The density values D (a logarithmic quantity) are plotted against the amount of light (exposure). The result is
the curve that characterizes the film's latitude.
The straight section of the curve is the correct-exposure zone, where a linear change in illumination (in fact both D and the amount of light are logarithmic quantities) produces a linear change in the optical density of the film.
Usually that is what is called latitude.
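The densitometer procedure above can be sketched as code: expose a step wedge, measure the density of each patch, and read the latitude off the straight section of the curve. The density values below are invented purely for illustration:

```python
# Sketch of the sensitometric test described above: patches exposed in
# one-stop steps, measured density per patch, and latitude read off as
# the span of the (roughly) straight section of the curve.
# The density values are invented for illustration.

log_exposure = [0, 1, 2, 3, 4, 5, 6, 7]      # stops along the step wedge
density      = [0.10, 0.12, 0.40, 0.70, 1.00, 1.30, 1.45, 1.50]

def linear_section(xs, ys, tol=0.05):
    """Return (start, end) indices of the longest run where the slope
    between neighbouring patches stays constant within `tol`."""
    best = (0, 0)
    for i in range(len(xs) - 1):
        slope0 = ys[i + 1] - ys[i]
        j = i + 1
        while j < len(xs) - 1 and abs((ys[j + 1] - ys[j]) - slope0) <= tol:
            j += 1
        if j - i > best[1] - best[0]:
            best = (i, j)
    return best

start, end = linear_section(log_exposure, density)
print(start, end)   # 1 5: patches 1..5 form the correct-exposure zone
```

Here the straight section spans 4 stops, which would be quoted as the film's useful latitude; the toe (patch 0-1) and shoulder (patch 5-7) belong only to the full latitude.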


08.11.2002 19:37:00

When shooting JPEG/TIFF, the S2 makes very contrasty pictures, which leads to a large loss of latitude, in order to look "beautiful". That, as I understand it, is a marketing ploy to raise sales not only among those who need the latitude, but also among those who are tired of "soapbox" compacts and want an SLR that "does no worse".
The latest compact Nikons also make pictures that are too contrasty (for my taste).
The G2, on the other hand, compresses the range very strongly when writing JPEGs, and suppresses noise just as strongly. Hence this result: with strong denoising and averaging of neighboring pixels you can obtain (statistically) more latitude than in RAW, but you lose detail. By the way, film latitude can only be measured statistically.
Statistically, meaning that the value is obtained not from one precise measurement of the levels, but by averaging a set of imprecise measurements; the method is probably familiar.

In case that is not clear, an example: suppose in 10-bit RAW, in the deep shadows, two adjacent patches have values of 3±2 levels with a variance of 1.5 and 5±3 with a variance of 2.0. These patches (without processing) will not be counted into the latitude, since the signal levels overlap within the noise. But if a "smart" in-camera (or external) algorithm, knowing that detail is not needed in such deep shadows, averages the values and suppresses the noise (variance), then at the output these patches will have values of 3±0.5 with variance 0.5 and 5±0.5 with variance 0.5; after conversion to JPEG with range compression we obtain, say, 2±0.4 with variance 0.42 and 4±0.4 with variance 0.45, and these levels will be counted into the latitude, however strange or unnatural that may seem.
One simply has to understand that quality cannot be characterized by latitude alone, or resolution alone, or noise alone. There will always be algorithms that improve one figure (to a record value, say) at the expense of the others. Negative film, for example, achieves very high resolution and very good latitude at the cost of very large noise (grain). "Digital" with a small cell size strives for maximum resolution; "digital" with a large cell size sacrifices resolution for far lower noise with sufficient latitude. Physics imposes its restrictions on any imaging technology.
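The statistical-averaging point can be simulated directly: two shadow patches that are indistinguishable pixel by pixel separate cleanly once enough pixels are averaged, since averaging N samples cuts the standard deviation by the square root of N. The numbers are synthetic, not camera data:

```python
import random

# Simulating the "statistical latitude" argument above: two deep-shadow
# patches whose per-pixel readings are buried in noise become clearly
# distinguishable after averaging many pixels.

random.seed(0)

def patch(mean, sigma, n=10_000):
    """Simulated per-pixel readings of one uniform patch."""
    return [random.gauss(mean, sigma) for _ in range(n)]

a = patch(3, 2)     # two adjacent shadow patches: individual pixels
b = patch(5, 3)     # of the two overlap heavily and cannot be told apart

mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(mean_b - mean_a > 1.5)   # averaged patches separate cleanly: True
```

This is exactly the trade the post describes: the averaging recovers the levels (and hence the measured latitude) at the cost of any fine detail inside the patches.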

Once more, put simply: latitude can be "destroyed" not only by a higher noise level but also by an excessive slope of the characteristic curve (for juicier images), i.e. by raising the contrast. When shooting in bad weather or in smoke (as happened this summer) that is justified. Besides, for most shooting you need latitude that fits into a paper print without additional correction; not everyone, and not always, needs a lot of latitude. By the way, there is an opinion that Fujifilm gave the S2Pro's JPEGs a characteristic curve similar to Fujifilm Velvia (a very popular positive film with high contrast and saturation).


08.11.2002 20:17:00
On dpreview, in the comments to your thread, I also read remarks about the noise level. IMO it would be good to indicate the noise level for each exposure value. It is like gain in video cameras: you can, say, boost the signal level by 18 dB, or even by 36 dB, but then noise climbs out and buries all the image detail. And when low-contrast detail is small (a pixel or three or five across), the noise can easily eat it, and the image will be of no practical value.
Therefore one should set a certain noise threshold that defines the boundary of sensitivity to detail: say, 9-12 dB for a low-contrast image with a lot of detail, 15-18 dB for a contrasty image, and so on.
Accordingly, the latitude boundary of a CCD should be taken not as the CCD's dark noise level, but as a certain margin of useful signal above the dark noise, different for different scenes.

08.11.2002 23:50:00

The logarithmic representation of the characteristic curve allows both the signal and the noise to be shown on the same graph. That is the main reason I use it.
So if someone needs to determine latitude by criteria other than mine, he can easily do so from the graph.
PS The limiting level S/N = 1 was chosen for compact cameras. If the latitude were cut off at the 9-12 dB level, the latitude of many compacts would be indecently low (less than 1.6D), even though they allow good-quality prints, using almost the entire latitude available to a print (2.1-2.4D). Since such a low signal-to-noise ratio occurs in the shadows, where the noise is masked, IMHO it is quite possible to use this criterion. Besides, any other arbitrarily chosen criterion is very hard to agree on, and even harder to justify.
I decided to measure DSLR latitude by the same "compact" method, to avoid confusion and to be able to compare results across camera classes.
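How much the choice of S/N cutoff matters can be shown with an idealised linear sensor: signal proportional to exposure, a fixed noise floor, saturation at full well. The full-well and noise values below are assumptions for illustration:

```python
import math

# How the S/N cutoff changes measured latitude, per the PS above.
# Idealised linear sensor: fixed noise floor, saturation at full well.
# The electron counts are illustrative assumptions.

FULL_WELL = 10_000     # saturation signal, electrons
NOISE = 25             # noise floor, electrons

def latitude_density(snr_cutoff):
    """Latitude in D between the exposure where S/N reaches the cutoff
    and saturation, for the idealised sensor above."""
    s_min = NOISE * snr_cutoff
    return math.log10(FULL_WELL / s_min)

print(round(latitude_density(1), 2))    # S/N = 1 criterion: 2.6D
print(round(latitude_density(4), 2))    # ~12 dB (4x) criterion: 2.0D
```

The same hardware loses 0.6D of quoted latitude just by tightening the cutoff, which is why the choice of criterion has to be stated alongside the number.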
 

09.11.2002 17:50:00

Latitude is the range of exposures that produces a linear (within some error) change in blackening density (or in signal level, for "digital").
Misha! That part is all clear... But here is the thing...
Somehow, when it comes to shooting a "wide" scene on film, people talk about slide with its meager latitude, rather than negative, whose latitude is much larger.
A scanner, incidentally, also works not with latitude but with optical density.
Latitude is only one parameter of film, and not the decisive one as far as the ability to capture a "wide" scene on film goes. There is another parameter: the range of optical densities. And it seems undeniable that the digital analogue of density is the level.
One could say that in digital, latitude accurately reflects how "wide" a scene can be captured, and that D(latitude) = D(density range), but at least two conditions must hold:
1. The slope of the characteristic curve must be 45 degrees.
2. The value D = lg(Smax/N) - lg(Smin/N) at different exposures (shooting a white sheet) must equal the value D = lg(Smax/N) - lg(Smin/N) with a single exposure.

If the first condition is met (and then only in RAW), the second is not.

... Now about linearity.
Slide film has the characteristic curve many have seen... with its own extended linear section.
RAW also has a linear characteristic curve...
But while a slide image can already be viewed as normal, with no rescaling required, digital must be given quite a substantial rescaling to obtain a similar-looking image. And then the linearity of digital flies to hell...
With all that, the linearity of slide goes nowhere...
So what do we have? Either slide is the linear one... or digital. And with all that, the scales (the X and Y axes) here and there are in different coordinates...
Incidentally, on slide the middle-gray point falls exactly in the middle of the characteristic curve...

By the way, what surprised me: regardless of the exposure compensation used when shooting, when scanning film on a CoolScan 4000ED not a single frame (neither negative nor, a fortiori, slide) resolved a patch darker than the 20th.

Are the patches indistinguishable visually or instrumentally? And where are they indistinguishable: on the film, or does the scanner not see them?
Strange; a Nikon LS2000 distinguishes patches 20-22 perfectly well, a difference of 4-7 levels instrumentally. If you lift the curves a little, or use the corresponding ICM profile, it becomes visible to the eye. And in linear RAW, how many patches do you distinguish??? Considering that on film a slight lift of the curves in these patches is enough, while in digital a substantial rescaling has to be applied.


That is, don't change the aperture; instead, make a special translucent wedge of varying density and shine light through it onto the film.
Exactly - then it all fits into a single exposure.
Actually, what I wanted to say is that the method proposed for digital - measuring latitude from a series of frames shot at different exposures - is only partly valid. If dosing the light by changing the exposure, and thereby assessing latitude, is acceptable, then assessing levels (which is what actually shows whether the scene "fits" or not) that way - with different exposures - is something I cannot agree with.


09.11.2002 19:42:00

Slava, the range of optical densities has nothing whatever to do with the concept of a "wide" scene - at least for me.
quote:
Somehow, whenever it comes to capturing a "wide" scene on film, people talk about slide with its meager latitude rather than about negative, whose latitude is far greater
No argument there: many photographers confuse the concepts of latitude and density range.
quote:
One could say that in digital the latitude accurately reflects the ability to capture a "wide" scene, and that D(latitude) = D(density range), but for that at least two conditions must hold:
1. The slope of the characteristic curve must be 45 degrees.
2. The value D = lg(Smax/N) - lg(Smin/N) measured over several different exposures (shooting a white sheet) must equal the value D = lg(Smax/N) - lg(Smin/N) measured at a single exposure.
1. In RAW - always, and only so. Gamma in the source image, as I mentioned earlier, compensates the nonlinearity of display devices: in a typical monitor the brightness is proportional not to the signal level on the modulating electrode but to the modulating voltage raised to a power of about 2.2, so a linear image looks too dark on it, and the levels are recalculated by the inverse law L' = L^(1/2.2) = L^0.45.
IMHO this is a problem of today's "iron" that displays digital images. Of course, if monitors appeared that compensated their brightness nonlinearity automatically, gamma correction of images would no longer be needed. And there would be no double conversion (rather, it would move into the monitor, or the video card, or the driver).
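The compensation described above can be sketched as a round trip. The 2.2 exponent and the 8-bit code range come from the post; the function names are mine:

```python
def gamma_encode(linear: float, gamma: float = 2.2) -> int:
    """Pre-correct a linear level (0..1) for a display whose luminance
    follows L = V**gamma, by applying the inverse law V = L**(1/gamma)."""
    return round(255 * linear ** (1.0 / gamma))

def display_luminance(code: int, gamma: float = 2.2) -> float:
    """What the monitor actually emits for an 8-bit code value."""
    return (code / 255) ** gamma

# Encoding and then displaying restores (approximately) the linear level:
for lin in (0.18, 0.5, 1.0):
    code = gamma_encode(lin)
    print(lin, code, round(display_luminance(code), 3))
```

The double conversion the post complains about is exactly this pair: the file carries `gamma_encode`d values so that the monitor's own power law undoes them.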

2. Why did you decide it is not satisfied? Imagine: I shoot in RAW and don't use the "cunning" software from the manufacturer (I convert in MATLAB or with my own program, into linear TIFF). Do you really think some physical process violates reciprocity? Or are you talking about gamma correction again? I don't use it in the tests at all when I can take the results directly.

As for the CoolScan 4000 - I simply could not get meaningful information from the fields after the 20th (noise, i.e. grain, doesn't count). Which appears to be a consequence of the settings and of the "straightness" of the hands that did the scanning (I gave the film to a company that does this professionally). That is probably just indicative of real-world conditions. And for my test it simply doesn't matter.


quote:
10-bit. . .
Ah, but then we'd need not just 10 (better, 12) bits for the display, but also a resolution of at least 6MP (3000x2000) to view the whole frame (better, twice that - say 12-14MP), a contrast ratio of at least 1000:1, and it would be a good idea to remove gamma correction from image files, once monitor + video card become a linear device.
I'm afraid we'll be waiting a long time for that... and the demands won't stand still either...

09.11.2002 20:37:00

Gamma in the source image, as I mentioned earlier, compensates the nonlinearity of display devices

Of course the intensity of the luminescence is nonlinear, so the monitor's built-in hardware gamma correction happens regardless of the gamma correction applied when converting 10-12 bit RAW to 8-bit JPEG (TIFF). Isn't that so?

09.11.2002 21:32:00

Not so: the monitor's electrical path is linear; the nonlinearity is introduced by the CRT, i.e. the electron-optical converter. I have never seen circuits in monitors that compensate this nonlinearity. For that matter, I can hardly imagine an analog circuit doing it at frequencies up to 200-300 MHz; digital signal-processing circuits are rarely used, mainly in LCDs, which had to conform to the existing standards (I don't know what the luminance-versus-signal characteristic looks like for LEP displays).
Corrections in video-card drivers, or in programs like Adobe Gamma, only bring the gamma to the standard value (depending on the OS: Macintosh - 1/1.8, Windows - 1/2.2).

09.11.2002 22:40:00


Built-in hardware gamma correction is mentioned everywhere (for example here), though I am far from these problems.
As for the so-called gamma correction of digital cameras, IMHO it affects only the upper stops (the lower ones keep the linear RAW values), and its magnitude is determined by nothing more than matching the bit depth of the RAW (ADC) to the bit depth of JPEG - which your wonderful graphs illustrate perfectly.

09.11.2002 23:45:00

Slava, the range of optical densities has nothing whatever to do with the concept of a "wide" scene - at least for me.
Misha!
Levels in digital are a reflection of the real scene. A twofold change of light is 1 bit of the ADC. If the camera notices that change, the level is useful.
The range of optical densities (levels, in the digital case) is formed precisely out of individual distinguishable levels. If there are many levels and they are distinguishable, that is what says the scene "fit".
Rendering a "wide" scene implies that both the deep shadows and the brightest places are visible. And if everything is clear with the highlights, the shadows can be conveyed only if the low levels are distinguishable.
The ADC of a digicam is linear, and nobody has repealed the binary system in digital.
Yes, why am I actually explaining all this... you know it all yourself.
Scanners work the same way: there is the density of the film, and there is the scanner with its sensor and ADC... and so on.
And the basic parameter for determining DR is optical density, not latitude.

No argument there: many photographers confuse the concepts of latitude and density range.
Nothing is being confused: the latitude of the scene is an input parameter, density an output one. That's all.
I'm talking about something else: if the latitude of a film determines its ability to take in a "wide" scene (at least that's how it was customary to treat it on this forum), then why do we take slide with its small latitude rather than negative with its large one? And in the digital case latitude is understood exactly as it is for film... So what's the problem?

1. In RAW - always, and only so.
Misha, I remember you talking about the position of the middle-gray point in the RAW of the E-10. Is the middle-gray point in the RAW of the S2 in the same place?

2. Why did you decide it is not satisfied?
Condition 2 definitely does not hold. I have tested it on RAW data from my camera. There is a difference - even within 256 levels.



As for the CoolScan 4000 - I simply could not get meaningful information from the fields after the 20th (noise, i.e. grain, doesn't count). Which appears to be a consequence of the settings and of the "straightness" of the hands that did the scanning (I gave the film to a company that does this professionally).
The CoolScan 4000 is quite a clever machine... if it did not see the fields, they simply were not there. Look for where they were lost: either at shooting - the devil beguiled the exposure - or they were lost in film development.

And about gamma - I was not talking about the compensating gamma of monitors, which ICM profiles can provide, but about the gamma that is introduced during conversion.
Let's leave aside the circuitry that ought to be introduced as a correction in the hardware... The question now is only about the software: we get JPG or TIFF as the final result.
So we end up with two gammas: the one in the hardware and the one in the software at conversion. Did I understand you correctly?
What do you say?

10.11.2002 0:02:00

A good link - except that it's a "dead" one?
I still tend to think that the repetition of the 10-bit RAW values in the least significant bits of the output 8-bit file is a peculiarity of the algorithm of your Pro90 (as I recall). I won't reopen last year's controversy, since everyone stayed with their own opinion.

10.11.2002 9:32:00
quote ():
Highly sensitive films contain two or more layers of different sensitivity. This is how both high sensitivity and acceptable grain size (in our case, cell size) are achieved.
IMHO it is theoretically possible, with interline CCDs (e.g. Sony's) that have separate exposure and (masked) storage cells plus an external shutter, to get two shots with different exposures on one sensor. Example:
1. The shutter opens.
2. The sensor is exposed for 1 ms.
3. The image is transferred to the storage area (< 1 µs).
4. The sensor continues to be exposed for another 10 ms.
5. The shutter closes.
As a result the whole process takes 11 ms. The storage cells hold a frame with a 1 ms exposure; the photosensitive cells, a frame with a 10 ms exposure.
It only remains to read out the frame from the storage cells, then transfer and read out the second frame.
But who needs it?
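If those two frames were read out, stitching them into one extended-range image could look like the sketch below. The merge rule, the saturation threshold, and all names are my assumptions; the 1:10 exposure ratio is the one from the numbered steps above.

```python
# Hypothetical merge of the two frames from the scheme above: where the
# long (10 ms) exposure has clipped, substitute the short (1 ms) frame
# scaled by the exposure ratio. Threshold and names are illustrative only.
RATIO = 10.0          # 10 ms / 1 ms
SATURATION = 4000     # assumed clipping level of the sensor's ADC counts

def merge(long_px: int, short_px: int) -> float:
    """Return a linear radiance estimate with extended highlight range."""
    if long_px < SATURATION:
        return float(long_px)      # long exposure: best shadow SNR
    return short_px * RATIO        # clipped: fall back to the short frame

# A bright pixel clipped in the long frame but still valid in the short one:
print(merge(4095, 900))   # -> 9000.0
print(merge(1200, 120))   # -> 1200.0
```

This is essentially the "bracketing inside one exposure" asked about at the start of the thread, done in software after readout.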

10.11.2002 11:15:00

the repetition of the 10-bit RAW values in the least significant bits of the output 8-bit file

It is not repetition but a nearly linear stretch of the JPEG characteristic curve in its left half (the lower 4-5 stops of the latitude): a one-stop exposure shift doubles the value, as can be seen on any of your graphs. So a gamma alone does not link the JPEG and RAW characteristic curves over the whole range; the relationship is more complex.

10.11.2002 15:23:00
My humble opinion on latitude.
To begin, a quote from the Kodak site:
quote:
Exposure latitude is the range between overexposure and underexposure within which a film will still produce usable images. As the luminance ratio (the range from black to white) decreases, the exposure latitude increases. For example, on overcast days the range from darkest to lightest narrows, increasing the apparent exposure latitude. On the other hand, the exposure latitude decreases when the film is recording subjects with high luminance ratios, such as black trees against a sunlit, snowy field.
Film fundamentally differs from digital in the nonlinearity of its characteristic curve, and latitude is a qualitative (or semi-quantitative) measure of that nonlinearity. In mathematical language, latitude correlates with the length of the projection of the more-or-less linear portion of the characteristic curve onto the exposure axis. Slides, with their strongly S-shaped curve and steep slope in the middle, give contrasty images (at the cost of losing contrast in highlights and shadows) and small latitude. Negative films, by contrast, have a flatter curve and hence lower contrast and greater latitude. This is well seen in the curves from Agfa:
http://www.agfa.com/photo/products/pdf/F-AF-E4_en.pdf
For any sensors with a linear characteristic curve (which is what CCDs successfully strive for), equal dynamic range means, by definition, equal latitude. So the concept is simply unnecessary - especially on a digital-photography forum. IMHO dynamic range suffices.
As for JPEG destroying dynamic range - it turns out not every JPEG is equally useful. Serious folks with serious cameras (such as the Kodak DCS760) achieve a more useful JPEG:
http://www.kodak.com/US/en/corp/researchDevelopment/...eatures/eri.shtml

10.11.2002 15:27:00

If I need to capture a "wide" scene, I'll take negative rather than slide. Unless, of course, there are additional requirements: for example, a print shop finds it easier to work with slides. Perhaps some jobs need a "wide" scene tied to the output range.
If I'm shooting digital, then the output DR equals the input DR - within the latitude of the sensor. With negative the output is less than the input; with slide, greater. And naturally, the greater the "width" at the input, the greater it is at the output - for the same material.

The position of the middle-gray point depends on:
1. Exposure and metering.
2. When converting to a nonlinear format (TIFF, JPEG) - on the converter settings, roughly speaking the gamma correction. See my link about the S2.

Well then, let's count: middle gray reflects 18% of the light, and the camera's light meter is supposed to be calibrated to it.
18% is 20 lg(0.18) = -15 dB.
Look at the characteristic curve of the S2Pro: the middle-gray point in the green channel (the channels' sensitivities differ) is at -22 dB, with saturation at -3 dB - that is, -19 dB relative to the saturation level.
What does a middle-gray point 4 dB (about 1.6 times) lower than expected mean? To me, that the light meter is set for a slight underexposure, to reduce the likelihood of cell saturation from local overexposure - after all, there is latitude to spare in the shadows.
And while some firms (e.g. the domestic Zenit) calibrate the exposure meter to 20% reflectance, for digital cameras this value may be 10-12% (11% lands the middle-gray point exactly at -19 dB).
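The decibel arithmetic in this post checks out; a minimal sketch (the helper function is mine):

```python
import math

def to_db(ratio: float) -> float:
    """An amplitude ratio expressed in decibels: 20*lg(ratio)."""
    return 20 * math.log10(ratio)

print(round(to_db(0.18)))        # 18% middle gray: about -15 dB
print(round(to_db(0.11), 1))     # 11% calibration: about -19.2 dB
print(round(10 ** (4 / 20), 2))  # a 4 dB gap is a factor of about 1.58
```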

How are you measuring, such that condition 2 does not hold? What are you viewing RAW with? If it is converted, are you sure the result is linear - that all the algorithms, including white balance, were switched off?
quote:
And about gamma - I was not talking about the compensating gamma of monitors, which ICM profiles can provide, but about the gamma that is introduced during conversion.
Let's leave aside the circuitry that ought to be introduced as a correction in the hardware... The question now is only about the software: we get JPG or TIFF as the final result.
So we end up with two gammas: the one in the hardware and the one in the software at conversion.
The software gamma applied during conversion is precisely the correction for the hardware; the hardware (or firmware) gamma is merely a means of calibrating the video path to bring it to the standard gamma of 2.2 (under Windows).


Look at the characteristic curve of slide film: it is linear (within a certain brightness range). The tangent of the slope (the contrast) does not concern us at the moment. The brightnesses of the subject are linear in nature. Now look at the slide itself: do you see any nonlinearity (in the rendering of nature, say)? That is, the product of linear components gives a linear result, which the eye perceives adequately.
Now take a digital camera, shoot, say, a gray scale in RAW, convert the levels linearly or take just one color channel, and display the result on the monitor. It is impossible to look at. Why? Because you are looking at a fairly accurate copy of what was shot - on a nonlinear display device. What must be done to the image so that the picture on the monitor resembles the original? Apply a gamma correction inverse to that of the average monitor, which restores linearity.
And if the monitor is not calibrated, then for correct rendering of the subject's brightness ratios you would have to make a separate correction for every monitor.

The same thing happens when scanning - only a few scanners (or drivers) will give you pure RAW without such a correction.

You can make it even simpler. Draw squares in Photoshop with levels RGB = 1, 2, 4, 8, 16, 32, 64, 128 and 255 (which corresponds exactly to shooting, in 8-bit RAW, a scale with reflectances (transmittances) of 1/256, 1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2 and 1). Now look carefully at the screen. What you see is not a brightness-doubling scale but an exponentially growing one. For correct perception a rather strong gamma correction is needed, giving approximately the brightnesses 1, 32, 64, 96, 128, 160, 192, 224, 255 - that is how a brightness-doubling scale should look.
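For what it's worth, pushing that doubling scale through a plain power-law gamma can be sketched as below; the exact 1/2.2 exponent is my assumption, since the post only asks for "a rather strong gamma correction":

```python
# The doubling scale from the post, and the same scale after a 1/2.2 gamma.
doubling = [1, 2, 4, 8, 16, 32, 64, 128, 255]
corrected = [round(255 * (v / 255) ** (1 / 2.2)) for v in doubling]
print(corrected)   # -> [21, 28, 39, 53, 72, 99, 136, 186, 255]
```

The corrected steps come out far more evenly spread over 0..255 than the raw values, which crowd into the dark end - the effect the post describes.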
If we talk in terms of the middle-gray point: in nature (what we see with our eyes), just as on a slide or in RAW levels, a point at 10-20% of maximum brightness is very close to that maximum (for reflective objects; light sources don't count) but very far - up to thousands of times - from the minimum brightness. Yet in the video system that point sits at level 128: the monitor renders roughly as many visible gradations from 128 to 255 as from 0 to 128.
By the way, isn't it odd that it was a white man who came up with a middle-gray point that is not an average at all, but whose reflectance is remarkably close to that of white skin? Perhaps if photography had been invented by a Black man, the middle-gray point would lie closer to the middle of the photographic characteristic curve?

Faced with such a lack of understanding (all the more since we seemed to have discussed all these questions in detail a year ago), I began to doubt myself and decided to see what is written about gamma correction elsewhere. The first link I found says:
quote:
Gamma correction compensates for the differences in how colors are displayed on various output devices, so that an image looks the same on different monitors. A gamma value of 1 corresponds to an "ideal" monitor, i.e. one with a completely linear mapping from white to black. However, there is no ideal display; computer monitors are nonlinear devices. The greater the gamma value, the greater the nonlinearity. The standard value for NTSC video is 2.2. For computer monitors the gamma value typically ranges from 1.5 to 2.0.
The original is here.


During conversion, digital cameras naturally use a simplified transformation: nobody computes an exponential function for every point of every channel with subsequent normalization - that would demand large computational resources and introduce large calculation errors. Accurate enough (and sufficient in practice) is a piecewise-linear approximation, often made of two segments: the first follows the original values, the second translates values with some coefficient. This is exactly what we discussed with you in detail a year ago.
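A toy version of such a two-segment approximation, just to see how rough it is. The breakpoint and the target exponent are my arbitrary choices; real converters tune their own curves (and typically use more segments or a table):

```python
# A two-segment piecewise-linear stand-in for y = x**(1/2.2), as the post
# describes; the breakpoint (0.05) is an arbitrary illustrative choice.
GAMMA = 1 / 2.2
X0 = 0.05
Y0 = X0 ** GAMMA

def approx(x: float) -> float:
    if x <= X0:
        return x * (Y0 / X0)                     # first segment: through zero
    return Y0 + (x - X0) * (1 - Y0) / (1 - X0)   # second segment: up to (1, 1)

# Worst-case deviation from the true power law on a 0..1 grid:
err = max(abs(approx(i / 1000) - (i / 1000) ** GAMMA) for i in range(1001))
print(round(err, 3))   # -> 0.13
```

With only two segments the worst-case error is large, which is why in practice the segments (or table entries) are chosen far more carefully than here.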

Another link - an academic tutorial on the fundamentals of computer graphics. A small quote:

quote:
Some graphics systems have built-in, adjustable hardware gamma correction. As a rule, the built-in gamma-correction value differs from the monitor-average gamma of 2.5 and is close to the minimum gamma found among monitors (see above). The additional gamma correction required for faithful reproduction is called the "system" gamma.

An image file can have its own gamma, equal to the gamma correction applied when the file was formed. This gamma correction is called the "file" gamma. Most raster image files, except TGA and PNG, do not provide for storing the file gamma, so reproduction may require choosing it by hand.

IBM PC and Sun graphics workstations have no built-in gamma correction, i.e. their system gamma is approximately 2.5. Therefore, for faithful reproduction, the signal (the pixel code value) should be raised to the power 1/2.5.

And one more - IMHO a slightly delusional explanation, but directly related to the questions under discussion, including a claim about the applicability of the JPEG format to images without gamma correction.

quote:
Note. As is known, during JPEG compression the image information is divided into two parts, and the insignificant part is discarded. The algorithm for selecting the insignificant information is based on the characteristics of human vision and, as a consequence, works properly only with nonlinearly digitized images.


By the way, the PNG format stores the gamma-correction value required for correct viewing. So it's not only the monitors that are to blame; their nonlinearity also has to be accounted for at creation time.
IMHO, a linearly converted TIFF can simply be assigned a profile so that the image displays correctly - without spoiling the image file itself. Another matter is that far fewer programs can work with embedded profiles than one would like.

10.11.2002 18:36:00

the first follows the original values

I'm talking about the same thing - only with the S2 the contrast abuse shows through in the conversion.

the second translates values with some coefficient

And it is exactly because of the presence of that first, linear segment that the need arises to compute the exponential function for each point.

11.11.2002 16:11:00

If I need to capture a "wide" scene, I'll take negative rather than slide. Unless, of course, there are additional requirements: for example, a print shop finds it easier to work with slides. Perhaps some jobs need a "wide" scene tied to the output range.
Misha! You used a good word there: "convey".
If you mean that the negative "takes in" - then yes... It can absorb a lot, but as for conveying it...
I.e. the moment you show it, all the latitude is over. You can convey only through the window that its output DR allows.
Until you show it, you can "slide" that window across its latitude however you like...
In any case the output window of the slide is larger than that of the negative.
And as a consequence it can convey more.
If a negative could be shown several times, it could certainly convey more.
And for that very reason it suits reporters (in fact, everything I have written does): it can record a lot, and then, during processing, you decide which part to convey.
But conveying deep shadows and highlights simultaneously, as the slide does, the negative cannot - although the negative itself holds both.
For me what matters is that the maximum of information pass through both necks (input and output).
If I were a reporter, I would probably choose negative.

the output DR equals the input
Well, that is essentially what I wanted to say: DR must be judged by both the input and the output DR. I.e. two DRs in total.



If I'm shooting digital, then the output DR equals the input DR
Actually, this is where it all started. You think so; I think not.
In the digital case the projection of the characteristic curve onto the X axis is not equal to its projection onto Y.

How are you measuring, such that condition 2 does not hold? What are you viewing RAW with? If it is converted, are you sure the result is linear - that all the algorithms, including white balance, were switched off?
Misha! I'm not talking about the data straight off the sensor, but about the fact that the image we get in 16-bit TIFF is at least nominally linear. How linear it really is - well, I don't know... it is whatever the converter gives with the "linear" option.
You are talking about absolute data obtained with the help of MATLAB. But in real life people use ordinary converters.
In the JPG case the result makes it quite obvious that condition 2 is not satisfied.
As for RAW - as you understand, it is an intermediate result, of interest from a technical point of view, not as a final result.
And another thing... on why condition 2 fails: you don't have to shoot at a single exposure - make a series of shots varying the exposure pair (or the shutter speed alone). Try shooting the gray scale at one and the same exposure, but once in shutter priority and once in aperture priority - in general, so that the exposure is the same while the aperture/shutter combination differs. Then measure the noise. Obviously not at adjacent values... and you can spread them within the linear portion of the characteristic curve.
I think the result will be significant: condition 2 is not satisfied.

About the position of the middle-gray point in RAW - agreed.
The gamma business is understandable... It had seemed to me that compensation for hardware nonlinearity was implemented only in hardware, or in tools like Adobe Gamma. And anyway it is unclear why the heck there is a double gamma: first the hardware, or tools like Adobe Gamma, and then on top of that a software conversion (cutting into the live data in the file)? Wouldn't it be simpler to apply just one gamma (in hardware) without editing the data in the file?
Well, and if you imagine the chain camera-computer-printer: the data from the camera has been through a gamma (the converter changes the gamma in the file)... while the printer has to produce output that is linear to the eye. Does that mean the printer is nonlinear to the same disgrace? Or is it linear, and the driver applies an inverse gamma?
Apparently I missed something in last year's discussion...

11.11.2002 17:53:00
Sorry to wedge in, but
quote ():
And as a consequence it (the slide) can convey more (than the negative).
How can it convey more? It captured less in the first place, precisely because of the narrowness of its latitude. The slide can demonstrate a greater range of "blackening" on the corresponding equipment only thanks to a contrast coefficient (CC) close to that of the eye/brain. The negative, by virtue of its understated CC, will show a smaller range of "blackening" on that same equipment - but then that equipment is not meant for negatives. A negative needs equipment that brings it into line with the CC of the eye/brain, i.e. that treats it the way a person does. For that one uses either photo paper of the appropriate contrast, or scanning/post-processing to raise the contrast. The negative nonetheless initially records a greater range of subject brightness, and the fact that, with the simple approach, this brightness range (= its latitude) is rendered into a smaller range of blackening (i.e. a smaller DR) simply means that the simple approach is not applicable. Bring the negative's CC up to the slide's CC (in printing, or just theoretically) and you get a faithful tonal rendering of the subject plus an extended (compared with slide) DR.
The negative's CC is fitted to the latitude of photo paper - that is why it is so small - while the slide's CC is fitted to the "average human" one, since it was originally intended for direct viewing: first through a slide viewer, then through a slide projector, and only relatively recently did it start being scanned. The negative is badly hampered by its mask. Come to think of it, at the current state of the photographic process, when almost everything goes through a scanner, a new kind of photographic material is needed: intended neither for direct viewing, like slide, nor for the negative printing process; having the CC and latitude of negative film, but positive in appearance in its rendering of brightness - or negative, but without the mask. Such a material would convey the subject brightness range available only to negative, without the latter's drawback in the form of the mask, and its CC would be brought to the "psychophysical" value by a corresponding correction at scanning. Such a material would prolong film photography's life by a few years under pressure from CCDs.

11.11.2002 19:38:00

For that one uses either photo paper of the appropriate contrast, or scanning/post-processing to raise the contrast.

I'd like to have a look at that scanner.

11.11.2002 20:14:00
Well... For example, I still use the old Microtek 35T+; it often plays the fool, but it still scans slides tolerably, and negatives, in multipass mode, even better (because of the mask and the worn-out lamp, apparently). For my purposes it is by and large adequate, though a resolution of 2700 would of course be better, and less noise would be nice. Mostly I made "control scans" of films on it, but I did digitize my old slide library (CH-65 and UT-18). So nothing special is needed - the main thing is less noise. What matters is not so much the scan as what you view it on: the monitor steals nothing from the slide's DR, photo paper all the more so. And the main plus of a large DR (here you are absolutely right) is the ability to "move the window" - to correct exposure errors or to bend the curve (losing linearity in the shadows/highlights but retaining relative detail there).

11.11.2002 20:55:00

But conveying deep shadows and highlights simultaneously, as the slide does, the negative cannot.

It is the scanner that cannot; the negative, on the contrary - even once developed - can. It simply should not be scanned. Print it on balanced photo paper and everything falls into place.



the ability to "move the window" - to correct exposure errors or to bend the curve (losing linearity in the shadows/highlights but retaining relative detail there).

For a digitized slide, because of noise and the digitization of the film's characteristic curve, bending makes sense only in the highlights; for a negative, only in the shadows.
That, IMHO, is their fundamental difference.

11.11.2002 23:43:00

An honest computation of the exponent is faster and more accurate than its approximation by a piecewise-linear function. Just for information.

12.11.2002 0:02:00

quote:
An honest computation of the exponent is faster and more accurate than its approximation by a piecewise-linear function.
Oh really? Then what about the article I once came across (I didn't keep the link), where the author showed how to build a viewer for several image formats on the Sinclair ZX Spectrum - and recommended using a piecewise-linear approximation for the gamma correction? Or a lookup table. Or have modern processors (especially the ones inside cameras) learned to exponentiate in hardware?
In fact the standard gamma-correction formula for an 8-bit image is:
L' = 255 * (L / 255) ^ gamma
and it can be replaced either by a single 256x256 table or by computing a function of the form

if L < 10
  L' = k1 * L
else
  L' = k2 * L

Apparently your experience includes certain processors that compute algebraic (and perhaps trigonometric too?) functions in hardware?
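The table variant mentioned above can be sketched in a few lines; for 8-bit data a 256-entry table is enough (the 1/2.2 exponent here is only an example - as the next post says, cameras use their own curves):

```python
# The lookup table mentioned above: precompute L' = 255*(L/255)**gamma once,
# then gamma correction is one array index per pixel, with no exponentiation.
GAMMA = 1 / 2.2   # illustrative exponent only
LUT = [round(255 * (l / 255) ** GAMMA) for l in range(256)]

def correct(pixels):
    return [LUT[p] for p in pixels]

print(correct([0, 64, 128, 255]))   # -> [0, 136, 186, 255]
```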

Tip No. 1.
Math coprocessors are not used in digital cameras (at least in compact ones).

Tip No. 2.
And who needs an accurate, pure gamma correction by the formula? IMHO every firm ships (and keeps secret) its own tone curve, which can be far more complex than a simple gamma adjustment. It includes the gamma correction for the output device, but is not identical to it.

12.11.2002 0:34:00
You came across the wrong articles.
Here's my tip:
computing the exponent of a 16-bit fixed-point number takes two lookups from a table of 2x256 numbers (note: not 256x256) and one multiplication of the results,

i.e. a single multiply.
In short, the exponent is computed quickly; sine and cosine, twice as slowly. Another matter is that the logarithm is still needed as well.
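The two-table trick works because an 8.8 fixed-point argument splits into an integer byte and a fractional byte, and exp(a + b) = exp(a) * exp(b). A sketch under that reading; the table layout and names are mine:

```python
import math

# Two 256-entry tables: exp() of the high (integer) byte and of the
# low (fractional) byte of an 8.8 fixed-point argument x, which
# represents the real number x / 256.
EXP_HI = [math.exp(i) for i in range(256)]
EXP_LO = [math.exp(j / 256.0) for j in range(256)]

def fixed_exp(x: int) -> float:
    """exp(x / 256) for a 16-bit x: two table lookups and one multiply."""
    return EXP_HI[x >> 8] * EXP_LO[x & 0xFF]

print(fixed_exp(0x0180))   # exp(1.5), with no exp() call at lookup time
```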

12.11.2002 2:33:00
In Silicon Graphics systems, image files are stored in linear form and the gamma correction is applied downstream:
http://www.inforamp.net/~poynton/PDFs/Rehabilitation_of_gamma.pdf (page 14).
Maybe Windows will get there too.

12.11.2002 10:23:00

How can it convey more? It captured less in the first place, precisely because of the narrowness of its latitude.
OK! Let's estimate...
To display an incoming range of 10 stops (3D), an outgoing 3D is needed as well.
However many bits the scanner has, that does not solve the problem... After all, those output densities are measured banally, against the light. Every incoming stop is a twofold change of the light on the sensor. And if there is no change, there is no change in D either. And for that same Agfa (the link is above), the range of useful densities is under 2D. Need I explain what that means?
Keep in mind that at the development stage film has nothing like curves (that could have solved the problem: the incoming 3D could have been "packed" into the outgoing 2D... but... alas, everything is linear).
Now further... about the boundaries of that useful D. Smart metering (or the photographer's experience) lets you avoid losing the highlights, and thereby fixes the boundary at the highlight end. And then it's simple...
Walk down the levels... The ADC is linear... every stop is a halving... If the negative has 2D, then density is density... you can't pull more out of it. Around 7 bits suffice to convey it. Raising the bit depth and the number of levels achieves nothing: the same level gets recorded anyway - or you get continuous noise.
So the negative recorded a great deal, but the moment it is shown, it all loses meaning.
Those incoming 10 stops are castrated down to 7 (sometimes fewer) - i.e. the lower stops are cut off.
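The 10-stops/3D and 2D/7-bits figures in this post are easy to check; a minimal sketch (the helper name is mine):

```python
import math

def stops_in_density(d: float) -> float:
    """How many halvings of light fit into a density range of d decades."""
    return d * math.log2(10)

print(round(stops_in_density(3.0), 1))   # 10 incoming stops is about 3.0D
print(round(stops_in_density(2.0), 1))   # a 2.0D negative: about 6.6 stops
print(math.ceil(stops_in_density(2.0)))  # i.e. roughly 7 bits of useful levels
```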

It is the same as when I start with a 16-bit linear TIFF... and then reduce it to 8 bits.
If I save that and then try to make it 16-bit again, nothing good comes of it: the whole layer of lower levels (of detail) has been cut off. And considering that on the negative everything is linear... the difference is palpable.
Anyone who has worked with 16-bit TIFF knows the difference...
Or even simpler: try reducing the contrast of that same 8-bit image in Photoshop... and then restoring it - and see how much comes back in pieces. Raising the contrast head-on does not solve the problem either.

With slide it is simpler: the output range is there... And as long as the density keeps changing (and we are talking about useful change), there is detail. But we do have to fix the upper boundary at the highlights... If so, then the more useful levels, the more detail. In the highlights exactly as in the shadows - the level change there is the same (except that the nonlinear portions may stretch something).

12.11.2002 12:07:00
OK, let's count, then...
quote:
To display an incoming range of 10 stops (3D), the output also needs those same 3D.
Right. Now name a device capable of showing those 3D. Suppose you name a slide projector. Then the negative has to be prepared for such a display, and I for one know no simple way (just shoving the negative frame in does not work: its contrast is too low and it is still a negative). You could use an LCD projector; the quoted figures were about 500:1, their linearity is not at all obvious, and that is only about 2.7D anyway. So it is worth keeping in mind that storing a 3D scene (for posterity) on a negative is one thing (and it does that well, better than slide), while simply showing it is another. And if we cannot easily pull all the information out of the black box, even though we know it was put there, that does not mean it is gone. Whereas you tend to say it is.
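The 500:1 figure converts to density the same way throughout this thread: D = log10(contrast ratio). A quick check of the numbers used above:

```python
import math

def contrast_to_density(ratio):
    """Express an output device's contrast ratio as an optical density range."""
    return math.log10(ratio)

print(round(contrast_to_density(500), 1))      # 2.7 -- the LCD projector figure
print(round(contrast_to_density(2 ** 10), 1))  # 3.0 -- a 10-stop (3D) scene
```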
quote:
However great the scanner's bit depth, it does not solve the problem... after all, that incoming D is measured very simply: by light. Each incoming stop is a 2x change of light at the sensor, and if there is no change in density, there is no change in D... If the negative is 2D, then density is density: you can't pull more out of it, and about 7 bits suffice to convey it.
Wait a second, wait a second. An ANALOG brightness change of even just 2x can be expanded into 8 bits (or 10, 12, 14...: between one level and twice that level there are intermediate values, 1.00, 1.01, 1.02, ... 2.00). Scanner bit depth has nothing to do with it; this was discussed at length before, with your participation too. And the fact that some CCD scanners are noisy is simply a limitation of that stage; after all, there are also PMT scanners.
quote:
Now about the boundaries. Smart metering (or the photographer's experience) lets you avoid blowing the highlights, and thereby fixes the boundary at the highlight end. After that it's simple...
...and lose the shadows, or distort the tonal rendition with the nonlinear part of the curve: you really should finish the sentence. It is strange in general: the negative does the same thing as the slide, only better (with more latitude), except that the information in it is "somewhat archived", and, I emphasize, archived without loss; getting at it is only slightly more complicated than with slide (complicated mainly by the mask).
quote:
It is the same as starting with a 16-bit linear TIFF and reducing it to 8 bits. If I save that and then try to make it 16-bit again, nothing good comes of it: the whole bottom layer of levels (details) has been cut off.
This is exactly the inaccuracy of yours I was recalling when I stressed the word ANALOG (though that too is relative, of course, down to the grain structure, but that is another topic).
quote:
With slide it is simpler: its range matches the output. As long as density keeps changing (and we are talking about useful change), there is detail. But we have to fix the upper boundary at the highlights. So the more useful levels, the more detail.
Okay, then, looking at you, I repeat once more: scan your slide in 256 colors (about 6 bits per color) and count the number of remaining details. That only shows that with a slide, too, "skillful" actions can drop/lose/fail to capture detail. It does not show that the negative "records a lot, but as soon as you display it, everything is lost". You just need an extraction method suited to a negative rather than to a slide. By the way, to be completely honest, slide contrast is slightly overstated relative to reality, and before viewing (or scanning) it ought to be lowered a little. Though that probably depends on the film.
 

12.11.2002 12:58:00

quote:
Walk through the levels. The ADC is just as linear: each stop down is a halving. If the negative is 2D, then density is density: you can't pull more out of it, and about 7 bits suffice to convey it. Raising the bit depth and level count adds nothing: the same detail still lands on one level, or you just get continuous noise.

The mismatch between the negative's dynamic range and the scanner's is solved very simply by correct exposure, i.e. by using not the least significant bits of the scanner (as you propose) but the most significant ones. Provided, of course, that the scanner's noise level lets you pull out every last stop of the film in strict accordance with its characteristic curve.
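The "most significant bits" argument can be illustrated with a hypothetical linear 12-bit scanner (the function and the numbers here are an illustration, not from the thread): the darkest stop of a 7-stop signal gets a comfortable number of ADC codes when exposure places the signal at the top of the scale, and almost none when it sits a few stops lower.

```python
def codes_in_darkest_stop(signal_stops, top_level):
    """ADC codes spanning the darkest stop of a signal whose brightest
    level sits at top_level on a linear ADC scale."""
    darkest_top = top_level / 2 ** (signal_stops - 1)  # top of the darkest stop
    return int(darkest_top - darkest_top / 2)          # codes inside that stop

full_scale = 2 ** 12 - 1                          # hypothetical 12-bit scanner
print(codes_in_darkest_stop(7, full_scale))       # 31 -- exposed into the MSBs
print(codes_in_darkest_stop(7, full_scale // 8))  # 3  -- three stops too low
```

The same film thus keeps or loses its shadow stops purely according to where the scan exposure places it on the ADC's scale.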