Within the next few weeks we need to make a decision on which CCD camera(s) to purchase for the Baton Rouge Observatory (BRO). The choices are numerous and range in price from a few thousand dollars to tens of thousands of dollars. Our initial selection criteria matched the camera / scope combination to optimize the pixel image scale and to maximize the chip field of view (FOV). This study showed that we may need to purchase two different cameras. One camera should have an image scale of 1.5 to 2 arcseconds / pixel and a FOV of tens of arcminutes for deep space work, while a second camera would be optimized for planetary pictures with an image scale of ~0.5 arcseconds / pixel and a FOV of a few arcminutes.
Over the last several months I have had the opportunity to use the LSU Department of Physics and Astronomy Meade Pictor 416 CCD camera with very mixed success. This experience has convinced me that our camera selection criteria must be expanded to include parameters that reflect how we will actually use the camera under our relatively poor seeing conditions as well as more subjective parameters such as company reputation and service.
In this report I examine a handful of potential cameras from two firms, Apogee and Santa Barbara Instrument Group (SBIG), which have a reputation for producing high quality products and with which I have had a very positive interaction. The comparison includes not only the relative image scale and FOV, but also provides calculated exposure times that take into account the quantum efficiency, camera noise characteristics and seeing conditions. Also included is a brief explanation of the methodology used in the comparison as well as particular references that I have found useful.
The publications used here include magazine articles as well as online publications. These are all recommended reading.
III. CCD CAMERA PARAMETERS
The eight CCD cameras considered in this report are listed in Table 1. The distinguishing characteristics for each camera include the pixel size in microns in both the X and Y dimensions, the number of pixels in the X and Y dimensions, the readout noise in electrons per pixel and the dark current in picoamperes per cm². For a given telescope the pixel size sets the image scale, the number of pixels determines the chip FOV and the noise characteristics define the sensitivity of the device. The selected devices cover a broad range of capability and represent what is currently available on the commercial market.
|Table 1: CCD Cameras and Characteristics|
|Camera||CCD Chip||Pixel Size (µm)||Pixel Number||Readout Noise (e- / pixel)||Dark Current (pA / cm²)|
|Apogee AP7||SITe SI502A||24.00 x 24.00||512 x 512||15||89.1|
|Apogee AP8||SITe SI003A||24.00 x 24.00||1024 x 1024||15||89.1|
A. Image Scale and Field of View
The pixel image scale and chip FOV for the various cameras depend upon the telescope characteristics. In our case the BRO telescope will have a 20" diameter primary mirror and an f-ratio of 8.1. The f-ratio can be adjusted somewhat by introducing a focal reducer (to decrease the f-ratio) or a barlow (to increase the f-ratio) into the light path. For now, however, let us assume that we will not do that. The image scale (IS) in arcseconds can then be calculated from,
IS = (8.12 PS) / (DM f)    (1)
where PS is the pixel size in microns, DM is the primary mirror diameter in inches and f is the telescope f-ratio. The FOV in arcminutes for the chip is then determined by multiplying the image scale by the number of pixels and dividing by 60. Table 2 shows the X and Y image scale and FOV for each of the CCD cameras as if it were used with the BRO telescope.
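As a quick numerical check, equation (1) and the FOV rule can be sketched in a few lines of Python, using the BRO telescope (20" primary, f/8.1) and the AP7 chip (24 µm pixels, 512 x 512) from Table 1:

```python
# Image scale and field of view for a given telescope / chip combination.

def image_scale(pixel_size_um, mirror_diam_in, f_ratio):
    """Image scale in arcseconds per pixel, equation (1)."""
    return (8.12 * pixel_size_um) / (mirror_diam_in * f_ratio)

def field_of_view(scale_arcsec, n_pixels):
    """Chip field of view along one axis, in arcminutes."""
    return scale_arcsec * n_pixels / 60.0

# Apogee AP7 on the BRO telescope (values from Table 1 and the text)
scale = image_scale(24.0, 20.0, 8.1)
fov = field_of_view(scale, 512)
print(f"{scale:.2f} arcsec/pixel, {fov:.1f} arcmin")
```

This reproduces the AP7 numbers used later in the report: about 1.2 arcseconds per pixel and a FOV of roughly 10 arcminutes.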
|Table 2: Camera Image scale and Field of View|
|Camera||Image Scale (arcseconds)||Field of View (arcminutes)||Pixels per Star|
For exposures longer than a few seconds a star image is smeared by atmospheric distortion to a diameter of several arcseconds. For typical-to-good conditions at the BRO, I would guess that seeing would broaden star diameters to about 4 arcseconds. The last two columns of the table indicate the number of pixels that would cover such a star image. Ideally, one would want about two pixels to cover a star image; anything more (i.e., oversampling) degrades the sensitivity of the camera. From the table we see that the AP6, AP7 and AP8 somewhat oversample, and the remaining cameras significantly oversample.
This problem can be controlled somewhat by “binning” the image, where the charge signals in adjacent pixels are summed prior to being digitized, in effect mimicking a CCD with larger pixels. For example, if we were to do 2 x 2 binning with the AP6 the effective pixel size would be 48 µm x 48 µm, the image scale in Table 2 would increase to 2.4" x 2.4" and the star image sampling would be about 1.6 pixels. Now we are somewhat undersampling! In this case we have optimized the sensitivity of the camera, but we are now losing resolution. (You can't win in this life!) Notice that binning does not change the camera FOV; the AP6, AP8 and AP10 still have a FOV in excess of 20 arcminutes. One could potentially get closer to the sampling ideal for long exposures by using 2 x 2 binning with the AP10.
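The binning arithmetic is simple enough to verify directly; the 1.2 arcsecond/pixel AP6 scale and the 4 arcsecond seeing disk are the values quoted in the text:

```python
# Effect of 2 x 2 binning on the AP6 image scale and star sampling.
native_scale = 1.2            # arcsec / pixel (AP6, Table 2)
seeing_diameter = 4.0         # arcsec (assumed BRO seeing disk)

binned_scale = native_scale * 2              # effective scale with 2 x 2 binning
sampling = seeing_diameter / binned_scale    # pixels covering a star image
print(binned_scale, round(sampling, 2))
```

The sampling drops from about 3.3 pixels to about 1.7, just under the two-pixel ideal, while the FOV is untouched because the chip still covers the same area of sky.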
The star image widens for long exposures mainly due to atmospheric scintillation. For short exposures (< 0.5 seconds) there is a chance to “freeze” the atmospheric conditions and momentarily have significantly improved “seeing.” Due to the greater sensitivity of a CCD over film, a planetary image taken with a CCD generally requires very short exposure times and thus can take advantage of small, sub-arcsecond image scales.
On the basis of these arguments we have previously come to the conclusion that we may need to purchase two different CCD cameras for the observatory. One camera would be devoted to deep space (long exposure) imaging requiring a large image scale and FOV. From Table 2 possible candidates for this class include the AP6, AP7, and AP8. The second camera would be dedicated to planetary work, requiring sub-arcsecond image scale and a FOV of only a few minutes. In this case, the SBIG PixCel and ST-4X appear to be ideal candidates. Notice, however, that a camera like the AP10 could potentially do double duty. In normal mode it would have an image scale suitable for planets, while with 2 x 2 binning the AP10 could be used for deep space imaging.
B. Noise, Dark Current and Dynamic Range
There are two major noise sources in a CCD camera which affect performance. The first type, referred to as Readout Noise in Table 1, is the noise introduced by the circuitry responsible for amplifying and digitizing the charge deposited in each pixel by the collected photons. This noise is not a function of time but is introduced each time a charge signal is digitized. For the cameras listed in Table 1, the readout noise ranges from 15 to 30 electrons per pixel. This background cannot be independently measured, as it is intrinsic to the electronics that measure the signal. The manufacturer's specification (the values in Table 1) could be subtracted from the signal, but the fluctuation contributes to the signal uncertainty and cannot be removed. The only way to reduce this noise component is to use a camera with low noise electronics.
The second major noise source is the "dark current," the thermally generated charge produced in each CCD pixel in the absence of light as a function of temperature and time. The dark current can be measured by taking an image with the camera shutter closed (i.e., a “dark frame”) and then subtracted from the “light frame” (i.e., shutter open). Once again, however, the fluctuation in the dark current contributes noise to the light intensity uncertainty.
The dark current level and the consequent noise component can be reduced by cooling the CCD to temperatures below 0 °C. Typically, for every 5-6 °C of cooling the dark current is cut in half. For example, if we assume that the "doubling temperature" is 6 °C, then a 30 °C drop in temperature will reduce the dark current by a factor of 32. As if to discourage intercomparison of cameras, most companies specify the dark current performance in either electrons per pixel (e- / pixel) or picoamperes per unit area (pA / cm²) at different temperatures (TS) ranging from -10 °C to 25 °C. The dark current values listed in Table 1 were all scaled to a temperature of 25 °C assuming a doubling temperature of 6 °C and, where necessary, converting the units to pA / cm². The unit conversion from pA / cm² (IpA) to e- / pixel (Ie) used equation (2),
Ie = 0.0624 IpA PSX PSY    (2)
where PSX and PSY are the X and Y dimensions of each pixel in microns and the temperature scaling factor (FT) is determined from equation (3),
FT = 2^(ΔT / 6)    (3)
where in this case ΔT = 25 °C - TS.
All of the cameras listed in Table 1 are equipped with thermoelectric coolers to chill the chip and, consequently, reduce the dark current noise. The coldest chip temperature possible, however, depends upon the ambient temperature (TA) and the maximum temperature drop possible with the particular cooler (TC). Not all coolers are created equal, and companies specify TC from 30 °C up to 55 °C. Thus, the minimum CCD temperature (Tmin) can vary from camera to camera. In Table 3 the camera cooler capability is listed along with the minimum CCD temperature, assuming TA = 27 °C (i.e., a warm Baton Rouge evening), and the dark current expected at the minimum temperature. For these calculations, the temperature scaling factor (3) was determined with ΔT = (TA - TC) - 25 °C.
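Equations (2) and (3) and the cooler-limited chip temperature can be sketched as follows. The AP7 numbers (89.1 pA/cm² at 25 °C, 24 µm pixels) come from Table 1; the 45 °C cooler capability used here is an assumed value for illustration only, since the per-camera capabilities live in Table 3:

```python
# Dark current unit conversion and temperature scaling, equations (2) and (3).

def dark_current_e(i_pa, psx_um, psy_um):
    """Equation (2): convert pA/cm^2 to electrons per pixel per second."""
    return 0.0624 * i_pa * psx_um * psy_um

def temperature_factor(delta_t_c, doubling_c=6.0):
    """Equation (3): dark-current scaling for a temperature change delta_t_c."""
    return 2.0 ** (delta_t_c / doubling_c)

i25 = dark_current_e(89.1, 24.0, 24.0)   # ~3200 e-/pixel/s at 25 C (AP7)
t_ambient, t_cooler = 27.0, 45.0         # TA from the text; TC assumed
t_min = t_ambient - t_cooler             # minimum chip temperature, -18 C
i_min = i25 * temperature_factor(t_min - 25.0)
print(f"{i25:.0f} e-/pixel/s at 25 C -> {i_min:.1f} e-/pixel/s at {t_min:.0f} C")
```

With a 6 °C doubling temperature, the 43 °C drop reduces the dark current by a factor of about 140, which is why the cooler capability matters as much as the room-temperature specification.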
|Table 3: Camera Cooling and Dark Current (e- / pixel sec)|
Table 3 shows that with the camera cooler taken into account the relative dark current performance is somewhat different than that when the same temperature is used for the comparison as in Table 1. The AP6, AP7 and AP8 all have about equal performance while the AM13 and ST-4X generate relatively more thermal noise per second. Note that as the dark current is a function of the ambient temperature it is probably important that we monitor the temperature in the observatory as well as on the chip and take dark frames often during an observing session.
Also listed in Table 3, as the last column, is the “Well Depth” or the charge (in electrons) at which each pixel will saturate. The ratio of the Well Depth to the noise level (usually the Readout Noise) is the dynamic range of the camera. This characteristic reflects the ability of a camera to quantitatively detect very dim and very bright pixels within a single image. A large dynamic range helps reduce “blooming” (the smearing of charge from bright pixels into adjacent pixels) and also mitigates the small-field problem for photometry by allowing a greater difference in magnitude between the target star and comparison star. This increases the odds of finding a suitable comparison star in the field. In this context, the AP7 and AP8 have the largest dynamic range for the cameras being compared.
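The dynamic range figure of merit is just the well depth divided by the readout noise. A short sketch, using an assumed well depth of 300,000 electrons purely for illustration (the per-camera values are in the last column of Table 3):

```python
import math

# Dynamic range = well depth / readout noise, and the ADC resolution
# needed to cover it.
well_depth = 300_000      # electrons (assumed, illustrative value)
readout_noise = 15        # electrons per pixel (AP7/AP8, Table 1)

dynamic_range = well_depth / readout_noise    # 20000:1
adc_bits_needed = math.log2(dynamic_range)    # ~14.3 bits
print(f"{dynamic_range:.0f}:1, ~{adc_bits_needed:.1f} bits")
```

On these assumed numbers a 16-bit ADC comfortably covers the chip's dynamic range while a 12- or 14-bit converter would not, which is the point made in section V about the ADCs on the AP6 and AP10.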
C. Quantum Efficiency
The quantum efficiency (QE) of a sensor provides a measure of how well photons are converted into electronic charge. Within the visible spectrum the photon to electron conversion factor is less than unity and varies as a function of wavelength. At a given wavelength, however, the creation of charge from the incident light is intrinsically linear. In general, CCDs are sensitive in the near infrared (800 to 1000 nm), which is the reason why TV remote control signals show up well on video camcorders and why telescopes need to be equipped with infrared absorbing surfaces to prevent light at these wavelengths from scattering into the camera.
The method of manufacture can greatly affect the QE of the CCD. Most CCD sensors are “front illuminated,” meaning that incoming light must pass through the polysilicon gate structure as well as the silicon dioxide layer before reaching the photosensitive layer. These structures and layers absorb the light and limit the sensitivity of a front illuminated CCD to wavelengths above 400 nm. Typically, a front illuminated sensor will have a peak QE of 35% to 40% in the red to near infrared and a QE of at most a few percent for blue wavelengths near 400 nm. Alternatively, the CCD sensor can be thinned down to 15 - 20 microns and turned around so that the incoming light is focused on the back side. As the light is now directly absorbed in the CCD photosensitive layer, the QE of such “back illuminated” sensors is in the range 70% to 85% over most of the visible spectrum, with significantly enhanced sensitivity to blue and ultraviolet. Table 4 shows the typical QE as a function of wavelength for both front and back illuminated sensors, the efficiency relative to red wavelengths and the expected telescope mirror reflectivity.
|Table 4: Comparison of QE between CCD Sensor Types|
|Color||Wavelength||Front Illuminated||Back Illuminated||Mirror Reflectivity|
From Table 4 we see that the advantages of a back illuminated sensor are quite dramatic. First, in the red and yellow wavelengths the QE is more than double that for a front illuminated sensor. This means that more of the photons collected by the telescope will be converted to charge in the CCD. In turn, this translates to shorter exposure times for a given object and, consequently, a smaller dark current noise component. Second, across the visible wavelength range the QE for a back illuminated sensor drops by only ~20% while the corresponding drop for a front illuminated device is closer to 80%. This means that images taken with a back illuminated CCD at different wavelengths will have approximately equal length exposure times and approximately equal noise contributions allowing quantitative comparisons between the separate images to be more easily accomplished. Further, tri-color pictures taken with red, green and blue filters can be done much faster with a back illuminated CCD than with a front illuminated sensor. For the cameras listed in Tables 1, 2 and 3, the AP7 and AP8 use a back illuminated sensor while the remainder use front illuminated CCD chips.
IV. A COMPARISON BASED UPON PHOTOMETRIC CRITERIA
In addition to providing the public with an exceptional view of the heavens we expect the BRO to function as a teaching laboratory where quantitative astronomy techniques can be learned and practiced. One such laboratory exercise which requires a broad range of skills and which can be applied to a number of different studies would be to make photometric observations of a variable star. Consequently, it would be useful to compare the expected camera performance for photometric measurements under conditions estimated to occur at the BRO.
A. Method for Estimating Exposure Times
In a recent series of CCD Astronomy articles [5,6] Paul M. Rybski outlined a procedure for estimating CCD camera photometric performance. This method is based upon calculating the uncertainty in determining a stellar magnitude for a given exposure time by taking into account noise sources such as sky background, dark current and readout noise. By then setting the stellar magnitude to be measured and estimating a level for the sky background the exposure time necessary to obtain a given uncertainty can be determined for each camera. Further, the exposure time for different light wavelengths (red, green, blue) can likewise be determined by taking into account the camera quantum efficiency and telescope mirror reflectivity.
Similar to many other measurements, the number of “photon” generated electrons seen by the CCD (M) includes the signal electrons from the star to be measured (S) plus the background electrons due to sky background and camera noise (B). These backgrounds can be subtracted so that S = M - B, but the uncertainty in the signal includes the fluctuations in both the measurement and the background. Using a straightforward propagation of errors, the relative uncertainty in the signal (U) is given by equation (4),

U = √(S + 2B) / S    (4)
For an exposure time t in seconds, S = st where s is the number of star-generated electrons per second and B is given by equation 5,
B = n(b + d)t + nmR²,    (5)
where n is the number of CCD pixels involved in collecting all of the star photons, b is the number of sky-background generated electrons per pixel per second, d is the number of dark-current background electrons per pixel per second, m is the number of separate integrations added together to create the final frame, and R is the readout noise measured in electrons per pixel. By squaring both sides of (4) and substituting for S and B equation 6 is obtained.
U² = [st + 2n((b + d)t + mR²)] / (st)²    (6)
Equation 6 can be put in the form of a quadratic equation in t (7), which can be solved for t by taking the positive root (8), where x, y and z are given, respectively, by (9), (10) and (11).

U² s² t² - [s + 2n(b + d)] t - 2nmR² = 0    (7)

t = [-y + √(y² - 4xz)] / (2x)    (8)

x = U² s²    (9)

y = -[s + 2n(b + d)]    (10)

z = -2nmR²    (11)
For these equations, R is from Table 1 while d is taken from Table 3 and we will assume that we are using only one image so m is equal to 1. Thus, the trick is coming up with reasonable values for n, s and b.
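Putting equations (7) through (11) together, a small solver might look like the sketch below. The values of s, b, n and R are roughly the front-illuminated, yellow-light numbers worked out in section IV.B; the dark-current rate d = 1 e-/pixel/s is an assumed placeholder, since the actual values are camera-specific (Table 3):

```python
import math

def exposure_time(s, b, d, n, R, m=1, U=0.10):
    """Solve equations (7)-(11) for the exposure time t, in seconds."""
    x = (U * s) ** 2                    # equation (9)
    y = -(s + 2.0 * n * (b + d))        # equation (10)
    z = -2.0 * n * m * R ** 2           # equation (11)
    # positive root of x t^2 + y t + z = 0, equation (8)
    return (-y + math.sqrt(y * y - 4.0 * x * z)) / (2.0 * x)

# s = 86 e-/s (15th mag star), b = 20 e-/pixel/s (sky), n = 1156 pixels,
# R = 15 e-/pixel; d = 1 e-/pixel/s is an assumed placeholder.
t = exposure_time(s=86, b=20, d=1, n=1156, R=15)
print(f"t = {t / 60:.1f} minutes for a 10% measurement")
```

Plugging the solved t back into equation (6) recovers U = 0.10, which is a useful sanity check on the quadratic bookkeeping.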
B. The Conditions of the Comparison
To determine the values of s and b defined above, one must choose the minimum magnitude for stars in the photometric study, estimate a value for the intrinsic sky brightness and then propagate the light through the atmosphere and telescope optics taking into account the transmission efficiencies and the conversion of light into electrons by the CCD. For this comparison, I have chosen magnitude 15 to be the study limit and magnitude 17 per square arcsecond for the sky brightness. These numbers are somewhat arbitrary and, in particular, the sky brightness estimate is a guess at best. In his articles, Rybski states that magnitude 18.5 is the sky brightness for an average moonless, rural sky about 30 miles from a major city. This might be comparable to what could be obtained at the old Clinton, LA observatory site, but the best nights at the BRO will almost certainly be worse.
I have also assumed that the star emission and the sky brightness are flat across the visible spectrum and that outside the Earth's atmosphere a magnitude 0 star generates 1000 photons per second per square centimeter per angstrom. Since the star brightness changes by a factor of 100 for every 5 magnitudes, we can determine the top-of-atmosphere light flux (K) from equation (12),
K = 10^[(15 - 2M) / 5],    (12)
where M in this case is the magnitude. Thus, for the star under study K = 10⁻³ photons / (sec cm² angstrom) and for the sky background K = 1.6 x 10⁻⁴ photons / (sec cm² angstrom arcsecond²).
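Equation (12) is easy to sanity-check in code against the three magnitudes used in this comparison:

```python
# Top-of-atmosphere photon flux from apparent magnitude, equation (12).

def photon_flux(magnitude):
    """Flux in photons / (sec cm^2 angstrom), normalized to 1000 at mag 0."""
    return 10.0 ** ((15.0 - 2.0 * magnitude) / 5.0)

print(photon_flux(0))    # the magnitude-0 normalization, 1000
print(photon_flux(15))   # the study-limit star
print(photon_flux(17))   # the assumed sky brightness, per arcsec^2
```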
To complete the estimates for s and b we need to take into account the various efficiencies from the top of the atmosphere to electrons in the CCD pixels. This is represented by equation 13,
s = AT RM B QE TATM K,    (13)
where AT is the clear aperture of the telescope in cm2, RM is the reflectivity of the telescope mirrors at a given wavelength, B is the filter bandwidth in angstroms, QE is the quantum efficiency of the CCD to convert photons of a given wavelength to electrons, and TATM is the light transmission efficiency through the atmosphere. The BRO telescope has a primary mirror diameter of 20" and a secondary mirror diameter of 7.25" which yields a clear aperture of AT = 1760.5 cm2. The reflectivity off both mirror surfaces is given in Table 4 along with the quantum efficiency for both front and back illuminated CCDs as a function of wavelength. Finally, I assumed that the atmospheric transmission is 75%, independent of wavelength, and that we have filters which are 100% efficient across a bandwidth of 220 angstroms.
Under these conditions, equation 13 indicates that in the yellow wavelength a front illuminated CCD will have s = 86 electrons / second from a 15th magnitude star while a back illuminated CCD will have s = 205 electrons / second. The value for b is obtained in a similar fashion, but now the image scale of the camera (Table 2) must be taken into account to obtain the number of background electrons per pixel per second. Thus, for a camera with an image scale of 1.2 arcseconds / pixel the sky background contributes 20 electrons / (second pixel) in a front illuminated CCD and 46 electrons / (second pixel) in a camera with a back illuminated chip.
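Equation (13) with these inputs can be reproduced directly. The two-surface mirror reflectivity (0.88 per surface) and the 38% QE used below are assumed stand-ins for the Table 4 entries, chosen to reproduce the front-illuminated yellow-light estimate:

```python
# Electrons per second from a source, equation (13).

def signal_rate(aperture_cm2, mirror_refl, bandwidth_A, qe, t_atm, k):
    """Electrons per second collected in the CCD from a source of flux k."""
    return aperture_cm2 * mirror_refl * bandwidth_A * qe * t_atm * k

s = signal_rate(
    aperture_cm2=1760.5,      # BRO clear aperture, from the text
    mirror_refl=0.88 ** 2,    # two mirror surfaces (assumed reflectivity)
    bandwidth_A=220,          # filter bandwidth, from the text
    qe=0.38,                  # front illuminated QE (assumed)
    t_atm=0.75,               # atmospheric transmission, from the text
    k=1e-3,                   # photon flux for magnitude 15, equation (12)
)
print(f"s = {s:.0f} electrons / second")  # close to the 86 e-/s quoted above
```

Swapping in a back-illuminated QE of roughly 0.85 for the 0.38 here reproduces the factor-of-2.4 jump to ~205 electrons per second quoted in the text.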
Finally, the number of pixels involved in the photometric measurement (n) must be estimated. This was done by assuming a “seeing conditions” star diameter where half the photons are contained within the diameter while the other half are spread over an area 10 times larger. As mentioned in section III.A typical to good seeing conditions at the BRO are assumed to produce star diameters of 4 arcseconds and for a 1.2 arcsecond image scale camera n would be 34 x 34 = 1156 pixels.
Now it may become apparent why oversampling star images with a CCD camera should be avoided. First, the resolution of a long exposure is determined by the seeing conditions, not necessarily by the image scale, and the number of “signal” electrons from the star is fixed no matter how many pixels are involved. However, the sky background and, in particular, the readout noise contribute to the measurement uncertainty in proportion to the number of pixels needed to collect all of the star photons. Thus, a camera that significantly oversamples will have a poorer signal-to-noise ratio than one which is properly matched to the telescope and seeing conditions.
C. The Calculated Exposure Times
Using equations (8) to (11), the conditions and assumptions discussed above and the camera parameters listed in section III, the estimated exposure times necessary to determine a stellar magnitude to 10% uncertainty are shown in Table 5 for each camera.
|Table 5: Estimated Exposure Times for a 10% Photometric Measurement|
The first two columns of the table show the required exposure time in minutes for wavelengths around 6000 angstroms and the relative factor difference between the cameras with respect to the AP7 and AP8. These times would be relevant to taking “gray scale” images or to a photometric exercise at a single wavelength. The difference in performance between the back illuminated CCD cameras (i.e. the AP7 and AP8) and the remaining cameras which all use front illuminated devices is quite striking. At least three pictures could be taken with the AP7 or AP8 in the time it takes any of the other cameras to take a single image. This could be particularly important if there are a number of “stacked up” observatory users or if several objects are to be observed during a given session.
The remaining columns in the table are the estimated exposure times in the red, green and blue wavelengths, the total time (in hours) to complete all three images and the relative total exposure factor between cameras. Here the difference between the AP7, AP8 and the other cameras is astonishing! Now the relative difference is on the order of a factor of 10 to 20. To take a tri-color image the AP7 would need about 20 minutes, while the AM13 would need close to 7 hours. This tremendous difference in relative performance is quite understandable. For the back illuminated cameras the quantum efficiency is relatively uniform across the visible spectrum (see Table 4) so that the exposure time varies from ~5 minutes for red wavelengths to ~7 minutes in blue. The remaining cameras use front illuminated chips where the blue exposure time exceeds 2 hours due to the very low QE in this wavelength.
To make matters worse for the front illuminated cameras, the times in Table 5 do not include the dark frame exposures needed to subtract the dark current background from the images. Including the dark frame exposures typically doubles the time necessary to take an image. Thus, with the AM13 one would need an “all nighter” to take a tri-color picture. However, as shown in section III.B, the CCD dark current is a function of the ambient temperature, and this temperature will likely vary throughout the night. For accurate background subtraction one would therefore want to interleave dark frames with the light frames, and the exposure time per frame should be shorter than the timescale on which the ambient temperature changes. The long blue exposures would, therefore, need to be composed of multiple images. In this case, m in equation 6 would be greater than one, and for m greater than about 3 or 4 the readout noise would dominate the photometric measurement uncertainty and longer exposure times would not improve the accuracy.
Finally, one might consider that the BRO telescope system is specified to track an object over a 20 minute time interval to less than 1 arcsecond error without the need for any guiding. For the long exposures required by most of the cameras guiding may become necessary and this would defeat our intent to fully automate the BRO. However, with the AP7 or AP8 a tri-color image could be taken in less than 20 minutes, within the telescope guiding specification, and, thus, could be well suited to unattended operation.
V. THE CAMERA TRADE-OFFS AND RECOMMENDATIONS
For planetary work the SBIG PixCel or ST-4X may be the cameras of choice. Both have the small image scale and FOV necessary for taking planetary images, with the PixCel perhaps somewhat better matched in these parameters to the BRO telescope. Planets are generally very bright and require short (< 1 second) exposures, which implies that dark current noise is less important than readout noise. In this regard the ST-4X has a lower readout noise than the PixCel.
Another problem associated with bright planetary images is that exposure times less than a tenth or even a hundredth of a second may be needed to keep the image from saturating. For these short exposures one either needs to use neutral density filters to cut down on the light or have a camera with “electronic shutter” capability. In such a camera a portion of the CCD chip is shielded under a metal baffle and is used as an image buffer. The image from the unshielded section of the CCD can be very quickly moved to the buffer where it is then read out at normal speed. This provides for very fast “shutter” speeds. Neither the PixCel nor the ST-4X is claimed to have such an electronic shutter. The ST-4X does have a half-frame mode which uses one half of the chip as an image buffer for the other half. This is similar to the electronic shutter, but in this case the buffer is unshielded and, thus, would be light sensitive during the readout. The exact exposure capabilities of the PixCel still need to be determined.
For these reasons it is unclear whether either the PixCel or the ST-4X is suitably optimized for the BRO planetary work. More work is needed to decide this issue or to locate a more appropriate imager. Usually, these small chip cameras cost less than $2,000, which makes them relatively easy to fit into the equipment budget later in the project.
For the deep space camera we can rule out the AM13 on the basis that it has the highest dark current at the nominal operating temperature, a low dynamic range, and a relatively small FOV. While the SBIG ST-8 has relatively good noise characteristics, it also suffers from a low dynamic range, a FOV that is even smaller than the AM13 and the smallest image scale of all the cameras which would result in images that are significantly oversampled. Thus, the ST-8 can be ruled out as well.
Of the remaining cameras, the AP10 has the largest FOV (24 arcminutes by 24 arcminutes) but the image scale would result in pictures which are oversampled. This could be solved by using the AP10 in a 2 x 2 binning mode which would bring the image scale up to 1.4 arcseconds without changing the FOV. In this mode the AP10 would have noise characteristics that are similar to the AP6 which has an image scale of 1.2 arcseconds and a slightly smaller, but comparable FOV. The AP10 does have a better dynamic range relative to the AP6, but comes equipped with only a 14 bit ADC. Thus, the AP10 ADC would potentially saturate if the camera were put into 2 x 2 binning mode. The AP6, on the other hand, has a 12 bit ADC mode which does not appear to cover the full dynamic range and a 16 bit ADC mode which would be useful for 2 x 2 bins but would be overkill for normal operation.
For these reasons the AP10 and AP6 appear to be roughly equivalent, but the AP10 might have a slight advantage in that for 1 x 1 binning its image scale is roughly what is needed for planetary observations. One might, therefore, be able to use the same camera for both planetary and deep space imaging by merely switching the binning. However, the AP10 appears to be equipped with a normal mechanical shutter which would not be able to handle the short exposure times necessary for planetary pictures. A neutral density filter could be used to decrease the light intensity and this filter could be mounted on a filter wheel to be able to move it in and out of the camera FOV.
Neither the AP6 nor the AP10, however, can compare in performance to the AP7 and AP8. Both of these cameras use back illuminated CCD chips and have low noise characteristics, a reasonable image scale, the largest dynamic range of any of the cameras and a 16 bit ADC to match the dynamic range. The back illuminated chips have very high quantum efficiencies across the visible wavelengths and need a factor of 3 less time than the AP6 or AP10 to take a single picture at mid wavelengths or a factor of 10 less time to take a tri-color image.
The difference between the AP7 and AP8 is field of view. The AP7 uses a 512 x 512 pixel chip and, consequently, has a FOV of 10 arcminutes by 10 arcminutes, while the AP8 uses a 1024 x 1024 chip and has a FOV of 20 arcminutes by 20 arcminutes. One should note that even though the AP7 has a smaller FOV than the AP6 or AP10, the shorter required exposure time allows the AP7 to mosaic the same or larger sky area during the time it would take the AP10 (or AP6) to take one image. The ideal deep space camera, however, appears to be the AP8. It has all the characteristics of the AP7 as well as the large field of the AP6. With the AP8 we could image a 1 degree by 1 degree sky area in less than 50 minutes! The AP8 is the camera I would recommend that we purchase for the BRO, if we can afford it.
|Table 6: Recommended Deep Space Cameras and Cost|
|Camera||Grade 2||Grade 1|
Table 6 lists the cameras I recommend for deep space work, in order of priority, along with their pricing for two different grades of CCD chips. (One should note that another $3000 to $4000 must be added to these prices for the purchase of a high quality filter wheel.) The two grades refer to the number of defects in the chip. Grade 1 is generally defect free, has lower yields during fabrication and has a higher cost. What constitutes a Grade 2 chip can vary between manufacturers, but usually there are no column or row defects and pixel defects are of lower sensitivity rather than completely dead. Generally, Table 6 yields further evidence to support the truism that “you get what you pay for.” The most expensive camera is the Grade 1 AP8 and the least expensive (with the exception of the AP7) is the Grade 2 AP6. The camera which does not fit this rule is the AP7, which only comes in a Grade 1 version and costs less than $10,000. Thus, the wide field AP8 is a factor of 2.7 to 3.2 more expensive depending upon the grade of the chip. While I do recommend the AP8, I also believe that we will need to think very carefully about whether this extra field of view is worth the expense.
Last updated by Frederick J. Barnett on Thursday, January 20, 2005 11:52:02 AM