Physics Basis of AFIT Sensor & Scene Emulation Tool (ASSET) for Remote Sensing Applications

Fernando Fernandez
The Air Force Institute of Technology, Dept. of Elec. & Comp. Eng., WPAFB, OH 45433
Fernando.Fernandez@afit.edu

Abstract. Modeling of electro-optical and infrared (EO/IR) sensors has been of high importance over the past 30 years, especially for the military. Modeling facilitates understanding of a system's behavior without expensive and time-consuming testing of the actual system in the real world, where truth data is required but is often impractical to obtain with real sensors. Additionally, testing a system's behavior by experimentation may not be viable when the system's hardware is difficult to obtain. The AFIT Sensor and Scene Emulation Tool (ASSET) is a physics-based model used to emulate wide field of view (WFOV) EO/IR sensors. However, to truly represent a real WFOV sensor, the data used must accurately emulate real scene and sensor characteristics.
Also, modeling of EO/IR systems must provide realistic scene conditions and control over target, background, atmosphere, and sensor artifacts. The intent of this paper is to provide a detailed description and a comprehensive overview of ASSET.

Introduction

Currently available high-fidelity modeling tools are generally intended for small fields of view (FOV) [1], [2]. Even though some simulators may be used to cover larger areas (e.g., WFOV), they can become computationally expensive. ASSET was designed to emulate WFOV sensors and is therefore more computationally efficient at generating synthetic data sets with realistic radiometric, noise, and sensor properties representative of a broad range of scenes and sensors operating in the visible through thermal infrared wavelengths.
The purpose of this paper is twofold: to provide a detailed description and comprehensive overview of ASSET, giving an end-to-end understanding of the process by which photons leave a source (sun and background), reach an overhead sensor, and are ultimately converted to digitized signal data; and to develop an understanding of the EO/IR remote sensing principles implemented in the AFIT Sensor & Scene Emulation Tool (ASSET). The following sections describe each step taken to go from photons to counts in order to model radiometrically realistic scenes and sensor response data. The model begins with a high-resolution source image.
Characteristics of the sensor, scene, source, viewing geometry, and emulated noise are specified with an ASCII text configuration file containing all user-provided parameters and files. To start, ASSET takes the high-resolution image and the configuration file as input. Next, calibration is applied, which results in top-of-atmosphere radiance. With the high-resolution image and all input parameters provided, source radiance is added to the scene, including reflected radiance, thermally emitted radiance, and path radiance. Once all scene parameters are included in the source image, the image is resampled according to the user-specified oversample factor. Taking the inverse Fourier transform of the product of the frequency responses of the source image, optics, and detector results in an oversampled representation of the source image. The image is then integrated spatially by the detectors and simultaneously sampled at the detector array, which reduces the oversampled image to one value per detector pixel.
After sampling at the detector array with all scene content incorporated, fixed pattern noise (FPN) is added to the detector array. ASSET then adds random samples of shot noise drawn from an approximation of the Poisson distribution with a mean proportional to the sum of the photo-generated signal and bias current. The noise components associated with the electronics, such as thermal, read, and flicker noise, are then added. Finally, the detector frame, now in units of electrons and including most noise sources, is quantized by dividing the signal by the conversion gain. The resulting count values are rounded down, introducing quantization noise. Each step in this end-to-end process must be understood to model realistic scenes and sensor response data.

ASSET Model Description

Radiometry

The fundamental radiometric quantities are shown in Table 1.
Radiance is the radiant flux emitted or reflected by a surface per unit solid angle per unit projected area, and spectral radiance is the radiance per unit wavelength; radiance implies integration over all wavelengths. Power is the amount of energy per unit time. The amount of power a source delivers per unit solid angle is intensity. Both exitance and irradiance have units of power per unit area, but exitance is power exiting a surface while irradiance is power incident on a surface. Radiance incident on a surface can be described by three processes that occur in any material: spectral absorptance ($\alpha_\lambda$), spectral reflectivity ($\rho_\lambda$), and spectral transmissivity ($\tau_\lambda$). The sum of these three must equal one due to conservation of energy, $\alpha_\lambda + \rho_\lambda + \tau_\lambda = 1$.
Furthermore, since any material above 0 K emits radiation, thermal emission must be considered. Spectral emissivity ($\varepsilon_\lambda$) is the ratio of radiation emitted by a surface to that of a blackbody at the same temperature.

Table 1. Fundamental radiometric units
  Symbol    Quantity     Units
  $Q$       Energy       J
  $\Phi$    Flux         W
  $I$       Intensity    W/sr
  $M$       Exitance     W/m$^2$
  $E$       Irradiance   W/m$^2$
  $L$       Radiance     W/(m$^2\cdot$sr)
The conversion from watts to photons/second is accomplished using the photon energy $hc/\lambda$.

Figure 1 illustrates how reflected, emitted, and path radiance contribute to the total spectral radiance at the aperture of the sensor.
In the presence of the atmosphere, for both reflected and emitted components, spectral radiance is attenuated by the atmosphere and path radiance is added to obtain the total spectral radiance at the sensor,

$L_\lambda^{sensor} = \tau_\lambda \left( L_\lambda^{refl} + L_\lambda^{emit} \right) + L_\lambda^{path}$,   (1)

where $L_\lambda^{refl}$ is the reflected radiance, $L_\lambda^{emit}$ is the emitted radiance, $L_\lambda^{path}$ is the path radiance, and $\tau_\lambda$ is the atmospheric transmission from surface to sensor. Since the detectors emulated in ASSET respond directly to photons, the total radiance at the sensor is converted to photon flux. Multiplying the radiance at the detector by the throughput of the system $A_d\Omega_d$, where $\Omega_d$ is the solid angle subtended by the optics at the detector and $A_d$ is the area of the detector, yields flux in units of photons per second.

Figure 1. Spectral radiance contributing to total spectral radiance at the aperture

A solid angle is a 2-D angle in 3-D space subtended at the center of a sphere, illustrated in Figure 2. It is a measure of how large an object appears to an observer at the center of the sphere [3], [4]. A solid angle, $\Omega$, has units of steradians (sr) and is the angle subtended at the center of a sphere by an area on the surface of the sphere,

$\Omega = \dfrac{A}{R^2}$,   (2)

where $A$ is the area on the surface of the sphere in square meters and $R$ is the radius of the sphere in meters.
Figure 2. Solid angle (steradian) subtended by an area $A$ at radius $R$

The maximum area on a spherical surface is $4\pi R^2$; therefore, the maximum solid angle is $4\pi R^2 / R^2 = 4\pi$ sr.
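Equation (2) can be sketched directly in code. This is a minimal illustration (the function name is ours, not ASSET's):

```python
import math

def solid_angle(area_m2, radius_m):
    """Solid angle (sr) subtended by an area on a sphere of given radius, Eq. (2)."""
    return area_m2 / radius_m ** 2

# The full sphere (area 4*pi*R^2) subtends 4*pi steradians for any radius.
full_sphere = solid_angle(4.0 * math.pi * 2.0 ** 2, 2.0)
```

For a far-field system, the same formula applies with the flat target area and the range substituted for $A$ and $R$.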
In addition, in a typical far-field imaging system, $R$ is typically much larger than any dimension of $A$, and the area on the surface can be assumed to be flat [4]. Radiation reflected or emitted from the source can be redirected in various ways depending on the characteristics of the surface [5]. For mirror-like surfaces, for example, reflection occurs at the same angle to the surface normal as the incident angle but on the opposite side of the normal. For a Lambertian source, radiance is independent of direction, and emitted or reflected radiance is equal in all directions; for a Lambertian surface, radiance is related to exitance (or irradiance) by

$L = \dfrac{M}{\pi}$,   (3)

where $M$ is the power exiting the surface per unit area. ASSET frequently uses Lambertian approximations to describe the angular distribution of radiant power from the scene. A Lambertian source's intensity is proportional to the cosine of the observation angle and decreases as the angle moves away from the normal, as shown in Figure 3.
This change in intensity is compensated by an increase in the area perceived by the sensor, so that the scene appears to have constant radiance [4].

Figure 3. Lambertian scene

In ASSET, the throughput, also known as etendue, is used to get the total band-integrated photon flux falling on the detector and is given by

$\Phi = L\, A_d \Omega_d$.   (4)

Figure 4 illustrates the optical path for the case where the source image fills the detector, where $A_s$ is the area of the source, $A_o$ the area of the optics, $A_d$ the area of the detector, $\Omega_s$ the solid angle subtended by the ground, $\Omega_{so}$ and $\Omega_{do}$ the solid angles subtended by the optics from the source and detector respectively, and $\Omega_d$ the solid angle subtended by the detector. For this case we get the following throughput relationship:

$A_s \Omega_{so} = A_o \Omega_s = A_d \Omega_{do}$.   (5)

Figure 4. Optical path (source image fills the detector)

When choosing system parameters for calculation purposes, it is important to choose parameters that make calculations easier.
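The radiance-to-photon-flux conversion via the throughput, Equation (4), combined with the photon energy $hc/\lambda$ from Table 1, can be sketched as follows. This is a simplified single-wavelength illustration (the numbers and function name are ours), not ASSET's implementation:

```python
import math

H = 6.62607015e-34  # Planck constant (J*s)
C = 2.99792458e8    # speed of light in vacuum (m/s)

def photon_flux(radiance_w_m2_sr, det_area_m2, det_solid_angle_sr, wavelength_m):
    """Photon flux (photons/s) on a detector: L * A_d * Omega_d / (h*c/lambda)."""
    power_w = radiance_w_m2_sr * det_area_m2 * det_solid_angle_sr  # Eq. (4)
    photon_energy_j = H * C / wavelength_m
    return power_w / photon_energy_j

# Illustrative values: 10 W/(m^2*sr), a 20 um pixel, 0.2 sr, band center 4 um
flux = photon_flux(10.0, (20e-6) ** 2, 0.2, 4e-6)
```

Note that the flux scales linearly with radiance, which is why the $A_d\Omega_d$ product can be held fixed for a given sensor.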
Since the $A_d\Omega_d$ and $A_o\Omega_o$ products are easy to determine for any imaging system, either pair can be used for calculation purposes [3]. In ASSET the $A_d\Omega_d$ product is used, as the detector size and optics are constant and easy to determine for any imaging system. Equation (1) gives the total spectral radiance at the aperture as a function of wavelength, where emitted, reflected, and path radiance all contribute to the overall spectral radiance incident at the aperture. Given that a scene is viewed by a sensor at a certain spectral bandwidth, the total radiance is computed in ASSET by integrating the spectral radiance over the band of interest. If the sensor is only responsive to certain wavelengths, which are usually specified by the sensor's relative spectral response (RSR) as shown in Figure 5, we can integrate from $\lambda_1$ to $\lambda_2$. The RSR is the overall relative spectral response of the system, $RSR_\lambda$, peak-normalized to one.

Figure 5. Overall relative spectral response of the system

Assuming a Lambertian source, the total band-integrated per-pixel emitted radiance measured by the sensor is found using

$L_{emit} = \displaystyle\int_{\lambda_1}^{\lambda_2} \varepsilon_\lambda \frac{M_\lambda^{BB}}{\pi}\, \tau_\lambda\, RSR_\lambda\, d\lambda$,   (6)

where $\varepsilon_\lambda$ is the spectral scene emissivity, $M_\lambda^{BB}$ is the blackbody spectral exitance, $\tau_\lambda$ is the atmospheric transmission from scene to sensor, and $RSR_\lambda$ is the relative spectral response of the system.
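The band integration of the emitted term can be sketched numerically. This is a minimal illustration under simplifying assumptions (constant emissivity, transmission, and RSR across the band; function names are ours), using Planck's law for the blackbody exitance:

```python
import math

C1 = 3.741771852e-16  # first radiation constant, W*m^2
C2 = 1.438776877e-2   # second radiation constant, m*K

def planck_exitance(wavelength_m, temp_k):
    """Blackbody spectral exitance (W/m^2 per meter of wavelength), Planck's law."""
    return C1 / (wavelength_m ** 5 * (math.exp(C2 / (wavelength_m * temp_k)) - 1.0))

def emitted_radiance(emissivity, temp_k, tau, rsr, lam_lo, lam_hi, steps=2000):
    """Band-integrated Lambertian emitted radiance, midpoint-rule integration."""
    dlam = (lam_hi - lam_lo) / steps
    total = 0.0
    for k in range(steps):
        lam = lam_lo + (k + 0.5) * dlam
        total += emissivity * planck_exitance(lam, temp_k) / math.pi * tau * rsr * dlam
    return total

# LWIR band (8-12 um) radiance of a 300 K scene, in W/(m^2*sr)
L_300 = emitted_radiance(0.95, 300.0, 0.8, 1.0, 8e-6, 12e-6)
```

A hotter scene yields more band-integrated radiance, consistent with the blackbody curves discussed next.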
Blackbody radiation is radiation emitted from a source with 100% emissivity at all wavelengths and is described by Planck's blackbody equation,

$M_\lambda^{BB} = \dfrac{c_1}{\lambda^5 \left( e^{c_2 / \lambda T} - 1 \right)}$,   (7)

where $c_1$ and $c_2$ are the first and second radiation constants and $T$ is temperature in kelvin. Figure 6 shows blackbody spectral exitance as a function of wavelength at different temperatures.

Figure 6. Spectral exitance of a perfect blackbody at different temperatures

In the same manner, for a Lambertian source, the band-integrated per-pixel reflected radiance measured by the sensor is found using

$L_{refl} = \displaystyle\int_{\lambda_1}^{\lambda_2} \rho_\lambda \frac{E_\lambda^{TOA}\, \tau_\lambda^{sun}}{\pi}\, \tau_\lambda\, RSR_\lambda\, d\lambda$,   (8)

where $\rho_\lambda$ is the spectral scene reflectivity, $\tau_\lambda^{sun}$ is the atmospheric transmission from sun to surface, $\tau_\lambda$ is the atmospheric transmission from surface to sensor, $RSR_\lambda$ is the relative spectral response of the system, and $E_\lambda^{TOA}$ is the seasonally adjusted top-of-atmosphere (TOA) solar irradiance for the user-specified scene location, date, and time [2]. The product $E_\lambda^{TOA}\tau_\lambda^{sun}$ is sometimes grouped as one term and referred to as the solar irradiance at the ground, $E_\lambda^{sun}$. The total per-pixel radiance measured by the sensor within a spectral band is found using

$L = L_{refl} + L_{emit} + L_{path}$,   (9)

where $L_{refl}$ is the total reflected radiance at the sensor, $L_{emit}$ is the total emitted radiance at the sensor, and $L_{path}$ is the total path radiance at the sensor (path radiance is discussed in Section 2.1.2, Atmosphere). In ASSET, there are three different cases for generating scene radiance; each is described briefly below. (1) For Landsat-8 inputs, the user may specify the bounds of scene radiance in the configuration file (Figure 7), and radiance is obtained by directly scaling the source image to the minimum and maximum radiance bounds in units of W/(m$^2\cdot$sr). (2) The user may instead specify scene reflectivity (or emissivity) bounds in the configuration file shown in Figure 7, and reflectivity or emissivity is obtained by directly scaling the source image to the specified bounds.
These are used later to generate scene radiance using the specified atmosphere option (discussed in Section 2.1.2, Atmosphere).

Figure 7. Source options in the configuration file

Lastly, (3) ASSET loads reflectance, emissivity, and/or temperature maps (if specified), which are later used to generate scene radiance using the specified atmosphere option. For cases (2) and (3), scene radiance is computed using Equations (6), (8), and (9), where the atmospheric transmissions and path radiance (per-pixel for a non-uniform atmosphere) are derived from a user-defined atmosphere option obtained from a database generated using MODTRAN.
If either the reflectance or emissivity map is not specified, the other can be obtained using the relation $\varepsilon = 1 - \rho$, where $\varepsilon$ is the scene emissivity and $\rho$ is the scene reflectivity. Figure 8 shows examples of emissivity and reflectivity maps used in ASSET.

Figure 8. Examples of emissivity and reflectivity maps

Atmosphere

As radiation propagates through the atmosphere, some of it is absorbed and scattered. Both the absorption and scattering components are wavelength dependent [4], [5]. Figure 9 illustrates atmospheric transmission as a function of wavelength from the visible to the infrared spectrum.

Figure 9. Wavelength-dependent atmospheric transmission [6]

Detailed models are available that can accurately model atmospheric effects for different conditions and scenarios.
In ASSET, atmospheric transmission and path radiance are obtained from MODTRAN standard atmospheres. Currently, path radiance only accounts for the absorption coefficient, but future development will include an algorithm that accounts for the scattering component. Once the absorption coefficient is obtained from MODTRAN, the transmission can be determined using

$\tau = e^{-\alpha z}$,   (10)

where $\alpha$ is the absorption coefficient and $z$ is the path length. Path radiance is defined as the radiation from atmospheric particles that contributes to the total flux on the detector [4]. Currently there are four user-specified atmospheric options available in ASSET: ignore, uniform, scaled, and full atmosphere.
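Equation (10), the Beer-Lambert attenuation law for a uniform absorbing path, can be sketched as follows (illustrative values; the function name is ours):

```python
import math

def transmission(alpha_per_m, path_length_m):
    """Beer-Lambert transmission through a uniform absorbing path, Eq. (10)."""
    return math.exp(-alpha_per_m * path_length_m)

t_1km = transmission(1e-4, 1000.0)    # shorter path, higher transmission
t_10km = transmission(1e-4, 10000.0)  # longer path, lower transmission
```

As expected, transmission decays exponentially with path length, which is the basis for the path-geometry scaling discussed next.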
“Ignore” atmosphere is a special case of uniform atmosphere, where $\tau_\lambda = \tau_\lambda^{sun} = 1$ and $L_\lambda^{path} = 0$ for all wavelengths. This special case is only valid when viewing outside the atmosphere or when the source image, clouds, and target signals already include atmospheric effects. Both “uniform” atmosphere and “scaled” atmosphere use a single atmosphere profile for all geometries, either user-provided or obtained from a database generated using MODTRAN. Figure 10 shows an example of an atmosphere profile used for atmospheric transmission and path radiance.
The band-integrated values for path radiance and atmospheric transmission are used for both “uniform” and “scaled” atmospheres, but for a scaled atmosphere those values are scaled from a “reference path.”

Figure 10. Examples of (1) path radiance, (2) surface-to-sensor atmospheric transmission, and (3) sun-to-surface atmospheric transmission for a single atmosphere profile

“Uniform” atmosphere, as the name implies, applies a uniform atmosphere across the scene, where atmospheric and solar quantities are independent of geometry (altitude, elevation, path) and therefore constant across the scene. Such an atmosphere is inaccurate for substantial changes in geometry and therefore only valid when the atmosphere can be assumed uniform across the scene (e.g., small FOV). For a uniform atmosphere, the total radiance emitted and reflected within a spectral band measured by the sensor are found using Equations (6) and (8), where $\tau_\lambda$, $\tau_\lambda^{sun}$, and $L_\lambda^{path}$ are constant across the scene, i.e., constant across all pixels (not to be confused with constant across all wavelengths). The total per-pixel path radiance measured by a sensor within a spectral band is found using

$L_{path} = \displaystyle\int_{\lambda_1}^{\lambda_2} L_\lambda^{path}\, RSR_\lambda\, d\lambda$,   (11)

which for a uniform atmosphere is used at every pixel to create a uniform path radiance across the scene. Finally, the total per-pixel radiance measured by the sensor within a spectral band is found using Equation (9). In the case that a “scaled” atmosphere is specified by the user, a single atmosphere profile is used, and its atmospheric and solar quantities are scaled from the “reference” path.
The reference path runs from the center of the scene at zero altitude ($z_0 = 0$ m) in the direction of the sensor at elevation $\theta_0$. This case is only valid for near-surface targets in cloud-free scenes with small FOV, and it is accurate only for slight changes in altitude and elevation across the scene (i.e., accuracy decreases as deviation from the reference geometry increases).

Figure 11. Scaled atmosphere scene geometry

Figure 11 illustrates the scene geometry for a “scaled” atmosphere.
In the figure, $(z_i, \theta_i)$ are the initial (reference) altitude and elevation angle, $(z_f, \theta_f)$ are the final (scaled) altitude and elevation angle, $s$ is the distance from a point in the scene to the sensor, and $\tau$ is the atmospheric transmission from a point in the scene to the sensor. The sun-to-surface and surface-to-sensor atmospheric transmissions scaled from their reference values are related to the original transmissions by

$\tau_\lambda^{sun}(z_f, \theta_f) = \left[ \tau_\lambda^{sun}(z_i, \theta_i) \right]^{s(z_f, \theta_f)/s(z_i, \theta_i)}$   (12)

and

$\tau_\lambda(z_f, \theta_f) = \left[ \tau_\lambda(z_i, \theta_i) \right]^{s(z_f, \theta_f)/s(z_i, \theta_i)}$,   (13)

respectively, which follows from Equation (10) when the path length changes while the absorption coefficient does not. With the scaled spectral atmospheric transmission in hand, the spectral solar irradiance at the ground is obtained by multiplying the TOA solar irradiance by the sun-to-surface atmospheric transmission of Equation (12),

$E_\lambda^{sun} = E_\lambda^{TOA}\, \tau_\lambda^{sun}(z_f, \theta_f)$.   (14)

Assuming a Lambertian source, the total per-pixel emitted radiance within a spectral band for a scaled atmosphere is found by substituting Equation (13) into Equation (6),

$L_{emit} = \displaystyle\int_{\lambda_1}^{\lambda_2} \varepsilon_\lambda \frac{M_\lambda^{BB}}{\pi}\, \tau_\lambda(z_f, \theta_f)\, RSR_\lambda\, d\lambda$,   (15)

where $\tau_\lambda(z_f, \theta_f)$ is the scaled transmission from surface to sensor, $z_f$ is the scaled altitude, and $\theta_f$ is the scaled elevation angle. It should now be evident how the emitted radiance is scaled from the “reference” path using a scaled atmospheric transmission. In the same manner as Equation (15), both path and reflected radiance vary with altitude and elevation. Substituting Equations (13) and (14) into Equation (8), the total band-integrated per-pixel radiance reflected by the source is found,

$L_{refl} = \displaystyle\int_{\lambda_1}^{\lambda_2} \rho_\lambda \frac{E_\lambda^{sun}}{\pi}\, \tau_\lambda(z_f, \theta_f)\, RSR_\lambda\, d\lambda$,   (16)

where $\tau_\lambda(z_f, \theta_f)$ and $E_\lambda^{sun}$ are the scaled atmospheric transmission from surface to sensor and the scaled solar irradiance at the ground, respectively.
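One reading of the transmission scaling, consistent with the Beer-Lambert law of Equation (10), raises the reference transmission to the ratio of path lengths. This is a hedged sketch of that idea, not ASSET's implementation (names and values are ours):

```python
import math

def scale_transmission(tau_ref, s_ref_m, s_new_m):
    """Scale a Beer-Lambert transmission from a reference path length to a new one.

    From tau = exp(-alpha * s): tau_new = tau_ref ** (s_new / s_ref).
    """
    return tau_ref ** (s_new_m / s_ref_m)

tau_ref = math.exp(-0.1)                        # reference 1 km path, alpha = 1e-4 /m
tau_2km = scale_transmission(tau_ref, 1000.0, 2000.0)  # doubled path length
```

Doubling the path length squares the transmission, exactly as applying Equation (10) to the longer path would.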
Finally, path radiance is scaled by the ratio of path emissivities (scaled to initial). Multiplying by this emissivity ratio is a good approximation for small changes in path, effectively replacing the initial path emissivity with the scaled one. The path emissivity is defined by the relationship $\varepsilon = 1 - \tau$, where $\tau$ is the atmospheric transmission and $\varepsilon$ is the emissivity. This emissivity-transmission relationship is used to determine the total band-integrated per-pixel path radiance. Given the initial and scaled transmissions of the atmosphere, the path radiance is described as

$L_{path} = \displaystyle\int_{\lambda_1}^{\lambda_2} \frac{1 - \tau_\lambda(z_f, \theta_f)}{1 - \tau_\lambda(z_i, \theta_i)}\, L_\lambda^{path}\, RSR_\lambda\, d\lambda$,   (17)

where $\tau_\lambda(z_f, \theta_f)$ and $\tau_\lambda(z_i, \theta_i)$ are the scaled and initial atmospheric transmissions and $L_\lambda^{path}$ is the initial spectral path radiance. The total band-integrated per-pixel radiance measured by the sensor for a scaled atmosphere is found using

$L = L_{refl} + L_{emit} + L_{path}$,   (18)

with the scaled quantities of Equations (15)-(17). Figure 12 further illustrates how ASSET obtains the band-integrated path radiance, sun-to-surface atmospheric transmission, and surface-to-sensor atmospheric transmission across the scene for a scaled atmosphere. From this figure we see that path radiance and surface-to-sensor transmission vary in opposite directions: as the path length to the sensor (located above the center of the scene) increases, surface-to-sensor transmission decreases and path radiance increases, as expected.
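The emissivity-ratio scaling of path radiance in Equation (17) can be sketched for a single band-integrated value (a minimal illustration; names and values are ours):

```python
def scale_path_radiance(l_path_ref, tau_ref, tau_new):
    """Scale path radiance by the ratio of path emissivities (1 - tau), per Eq. (17)."""
    return l_path_ref * (1.0 - tau_new) / (1.0 - tau_ref)

# A longer, less transmissive path (tau 0.8 -> 0.6) radiates more into the line of sight.
l_scaled = scale_path_radiance(2.0, 0.8, 0.6)
```

This reproduces the inverse relationship between transmission and path radiance noted in the text: lowering the transmission raises the scaled path radiance.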
This is because path radiance is a strong function of atmospheric transmission. Likewise, both solar irradiance and sun-to-surface atmospheric transmission decrease as the path length from the sun (located at the top left corner of the scene) increases.

Figure 12. Diagram of path radiance, solar irradiance, and atmospheric transmission as a function of geometry for each pixel in the scene

“Full” atmosphere, also known as non-uniform atmosphere, is used when non-uniform atmosphere profiles are needed in the scene; atmospheric and solar properties are calculated along all line-of-sight (LOS) paths from the sensor to scene points. If the user specifies “full” atmosphere, then instead of using a single atmosphere profile, ASSET returns $\tau_\lambda$, $L_\lambda^{path}$, and $E_\lambda^{sun}$ from a database generated using MODTRAN and interpolates to the altitude and elevation angle of each sample in the high-resolution scene as a function of wavelength. Note that this time the sun-to-surface atmospheric transmission is not obtained separately from MODTRAN: the solar irradiance at the ground is obtained directly from MODTRAN and already includes the sun-to-surface atmospheric transmission. Figure 13 shows how ASSET obtains the band-integrated path radiance, atmospheric transmission, and solar irradiance across the scene for a non-uniform atmosphere as a function of geometry.
Figure 13 shows that the surface-to-sensor atmospheric transmission and path radiance vary in opposite directions: as the path length from the sensor (located above the center of the scene) increases, transmission decreases and path radiance increases.

Figure 13. Diagram of path radiance, solar irradiance, and atmospheric transmission as a function of geometry for each pixel in the scene

Unlike “uniform” and “scaled” atmosphere, “full” atmosphere cannot be obtained from a user-provided file; it is obtained from the ASSET database only. A non-uniform atmosphere is recommended for scenarios where atmospheric path and solar conditions vary significantly (e.g., WFOV) and where accurate scaling of target radiometry with altitude is needed. For a non-uniform atmosphere, the total band-integrated emitted and reflected radiance measured by the sensor are found using Equations (6) and (8), where the transmissions, path radiance, and solar irradiance vary across the scene as shown in Figure 13. The total per-pixel path radiance measured by a sensor within a spectral band is found using Equation (11). Finally, the total radiance measured by the sensor within a spectral band is found using Equation (9).
In summary, both “uniform” and “scaled” atmosphere use a single atmosphere profile, but for a scaled atmosphere the path radiance, solar irradiance, and atmospheric transmission are scaled for the path geometry. A more interesting comparison is between “scaled” and “full” atmosphere: both are good approximations for small FOVs, but as the scene extends away from the center, as in the case of WFOVs, a “full” atmosphere is recommended. While a “scaled” atmosphere is a good approximation, its accuracy decreases as deviation from the reference geometry increases, as shown in Figure 14.
Figure 14 shows the total radiance measured by the sensor for all atmospheric options. At first sight, the measured radiance for the “ignore” and “uniform” cases looks identical, but in the “uniform” case the reflected and emitted radiance from the source is attenuated uniformly and a uniform path radiance is added. “Ignore” atmosphere is a special case of uniform atmosphere where the atmospheric transmission is one and the path radiance is zero.

Figure 14. Radiance measured by the sensor for all atmosphere options

Optics

The optical components in the imaging system introduce limitations that affect the performance of the system. The efficiency of an imaging system affects the amount of signal measured by the sensor [5]. The transmission of the optics and the quantum efficiency of the detector (discussed in Section 2.1.4, Detector Array), among other sensor characteristics, play a key role in defining the overall efficiency of the system. This section discusses the effects of the optics on the total amount of radiation collected, including the effects of self-emission, which is important when imaging in the MWIR through LWIR.
The spectral transmission of the optics, $\tau_{o,\lambda}$, affects the amount of energy that arrives at the detector from the total energy captured by the receiver aperture. In addition, secondary mirrors, lenses, and filters block and attenuate some of the incoming radiation, and self-emission from the optics also contributes to the total radiation measured at the detector array [7].

Figure 15. Optical path for a typical imaging system

Figure 15 depicts the optical components of a typical imaging system. Each of the lenses in Figure 15 attenuates the incoming radiation. In addition, the self-emission introduced by the first optical element is added to the attenuated incoming radiation, and that sum is attenuated by the second element; the result, plus the second element's own self-emission, is attenuated again by each subsequent element until the radiation reaches the detector array. Treating each element as a Lambertian emitter, the self-emission of element $n$ is computed as $\varepsilon_n M^{BB}(T_n)/\pi$, where $\varepsilon_n$ and $T_n$ are the emissivity and temperature of that element, respectively.
The band-integrated radiance measured at the detector is found by cascading these attenuation and self-emission terms through the $N$ elements of the optical train (with the filter in front of the detector as the final element),

$L_{det,\lambda} = L_\lambda \displaystyle\prod_{n=1}^{N} \tau_{n,\lambda} + \sum_{n=1}^{N} \frac{\varepsilon_n M_\lambda^{BB}(T_n)}{\pi} \prod_{k=n+1}^{N} \tau_{k,\lambda}$,   (19)

where $\tau_{n,\lambda}$ is the transmission of the $n$th optical element. If we assume a uniform temperature across the optical path, then $M_\lambda^{BB}(T_n) = M_\lambda^{BB}(T)$ for all elements, and expanding Equation (19) and grouping like terms gives

$L_{det,\lambda} = \tau_{o,\lambda} L_\lambda + \dfrac{M_\lambda^{BB}(T)}{\pi} \left( 1 - \tau_{o,\lambda} \right)$,   (20)

where $\tau_{o,\lambda} = \prod_n \tau_{n,\lambda}$ is the overall optical transmission. The term on the left corresponds to the incoming radiance from the source attenuated by all optical elements, and the term on the right corresponds to the total radiance emitted by the optical system. By conservation of energy, $\alpha + \rho + \tau = 1$; assuming the optics have zero reflectivity ($\rho = 0$) so that $\varepsilon_o = 1 - \tau_o$, this simplifies to

$L_{det,\lambda} = \tau_{o,\lambda} L_\lambda + L_{self,\lambda}$,   (21)

where $L_{self,\lambda}$ is the overall self-emission and $\tau_{o,\lambda}$ is the overall optical transmission. ASSET generates the spectral self-emission of the optical system, which is then band-integrated to obtain the total self-emission,

$L_{self} = \displaystyle\int_{\lambda_1}^{\lambda_2} L_{self,\lambda}\, RSR_\lambda\, d\lambda$,   (22)

where $L_{self,\lambda}$ is the spectral self-emission and $RSR_\lambda$ is the relative spectral response of the system.
Since the detectors emulated in ASSET respond directly to photons, the total per-pixel radiance at the detector is converted to photon flux by multiplying the radiance at the detector by the throughput of the system [2]. Since the detector size and the solid angle subtended by the optics at the detector are constant and easy to determine for any imaging system, the total photon flux at the detector is obtained using

$\Phi = L_{det}\, A_d \Omega_d$,   (23)

where $\Omega_d$ is the solid angle subtended by the optics at the detector, $A_d$ is the area of the detector (pixel size), and the product $A_d\Omega_d$ is the throughput of the system. Now that all scene content has been incorporated into the model, we account for spatial variations in the detector equation. Equation (23) is the total per-pixel photon flux at the detector; once spatial variations are introduced, both the photon flux measured at the detector and the radiance arriving at the detector become functions of position. The total photon flux measured by the detector array as a function of position is given by

$\Phi(m, n) = L_{det}(m, n)\, A_d \Omega_d$,   (24)

where $(m, n)$ indexes the samples of the high-resolution source image. To account for the effects of the optical system, the source image is convolved with the point spread function (PSF) of the optics. The PSF of an imaging system is its spatial impulse response and accounts for diffraction effects in the optics.
The optical transfer function (OTF) of an imaging system is the Fourier transform of its impulse response, the PSF. The modulation transfer function (MTF) is the absolute value of the OTF. To account for the effects of the optical system, the Fourier transform of the PSF is multiplied with the Fourier transform of the source image to apply the blurring introduced by the optics. ASSET currently uses a Gaussian approximation to estimate the PSF, but future development will allow the user to specify an arbitrary PSF [2]. Additionally, the image is convolved with the detector response to account for the effects of sampling by the detector. Taking the inverse Fourier transform of the product of the frequency responses of the source image, optics, and detector results in an oversampled representation of the source image (discussed in Section 2.1.4, Detector Array).

Detector Array

The detector in an imaging system transforms an optical signal in photons into an electrical signal in electrons. This electrical signal is proportional to the incident radiation arriving at the detector [4]. The detector introduces limitations that affect the performance of the system; for example, the quantum efficiency will affect the amount of signal measured by the sensor [5]. To account for the effects of the detector, the total photon flux measured by the detector given in Equation (24) is multiplied by the integration time to obtain the total number of photons collected by the detector array.
The per-pixel number of photons collected by a detector over a finite time follows Poisson statistics and is a random variable whose mean is proportional to the expected number of photons [5]. The detector frame, now in units of photons, is multiplied by the user-defined quantum efficiency $\eta$ of the detector to obtain the total number of electrons in the array. In ASSET, quantum efficiency is user-defined and is the average fraction of photons that are converted to electrons by the detector. The total number of photo-electrons in the detector frame generated by the source is given by

$$S_e(x_i, y_j) = \eta\, t_{int}\, \Phi(x_i, y_j), \qquad (25)$$

where $\Phi(x_i, y_j)$ is the total photon flux measured by the detector array, $S_e(x_i, y_j)$ is the high-resolution source image in electrons, $t_{int}$ is the integration time, and $\eta$ is the quantum efficiency of the detector.
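As a minimal sketch of this photon-to-electron conversion (variable and function names here are illustrative, not ASSET's), the mean photoelectron count per pixel is the photon flux scaled by the integration time and quantum efficiency:

```python
import numpy as np

def photons_to_electrons(photon_flux, t_int, qe):
    """Convert per-pixel photon flux [photons/s] to mean photoelectrons.

    The photon count over t_int is Poisson-distributed; its mean is
    flux * t_int, and on average a fraction `qe` of those photons are
    converted to electrons by the detector (Eq. 25).
    """
    mean_photons = photon_flux * t_int   # expected photons per pixel
    return qe * mean_photons             # expected photoelectrons

flux = np.array([[1.0e6, 2.0e6], [4.0e6, 8.0e6]])   # photons/s per pixel
electrons = photons_to_electrons(flux, t_int=1e-3, qe=0.8)
# e.g. 1e6 photons/s * 1e-3 s * 0.8 QE = 800 electrons
```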
Quantum efficiency is a function of wavelength, but in ASSET it is assumed constant, and its wavelength dependence is accounted for in the system's relative spectral response. To account for the effects of sampling by the detectors, the image is then multiplied by the MTF of the optics and the detector's MTF. The model begins with a high-resolution source image with $MO \times NO$ samples; after source radiance is included in the scene, the image is resampled to the $M \times N$ detector grid using the user-specified oversample factor $O$, which is the minimum number of source image samples per sensor pixel.
As discussed in the previous section, taking the inverse Fourier transform of the product of the frequency responses of the source image $I(\xi,\eta)$, optics $H_{opt}(\xi,\eta)$, and detector $H_{det}(\xi,\eta)$ results in an oversampled representation of the source image,

$$i_{os}(x, y) = \mathcal{F}^{-1}\{\, I(\xi,\eta)\, H_{opt}(\xi,\eta)\, H_{det}(\xi,\eta) \,\}, \qquad (26)$$

and we obtain the oversampled array. Figure 16 illustrates how the oversampled array is obtained by multiplying the image with the optical and detector MTFs [8].

Figure 16. Examples of the optical and detector MTFs used to obtain the oversampled image.

The image is then integrated spatially by the detectors and simultaneously sampled at the detector array [6]:

$$S(m, n) = \sum_{i}\sum_{j} i_{os}(x_i, y_j), \qquad (27)$$

where $i_{os}$ is the oversampled source image and the sums run over the $O \times O$ block of samples falling on pixel $(m, n)$. Equation (27) reduces the number of samples from $MO \times NO$ to $M \times N$ pixels. The detector frame has $M \times N$ pixels, and the ratio of samples per pixel is the oversample factor $O$.
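The blur-and-sample chain of Equations (26)–(27) can be sketched compactly, assuming a simple Gaussian optics MTF and ideal block integration (names are illustrative, not ASSET's):

```python
import numpy as np

def blur_and_sample(src, sigma_pix, O):
    """Apply a Gaussian optics MTF in the frequency domain, then
    integrate O x O blocks of samples to emulate detector sampling."""
    M_O, N_O = src.shape
    fy = np.fft.fftfreq(M_O)[:, None]    # spatial frequency, cycles/sample
    fx = np.fft.fftfreq(N_O)[None, :]
    mtf = np.exp(-2 * (np.pi * sigma_pix) ** 2 * (fx**2 + fy**2))  # Gaussian MTF
    blurred = np.fft.ifft2(np.fft.fft2(src) * mtf).real            # Eq. (26)
    # Detector sampling: sum each O x O block into one pixel (Eq. 27)
    M, N = M_O // O, N_O // O
    return blurred[:M*O, :N*O].reshape(M, O, N, O).sum(axis=(1, 3))

src = np.ones((8, 8))                    # flat oversampled scene, O = 4
frame = blur_and_sample(src, sigma_pix=1.0, O=4)
# A flat scene stays flat: each 4x4 block integrates to 16
```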
After sampling at the detector array with all scene content incorporated, fixed pattern noise (FPN) is added to the detector array. The noise components associated with the detector are shot noise and FPN; the latter is due to non-uniformity in the detector response. There are two primary components of non-uniformity in the detector frame: (1) dark signal non-uniformity (DSNU) and (2) photo response non-uniformity (PRNU). The term fixed pattern noise usually refers to these two components, which arise from detector material imperfections or the read-out integrated circuit (ROIC) [2]. PRNU refers to variations in how each pixel responds to incident radiance and is a multiplicative factor on the photo-generated electrons. In ASSET, the photo-generated signal in electrons is perturbed by user-defined random and pattern non-uniformities to obtain the total number of electrons per frame. DSNU refers to pixel-to-pixel variations in the offset of the pixel value when no light is present at the detector surface and does not depend on signal [9]. Fixed pattern noise is not temporal; it is fixed when viewing a static scene.
In ASSET, DSNU is included in the bias frame, which is computed by multiplying the dark current rms in electrons by an array of normally distributed random numbers and adding the bias voltage in electrons,

$$B = V_{bias} + \sigma_{dark}\, G(0,1), \qquad (28)$$

where $V_{bias}$ is the voltage applied across the detector expressed in electrons, $\sigma_{dark}$ is the user-specified dark current rms, and $G(0,1)$ is a matrix of normally distributed random numbers representing DSNU. To account for perturbations due to the non-uniformities in the detector, the signal generated from the source image is multiplied by the PRNU and the electrons due to bias are added,

$$S_{FPN} = S_e\, U_{PRNU} + B, \qquad (29)$$

where $U_{PRNU}$ accounts for FPN due to PRNU, $B$ accounts for FPN due to DSNU and dark current, and $S_{FPN}$ is the source image with FPN. Figure 17 shows an example of how non-uniformities are calculated in ASSET.

Figure 17. Examples of non-uniformities in ASSET; PRNU is multiplied with the background signal, and bias with DSNU is added, to obtain the detector frame with FPN.

Shot noise is caused by fluctuations in electrical currents due to the discrete arrival of photons at the detector. The number of electrons counted during an integration time follows a Poisson distribution with variance proportional to the mean number of electrons collected during the interval, which is randomly sampled to determine the actual total number of electrons [2]. Both photo-generated electrons and dark current are random variables that contribute to the variation in shot noise.
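A minimal sketch of Equations (28)–(29), assuming the PRNU map is unity plus a small Gaussian perturbation (function and parameter names are illustrative, not ASSET's):

```python
import numpy as np

def add_fpn(signal_e, bias_e, sigma_dark, sigma_prnu, rng):
    """Perturb a photo-generated frame [electrons] with fixed pattern noise.

    PRNU multiplies the signal (Eq. 29); DSNU is folded into an additive
    bias frame (Eq. 28). Both maps are fixed for a given detector.
    """
    dsnu = sigma_dark * rng.standard_normal(signal_e.shape)     # Eq. (28) term
    bias_frame = bias_e + dsnu
    prnu = 1.0 + sigma_prnu * rng.standard_normal(signal_e.shape)
    return signal_e * prnu + bias_frame                         # Eq. (29)

rng = np.random.default_rng(0)
frame = add_fpn(np.full((256, 256), 1000.0), bias_e=100.0,
                sigma_dark=5.0, sigma_prnu=0.01, rng=rng)
# Mean ~ 1000 + 100 = 1100 e-; spatial std ~ sqrt(10^2 + 5^2) ~ 11 e-
```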
In addition, since shot noise depends on signal, and because FPN affects the number of collected electrons, FPN should be included in shot noise. ASSET returns the source image with random samples of shot noise drawn from an approximation of the Poisson distribution whose mean is proportional to the sum of the photo-generated signal and bias current with FPN included ($S_{FPN}$). Shot noise is added in MATLAB using the command

$$S_{shot} = \mathrm{fastpoiss}(S_{FPN}), \qquad (30)$$

where fastpoiss.m is a MATLAB function that gives the number of electrons with random arrival times, and $S_{FPN}$ is the mean signal of photo-generated electrons and bias current with FPN included. Currently, ASSET computes shot noise without including FPN and self-emission, but future development will fix this issue.
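The source does not document the internals of fastpoiss.m, but a common fast approximation of Poisson sampling (sketched here in Python; names are illustrative) substitutes Gaussian draws of matching mean and variance once the mean is large:

```python
import numpy as np

def fast_poisson(mean_e, rng, threshold=50.0):
    """Approximately Poisson-sample a frame of mean electron counts.

    Below `threshold` electrons, draw exact Poisson samples; above it,
    use the Gaussian approximation N(mu, mu), which is much faster and
    nearly indistinguishable from Poisson for large means.
    """
    mean_e = np.asarray(mean_e, dtype=float)
    safe = np.maximum(mean_e, 0.0)
    out = np.where(
        mean_e < threshold,
        rng.poisson(safe).astype(float),
        np.round(safe + np.sqrt(safe) * rng.standard_normal(mean_e.shape)),
    )
    return np.maximum(out, 0.0)   # electron counts cannot be negative

rng = np.random.default_rng(1)
noisy = fast_poisson(np.full((512, 512), 1100.0), rng)
# Poisson property: variance of the noisy frame ~ its mean (~1100)
```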
The error will rarely impact accuracy unless the electrons due to self-emission and FPN are large. Figure 18 shows an example of the source image with FPN and shot noise included on the left, and an image of the shot noise on the right, obtained by subtracting the image with FPN from the image with shot noise. From the figure it should be evident that shot noise depends on the mean signal: shot noise is larger in areas where the signal falling on the detector is larger. In other words, the signal coming from the ground generates more electrons than the signal coming from the water and, therefore, more shot noise appears over the ground.

Figure 18. Example of a source image with shot noise (left) and the shot noise alone (right).

Electronics

The electronics in an imaging system transform an analog signal in electrons into a digital signal in counts, also known as digital numbers (DNs). This digital signal is proportional to the incident radiation arriving at the detector [4].
The electronics in the imaging system introduce noise that affects the performance of the system. The noise components associated with the electronics are thermal, read, flicker, and quantization noise. In addition to noise, we add a hardware offset (or bias) due to the system's electronics.

Thermal noise, or "white" noise, arises from the random motion of carriers in any electrical conductor; any material that is not at 0 K produces noise electrons. Since the detector and electronics materials are not at 0 K, they will generate noise [5]. If the capacitor of the detector is attached to an analog-to-digital converter, then thermal noise can be expressed as a root mean square (rms) number of electrons by

$$\sigma_{thermal} = \frac{1}{q}\sqrt{\frac{4\,k\,T\,t_{int}}{R}}, \qquad (31)$$

where $\sigma_{thermal}$ is the rms number of thermal noise electrons, $t_{int}$ is the integration time, $k$ is Boltzmann's constant, $T$ is the temperature of the electronics in kelvins, $q$ is the electron charge, and $R$ is the resistance of the circuit.
Since the resistance is in parallel with the capacitor, the noise equivalent bandwidth is $\Delta f = 1/(4RC)$, and the thermal noise rms becomes

$$\sigma_{thermal} = \frac{\sqrt{k\,T\,C}}{q}, \qquad (32)$$

where $\sigma_{thermal}$ is the rms number of thermal noise electrons and $C$ is the capacitance of the circuit. Thermal noise is "white" Gaussian noise, since it has a constant magnitude for frequencies below $10^{12}$ Hz (i.e.
the power spectral density of thermal noise is constant with frequency). Note that Equation (32) no longer depends on resistance but now depends on the circuit's capacitance. In ASSET, thermal noise is added as random draws from a time-dependent normal distribution with user-defined variance $\sigma_{thermal}^2$,

$$n_{thermal} = \sigma_{thermal}\, G(0,1), \qquad (33)$$

where $\sigma_{thermal}$ is the thermal noise rms and $G(0,1)$ is a Gaussian random process with zero mean and unit variance. Read noise is the variance associated with "reading out" the signal collected by the detector and is similarly "white" noise. It is characterized as the noise measured with zero incident signal and zero integration time, and it comprises the noise added by the read-out electronics. In ASSET, read noise is added as random draws from time-independent normal distributions with user-defined variance $\sigma_{read}^2$,

$$n_{read} = \sigma_{read}\, G(0,1), \qquad (34)$$

where $\sigma_{read}$ is the read noise rms and $G(0,1)$ is a "white" Gaussian random process with zero mean and unit variance. As each pixel in the detector is read, some electrons are randomly lost or gained from the signal.
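As a worked example of the kTC form in Equation (32) (the capacitance value is an illustrative assumption, not from the source): for a 50 fF sense-node capacitance at 300 K,

```python
import math

k = 1.380649e-23      # Boltzmann constant [J/K]
q = 1.602176634e-19   # electron charge [C]
T = 300.0             # electronics temperature [K]
C = 50e-15            # sense-node capacitance [F] (assumed for illustration)

sigma_thermal = math.sqrt(k * T * C) / q   # Eq. (32), rms electrons
# ~90 electrons rms of kTC noise for this capacitance
```

This is why correlated double sampling is often used in real read-out circuits: kTC noise of tens of electrons would otherwise dominate low-signal pixels.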
Unlike thermal noise, read noise is independent of integration time: regardless of how long you integrate, read noise appears in every frame, but a long enough integration time ensures the measured signal is well above the read noise [10]. Flicker noise, also known as 1/f noise, is present in the electronics and is related to the mean current traveling through the detector. Because a DC current is always present in a photoconductor, flicker noise is always present [11].
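The shaped Gaussian process used for flicker noise (Equation (36) below) is commonly generated by filtering white noise in the frequency domain; the source does not specify ASSET's exact routine, so this is a sketch under that assumption (names are illustrative):

```python
import numpy as np

def pink_noise(n_samples, beta, rng):
    """Generate zero-mean, unit-variance Gaussian noise whose power
    spectral density falls off as 1/f**beta (beta=1 pink, beta=2 brown)."""
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n_samples)
    f[0] = f[1]                        # avoid division by zero at DC
    spectrum *= f ** (-beta / 2.0)     # amplitude ~ f^(-beta/2) => PSD ~ 1/f^beta
    shaped = np.fft.irfft(spectrum, n_samples)
    shaped -= shaped.mean()
    return shaped / shaped.std()       # renormalize to unit variance

rng = np.random.default_rng(2)
g = pink_noise(4096, beta=1.0, rng=rng)
# g has zero mean, unit variance, and most of its power at low frequencies
```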
The power spectral density of flicker noise decreases with increasing frequency and is only important at frequencies from 0 to 100 Hz [4]. In other words, flicker noise is lower than white noise at frequencies above 100 Hz. Flicker noise can be expressed as an rms number of electrons by

$$\sigma_{flicker} = \frac{1}{q}\sqrt{\frac{K\, i_{dc}}{f^{\beta}}}, \qquad (35)$$

where $i_{dc}$ is the DC current through the detector, $K$ is the noise proportionality factor, $q$ is the electron charge, $f$ is the electrical frequency, and $\beta$ is the frequency exponent ($\beta = 1$ for "pink" and $\beta = 2$ for "brown" noise), usually 1. In ASSET, flicker noise is added as random draws from normal distributions with user-defined variance and frequency dependence,

$$n_{flicker} = \sigma_{flicker}\, G_{1/f}(0,1), \qquad (36)$$

where $\sigma_{flicker}$ is the flicker noise rms and $G_{1/f}(0,1)$ is a Gaussian random process with zero mean and unit variance shaped as $1/f^{\beta}$. In this equation, the "pink" (or "brown") noise is simulated in MATLAB with an additional argument that shapes the spectral characteristics of the data as $1/f^{\beta}$. The total analog signal in the system with noise included is given by

$$S_{analog} = S_{shot} + n_{thermal} + n_{read} + n_{flicker}, \qquad (37)$$

where $n_{thermal}$, $n_{read}$, and $n_{flicker}$ are the signals in electrons generated by thermal, read, and flicker noise, and $S_{shot}$ is the source image at the detector with shot noise and FPN included. Figure 19 shows an example of how the total analog signal is obtained in ASSET, with images of each noise source. The source image, whose signal is now in units of electrons and includes most noise sources, is quantized by dividing the signal by the conversion gain, derived in ASSET from the user-defined well depth $w$, number of bits $n$, and analog gain factor $k_a$,

$$CG = \frac{w}{k_a\, 2^{n}}, \qquad (38)$$

then
$$DN = \left\lfloor \frac{S_{analog}}{CG} \right\rfloor. \qquad (39)$$

The resulting count values are rounded down, introducing quantization noise [2]. Quantization noise depends on the number of bits used: given $n$ bits, the signal can be separated into $2^n$ quantization levels. In ASSET, quantization noise is added by directly converting the signal to counts using an analog-to-digital function in MATLAB that converts the analog signal to a digital signal in integer counts based on the user-defined number of bits. In addition, real systems typically exhibit an electronic bias that results in a hardware offset. In ASSET, a hardware offset ($HO$) represents an offset (or bias) of the system due to the system's electronics ("hardware"). It is included in ASSET to add a bias term that does not contribute to photon or electron noise. Like $CG$, hardware offset is a term used to represent more complicated underlying physics.
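A minimal sketch of the quantization step of Equations (38)–(39), assuming the conversion-gain form above (names and parameter values are illustrative, not ASSET's):

```python
import numpy as np

def quantize(signal_e, well_depth, n_bits, analog_gain=1.0):
    """Convert an analog frame [electrons] to digital numbers [counts].

    Conversion gain CG [electrons/count] maps the full well onto the
    2**n_bits available code values; counts are floored (Eq. 39) and
    clipped to the ADC range, which introduces quantization noise.
    """
    cg = well_depth / (analog_gain * 2 ** n_bits)   # electrons per count
    dn = np.floor(signal_e / cg)
    return np.clip(dn, 0, 2 ** n_bits - 1).astype(int)

frame = np.array([[0.0, 25_000.0], [49_999.0, 80_000.0]])  # electrons
dn = quantize(frame, well_depth=50_000.0, n_bits=12)
# CG = 50000/4096 ~ 12.2 e-/count; 80000 e- saturates at 4095 counts
```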
In ASSET, the hardware offset is specified in counts but is added as electrons in the detector frame, by adding the corresponding number of electrons after Poisson sampling (i.e. after shot noise),

$$S = S_{analog} + HO \cdot CG, \qquad (40)$$

where $S_{analog}$ is the analog signal and $CG$ is the conversion gain used to convert from counts to electrons. Some sensors collect a number of frames on orbit and, instead of downlinking all the frames at once, co-add (or sum) those frames and send only one frame. This is generally done because of limited bandwidth (i.e.
the sensor cannot downlink every frame). This can be computed in ASSET by summing over all frames until the downlink saturation level is reached; if saturation occurs, a digital gain is used to decrease the number of counts. Digital gain is used when the counts of the downlink frame would exceed its maximum value and is usually applied to prevent saturation in the downlink frame. Digital gain is expressed by $2^{-m}$, where $m$ is the digital gain coefficient, and it is multiplied with the sum of all frames in counts:

$$DN_{downlink} = 2^{-m} \sum_{k} DN_k. \qquad (42)$$

In the equation, if $m$ is zero we get $2^{0} = 1$ and the signal remains unchanged, but if $m$ is one we get $2^{-1}$ and the signal is reduced by half. Therefore, for any given digital gain coefficient $m$, the resulting number of bits is reduced from $n$ bits to $n - m$ bits.

Conclusion

ASSET is a physics-based model used to emulate wide field of view (WFOV) EO/IR sensors and generate synthetic data sets with realistic radiometric, noise, and sensor properties representative of a broad range of scenes and sensors operating in the visible through thermal infrared wavelengths. It is not designed to replace high-fidelity models or to exactly simulate real scenes or sensors, but to provide realistic data representative of real sensors [2].
The purpose of this paper was to provide a detailed description and comprehensive overview of ASSET, giving an understanding of the end-to-end process from photons leaving a source, to their arrival at an overhead sensor, to the ultimate conversion from electrons to counts. Combining each of the pieces discussed in this paper yields a complete, end-to-end analysis of the sensor process. Currently in ASSET, path radiance only accounts for the absorption coefficient, but future development will include an algorithm that accounts for the scattering component. In addition, future work will allow the user to specify any arbitrary PSF or detector response. Some of the noise sources are currently implemented incorrectly.
The errors are minor, but they will be corrected going forward. Although ASSET provides realistic noise, detector, and electronics properties, the model needs significant improvement in order to give full control over sensor characteristics. Future development will add higher-fidelity modeling of FPA behavior to the current ASSET model to more accurately model a sensor's detector response to incident irradiance, where the user will have full control over sensor FPA and electronics characteristics, allowing all sensor information to be provided as an input to ASSET. Development will continue to introduce improvements that allow custom detector responsivity, filter response, self-emission, and modeling of quantum efficiency, responsivity, dark current, and noise based on the detector circuit and electronics (e.g.
model thermal noise based on sensor circuitry).

References

[1] S. D. Brown and E. J. Ientilucci, "Advances in wide-area hyperspectral image simulation," SPIE 5075, Targets and Backgrounds IX: Characterization and Representation, W. R. Watkins, D. Clement, and W. R. Reynolds, eds., p. 110, International Society for Optics and Photonics, 2003.
[2] S. R. Young, B. J. Steward and K. C. Gross, "Development and Validation of the AFIT Sensor and Scene Emulator for Testing (ASSET)," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXVIII, 2017.
[3] J. M. Palmer and B. G. Grant, The Art of Radiometry, Washington: SPIE Press, 2010.
[4] R. G. Driggers, M. H. Friedman and J. M. Nichols, Introduction to Infrared and Electro-Optical Systems, Boston: Artech House, 2013.
[5] S. C. Cain, Direct-Detection LADAR Systems, Washington: SPIE Press, 2010.
[6] S. C. Cain, "Instructor Notes – Introduction to LiDAR," Air Force Institute of Technology, 2018.
[7] J. E. Nelson, "Thesis: Infrared Methods for Daylight Acquisitions of LEO Satellites," Department of Defense, Wright-Patterson Air Force Base, 2004.
[8] B. J. Steward, "Instructor Notes – ASSET Legacy Brief," 2018.
[9] G. C. Holst and T. S. Lomheim, CMOS/CCD Sensors and Camera Systems, Florida and Washington: JCD Publishing and SPIE Press, 2007.
[10] R. S. W. Jr., "An Astrophotographer's Gentle Introduction to Noise," Sky & Telescope, 15 April 2018. [Online]. Available: http://www.skyandtelescope.com/astronomy-blogs/imaging-foundations-richard-wright/astrophotography-gentle-introduction-noise/. [Accessed 1 June 2018].
[11] E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems, New York: John Wiley & Sons, 1996.
[12] J. R. Janesick, Photon Transfer, Washington: SPIE Press, 2007.