This manual serves as an important reference for users of the Center for Earth Observation (CEO), and especially students in the course Observing the Earth from Space (OEFS). It also provides a concise general introduction to the field of remote sensing and digital image analysis. Part I provides an introduction to the facilities of the lab and their use. Part II is an introduction to the fundamental characteristics and sources of remotely sensed data. Part III describes the major composite datasets used in the CEO. Basic image processing techniques are outlined in Part IV. Parts V and VI will steer you in the right direction to find more information on these topics.
The OEFS course is taught using the remote sensing software package ENVI from ITT Visual Information Solutions. Most of the specific instructions in this guide, as well as in the more extensive CEO Online Documentation section, focus on how to perform various functions in either ENVI or ERMapper. Because no single software package is best for every task, the CEO Lab also provides a variety of other leading geospatial software, including ERDAS Imagine and ERMapper by Leica Geosystems and ArcGIS by ESRI. Please see a member of the CEO staff if you have specific software needs.
Students taking the OEFS course should be aware that some of the information in this guide might change during the course of the semester. Such changes will be announced in class or lab and will most likely be posted on the Classes V.2 server. The Observing the Earth From Space web page is the source for the most up to date class information; check it regularly. Please report to the course instructors any inaccuracies or problems you discover with this guide or the lab.
In addition to these printed manuals, you should take advantage of several sources of on-line help.
For most topics, especially those concerning ENVI or Observing the Earth From Space, use the web!
Several web pages provide very useful information for this course. From the CEO's home page you will be able to access the following pages, among others:
Use the CEO and OEFS email lists to contact other lab users.
Email is a powerful tool for getting help. There are mailing lists for students of the Observing the Earth From Space course ("oefs-list@pantheon") and for all Yale remote sensing users ("firstname.lastname@example.org"). If the answer to your question is not in any of the above sources, you can send a question to everyone on the appropriate list. In most cases, someone else will have run into your problem before and will offer a solution, a suggestion, or at least moral support.
You may also consult the instructors and TA's for the Observing the Earth From Space course individually. Specific office hours will vary.
The spectrometer uses a fiber optic cable with a foreoptic attachment to limit the sensor's field of view. A detector array in the spectrometer captures photons, which are converted to and stored as electrons. The stored charge is read out as a voltage, digitized, and transferred to a PC as "raw" digital numbers.
The spectrometer measures three specific radiation quantities: reflectance, radiance, and irradiance. It uses a specially configured Toshiba laptop computer to perform the numerous calibrations and reference corrections that are required when measuring radiation quantities. These calibrations and corrections remove the "dark current" portion of the signal associated with thermal electrons and produce signal ratios to adjust for varying ambient lighting.
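The dark-current subtraction and reference ratio described above can be sketched in a few lines. This is a minimal illustration, not the instrument's actual software; the DN values below are made up, and real processing involves additional wavelength-dependent calibration.

```python
import numpy as np

# Hypothetical raw digital numbers (DNs) at three wavelengths;
# the values are illustrative, not actual instrument output.
target_dn = np.array([1200.0, 1500.0, 1800.0])     # scan of the target
reference_dn = np.array([2000.0, 2400.0, 2600.0])  # scan of a white reference panel
dark_dn = np.array([200.0, 200.0, 200.0])          # dark-current scan (shutter closed)

# Remove the dark-current offset from both scans, then ratio the target
# signal against the reference to adjust for ambient lighting.
reflectance = (target_dn - dark_dn) / (reference_dn - dark_dn)
```

The resulting values are dimensionless reflectances between 0 and 1, which is why the same surface measured under different lighting conditions yields comparable spectra.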
Detailed operating instructions can be found in the Observing the Earth From Space class exercise "Experiments with a Personal Spectrometer". A copy of this document is located in the Online Documentation section of the CEO website.
Note: The Geoposition window lists cell coordinates using X and Y labels. The Image Subset Wizard uses Row and Column notation. The X and Y order is NOT the same as the Row and Column order. The Start Row in the wizard is equivalent to the "Top Left Cell Y" in the Geoposition window.
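The swap described in the note above is easy to get wrong, so it can help to write it out explicitly. The coordinate values below are hypothetical:

```python
# Hypothetical cell coordinates read from the Geoposition window.
top_left_cell_x = 412  # X = column index
top_left_cell_y = 135  # Y = row index

# The Image Subset Wizard expects Row first, then Column, so the order flips:
start_row = top_left_cell_y       # "Start Row"    = "Top Left Cell Y"
start_column = top_left_cell_x    # "Start Column" = "Top Left Cell X"
```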
Digital elevation data can be used to great advantage when studying the environment. Remote sensing and GIS software programs can use these data to digitally enhance images, revealing previously hidden topographical relationships. There are a variety of DEM products available today, with resolutions ranging from 30 to 1,000 meters. These data are normally stored as raster datasets using a signed 16-bit data type. Please read the DEM document in the FAQ section of the CEO web site to learn more about the various datasets and how to obtain and import them.
The primary source of elevation data for the U.S. is the USGS National Map Seamless Server. These data are constantly updated as better data become available. The server provides complete coverage of the conterminous U.S. at 1 arc second (30 meter) resolution. Most of the country is also available at 1/3 arc second (10 meter) resolution. There are guidelines on how to download and import DEM data in the FAQ section of the CEO web site.
SRTM data are also available at 30 meter resolution for the U.S. and at 90 meter resolution globally. See the CEO FAQ on SRTM DEM data. Version 1 of these data had many voids, so you should always select the filled, finished version 2. The CEO also has a nationwide 100 meter dataset in ERMapper format on CDs. See a member of the CEO staff if you wish to use this dataset.
Global elevation data are generally available at 1000 m and 90 m resolutions. The GTOPO30 data have 1000 m resolution. This product was developed by the USGS in 1996. The CEO has the complete global coverage in ERMapper format online at N:\ERM_files\GTOPO30_DEM. You can learn more about these data at the USGS GTOPO30 site.
The Shuttle Radar Topography Mission (SRTM) mapped 80% of the Earth's surface in 2000. These data have now been released in one degree tiles with a resolution of 90 meters. Currently the data have some voids and gaps, but they are in the process of being "cleaned". You can learn more about processing the SRTM data on the CEO web site.
The USGS offers the ASTER Global Digital Elevation Model, a global elevation dataset at 30 meter resolution created from ASTER sensor imagery. This is version 1 data, with data voids, irregular coastlines, and other issues. You can search for these data on the NASA WIST site.
The field of satellite based remote sensing is constantly expanding, with new sensors being launched by several governments and corporations each year. Many of these sensors capture increasingly complex data and store them in new formats. As a result, the requirements and techniques for importing and exporting images change often. This challenge is mitigated to some degree by maturing data format standards and expanded software capabilities.
If you obtain data in a format you are not familiar with, you should first try to open the file directly with the software you are using. If this does not work, then look for an import process that may work with your data. For example, ERMapper can now directly open many image graphics formats such as JPG and TIFF. It can also open the hierarchical data format (HDF) that is used extensively by NASA. Once you open these data, it may be necessary to save them in ERMapper format to perform advanced functions with the software. ERMapper also has many different import options listed under the Utilities section of the main menu. In some cases you may be able to open an image using the ENVI software package and then save it in ERMapper format for subsequent processing.
While many types of imagery can be opened easily with ERMapper and other software, some data files require special processing before you can use them successfully. The CEO staff has documented detailed instructions on how to locate and import various types of data. You should look on the CEO web site in the Online Documentation section for specific instructions. If you are still not able to process the data you have obtained, contact a member of the CEO staff for assistance.
ERMapper has two built-in utilities for creating annotations on your image. You can add an "Annotation Overlay", a "Map Composition Overlay", or a combination of the two to your algorithm to add things like text, vectors, and scale bars. There are too many options to go into detail here, so see the sections of the ERMapper User Guide that discuss Annotations and Map Composition for instructions on using these features.
Some of the features included in the annotation and map composition overlays are very powerful and useful: scale bars, north arrows, lat/long grids, circles, labels, and point features, among others. To take full advantage of these features you should learn the ERMapper vector file format, which is described in the ERMapper Customizing manual.
Some users combine ERMapper and PowerPoint to create effective presentation graphics. You can use the annotation tools in ERMapper to add a scale bar and perhaps a legend or north arrow, then save the result as a JPG image. The image can be placed in a PowerPoint slide where you can easily add a title, text boxes, circles, arrows, and other graphics to complete the graphic. You can also do the same in other graphics programs or Microsoft Word.
1. What kind of image should I use for my project?
This is a complex question. Some of the basic issues you must consider: How frequently do you need to "look" at the surface of the earth? Are you interested in changes between two decades or two growing seasons? How much detail do you need to see on the ground? Is a 30 m pixel sufficient, or, if you are working on a very large area, will a 1000 m pixel serve? What portions of the electromagnetic spectrum are of interest for the work you are doing? What budgetary constraints do you have? Must you stay with free images, or can you afford a high resolution image that may cost $3,500? Please contact the CEO staff for assistance in evaluating which of the different sensors may provide the information you will need for your research.
2. Now I know what I want, where can I find an image?
The CEO Data Archive should be the first place to look for imagery. If you are not able to find the data you need in the archive, there are many places on the Internet that allow you to search for imagery and download or order it. The CEO maintains a list of good sites to look for imagery. From the main web page follow the CEO Links and navigate to the "Data Search Sites" section.

3. What projection should I use?
In many cases the choice of projection is somewhat arbitrary. One overriding factor in this decision is whether you already have other information that is in a given projection. If so, certainly use that projection. See the discussion in the Geometric Corrections section of the Guide for more information on map projections in general, or the discussion in the ERMapper Users Forum for more information on how ERM deals with map projections.
4. Someone is giving me an image, what format should I ask for?
There are two formatting considerations when obtaining your own imagery. First, you need to decide on the physical media that will transport the image from the source system to the CEO. We can handle, in order of decreasing preference: ftp transfer over the Internet, CD-ROM or DVD-ROM, 100MB or 250MB ZIP disk, and 8mm tape.
Second, you need to decide on the format of the image file. ERM has a long list of formats that it can import (see the options under "Utilities | Import Raster" for a complete list). Some common options that work well are: GeoTIFF, a flat binary file (you need to know the number of rows/cols and how many bytes per pixel), the Mac PICT format, and an ASCII grid. ERDAS IMG files and Arc/Info export format files can also be used.
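To see why the flat binary option requires knowing the rows, columns, and bytes per pixel up front, consider this small sketch. It writes a stand-in file and reads it back; the filename and dimensions are invented for illustration, since a raw binary file carries no header describing itself:

```python
import numpy as np

# Create a stand-in flat binary file: 4 rows x 5 cols, 1 byte per pixel.
rows, cols = 4, 5
np.arange(rows * cols, dtype=np.uint8).tofile("flat_band.bin")

# Importing requires supplying rows, cols, and the pixel data type
# yourself; get any of them wrong and the image is scrambled.
image = np.fromfile("flat_band.bin", dtype=np.uint8).reshape(rows, cols)
```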
5. How do I burn a CD or DVD?
Each PC has the software package Click'N Burn. This program has good online documentation. Two of the PCs also have DVD burners which use RecordNow Max software to burn DVDs. If you are having difficulty, please see a member of the CEO staff or one of the course TA's.
In principle, minerals may be uniquely identified based on their spectral signatures. However, in practice, the spectral and spatial resolution of most passive sensors is too poor for the degree of detail necessary to do this. Figure 2 illustrates the spectral signatures of several minerals. Notice that broad variations (over several microns) occur between some minerals, but others have the same general shape and may only be distinguished by subtle variations in the intensities of their signatures, or by the presence or absence of distinctive, narrow absorption bands in the spectra. The spectral signatures of geologic objects are further complicated by variations in weathering and surface cover, as well as the blending of individual component signatures into one combination signature for each pixel. Although these limitations might prevent identification of a specific mineral or rock from an image alone, areas with similar signatures may be identified and objects may be placed in broad groups based on their spectral signatures. The actual identification of these groups may then be confirmed with ground truth. This approach works best in regions with little obscuring ground cover such as deserts and mountains above the tree line.
Sensors equipped with thermal bands and with sufficient temporal resolution may be used to estimate the thermal inertia of an area, potentially enabling an analyst to more accurately identify an object or composition of a region.
Applications involving plant life are generally better able to take advantage of the spectral information in images acquired by current-day sensors, for several reasons. One major reason is that vegetation has a very distinctive spectral signature in the visible and near infrared (NIR) which is detectable even by sensors with low spatial, spectral, or radiometric resolution. Figure 3, figure 4, and figure 5 illustrate typical reflectance spectra from a few types of vegetation. The key features in these plots are vegetation's high NIR reflectance in general, and the relative reflectance of grass, deciduous trees, and coniferous trees. Chlorophyll and water are the substances that dominate the spectral signature of plant life. Different concentrations of these compounds produce marked differences in the spectra of different plants, supporting estimates of plant health and biomass as well as species identification. Figure 6, figure 7, and figure 8 demonstrate how the spectral signature of vegetation changes with variations in water content, biomass, and plant health.
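The contrast between chlorophyll's red absorption and vegetation's high NIR reflectance is commonly summarized in the Normalized Difference Vegetation Index (NDVI), a standard ratio you will encounter in ENVI and other packages. The reflectance values below are illustrative, not measurements:

```python
import numpy as np

# Illustrative red and NIR reflectances for three surfaces:
# healthy grass, bare soil, and clear water (values are made up).
red = np.array([0.05, 0.20, 0.06])
nir = np.array([0.50, 0.30, 0.03])

# NDVI exploits the jump between red absorption and NIR reflectance;
# dense healthy vegetation pushes the ratio toward +1.
ndvi = (nir - red) / (nir + red)
```

Healthy vegetation yields values near +1, bare soil falls in a low positive range, and water is typically near or below zero.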
The spectral signatures of water bodies themselves depend largely on water depth and suspended matter. Radiation received at a sensor will have penetrated the water to some depth, reflected off suspended matter or off the bottom, then traveled back through the water to the surface. Water is a strong absorber of EM radiation, especially at longer wavelengths; blue light will travel through water for a few tens of meters or more, while infrared light is absorbed almost immediately at the surface. The spectral signature of a water body is therefore composed of the spectral signature of the reflecting surface (the bottom or a suspended particle) minus whatever is absorbed or scattered as the light travels through the overlying water. This effect is shown in figure 9, which plots the spectral reflectance of water with a sandy bottom at various depths. Notice that the overall shape of the plot remains similar at different depths, but that the intensity drops off as the water gets deeper.
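The depth dependence can be sketched with a simple exponential attenuation model, in the spirit of the Beer-Lambert law. The attenuation coefficients and bottom reflectance below are invented purely so that blue penetrates far deeper than near-infrared; real coefficients depend strongly on turbidity:

```python
import math

# Illustrative attenuation coefficients (per meter of water); these are
# assumptions, chosen only so blue penetrates much deeper than NIR.
k = {"blue": 0.05, "green": 0.10, "red": 0.40, "nir": 3.0}
bottom_reflectance = 0.45  # hypothetical bright sandy bottom

def observed_reflectance(band, depth_m):
    """Light crosses the water column twice (down to the bottom and back
    up), so the bottom signal is attenuated by exp(-2 * k * depth)."""
    return bottom_reflectance * math.exp(-2.0 * k[band] * depth_m)
```

Evaluating this model over a range of depths reproduces the behavior in figure 9: the curve keeps its shape but darkens with depth, with the NIR end collapsing almost immediately.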
Water will also affect the signature of bare soil or sand. Figure 10 is a plot of wet and dry sand reflectance. Notice that the water decreases the reflectance of the sand more severely at longer wavelengths. This effect is also illustrated in figure 11 which plots three spectra acquired on a wet lawn. The plot demonstrating high reflectance in the NIR is from grass on this lawn. The grass in the middle plot is partially covered with water so that only the tips of the blades of grass stick out of the water. A puddle of water covers the grass in the third plot by about 2-3 inches. Note that the water decreases the reflectance at all wavelengths, but that this effect is stronger at longer wavelengths.
Spatial analysis can be a powerful tool for identifying and characterizing large-scale features such as folds, faults, and drainage patterns. Remotely sensed data provide the ability to efficiently map extremely large areas.
Spatial analysis is generally less useful than spectral analysis for identifying different groups of plant life. In general variations in plant texture can be subtler than large geologic features and can occur quite frequently, increasing the effect of (typically) high frequency noise in the data. However, patterns of plant life, perhaps determined through spectral techniques, can often give clues to the underlying geology of an area. For example, a transition from one species of plant to another might indicate a transition from one soil type to another.
RADAR is an acronym for RAdio Detection And Ranging. The earliest radar systems operated in the radio band of the electromagnetic spectrum from approximately 1 to 10m. Modern radar systems transmit in the shorter wavelength microwave band from approximately 0.8cm to 1m. A radar system produces frequent, short bursts of microwave energy and measures the strength of the reflected echo, sometimes referred to as backscatter. Longer-wavelength radar systems can penetrate clouds and some surfaces such as sand and snow. This makes it an ideal tool for imaging tropical regions that have almost constant cloud cover. It has also been used to locate ancient stream beds in desert areas.
Two common forms of radar are not used to image the earth's surface. One is the Doppler radar system, otherwise known as the radar gun, which uses Doppler frequency shifts to measure the relative speed between the radar and its target. Plan Position Indicator (PPI) radar systems feature a rotating antenna with a circular sweeping display. These are commonly used for weather forecasting and air traffic control.
Side Looking Airborne Radar (SLAR) systems are used to image the earth's surface. These systems have an antenna fixed to the bottom of an airplane or spacecraft that is typically pointed to the side of the flight path. The side looking scheme was devised so that airplanes could fly parallel to the border of a hostile nation and "look" into the enemy territory.
Radar systems transmit energy in the microwave portion of the electromagnetic spectrum using wavelengths from approximately 0.75 cm to 100 cm. This range is divided into 8 bands, each identified by an alphabetic code (Table 3). These random letter designations were assigned during World War II as a security measure.
Band    Range in cm
Ka      0.75 - 1.1
K       1.1 - 1.67
Ku      1.67 - 2.4
X       2.4 - 3.75
C       3.75 - 7.5
S       7.5 - 15
L       15 - 30
P       30 - 100
POLARIZATION - Radar systems transmit energy in either a horizontal (H) or vertical (V) polarized plane. Systems generally receive reflected energy in the same plane as was transmitted. These are referred to as HH or VV systems. Horizontal systems are usually better at discriminating rectangular features such as buildings and fields, while vertical systems are usually better at discriminating between vertical features such as trees. More sophisticated systems have two receiving antennas and capture reflected energy with the opposite polarization. These are referred to as HV or VH systems. Some advanced radar systems can transmit and receive both polarizations and produce four images of an area: HH, VV, HV, and VH. These multi-polarization sensors offer greater information, similar to the capabilities of multi-spectral images from passive sensors.
INTERPRETATION - Satellite images produced by passive sensors record the variations in reflectivity and absorption of objects across the electromagnetic spectrum. Interpretation of radar images differs significantly from interpretation of passive sensor images: radar images record variations in the structure, texture, and electrical properties of the targeted surfaces.
Surface slope has a significant impact on the macro-scale interpretation of radar images. Slopes that face an antenna (foreslopes) are brighter than slopes facing away from the antenna. As the foreslopes approach perpendicular to the radar beam, their reflectance becomes brighter. This is known as foreslope brightening. Foreslopes take less time to image than backslopes. This phenomenon, called foreshortening, results in foreslopes being recorded shorter than they really are. Objects with very steep foreslopes will appear to lean toward the radar source. This is a result of the radar beam intercepting the top of the object before the base and is known as layover.
Surface roughness produces micro-scale relief on radar images. Radar-smooth surfaces will cause the transmitted energy to reflect away from the antenna. These surfaces appear dark on an image. Typical radar-smooth surfaces are calm water and paved roads. Radar-rough surfaces produce a diffuse reflection, resulting in a brighter image. Examples of radar-rough surfaces are cobbles, old-growth forest canopies, and surface waves on water. Apparent roughness on radar images is also dependent on radar wavelength and relative angle between the radar beam and the target surface.
The dielectric constant is a measure of an object's ability to conduct or reflect microwave energy. For most objects, this phenomenon has no significant impact on a radar image. As surface moisture increases, the dielectric constant and reflectivity increase. This would make a recently irrigated field appear brighter than a similar field without irrigation. Metallic objects such as bridges and railroad tracks act as amplifiers and appear very bright on radar images.

Where to find more information about RADAR at the Center for Earth Observation?
One of the first places to look is the textbook Remote Sensing and Image Interpretation by Lillesand and Kiefer, which can be found in the CEO lab or in one of the Yale libraries. Chapter 8 - Microwave Sensing provides a thorough background on radar systems and their special image processing techniques. Journal articles are also a source of relevant information. For example, the December 1995 issue of Photogrammetric Engineering & Remote Sensing (PE&RS) has three articles related to the use of Synthetic Aperture Radar (SAR) systems and sea ice. Other sources of information about radar are the remote sensing software packages used at the CEO, and websites for the various radar systems.
The ERMapper Applications Manual has a chapter dedicated to SAR imagery in mineral and oil exploration. Two case studies are described, outlining the reasons why radar imagery was appropriate for these projects. The Applications Manual is available in the CEO lab. It can also be found on-line by selecting the Help button from the main ERMapper menu. ERMapper has another on-line manual for radar. This is the ER Radar Manual. It provides a detailed description of specific processing and analysis techniques and algorithms used within ERMapper.
Two other remote sensing software packages available at the CEO are ERDAS Imagine and ENVI. Each application has a set of tools and on-line documentation for processing radar images. In addition, ENVI has several tutorial exercises to learn how to process and interpret radar data. The ENVI tutorial manual can be found in the CEO lab.
The World Wide Web is a vast source of information, some of it even on radar! You should begin by going to the CEO Links section of the CEO Home Page. There is a section for Radar which includes links to several of the more important radar providers.
The CEO also has sample images and browse software for RADARSAT, SIR-C, and JERS-1 data. These image samples should help you to understand the capabilities and challenges involved with using radar data. These samples are stored in the cabinet that contains the CEO Data Archive. See any of the CEO staff if you wish to work with these data sets.
LIDAR is an acronym for LIght Detection And Ranging. This active remote sensing system transmits pulses of laser light from an airborne platform and records the time delay of the reflection to measure the distance between the aircraft and the surface. When global positioning systems are integrated with the lidar, surface maps can be generated with sub-meter accuracies.
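The ranging principle above reduces to a one-line formula: the pulse travels out and back, so the one-way range is half the round-trip distance at the speed of light. The time delay below is a hypothetical value, not from any particular instrument:

```python
SPEED_OF_LIGHT = 2.998e8  # meters per second

def pulse_range(round_trip_s):
    # The pulse travels out and back, so the one-way range is half
    # the round-trip distance.
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A hypothetical return delayed by 6.67 microseconds corresponds to a
# surface roughly 1 km below the aircraft.
r = pulse_range(6.67e-6)
```

Note how short these delays are: sub-meter accuracy demands timing resolution on the order of nanoseconds, which is why lidar systems require such precise clocks and GPS integration.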
Lidar systems are able to record multiple returns at each pulse. This means that multiple surfaces can be measured at the same time. It is possible to map a forest canopy and the forest floor, or the surface and depth of a water body.
Figure 12 is a graphic comparison of the spatial and spectral resolutions of each of the various sensors discussed below. Each band on each sensor is represented by a rectangle on this plot. The x-axis on this plot is calibrated to wavelength (log scale to show detail at short wavelengths), so each band's width in the x-dimension spans the wavelengths to which it is sensitive. The y-axis on this plot is arranged by sensor, and each box's y-dimension is proportional to its spatial resolution. This convention is the same for plots 12a and 12b, the difference being that the spatial scale on the y-axis changes.
Satellite     Launched    Decommissioned    Repeat cycle/Altitude    Sensors
Landsat-1     7/23/72     1/6/78            18 days/920 km           MSS-RBV
Landsat-2     1/22/75     2/25/82           18 days/920 km           MSS-RBV
Landsat-3     3/5/78      3/31/83           18 days/920 km           MSS-RBV
Landsat-4     7/16/82     --                16 days/705 km           MSS-TM
Landsat-5     3/1/84      --                16 days/705 km           MSS-TM
Landsat-7     4/15/99     --                16 days/705 km           ETM+
Unfortunately, the RBV sensors were plagued with technical problems, and they were replaced on Landsat 4 by the Thematic Mapper sensors.
band 1: (green, 0.50-0.60µm) This region corresponds to the green reflectance of healthy vegetation and is useful for mapping detail, such as depth or sediment in water bodies. Cultural features such as roads and buildings also show up well in this band.
band 2: (red, 0.60-0.70µm) Chlorophyll absorbs these wavelengths in healthy vegetation. This band is useful for soil and geologic boundary discrimination.
band 3: (near IR, 0.70-0.80µm) The near IR is responsive to vegetation biomass and health.
band 4: (near IR, 0.80-1.10µm) Band 4 is very similar to band 3. It is used for vegetation discrimination, penetrating haze, and water/land boundaries.
band 1: (blue, 0.45-0.52µm) Water increasingly absorbs EM radiation at longer wavelengths, so band 1 provides the best data for mapping depth/detail of water covered areas. It is also used for soil/vegetation discrimination, forest mapping and distinguishing cultural features.
band 2: (green, 0.52-0.60µm) Like MSS band 1, this corresponds to the green reflectance of chlorophyll in healthy vegetation.
band 3: (red, 0.63-0.69µm) This band is useful for distinguishing plant species, soil and geologic boundaries.
band 4: (near IR, 0.76-0.90µm) Band 4 corresponds to the region of the EM spectrum which is especially sensitive to varying vegetation biomass. It also emphasizes soil/crop and land/water boundaries.
band 5: (mid IR, 1.55-1.74µm) This region is sensitive to plant water content which is a useful measure in studies of vegetation health. This band is also used for distinguishing clouds, snow and ice.
band 6: (thermal IR 10.40-12.50µm) This region of the spectrum is dominated completely by radiation emitted by the earth and is useful for crop stress detection, heat intensity, insecticide applications, thermal pollution and geothermal mapping.
band 7: (mid IR, 2.08-2.35µm) This region is used for mapping geologic formations and soil boundaries. It is also responsive to plant and soil moisture content.
The Enhanced Thematic Mapper Plus (ETM+) sensor captures data using the same seven bands as the TM sensors. One major feature of this enhanced sensor is the addition of a panchromatic band with 15m spatial resolution and a bandwidth from 0.52 to 0.90 µm. The second major enhancement is the increase in spatial resolution of the thermal band (6) from 120m to 60m.
The Scan Line Corrector on the ETM+ sensor failed in May 2003. As a result, there are gaps between each line of data. The USGS will sell images that have these gaps filled in with data from previous scenes. While this produces a gap-free picture, it should not be used for change detection analysis. Currently Landsat 5 images are being sold again, and NASA is exploring options to replace the sensor.
The USGS is the primary source of Landsat data in the United States. They have the largest collection of images for the U.S. and a very large catalogue of images from other parts of the world. As of January 2009 these data are free. Several other countries have established Landsat ground receiving stations which also archive and distribute imagery. These stations are known as the Landsat Ground Station Operations Working Group (LGSOWG).
One can search these archives online through the USGS EarthExplorer website at http://earthexplorer.usgs.gov.
Availability and pricing of satellite images changes frequently. You should explore the Image Archives and CEO Links - Image Archives section or the CEO FAQs page and/or see a member of the CEO staff for current information.
The SPOT (Satellite Pour l'Observation de la Terre) satellite was launched into an 832 km polar orbit on 2/22/86 by a multinational collaboration, primarily France, Sweden, and Belgium. SPOT repeats its orbit every 26 days but has the capability of 'off-nadir' viewing (looking at a scene to either side of the ground track). This capability increases SPOT's potential temporal resolution to 3-4 days, a significant improvement when studying short time-scale phenomena like volcanic eruptions or fires.
The SPOT ground track has a swath width of 60 km nadir and 80 km off-nadir, and images are stored in 60 km along-track segments. Unlike the MSS and TM instruments, SPOT acquires its data using a 'push broom' method as opposed to a side-scanning mirror. The platform carries two identical high-resolution-visible (HRV) scanners, which may be used in either of two modes.
SPOT 4 was successfully launched on March 24, 1998. The satellite features the new high-resolution visible/infrared (HRVIR) sensor package and a "vegetation" sensor package. The HRVIR differs from the HRV in that it includes a new band in the short-wave infrared range that is very sensitive to soil and leaf moisture. The "vegetation" sensor package captures data using the same four bands of the electromagnetic spectrum but has a pixel resolution of 1km and a swath width of 2,250 km. This will provide near-global coverage daily.
SPOT 5 was launched on May 4, 2002. This satellite features a pair of 5m panchromatic sensors that can be combined to produce a 2.5m black and white image. It also has a 10m multi-spectral sensor. This satellite carries the same "vegetation" sensor package as SPOT 4.
In panchromatic mode the HRV has an IFOV of 10 m × 10 m and stores data with 8-bit resolution in a single band that spans the visible region of the spectrum.
band 1: (0.51-0.73µm) The radiometric information content of this band is very similar to that of a black and white photograph. SPOT panchromatic images are not very useful by themselves for classification of landscapes. However, their very high spatial resolution makes them useful for visual interpretation, digitally sharpening lower-resolution multi-spectral data, and generation of stereo pairs.
In XS (multi-spectral) mode the HRV has an IFOV of 20 m × 20 m and stores data with 8-bit resolution in three spectral bands, which are similar to bands 1, 2, and 4 of the MSS.
band 1: (green, 0.50-0.59µm) Like MSS band 1, this corresponds to the chlorophyll reflectance of healthy vegetation.
band 2: (red, 0.61-0.68µm) Like MSS band 2, this band is useful for distinguishing plant species, soil and geologic boundaries.
band 3: (near IR, 0.79-0.89µm) Like MSS band 4, this band is sensitive to varying vegetation biomass and emphasizes soil/crop and land/water boundaries.
band 4: (short-wave IR, 1.5-1.75µm) (HRVIR only) This band has a high degree of sensitivity to soil and leaf moisture.
The U.S. distributor of SPOT imagery is the SPOT Image Corporation. Customer services at SPOT Image may be reached at 1-800-ASK-SPOT. SPOT Image's educational support program is substantial. Level 1 imagery (basic radiometric and geometric corrections) is available at the educational price of $1000 per scene, as opposed to the standard commercial price of $2600.
CNES, SPOT Image, SPOT User's Handbook, 3 Volumes (Volume 1: Reference Manual, Volume 2: SPOT Handbook, Volume 3: SPOT Handbook Appendices), Centre National d'Etudes Spatiales and SPOT Image Corporation, Toulouse, France and Reston, VA.
On the web at: http://www.spot.com/
The Advanced Very High Resolution Radiometer (AVHRR) was first launched with TIROS-N on 10/19/78 and has flown on each of the subsequent NOAA satellites through NOAA-14. NOAA-15 was successfully launched in May 1998 and is being brought online as of June 1998. The AVHRR flown aboard TIROS-N, NOAA-6, NOAA-8 and NOAA-10 has four spectral channels, while those flown aboard NOAA-7, NOAA-9, NOAA-11, NOAA-12 and NOAA-14 have an additional thermal infrared channel. The nominal orbital altitude of 833 km gives each of these satellites a repeat orbit every 8-9 days, but the AVHRR's swath width of approximately 2,400 km allows for complete global coverage every day. Data are stored at 10 bits per pixel. The IFOV of the AVHRR is 1.1 km by 1.1 km at nadir and 2.4 km by 6.9 km at the edges of the image.
The AVHRR transmits data in three modes. Data are continuously broadcast at full spatial resolution and may be received and stored by any station within line of sight that is capable of capturing the signal. Data acquired directly from the satellite in this manner are known as High Resolution Picture Transmission (HRPT) data. Full-resolution data for selected regions are also stored on onboard tape recorders, then dumped to a ground station once per orbit. Datasets recorded and dumped in this manner are known as Local Area Coverage (LAC) data and have the same resolution characteristics as HRPT. In addition to these high-resolution AVHRR data, lower-spatial-resolution datasets, known as Global Area Coverage (GAC), are maintained for all regions. These are produced by sampling every third scan line and averaging four out of every five pixels along each scan line, resulting in approximately 4 km resolution.
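The GAC reduction just described is easy to sketch. The following minimal example (the function name and array shapes are illustrative, not NOAA's actual processing code) keeps every third scan line and averages four out of every five pixels along each kept line:

```python
import numpy as np

def gac_from_lac(lac, line_step=3, group=5, averaged=4):
    """Approximate GAC reduction of a full-resolution (LAC/HRPT) array:
    keep every third scan line, then average four out of every five
    pixels along each kept line (the fifth pixel is skipped)."""
    lines = lac[::line_step]                      # every 3rd scan line
    ncols = (lines.shape[1] // group) * group     # trim any ragged edge
    blocks = lines[:, :ncols].reshape(lines.shape[0], -1, group)
    return blocks[:, :, :averaged].mean(axis=2)   # average 4 of 5 pixels

# a 9-line, 10-sample toy swath at full resolution
lac = np.arange(90, dtype=float).reshape(9, 10)
gac = gac_from_lac(lac)
print(gac.shape)   # (3, 2): one third the lines, one fifth the samples
```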
The AVHRR's four or five spectral bands are used primarily for mapping large areas, especially when good temporal resolution is required. Applications include snow cover and vegetation mapping; flood, wild fire, dust and sandstorm monitoring; regional soil moisture analysis; and various large-scale geologic applications.
band 1: (visible, 0.58-0.68µm) The blue-green region of the spectrum corresponds to the chlorophyll absorption of healthy vegetation.
band 2: (near IR, 0.725-1.10µm) This region is sensitive to varying vegetation biomass and emphasizes soil/crop and land/water boundaries.
band 3: (IR, 3.55-3.93µm) A thermal band which detects both reflected sunlight and earth-emitted radiation and is useful for snow/ice discrimination and forest fire detection.
band 4: (thermal IR, 10.30-11.30µm) A band useful for crop stress detection and locating/monitoring geothermal activity. This channel is also commonly used for water surface temperature measurements.
band 5: (thermal IR, 11.50-12.50µm) Similar to band 4, this channel is often used in combination with band 4 to better account for the effects of atmospheric absorption, scattering, and emission.
Two very good sources of AVHRR data are the USGS EarthExplorer website at http://earthexplorer.usgs.gov and the Satellite Active Archive operated by NOAA/NESDIS at http://www.saa.noaa.gov. These are fully searchable archives, complete with browse images. Full datasets are available on 8mm tapes for $50. The SAA also offers a discount for multiple images ordered at one time, with a charge of $50 for the first image on each tape and $30 for each additional image that will fit on the tape. There is also a provision for obtaining image subsets less than 10 megabytes in size for free via ftp.
Many other sites offer AVHRR data in various formats, covering various locales with varying pricing schemes. Generally these are sites with HRPT stations that post whatever they get for some short time period. Good places to start looking are the University of Miami, Louisiana State University, University of Hawaii, University of Colorado and Dundee University in England.
Kidwell, K. B., NOAA Polar Orbiter Data Users Guide, NOAA/NESDIS/NCDC, Satellite Data Services Division, Washington D.C., 1995.
In the imaging mode, three bands of information are routinely acquired:
Visible band: The GOES visible band is sensitive to most of the visible region of the spectrum, from about 0.4 to 0.7 µm. Radiation in this region of the spectrum is entirely composed of reflected or backscattered sunlight and is therefore indicative of the earth's albedo, an important quantity for computing the radiation budget of the earth's atmosphere. This band is only activated during the day. Visible images are acquired at 1 km resolution, but are often distributed at 8 or 16 km resolution for practical purposes.
IR band: The GOES IR band that is usually acquired in imaging mode is centered around 11.2 µm, located in the infrared window. Emitted radiation from the earth's surface and atmosphere dominates this region of the spectrum. Images of this band are usually printed as negatives so clouds (low IR emitters, due to their low temperature) appear white. This band is commonly used to estimate a temperature profile for the earth's atmosphere, among other applications. IR images are commonly acquired at 7 km resolution and resampled to 8 or 16 km resolution for distribution.
Water vapor band: The GOES water vapor band most commonly used for imaging is sensitive to radiation around 6.7 µm, though bands centered at 12.7 and 7.3 µm (also sensitive to H2O emission), and 13.3 (CO2) or 3.9 (window) µm are sometimes substituted. Images of this band are also commonly printed as negatives for the same reason as the IR band. Water vapor images give a picture of upper-tropospheric moisture distribution and are commonly acquired at approximately 16 km resolution.
The sounding mode of the VAS is primarily used for research purposes because it cannot operate independently of the imager.
Currently GOES-12, referred to as GOES-East, is centered at 75W. GOES-10 is now GOES-West and is centered at 135W. GOES-8 and GOES-11 are in standby orbits. GOES-9 has been redirected to the northern Pacific. The imager on the GOES I-M series has five spectral bands:
The GOES I-M sounder acquires data in 18 infrared bands and one visible band. See Menzel and Purdom for detailed descriptions of both the imager and sounder instruments.
The GOES Project home page at http://rsd.gsfc.nasa.gov/goes/ has many pointers to other servers.
The IKONOS satellite was launched on 24 September 1999 from Vandenberg Air Force Base, California. It is the world's first commercial high-resolution imaging satellite, with 1-meter ground resolution for the panchromatic band and 4-meter resolution for the multispectral bands (nominal at <26 degrees off nadir). IKONOS has a swath width of 13 km at nadir and an along-track distance of 13 km for an individual scene. The sun-synchronous orbit has an altitude of 423 miles (681 kilometers). Revisit frequency is 2.9 days at 1-meter resolution and 1.5 days at 1.5-meter resolution.
Panchromatic mode: In panchromatic mode, the ground resolution is 1 meter, and data are stored in a single band which spans the visible to near-infrared region of the electromagnetic spectrum.
band 1: (0.45-0.90µm) The radiometric information content of this band is very similar to that of a black and white photograph. The very high spatial resolution makes these images useful for visual interpretation and for digitally sharpening lower-resolution multi-spectral data.
Multispectral mode: In multispectral mode, the ground resolution of each band is 4 meters, and data are stored in four spectral bands which are the same as bands 1, 2, 3 and 4 of the Landsat 4 & 5 TM.
band 1: (blue, 0.45-0.52µm) Same band range as Landsat 4 & 5 TM band 1. Water increasingly absorbs EM radiation at longer wavelengths, so band 1 provides the best data for mapping depth/detail of water-covered areas. It is also used for soil/vegetation discrimination, forest mapping and distinguishing cultural features.
band 2: (green, 0.52-0.60µm) Same band range as Landsat 4 & 5 TM band 2; this corresponds to the green reflectance of chlorophyll in healthy vegetation.
band 3: (red, 0.63-0.69µm) Same band range as Landsat 4 & 5 TM band 3. This band is useful for distinguishing plant species, soil and geologic boundaries.
band 4: (near IR, 0.76-0.90µm) Same band range as Landsat 4 & 5 TM band 4. It corresponds to the region of the EM spectrum which is especially sensitive to varying vegetation biomass. It also emphasizes soil/crop and land/water boundaries.
Space Imaging offers the ability to fully browse and buy imagery, products and services through their website, http://www.spaceimaging.com/level2/level2buy.htm
On the web at: http://www.spaceimaging.com/aboutus/satellites/IKONOS/ikonos.html
The RADARSAT satellite was launched in November of 1995 and has been operating continuously since that time. The RADARSAT system was developed in Canada and launched by NASA in exchange for data access rights. RADARSAT-1 has an orbital altitude of 798 km and an inclination of 98.6 degrees and circles the earth 14 times a day. It has a sun-synchronous orbit that allows it to rely on solar, rather than battery, power and provides satellite overpasses at the same local mean time. RADARSAT-2 is planned for launch in 2001.
The Synthetic Aperture Radar (SAR) sensor on RADARSAT-1 can be directed from an incidence angle of 10 to 60 degrees, in swaths of 45 to 500 km in width. This produces image resolutions ranging from 8 to 100 meters. RADARSAT-1 has a repeat cycle of 24 days, but covers the Arctic daily and can reach any part of Canada in three days. Using the 500 km swath width, equatorial coverage can be repeated every six days.
SAR Characteristics: RADARSAT-1 operates in the C-band at a frequency of 5.3GHz and a wavelength of 5.6 cm. The antenna polarization is HH, meaning that the system transmits and receives energy in the horizontal plane.
RADARSAT International handles the commercial distribution of RADARSAT images. Images cost several thousand dollars apiece. Specific missions can be planned to acquire images at required locations, times, and resolutions for an additional fee. RADARSAT International can be contacted at: http://www.rsi.ca/pricelist/price.htm
The scientific and educational community can acquire low-cost imagery from the NASA funded Alaska SAR Facility (ASF) housed at the University of Alaska Fairbanks. To learn more about the ASF and data availability, check out their web site at: http://www.asf.alaska.edu/
The Japan Earth Resources Satellite (JERS) was launched on 11 February 1992. It acquired images from 24 August 1992 to 31 December 1996. JERS-1 has an orbital altitude of 570 km and an inclination of 98 degrees. It has a repeat cycle of 44 days and operated in a sun-synchronous orbit.
The SAR sensor on JERS-1 has an off-nadir angle of 35 degrees, with a swath width of 75 km. This produces an image resolution of 18 meters. JERS-1 operates in the L-band at a frequency of 1275 MHz.
JERS-1 also carries an optical sensor package, OPS. OPS has a 75 km swath width and a pixel resolution of 18m x 24m. The sensor captures reflected energy in eight bands spanning seven spectral ranges from the visible to the mid-infrared (band 4 repeats band 3's wavelength, viewed forward, allowing it to produce stereoscopic images).
band 1: (green, 0.52-0.60µm) This corresponds to the green reflectance of chlorophyll in healthy vegetation.
band 2: (red, 0.63-0.69µm) This band is useful for distinguishing plant species.
band 3: (near IR, 0.76-0.86µm) This band is especially sensitive to plant biomass.
band 4: (near IR, 0.76-0.86µm) This band operates at the same wavelength as band 3 but is aimed 15.3 degrees forward to produce stereoscopic images.
band 5: (mid IR, 1.60-1.71µm) This band is sensitive to plant water content.
band 6: (mid IR, 2.01-2.12µm) This band is used to map geologic formations and is responsive to plant and soil moisture.
band 7: (mid IR, 2.13-2.25µm) This band is used to map geologic formations and is responsive to plant and soil moisture.
band 8: (mid IR, 2.27-2.40µm) This band is used to map geologic formations and is responsive to plant and soil moisture.
Information regarding data availability and cost can be obtained at the Earth Observation Center of the National Space Development Agency of Japan at the following website: http://www.eoc.nasda.go.jp/homepage.html
On the web at: http://www.eoc.nasda.go.jp/guide/satellite/sat_menu_e.html
On 18 December 1999 the Terra spacecraft (formerly known as EOS AM-1) was launched. Terra has an orbital height of 705km with a sun-synchronous, near-polar orbit. The spacecraft carries five new sensor packages to study the Earth's surface and atmosphere. For the latest updates, and a great deal more information on the Terra mission, check out the NASA web site at http://terra.nasa.gov/. The following is a brief synopsis of each of the sensor packages:
ASTER is designed to obtain high spatial resolution global, regional, and local images of the Earth. This sensor records 14 spectral bands of data ranging from visible, through short-wave infrared, to thermal. An ASTER scene covers an area of approximately 60 km by 60 km and data is acquired simultaneously at three resolutions. It has a spatial resolution of between 15 and 90 meters and is capable of 3-D stereoscopic viewing. It is anticipated that within 2 years a global 30-meter elevation model will be created using this package.
Bands 1 through 3 have a spatial resolution of 15m and cover the visible and near IR portions of the spectrum. Two receivers operate in the near IR wavelength, one pointing to nadir, one pointing backwards to produce stereoscopic images. Bands 4 through 9 operate in the short-wave IR portion of the spectrum and have a spatial resolution of 30m. This portion of the ASTER sensor failed in May 2008. Bands 10 through 14 operate in the thermal IR portion of the spectrum and have a spatial resolution of 90m. See the Appendix of the CEO online ASTER document for specific information about the wavelengths for each band of data.
For additional information about this sensor visit the ASTER web site at: http://asterweb.jpl.nasa.gov/. For information about locating and importing ASTER data see the FAQ section of the CEO website.
CERES will be used to measure solar-reflected and Earth-emitted radiation at the Earth's surface and the top of the atmosphere. This will be used to measure the earth's radiation balance daily. More information can be found at the CERES web site at: http://asd-www.larc.nasa.gov/ceres/ASDceres.html
This package features nine cameras pointing at various angles through the atmosphere. It will capture four bands of data at the red, green, blue, and near-infrared portions of the spectrum. It is designed to determine the amount, type and height of clouds and measure atmospheric aerosol particles. For specific information about MISR, visit the web site at: http://www-misr.jpl.nasa.gov/
This package captures 36 bands of data, in the visible and IR portions of the spectrum, at a spatial resolution of between 250m and 1km. It is designed to provide coverage of the Earth's land, oceans, and atmosphere. It will provide global coverage every two days. For specific information about this sensor visit the MODIS web site at: http://modis.gsfc.nasa.gov/. For information about locating and importing MODIS data see the Online Documentation section of the CEO website.
This package is designed to observe carbon monoxide and methane in the lower atmosphere, and their interactions with the land and oceans. It has a spatial resolution of 22km and a swath width of 640km. More information can be obtained at the MOPITT web site.
The Aqua spacecraft (formerly known as EOS PM) was launched on May 24, 2002. Its mission is to study the Earth's water cycle, including evaporation, water vapor, clouds, ice, snow, and soil moisture. Like its sister satellite Terra, Aqua has an orbital height of 705km with a sun-synchronous, near-polar orbit. It crosses the equator 4 hours later than Terra, at approximately 2:30 PM local time. This spacecraft contains 6 sensor packages. For the latest updates, and a great deal more information on the Aqua mission, check out the Aqua project web site.
Aqua carries the same MODIS and CERES sensors as Terra, described above. It also carries four new sensors, described briefly below:
This package is designed to develop accurate temperature profiles within clouds. It features 2378 infrared channels and 4 visible/near infrared channels. You can learn more at the AIRS website.
This is a 12-channel, all-weather, passive microwave sensor designed to measure precipitation rate, sea surface winds and temperature, water vapor and ice. The ability to measure these geophysical parameters will contribute to our understanding of the Earth's climate. You can learn more about AMSR-E on the Internet.
This 15-channel sensor is designed to create upper-atmosphere temperature profiles. It has a cloud-filtering capability and can measure temperature at five different levels simultaneously. You can learn more about AMSU at the AeroJet site.
This is a 4 channel sounder developed by Brazil. It is designed to obtain humidity profiles throughout the atmosphere. You can learn more at the HSB web site.
SeaWiFS stands for the Sea-viewing Wide Field-of-view Sensor. The objective of this project is to gather data on global ocean bio-optical properties. Various types and quantities of marine phytoplankton can be identified by observing subtle changes in the ocean's color. This ocean color data contributes to the study of ocean primary production and biogeochemistry.
The SeaStar spacecraft carrying the SeaWiFS instrument was launched on 1 August 1997. It has a 705km, sun-synchronous orbit. It features a spatial resolution of 1.1km and a nominal swath width of 2,800 km providing daily coverage of the world's oceans. The SeaWiFS instrument records information in 8 bands of approximately 20nm in width ranging from 400nm to 885nm. Find out more about the SeaWiFS project at the following web site: http://seawifs.gsfc.nasa.gov/SEAWIFS.html
SeaWiFS data is continuously being evaluated, recalibrated, and refined. As of November 1999 the third phase of data reprocessing was being finalized. Details of the reprocessing can be found at http://www.yale.edu/ceo/Documentation/sea_reproc.html
| What | Conterminous US Biweekly | Global 1 KM |
|---|---|---|
| Who | EDC: Eidenshink, Weinheimer, Madigan | EDC: Eidenshink, Faundeen, foreign ground stations |
| Spectral Res. | Calibrated AVHRR channels 1-5; NDVI; 3 solar geometry measures; date | Calibrated AVHRR channels 1-5; NDVI; 3 solar geometry measures; date |
| Spatial Res. | 1 km | 1 km |
| Temporal Res. (Composite Period) | 14 days | 10 days: 3 per month (01-10, 11-20, 21-end of month) |
| Radiometric Res. | Raw 1&2: 0.5% reflectance (8 bit); Raw 3-5: 0.5 K (8 bit); NDVI: 0.01 (8 bit); geom: 1 degree (8 bit) | Raw 1&2: 0.1% reflectance (16 bit); Raw 3-5: 0.17 K (16 bit); NDVI: 0.01 (8 bit); geom: 1 degree (8 bit) |
| Map Projection | Lambert Azimuthal Equal Area | Goode's Interrupted Homolosine |
| Size of 1 full image (MB) | 13 | 694 (8 bit), 1388 (16 bit) |
| Composite Technique | (89, 92-95): 1. Calc viewing geometry; 2. Raw -> radiance; 3. Calc NDVI; 4. Geometric registration; 5. Max NDVI composite. (90, 91): 1. Raw -> radiance; 2. Calc viewing geometry; 3. Geometric registration; 4. Compute NDVI; 5. Max NDVI composite | 1. Raw -> radiance; 2. Calc viewing geometry; 3. Geometric registration; 4. Compute NDVI; 5. Max NDVI composite; 6. Atmospheric corrections: a) Rayleigh scattering (Teillet 90), b) ozone (Teillet 91) |
| Composite Dates | 1989 - 1995 (winter discontinuities) | April 1992 - September 1996 |
| Status/Availability | All available on CD-ROM | Apr 92 - Sep 93 and Feb 95 - Dec 95 available via ftp |
| Satellite(s) | 89-94: NOAA-11; 95: NOAA-14 | 89-94: NOAA-11; 95: NOAA-14 |
| What | Conterminous US Land Cover Characteristics | Global Land Cover Characteristics |
|---|---|---|
| Who | USGS & U. Nebraska-Lincoln | USGS & U. Nebraska-Lincoln |
| Raw Data | Conterminous US biweekly AVHRR NDVI composites further composited to 1 month; 8 bands: March - October 1990 | Global 1 km 10-day NDVI composites further composited to 1 month; 12 bands: April 1992 - March 1993 |
| Spatial Res. | 1 km | 1 km |
| Original Classification | Seasonal land cover: 159 classes | Seasonal land cover: 205 classes |
| Derived Classifications | USGS LULC (Anderson level II): 26 classes; Simple Biosphere Model: 20 SiB classes + 7 mosaic classes; Biosphere/Atmosphere Transfer Scheme: 19 BATS classes + 9 mosaic classes | Global Ecosystems: 94 classes; IGBP LCC: 17 classes; USGS LULC (Anderson level II): 27 classes; Simple Biosphere Model: 20 classes; Biosphere/Atmosphere Transfer Scheme: 20 classes |
| Derived Summary of Seasonal Characteristics | Onset of greenness; peak of greenness; duration of greenness; vegetation characteristics (?); perennial/annual image; leaf longevity (evergreen vs. deciduous) | Onset of greenness; rate of greenup; peak of greenness; senescence; rate of senescence; duration of greenness; time-integrated NDVI |
| Map Projection | Lambert Azimuthal Equal Area | Goode's Interrupted Homolosine |
| Status (9/96) | Complete | NA complete; SA & Africa "coming soon" |
USGS Global Land Information Center: http://edcwww.cr.usgs.gov/webglis
Mission to Planet Earth Scientific Data: http://www.hq.nasa.gov/office/mtpe/
* Additional source information can be found via the World Wide Web as well as in the "Climate Image Datasets" black binder available in the CEO lab.
The term georeferencing refers to the process of assigning map coordinates to specific pixels in a raster dataset. If the image itself is already in a known map projection, the values of individual pixels need not be altered. Instead, one simply assigns a coordinate to each pixel, which retains its original data value.
In certain cases, however, the image will not already be in the desired map projection. The process of projecting raster data onto a plane and forcing it to conform to a given map projection is known as rectification. During rectification, data values from the original raster grid must be interpolated onto the new, rectified grid. The method used to assign these values is known as resampling or warping. Depending on the situation, one might use any of several common resampling methods to accomplish a given rectification.
For the most part rectification, by definition, involves georeferencing since all map projections are associated with a coordinate system. However, in some cases, one might only care if the image in question aligns with the grid of another raster dataset, not if the image is in any given projection! In a case such as this, rather than assigning map coordinates to image pixels, one assigns image coordinates to the pixels, and then performs a warp (rectification process). This process, aligning the grids of two raster datasets, is known as registration and is necessary for combining datasets of different types, for example image and topography, as well as for change detection. Image to image registration only involves georeferencing if the reference image (not the one being warped) is already georeferenced.
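The resampling step common to rectification and registration can be sketched with a nearest-neighbor example. This is a simplified illustration, not any particular package's implementation; `inverse_map` stands in for whatever transform (affine, polynomial from GCPs) the warp derives, and the function name is made up:

```python
import numpy as np

def nearest_neighbor_warp(src, inverse_map, out_shape):
    """Resample `src` onto a new grid. `inverse_map(rows, cols)` maps
    output pixel coordinates back into source coordinates; each output
    pixel takes the value of the nearest source pixel, so original data
    values are relocated but never blended."""
    rows, cols = np.indices(out_shape)
    src_r, src_c = inverse_map(rows, cols)
    src_r = np.clip(np.rint(src_r).astype(int), 0, src.shape[0] - 1)
    src_c = np.clip(np.rint(src_c).astype(int), 0, src.shape[1] - 1)
    return src[src_r, src_c]

# toy registration: warp a 4x4 grid onto a grid twice as fine
src = np.arange(16).reshape(4, 4)
warped = nearest_neighbor_warp(src, lambda r, c: (r / 2.0, c / 2.0), (8, 8))
print(warped.shape)   # (8, 8)
```

Nearest-neighbor resampling is often preferred before classification precisely because it preserves original data values; bilinear or cubic methods produce smoother imagery but alter the radiometry.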
Data that has been rectified to a particular projection is known as geocoded.
Unfortunately, it is impossible to create a completely distortion-free planar representation of a round object. Compromises must be made among the accuracy of area, shape, scale and direction as represented on the map. Different applications require different balances of these factors, and in some cases demand that the map satisfy some additional functional characteristic: plotting great circles as straight lines, for example, or maintaining constant scale along a satellite ground track. The wide variation in uses of maps has led to the development of many different map projections, which can be grouped based on their distortion-free characteristics as follows:
Just as one chooses a map projection for a given application, one must choose a particular sphere or ellipsoid to approximate the Earth. Differently shaped spheres and ellipsoids fit the actual shape of the Earth better in some places than in others: some are very close fits in one place but not in another, while others are fair approximations everywhere without fitting perfectly anywhere. When a particular sphere or ellipsoid is "pinned down" to a particular point on the surface of the Earth by specifying a tie point, or point of tangency, the pair is known as a datum. A specific datum and map projection pair are required to compute the transformation between latitude/longitude and map coordinates.
Most typical CEO users are likely to acquire images processed to levels 2, 3 or 4. When purchasing imagery, consider the following trade-off: geometric fidelity and navigational accuracy are obtained by warping the image, which degrades the radiometric accuracy of the result. Conventional wisdom holds that applications requiring very accurate classifications, or conversion of digital numbers to physical values (radiance, albedo, temperature, etc.), obtain better results by performing these operations on "raw" (level 2) data and then applying geometric corrections, rather than deriving quantitative data from already geometrically corrected images. On the other hand, accurate geometric correction is both time consuming and difficult. Those experienced with a particular type of imagery are likely to acquire the closest thing to raw data possible, preferring to perform all the corrections themselves based not only on the published calibration information but also on personal experience. After much hard work, such an approach is likely to yield better results and more accurate answers than relying on corrections applied by the vendor. However, considerable experience with the data type, and with satellite image processing in general, is required to make this approach pay off. Applications that can afford reduced radiometric fidelity (and the higher cost of corrected imagery), or that lack the requisite time and experience to perform all the corrections, may be better served by purchasing imagery with higher levels of pre-processed geometric correction.
The UTM is a series of Transverse Mercator projections established by the U.S. Army in 1947 to provide a standard for worldwide mapping. Under this system, the world is divided into sixty zones, each spanning six degrees of longitude. Distances in the x-direction (Eastings) and in the y-direction (Northings) are measured from the origin for each zone (where the zone's central meridian intersects the equator). Table 6 lists the UTM zones.
The UTM does not have a preferred datum, and is commonly used with whatever datum best approximates the region being mapped. Snyder  points out that the U.S.G.S. uses the Clarke 1866 ellipsoid for all land under U.S. jurisdiction except Hawaii where the International ellipsoid is used. In ERMapper, use the "NAD27" datum for the Clarke 1866 ellipsoid. Most of Connecticut is in UTM Zone 18 with a small portion of eastern Connecticut in UTM Zone 19. Connecticut data will typically use the "NAD27" datum with the Clarke 1866 ellipsoid.
Table 6: Universal Transverse Mercator zones, central meridians, and longitude ranges.
All values are listed in full degrees east (E) or west (W) from the Greenwich prime meridian (0). [From ERDAS, Inc., 1991]
|Zone||Central Meridian||Longitude Range||Zone||Central Meridian||Longitude Range|
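The zone layout tabulated above follows a simple arithmetic rule, sketched below (the function names are illustrative):

```python
def utm_zone(lon):
    """UTM zone number (1-60) for a longitude in degrees east (-180..180)."""
    return int((lon + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian of a UTM zone, in degrees east."""
    return zone * 6 - 183

# New Haven, CT lies at roughly 72.93 degrees west longitude
zone = utm_zone(-72.93)
print(zone, central_meridian(zone))   # 18 -75  (Zone 18, central meridian 75W)
```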
The SOM was designed to overcome these limitations by introducing time as a projection parameter. This approach allows the spacecraft-Earth geometry to change over the course of an orbital period and permits the central line of the projection to be curved rather than straight. The projection is not perfectly conformal, but errors are extremely small within the (~185 km) swath width of the Landsat satellites. ERMapper does not support the SOM because it is so complicated. This is a problem because many of the scenes in the CEO's archive have been processed to the SOM projection and are therefore not navigable using ERMapper without rewarping these scenes to a different projection using GCPs. See Snyder's book or appendix D of the Landsat Data User's Handbook for formulae and a more complete description of this important projection. It may be possible to import these images using ERDAS Imagine, reprojecting the image to a more standard projection, then converting the image to ERMapper format for subsequent processing.
Contrast stretching is important because regions that are spatially contiguous often have similar spectral signatures. Take for example a desert, ice cap, or ocean. Data points across the image will all have very similar values, and will result in a bleak image unless the contrast between these values is enhanced.
Figure 14, figure 15, and figure 16 graphically illustrate a few contrast enhancement strategies. We use a plot with a double y-axis to illustrate the contrast enhancement.  The x-axis is simply all possible values of the original data. In these examples, the original data ranges from 0-10. One y-axis represents the frequency that each x-axis value occurs in the original dataset (the number of pixels with each x-axis value in the dataset). This generates a histogram of dataset values which is plotted as a filled in curve. The second y-axis lists the possible range of output values, in these cases 0-255. The second curve on each of these plots, which does not have the area underneath it filled in, represents the transform from input values to display values. The linear transform stretches a range of given input values equally. The histogram equalization function stretches the input values more in the ranges that have a higher concentration of pixels. The threshold function takes all input pixels below a certain value and assigns them the same output value, and all input pixels above that value to a different output value.
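The three transforms just described can be sketched in a few lines of NumPy. This is an illustrative implementation, not ENVI's or ERMapper's; the 0-10 input range and 0-255 output range match the examples in the figures:

```python
import numpy as np

def linear_stretch(data, lo, hi, out_max=255):
    """Map input values in [lo, hi] linearly onto [0, out_max]."""
    scaled = (data.astype(float) - lo) / (hi - lo)
    return np.clip(scaled * out_max, 0, out_max).astype(np.uint8)

def histogram_equalize(data, out_max=255):
    """Stretch hardest where pixel values are most concentrated."""
    values, counts = np.unique(data, return_counts=True)
    cdf = np.cumsum(counts) / counts.sum()          # cumulative fraction
    lut = (cdf * out_max).astype(np.uint8)          # lookup table
    return lut[np.searchsorted(values, data)]

def threshold(data, cut, low=0, high=255):
    """Two-level output: below `cut` -> low, at or above -> high."""
    return np.where(data < cut, low, high).astype(np.uint8)

band = np.array([0, 1, 1, 2, 2, 2, 3, 10])          # input range 0-10
print(linear_stretch(band, 0, 10))                  # spreads 0-10 over 0-255
print(threshold(band, 3))                           # 0 below 3, 255 at/above
```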
To perform these enhancements, one operates on the entire dataset with a filter (a kernel in ERMapper) using a process called convolution. A filter is simply a small array of numbers carefully chosen to relate a pixel with its neighbors in a certain way. Convolving the filter with the dataset means multiplying each value in the filter by its corresponding value in the dataset, summing the results, and dividing by a normalizing constant (in the worked example below, the number of cells in the filter). This process of applying a filter to a dataset is often called the "sliding window" method, because you center the filter (window) on a pixel, convolve it with the dataset values falling within the window, then slide the filter to center on the next pixel. Convolution is not matrix multiplication: a new value is computed for only one pixel at a time, regardless of the size of the filter.
An example helps. In this case consider a 3x3 piece of a much larger dataset which contains a locally high value surrounded by smaller values:
|Original Pixel and Neighbors|
|10||12||9|
|11||20||9|
|9||10||10|
We'll convolve this windowed out piece of the dataset with a 3x3 average filter (see Figure 17) and a 3x3 sharpen filter (see Figure 18).
The computation for the average filter is (1x10 + 1x12 + 1x9 + 1x11 + 1x20 + 1x9 + 1x9 + 1x10 + 1x10) / 9 = 11.1.
The computation for the sharpen filter is (-1x10 + -1x12 + -1x9 + -1x11 + 14x20 + -1x9 + -1x9 + -1x10 + -1x10) / 9 = 22.2. The resulting windows are:
|Figure 17 (after the average filter)||Figure 18 (after the sharpen filter)|
|10, 12, 9||10, 12, 9|
|11, 11.1, 9||11, 22.2, 9|
|9, 10, 10||9, 10, 10|
Note that only the center pixel has changed. After convolving with the average filter, the pixel with the locally high value is more like its neighbors, whereas the sharpen filter enhances the difference between the center pixel and its neighbors. In a real dataset, this process would be repeated for each pixel and its neighbors that fall within the window of the filter. These filters can be of any size but must usually have an odd number of values in each direction so that the window is symmetric around the pixel in question.
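The worked example above can be reproduced directly. This is a sketch of the single-window computation only, not a full sliding-window pass over a dataset; the variable names are our own, and the division by 9 follows the text.

```python
import numpy as np

# The 3x3 window from the example: a locally high center value (20).
window = np.array([[10, 12,  9],
                   [11, 20,  9],
                   [ 9, 10, 10]])

average = np.ones((3, 3))        # 3x3 average filter, all weights 1
sharpen = np.full((3, 3), -1.0)  # 3x3 sharpen filter: -1 everywhere...
sharpen[1, 1] = 14               # ...except 14 at the center

# Convolve: multiply element-wise, sum, and divide by 9 as in the text.
print(round((window * average).sum() / 9, 1))  # 11.1
print(round((window * sharpen).sum() / 9, 1))  # 22.2
```

In a real pass over the dataset, this computation would be repeated with the window centered on each pixel in turn.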
|Band Ratio||Application|
|4/3||Vegetation Vigor (brighter is more)|
|2/1||Water Depth (darker is deeper)|
|2/3||Variations in Iron Content|
|5/7||Variations in Clay Content|
|(4-3)/(4+3)||NDVI (common, standard vegetation index)|
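Ratios like these are computed pixel by pixel across two bands. A minimal sketch of the NDVI formula from the table, using hypothetical band values (TM band 4 is near-infrared, band 3 is red):

```python
import numpy as np

def ndvi(band4, band3):
    """NDVI = (band4 - band3) / (band4 + band3), computed per pixel."""
    b4 = band4.astype(float)   # cast so the division is not integer math
    b3 = band3.astype(float)
    return (b4 - b3) / (b4 + b3)

red = np.array([50, 60, 30])    # hypothetical TM band 3 values
nir = np.array([150, 40, 90])   # hypothetical TM band 4 values
print(ndvi(nir, red))           # [ 0.5 -0.2  0.5]
```

Healthy vegetation reflects strongly in band 4 and absorbs in band 3, so vigorous vegetation produces NDVI values near the top of the -1 to 1 range.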
Table 8 lists some of the useful TM band combinations. The common convention for describing the band combination used in a given image is to list the bands in Red, Green, and Blue order. In other words, the image created by using the red gun to display band 4, the green gun to display band 2, and the blue gun to display band 1 would be called "4,2,1 - R,G,B" for short.
|R,G,B||Comments & Applications|
|3,2,1||True Color. Water depth, smoke plumes visible|
|4,3,2||Similar to IR photography. Vegetation is red, urban areas appear blue. Land/water boundaries are defined but water depth is visible as well.|
|4,5,3||Land/water boundaries appear distinct. Wetter soil appears darker.|
|7,4,2||Algae appears light blue. Conifers are darker than deciduous.|
|6,2,1||Highlights water temperature.|
|7,3,1||Helps to discriminate mineral groups. Saline deposits appear white, rivers are dark blue.|
|4,5,7||Also used for mineral differentiation.|
|7,2,1||Useful for mapping oil spills. Oil appears red on a dark background.|
|7,5,4||Identifies flowing lava as red/yellow. Hotter lava is more yellow. Outgassing appears as faint pink.|
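The R,G,B convention above amounts to stacking three single-band images into one three-channel image. A minimal sketch with hypothetical 2x2 bands; the function name and data are ours, not part of any software package described here:

```python
import numpy as np

def composite(bands, r, g, b):
    """Stack three bands into an RGB image.

    `bands` maps band number -> 2D array; (r, g, b) follow the
    R,G,B naming convention, e.g. (4, 3, 2) for "4,3,2 - R,G,B".
    """
    return np.dstack([bands[r], bands[g], bands[b]])

# Hypothetical 2x2 single-band images keyed by TM band number.
bands = {n: np.full((2, 2), n * 10, dtype=np.uint8) for n in range(1, 8)}

img = composite(bands, 4, 3, 2)   # a false-color composite like IR photography
print(img.shape)                  # (2, 2, 3)
```

In ENVI or ERMapper the same assignment is made interactively by mapping each display gun to a band.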
For an example, consider a supervised classification performed on bands 3 and 4 of a TM dataset. First the analyst selects homogeneous groups of pixels representing the regions of interest. In our example, we'd like to find all the pixels falling into three groups: vegetation, bare soil, and water. The values of each band for the pixels in these training groups may be plotted against each other (see Figure 19). In this plot each point represents a pixel; its position is determined by its values in bands 3 and 4 of the dataset. Notice that the pixels in a training region do not all have identical values in these two bands; their values fall over some range in each band, forming a cloud in this spectral space.
For each pixel, the computer then decides which of these groups is most likely to contain it. Several common methods are used to make this decision. The most straightforward, the minimum distance method, simply places the pixel in question into the group with the closest training region mean. The parallelepiped method compares the value of each pixel to the maximum and minimum values of the pixels in each training region and assigns the pixel to a group only if it falls within this range. The maximum likelihood algorithm takes the distribution of pixels in a training region into account when deciding how to group a pixel. Think of it as a minimum distance method that also considers the spread of the other pixels in the group. Suppose all the pixels in one training region have very similar spectral values, like the water region in our example, while another training region contains pixels with a wide spread of values, like the vegetation group. If the pixel in question lies an equal distance from the means of both groups, it is more likely to belong to the group whose training members have the larger variance.
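The minimum distance method described above is short enough to sketch directly. This is an illustration only, not the algorithm as implemented in ENVI or ERMapper; the training-region means below are hypothetical band 3/band 4 values invented for the example.

```python
import numpy as np

def minimum_distance(pixel, training_means):
    """Assign a pixel to the class whose training-region mean is closest.

    `pixel` is a vector of band values; `training_means` maps
    class name -> mean band-value vector for that training region.
    """
    distances = {name: np.linalg.norm(pixel - mean)
                 for name, mean in training_means.items()}
    return min(distances, key=distances.get)

# Hypothetical (band 3, band 4) training-region means.
means = {"water":      np.array([20.0, 10.0]),
         "bare soil":  np.array([80.0, 70.0]),
         "vegetation": np.array([30.0, 120.0])}

print(minimum_distance(np.array([35.0, 110.0]), means))   # vegetation
```

Maximum likelihood classification would replace the Euclidean distance with a probability computed from each training region's covariance, which is how the spread of a group enters the decision.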
For references to works on specific topics, see the extensive bibliographies following each chapter in your textbook. Also see the "Key References" sections earlier in this Guide. In addition to this list, we have included copies of the annual indexes from the journals Photogrammetric Engineering and Remote Sensing and Remote Sensing of Environment in Section VI. Abstracts from these and most of the other journals listed below are searchable on CD-ROM or on-line in the Geology library. Ask the librarians for help.
PE & RS Annual Index by Author