Monday, December 10, 2018

Lab 8 - Spectral signature analysis & resource monitoring

Goal and Background

The goal of this lab assignment was to gain experience with measuring and interpreting the spectral reflectance of various Earth features as well as performing basic Earth resource monitoring using remote sensing band ratio techniques.

Methods

Part 1 - Spectral Signature Analysis

For part one of this lab, the task was to measure the spectral reflectance of 12 different surface features present in the image provided by the professor.  These features were standing water, moving water, deciduous forest, evergreen forest, riparian vegetation, crops, dry soil, moist soil, rocks, an asphalt highway, an airport runway, and a concrete surface.  To do this, I first used the Drawing > Polygon tool to draw a polygon on the feature whose reflectance I wished to measure.  With this polygon made, the next step was to open the Raster > Supervised > Signature Editor tool.  Using this tool, I was able to view the reflectance for all 12 of the features as well as a spectral plot of each feature's signature.
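Conceptually, the Signature Editor averages the pixel values inside the training polygon for each band to produce the signature. A minimal numpy sketch of that idea (the array names and the square mask are illustrative assumptions, not the Erdas workflow):

```python
import numpy as np

# Hypothetical inputs: 'image' is a (bands, rows, cols) array of reflectance
# values and 'polygon_mask' is a boolean (rows, cols) array marking the
# pixels that fall inside the digitized polygon.
def mean_signature(image, polygon_mask):
    """Return the mean value of each band over the masked pixels."""
    return np.array([band[polygon_mask].mean() for band in image])

# Example with random data standing in for a 6-band image.
rng = np.random.default_rng(0)
image = rng.random((6, 100, 100))
polygon_mask = np.zeros((100, 100), dtype=bool)
polygon_mask[40:60, 40:60] = True           # a crude square "polygon"
print(mean_signature(image, polygon_mask))  # one mean value per band
```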

Part 2 - Resource Monitoring

For the second part of this lab, the task was to use band ratio techniques to monitor the health of vegetation as well as the iron content of soil.  To assess vegetation health, I used the Raster > Unsupervised > NDVI tool, entered the correct inputs, and ran the tool.  I then opened the output image in ArcMap to create a more visually pleasing map with five distinct classes.  To measure soil iron content, the Raster > Unsupervised > Indices tool was used with the 'Ferrous Minerals' function selected.  Once the tool was run, I again opened the output image in ArcMap to create a more readable map with five distinct classes.
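Both tools are simple band ratios: NDVI is (NIR - Red) / (NIR + Red), and the ferrous minerals index is commonly computed as SWIR / NIR. A short numpy sketch of the two ratios (the band arrays and band assignments are assumptions for illustration, not the lab data):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), ranging roughly from -1 to 1."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids divide-by-zero

def ferrous_minerals(swir, nir):
    """Ferrous minerals ratio, commonly SWIR / NIR."""
    return swir.astype(float) / (nir.astype(float) + 1e-9)

# Illustrative 2x2 band arrays (e.g., Landsat TM red = band 3, NIR = band 4, SWIR = band 5).
red  = np.array([[40, 60], [30, 80]])
nir  = np.array([[120, 90], [150, 70]])
swir = np.array([[80, 85], [60, 95]])
print(ndvi(nir, red))
print(ferrous_minerals(swir, nir))
```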

Results


Spectral Reflectance of Standing Water
Spectral Reflectance of Dry vs Moist Soil
Spectral Reflectance of All Features Tested

Sources

Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Thursday, December 6, 2018

Lab 7 - Photogrammetry

Goal and Background

The primary goal of this lab exercise was to develop our skills in various photogrammetric tasks as applied to aerial and satellite images.  This lab was designed to help us comprehend the mathematics of calculating image scale, the areas and perimeters of features, and relief displacement.  This lab also introduced us to the concepts of stereoscopy and orthorectification.

Methods

Part 1 - Scales, Measurements, and Relief Displacement

Part 1 of this lab focused on calculating scale, area and perimeter measurements, and relief displacement.  To calculate the scale of the aerial images we were given, the equation s = pd/gd was used, where s is the scale, pd is the photo distance, and gd is the real-world ground distance.  To calculate the perimeter and area of features, the 'Measure > Polygon' and 'Measure > Polyline' digitizing tools in Erdas Imagine were used to measure the perimeter of an object in meters and in miles and the area of the same object in acres and hectares.  Finally, to calculate the relief displacement of a tall object, we were given an aerial photograph, its scale, and the altitude of the sensor at the time the image was taken.  Using this information, the equation d = (h * r)/H was used to calculate the relief displacement.  In this equation, d is the displacement, h is the height of the real-world object (found by using the provided scale and measuring the photo height of the object), r is the radial distance from the principal point of the image to the top of the object, and H is the flying height of the camera.
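As a worked illustration of both equations (the photo measurements, ground distance, and flying height below are made-up numbers, not the lab's actual values):

```python
# Scale: s = pd / gd, expressed as a representative fraction.
pd_cm = 5.0            # measured distance on the photo (hypothetical), in cm
gd_cm = 250_000.0      # corresponding ground distance (hypothetical), 2.5 km in cm
scale = pd_cm / gd_cm
print(f"scale = 1:{1 / scale:,.0f}")      # 1:50,000

# Relief displacement: d = (h * r) / H.
# h and H are ground units; r (and the result d) are photo units.
h = 50.0               # real-world height of the object (hypothetical), in m
r = 0.08               # radial distance from principal point to object top on the photo, in m
H = 2000.0             # flying height of the camera (hypothetical), in m
d = (h * r) / H
print(f"relief displacement on the photo = {d:.4f} m")   # 0.0020 m = 2 mm
```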

Part 2 - Stereoscopy

Part 2 of this lab was all about creating stereoscopic images using both a DEM and a LiDAR-derived DSM.  To do this, the 'Terrain > Anaglyph > Anaglyph Generation' tool was used in Erdas Imagine.  Using the DEM and DSM as well as the provided image of the city as the inputs, the tool was run and the output images saved.  These outputs are anaglyph images, which appear three-dimensional when viewed with anaglyph (red/cyan) glasses.

Part 3 - Orthorectification 

Part 3 of this lab was all about using the Erdas Imagine Leica Photogrammetry Suite (LPS) for triangulation and orthorectification.  Using the images provided by the professor, I first created a new photogrammetric project, added the necessary images, and specified the sensor to correct the interior orientation.  Next, to correct the image horizontally, I used the 'Classic Point Measurement' tool to add GCPs to the image I was orthorectifying, taking them from a reference orthorectified image provided by the professor.  Once eleven GCPs were added to the first image, I repeated the GCP collection steps with a second reference image to correct the image vertically.  Once all the GCPs were collected from both reference images and the image was corrected both horizontally and vertically, I ran the 'Automatic Tie Point Generation Properties' tool to collect 40 tie points.  Finally, the 'Start Ortho Resampling Process' tool in the IMAGINE Photogrammetry interface could be run.  In the 'Ortho Resampling' dialog, I added the images to the correct inputs and ran the tool.  Once the tool finished running, the output images were properly orthorectified.

Results


Sources

National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.
Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa is from the Eau Claire County and Chippewa County governments, respectively.
SPOT satellite images are from Erdas Imagine, 2009.
Digital elevation model (DEM) for Palm Springs, CA is from Erdas Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.

Sunday, November 18, 2018

Lab 6 - Geometric Correction

Goal and Background

The goal of this particular lab was to introduce us to the very important remote sensing concept of geometric correction of remotely sensed images.  This lab was set up so that we would gain experience with the two major forms of geometric correction, image-to-map rectification and image-to-image registration, and understand when to implement one over the other.

Methods

Part 1

First, open Erdas Imagine with two separate viewers and put the provided distorted image in one viewer and the provided reference map in the other.  Make sure that the viewer with the distorted image is selected, then click on the Multispectral tab under the raster options and select the Control Points tool.  This will open the Set Geometric Model dialog, where you will scroll down, select the Polynomial option, and click OK.  Hitting OK will open two new tools, the Multipoint Geometric Correction tool and the GCP Tool Reference Setup tool.  In the latter, accept the default option Image Layer (New Viewer) and click OK.  Next, navigate to the folder that contains your images, add the reference image, and click OK on the Reference Map Information dialog that pops up.  This will open the Polynomial Model Properties (No File) dialog; accept the default settings by clicking Close.

Next, in the Multipoint Geometric Correction window, remove any GCPs that were present in the image to begin with and begin adding your own.  Click on the Create GCP tool and, for the first three GCPs, add points in similar locations spread evenly across both the reference image and the distorted image.  As this is a first-order polynomial transformation, only three GCPs are needed for the model solution to be considered current, but it is always recommended to collect more than the minimum number of GCPs.  For the fourth GCP, simply add one to one image and the matching GCP will be added to the other image automatically.  Next, you must reduce the Root Mean Square (RMS) error to 2.0 or less; 0.5 or less is the usual standard, but since this is our first attempt, 2.0 is more achievable for learning purposes.  Zoom in to the GCPs and move them around on the distorted and reference images until they match up and the RMS error is within the allowable limit.  Once this is done, run the Display Resample Image dialog, save the output image to the appropriate location, and let the tool run, dismissing it once it is finished.  You should now have a new geometrically corrected image.
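For intuition, a first-order polynomial transformation is an affine fit between the GCP coordinates in the two images, and the RMS error is the root-mean-square of the residual distances after that fit. A small numpy sketch of this idea (the GCP coordinates are invented for illustration):

```python
import numpy as np

# Hypothetical GCPs: columns are x, y in the distorted image (source)
# and the reference image (target).
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 170.0]])
dst = np.array([[12.5, 10.0], [205.0, 18.0], [196.0, 182.0], [24.0, 168.0]])

# First-order polynomial (affine): x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)

# Apply the fitted transform to the source GCPs and measure residuals.
pred = np.column_stack([A @ coef_x, A @ coef_y])
residuals = np.linalg.norm(pred - dst, axis=1)
rms_error = np.sqrt(np.mean(residuals ** 2))
print(f"RMS error = {rms_error:.4f} pixels")
```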

Part 2

For part 2, open Erdas Imagine and bring in the distorted and reference images in the same way you did for part 1.  Next, click on the Control Points button under the Multispectral tab as you did in part 1, once again select Polynomial in the Set Geometric Model dialog, and click OK on the GCP Tool Reference Setup tool.  Import your reference image and then click OK on the Reference Map Information dialog.  On the Polynomial Model Properties dialog, change the Polynomial Order from 1 to 3 and click Close.  Next, click on the Create GCP tool and add 9 GCPs to both images in similar locations, then add 3 more to just one of the images in the same way as in part 1; a third-order polynomial requires at least 10 GCPs, so 12 gives some redundancy.  Move the GCPs around until you get an RMS error below 1.0 in this case, and once this is achieved, run the Display Resample Image dialog and save the output image in the necessary folder.  Make sure to change the Resampling Method to Bilinear Interpolation in the Resample Image window, keep all of the other default settings, and then click OK to run the tool.  After the tool has completed, you will have an image geometrically corrected with a third-order polynomial and bilinear interpolation.

Results

Part 1 Multipoint Geometric Correction Window with 4 GCPs and an RMS error of 0.9721

Part 2 Multipoint Geometric Correction Window with 12 GCPs and an RMS error of 0.1701

Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.

Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Saturday, November 10, 2018

Lab 5 - LiDAR Remote Sensing

Goal and Background

The goal of this lab was to begin to develop a basic knowledge of LiDAR data and its fundamental formats and uses.  This lab tested our ability to analyze and interpret LiDAR data in a variety of ways and taught us the basics of using ArcMap and Erdas Imagine functions to develop different ways of viewing and analyzing the data.  The primary ways of developing these skills were through the creation and processing of various terrain and surface models as well as an intensity image derived from the point cloud.

Methods

Part 1: Point cloud visualization in Erdas Imagine

Using the files provided by the professor, the first step in exploring LiDAR data was to open a new Erdas Imagine viewer and add in all of the LAS files using the Point Cloud (.las) file type, making sure to click NO on the LOD warning screen that appears.  Next, to check the location of a .las tile, open ArcMap, load the provided shapefile of the study area, QuarterSections_1.shp, and use the label field Quarter_1 to locate that tile's position on the tile index shapefile.  For the majority of the rest of this lab, ArcMap will be used, as it is easier to process lidar point clouds there.

.las files in Erdas Imagine

QuarterSections_1.shp shapefile with labels based on each block

Part 2: Generate a LAS dataset and explore lidar point clouds with ArcGIS

Using the provided .las files once again, open ArcMap and navigate to the ArcCatalog.  Once there, navigate to the folder with all of the .las files, right click, and select New > LAS Dataset.  Name the new dataset Eau_Claire_City and right click on it in the ArcCatalog to open its LAS Dataset Properties window.  Next, click on the add data button, add all of the .las files, then go to the Statistics tab and hit the Calculate button.  Next, coordinate systems must be added; to find out which ones to use, open the metadata .xml file for the data in Notepad and find the horizontal and vertical coordinate systems.  In this case, we will use NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet) for the horizontal and NAVD 1988 (US feet) for the vertical.  Navigate to these coordinate systems under the XY Coordinate System and Z Coordinate System tabs and select and apply them.  Next, add the properly formatted LAS dataset to the ArcMap window by dragging the created Eau_Claire_City.lasd into it.  Right click the layer in the TOC, go to its properties, and classify it with 8 instead of 9 classes.  Using the LAS Dataset toolbar, play around with the different surface options in the Surface drop down menu, such as Aspect, Slope, and Contour.

Aspect Surface

Contour Surface

Slope Surface

With the Contour surface selected, right click on the Eau_Claire_City.lasd layer and go to its properties.  Here you can adjust the contour interval as well as other settings.  Next, go to the Filter tab and compare the predefined settings such as All (default), Ground, Non Ground, and First Return, paying attention to which classes and/or returns each setting uses.

Next, return to the full extent view and set the points to Elevation and the filter to First Return.  Then, go to the LAS Dataset toolbar and select the LAS Dataset Profile View tool.  Find an object on the map whose shape you know, in this case an old rail bridge now used as a pedestrian bridge to downtown, and use the tool to draw a box around the object to display its profile.

Part 3: Generation of Lidar derivative products

Section 1: Deriving DSM and DTM products from point clouds

First, open the ArcToolbox, navigate to Conversion Tools > To Raster > LAS Dataset to Raster, and launch the LAS Dataset to Raster tool.  Input your LAS dataset and give the output the necessary name.  Set the Value Field to Elevation, the Cell Assignment Type to Maximum, the Void Filling to Natural_Neighbor, the Sampling Type to Cellsize, and the Sampling Value to 6.56168 feet, which is equal to 2 meters, then click OK to run the tool.  Next, make sure the 3D Analyst extension is active by going to Customize > Extensions > 3D Analyst.  Then, in ArcToolbox, go to 3D Analyst Tools > Raster Surface > Hillshade to launch the Hillshade tool.  Input the Digital Surface Model (DSM) you created in the previous steps and run the tool to create a hillshade of the DSM.

To create a Digital Terrain Model (DTM), you will use the same LAS Dataset to Raster tool, but instead of setting the filter to First Return, you will set it to Ground.  Input the LAS dataset and set the Interpolation to Binning, the Cell Assignment Type to Minimum, the Void Fill method to Natural Neighbor, the Sampling Type to Cellsize, and the Sampling Value to the same 6.56168.  Run the tool to create the DTM.  Next, run the Hillshade tool again in the same way as before but input the DTM instead.
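The same raster products can also be generated by scripting the geoprocessing tools with arcpy rather than through the dialogs. A rough sketch under the settings above (file paths are placeholders, and the exact parameter strings may vary between ArcGIS versions):

```python
import arcpy

arcpy.CheckOutExtension("3D")  # Hillshade lives in the 3D Analyst extension

lasd = r"C:\lab5\Eau_Claire_City.lasd"   # placeholder path to the LAS dataset
cell = 6.56168                            # 2 m expressed in US survey feet

# DSM: maximum elevation per cell, natural-neighbor void filling.
# (The first-return filter is assumed to be applied on the LAS dataset layer.)
arcpy.conversion.LasDatasetToRaster(
    lasd, r"C:\lab5\dsm.tif", "ELEVATION",
    "BINNING MAXIMUM NATURAL_NEIGHBOR", "FLOAT", "CELLSIZE", cell)

# DTM: ground returns only (filter assumed set on the layer), minimum elevation per cell.
arcpy.conversion.LasDatasetToRaster(
    lasd, r"C:\lab5\dtm.tif", "ELEVATION",
    "BINNING MINIMUM NATURAL_NEIGHBOR", "FLOAT", "CELLSIZE", cell)

# Hillshades of both surfaces for visual inspection.
arcpy.ddd.HillShade(r"C:\lab5\dsm.tif", r"C:\lab5\dsm_hillshade.tif")
arcpy.ddd.HillShade(r"C:\lab5\dtm.tif", r"C:\lab5\dtm_hillshade.tif")
```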

Section 2: Deriving Lidar Intensity image from point cloud

To create an intensity image, set the LAS Dataset to Points and the Filter to First Return before running the LAS Dataset to Raster tool. Set the Value Field to Intensity, Binning Cell Assignment Type to Average, Void fill to Natural Neighbor, and Cell Size to 6.56168.  Run the tool to create the intensity image and then export the image as a TIFF file to be opened in Erdas Imagine for viewing.

Results

Digital Surface Model (DSM) with first return

Digital Terrain Model (DTM)

Hillshade of DSM

Hillshade of DTM

LiDAR Intensity Image of Eau Claire, WI

Sources

Lidar point cloud and Tile Index are from Eau Claire County, 2013.  Eau Claire County shapefile is from the Mastering ArcGIS 7th Edition data by Maribeth Price, 2016.

Friday, October 26, 2018

Lab 4 - Miscellaneous Image Functions

Goal and Background

Lab 4 in GEOG 338 Remote Sensing had us explore and become comfortable with many of the miscellaneous image functions that are built into the Erdas Imagine software.  The image functions used in this lab include image subsetting by creating an AOI file, image fusion using the Resolution Merge tool, radiometric enhancement using the Haze Reduction tool, linking the Erdas Imagine viewer to Google Earth, changing image resolution using the Resample Pixel Size tool, image mosaicking using both the Mosaic Express tool and the MosaicPro tool, and image differencing by using the Model Maker to create an image showing change from one year to another.

Methods

Part 1

To create image subsets, the first step is to import the provided image into the Erdas Imagine viewer.  Next, using the Raster toolbar, create an Inquire Box on the image and position it where necessary.  Then, from the Raster toolbar, click the Subset & Chip button, click on the Create Subset Image tool, and create and save the image from the Inquire Box.  To create a subset image from an AOI file, you must load both the image and a shapefile of the desired area into the Erdas Imagine viewer.  Then, highlight the shapefile in the viewer, hit the Paste From Selected Layer button on the Home tab, and then use the Subset & Chip button as before to create the image.

Part 2

To merge two images and increase the spatial resolution of an image, first open the Raster tab in Erdas, click the Pan Sharpen drop down menu, and select the Resolution Merge tool.  Select the images you wish to fuse together to pan sharpen them and choose which resampling technique to use; in this case, Nearest Neighbor was used.
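For a sense of what image fusion does, one common pan-sharpening approach (the Brovey transform) rescales each resampled multispectral band by the ratio of the panchromatic band to the sum of the multispectral bands. A small numpy sketch of that method, which is only one of the fusion options Resolution Merge offers and not necessarily the one used here (the arrays are assumed to be co-registered and already resampled to the panchromatic cell size):

```python
import numpy as np

def brovey_pansharpen(red, green, blue, pan):
    """Brovey-transform fusion of three multispectral bands with a pan band."""
    total = red + green + blue + 1e-9          # avoid division by zero
    ratio = pan / total
    return red * ratio, green * ratio, blue * ratio

# Tiny illustrative arrays standing in for co-registered bands.
red   = np.full((4, 4), 60.0)
green = np.full((4, 4), 80.0)
blue  = np.full((4, 4), 40.0)
pan   = np.full((4, 4), 200.0)
sharp_r, sharp_g, sharp_b = brovey_pansharpen(red, green, blue, pan)
print(sharp_r[0, 0], sharp_g[0, 0], sharp_b[0, 0])
```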

Part 3

To improve image spectral and radiometric quality, haze reduction can be used.  To do this, go to the Raster toolbar and click on the Radiometric drop down menu before selecting the Haze Reduction tool.  Select the image you wish to run the tool on, then allow it to run, giving you an image of much better quality.

Part 4

To link your image in the Erdas Imagine viewer and Google Earth you must first open up the desired image in Erdas and then click on the Help button on the Home tab.  In the search bar type in 'Google' and click search.  Once the results appear, click on the Connect to Google Earth button which will launch Google Earth and then click on both the Match GE to View and the Sync GE to view buttons to fully sync the two views.

Part 5

To resample an image and change its pixel size, open the Raster toolbar and click on the Spatial drop down menu.  Next, select the Resample Pixel Size tool and choose the image you wish to resample.  Use the default Nearest Neighbor resampling method or the Bilinear Interpolation method, set the pixel dimensions to the desired size, and check the Square Cells box.
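Conceptually, nearest-neighbor resampling just picks the closest original pixel for each output cell, while bilinear interpolation averages the four surrounding pixels. A brief sketch of both, using numpy indexing for nearest neighbor and scipy.ndimage.zoom for bilinear (the 2x resampling factor and the toy band are assumptions for illustration):

```python
import numpy as np
from scipy import ndimage

band = np.arange(16, dtype=float).reshape(4, 4)   # toy single-band image
factor = 2.0                                       # resample to half the pixel size

# Nearest neighbor: map each output cell back to the closest input pixel.
rows = (np.arange(int(band.shape[0] * factor)) / factor).astype(int)
cols = (np.arange(int(band.shape[1] * factor)) / factor).astype(int)
nearest = band[np.ix_(rows, cols)]

# Bilinear: order=1 spline interpolation, a weighted average of 4 neighbors.
bilinear = ndimage.zoom(band, factor, order=1)

print(nearest.shape, bilinear.shape)   # both (8, 8)
```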

Part 6

To mosaic images using the Mosaic Express tool, first open the Raster toolbar, click on the Mosaic drop down menu, and select the Mosaic Express tool.  Select which images to add, making sure to put them in the correct load order, and then run the tool to receive your mosaicked output image.  To do this with the MosaicPro tool, click on the same Mosaic drop down menu but instead select the MosaicPro option.  Bring in both images, making sure to select the Compute Active Area option under the Image Area Options tab, and insert both images into the MosaicPro window.  Next, make sure that the images are loaded in the correct order with the proper one on the bottom.  Then, run Color Corrections using Histogram Matching and set the Overlap Function to Overlay.  Finally, process your images to create the mosaicked output image.
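Histogram matching adjusts one image's pixel values so that its cumulative distribution resembles the other image's, which reduces the visible seam between mosaicked scenes. A minimal single-band numpy sketch of the idea (the two random arrays stand in for the overlapping scenes):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap 'source' values so their distribution matches 'reference'."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Cumulative distribution functions of both images, scaled to [0, 1].
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source value, find the reference value with the matching CDF position.
    matched_values = np.interp(src_cdf, ref_cdf, ref_values)
    return matched_values[src_idx].reshape(source.shape)

rng = np.random.default_rng(1)
scene_a = rng.normal(100, 20, (50, 50))   # stand-in for the bottom image
scene_b = rng.normal(140, 30, (50, 50))   # stand-in for the image being matched
matched = match_histogram(scene_b, scene_a)
print(scene_b.mean(), matched.mean(), scene_a.mean())  # matched mean is close to scene_a's
```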

Part 7

To create an image that shows the areas that experienced change between the dates two images were taken, you must first open the Model Maker tool under the Toolbox toolbar.  Once the Model Maker tool is running, insert two raster objects, a function object, and an output object, and connect them all.  In the raster objects, insert the two images you wish to compare; then, in the function box, subtract the values of one image from the other and add a constant, in this case 127, to get the output.  Then, using another model, take the output from the first model and run an EITHER IF OR function using the change/no-change threshold calculated from the image metadata to find out which areas experienced change and which did not.  Finally, open the output image of this second model in ArcMap and overlay it on the original image of the study area to create a map showing areas of change.
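The model is essentially a per-pixel subtraction with an offset so that negative differences remain storable, followed by thresholding. A small numpy sketch of that chain, using a mean plus or minus 1.5 standard deviations as a stand-in for the threshold the lab derived from the image metadata (the arrays and the 1.5 factor are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
band_1991 = rng.integers(0, 200, (100, 100)).astype(float)  # stand-in for the older image
band_2011 = rng.integers(0, 200, (100, 100)).astype(float)  # stand-in for the newer image

# Model 1: difference image with a constant offset so negative values survive
# in an unsigned 8-bit raster (127 is the offset used in the lab).
diff = band_2011 - band_1991 + 127

# Change/no-change threshold: here mean +/- 1.5 standard deviations of the
# difference image, one common convention.
mean, std = diff.mean(), diff.std()
lower, upper = mean - 1.5 * std, mean + 1.5 * std

# Model 2: EITHER/IF/OR style test -> 1 where change is detected, 0 elsewhere.
change = np.where((diff < lower) | (diff > upper), 1, 0)
print(f"{change.mean() * 100:.1f}% of pixels flagged as change")
```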

Results

image subset from Inquire Box

image subset from AOI (area of interest file)

image mosaic using Mosaic Express

image mosaic using MosaicPro


histogram of change image with upper and lower bounds

map showing areas of change from 1991-2011

Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill. 2014.