The purpose of this lab is to perform a change detection analysis using Landsat imagery in ERDAS Imagine. My study area is the Tri-City area of Southern Ontario, including Kitchener/Waterloo, Cambridge and Guelph. I am attempting to assess the change in the urban footprint from 1990 to 2013.
The steps include:
Atmospheric correction, radiometric calibration, change detection, and change classification.
Atmospheric Correction (PCI)
I do not have access to the ERDAS atmospheric correction tools, and although I could use a TOA correction and haze removal in ERDAS, I am instead using the ATCOR module in PCI. This introduces an extra step of translating data between the two software packages.
The correction did not make a large difference as the scenes had little atmospheric interference. The 1990 Landsat 5 imagery was not corrected as the image was clear to begin with. The 2013 imagery was corrected with haze reduction and water and cloud masks. To move the ATCOR 2013 image into ERDAS, open the file tab in the content viewer (vs the map tab), right click, and select Translate (Export). Select the .img file type and select the bands to translate. Make sure that for earlier Landsat imagery you select the correct sensor (TM or MSS; most data is TM).
It should also be noted that if you are using a DEM in your translation to remove shadow and other topographic errors, your DEM must cover a larger extent than your Landsat imagery. One source is the ETOPO global DEM (though at a coarser, 1 arc-minute resolution); a second option is SRTM data (1 arc-second) from EarthExplorer. SRTM data will likely need to be mosaicked to cover the entire Landsat scene.
In the case of the 2013 imagery I am not convinced that the atmospheric correction is better than the original image, as it darkened many areas on the ground, which complicated my classification in the change detection step.
Radiometric Correction (ERDAS)
Assuming that ATCOR worked on both of your images, you may be able to simply load the .img files (i.e. the ATCOR files created and translated). Otherwise you may need to import (manage data –> import data) or stack the files (spectral –> layer stack) in ERDAS. Tutorials for these steps are available under the tutorial section of the website. When you open your imagery it may appear dark. If this is the case you can try stretching the imagery using spectral options, or you can try subsetting the imagery to remove the areas (i.e. remaining clouds or hazy areas) that are causing the large contrast and darkening of the image.
For this specific scene, radiometric correction was unsuccessful due to large radiometric differences between the images. Southern Ontario is a heavily farmed area, meaning that the land cover is constantly changing between bare and vegetated states, with changing vegetation. This is not ideal for change detection. In retrospect, a scene acquired between harvest and snowfall would have been more appropriate for this study and might have produced better results.
Additional issues can include rainfall events which can preferentially affect different plots of land depending on soil type.
Change Detection (ERDAS)
If the imagery was matched well enough, you can use raster subtraction to detect differences (raster functions in ERDAS). For this example the image histograms were too different to run direct change detection (subtraction). Alternatively, the Discriminant Function Change Detection tool can be used.
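For intuition, the subtraction approach amounts to differencing two co-registered bands and thresholding the absolute difference. A minimal numpy sketch of that idea (the array values and threshold here are invented for illustration, not taken from the Landsat scenes):

```python
import numpy as np

def difference_change(band_t1, band_t2, threshold):
    """Flag pixels whose absolute band difference exceeds a threshold."""
    diff = band_t2.astype(np.int32) - band_t1.astype(np.int32)
    return np.abs(diff) > threshold

# Toy 3x3 "scenes": one pixel brightens substantially between dates.
t1 = np.array([[10, 12, 11],
               [13, 10, 12],
               [11, 11, 10]], dtype=np.uint8)
t2 = np.array([[11, 12, 10],
               [13, 60, 12],
               [12, 11, 11]], dtype=np.uint8)
changed = difference_change(t1, t2, threshold=20)
```

This is exactly why mismatched histograms break the method: if the whole scene shifts in brightness (farming, haze, sun angle), nearly every pixel crosses the threshold.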
If spectral differences can't be used, you can classify the imagery and then use the Discriminant Function Change Detection tool. The images can be classified using either supervised or unsupervised classification. Supervised classification is appropriate when high-accuracy results are needed for a small number of images, as the method is time consuming.
For this project I used a supervised change detection method, including the supervised classification of both images and the application of the Clump and Eliminate tools.
TriCity Area, Ontario
Urban Landcover 1990 (blue)
Urban Landcover change (1990-2013 in purple)
The change detection was fairly successful using this method. The growth of many of the major cities in southern Ontario is apparent in the resulting imagery shown above.
Issues encountered were non-permanent land cover change (farming) that influenced spectral change detection. Success was much better when using a supervised classification, although there were still some issues distinguishing urban areas from bare land. Cropping to specific study areas could help minimize this. Some change detection error was likely introduced between the two dates by the method, but the AOIs chosen typically overlapped between both images.
The 1990 imagery achieved better classification results with a simpler process. I believe this is because that imagery was brighter, possibly due to a recent rainfall before the 2013 acquisition (this would need to be compared to weather data).
Overall results could have been improved through more rigorous selection techniques of the imagery.
I attempted the DeltaCue change detection method in ERDAS but was unsuccessful, again due to the land cover change associated with farming. This method may be more successful between harvest and snowfall.
Orthorectifying and Mosaicking in PCI Geomatica
Creating a Project
Select File–> New Project and create a new project. The following screen will appear. Select your folder and file name. Ensure that the settings below are chosen.
Next set the projection and pixel spacing for both the output projection and GCP projection.
Next set the focal length and scale of the project. These should be available in the flight calibration report. Select the fiducial locations (middle, edges or both) and calculate locations. If you don't have this information, then select compute from length and input the length of the image edges. If you do not have this data, estimate the length as best as possible.
Creating a DEM
I recommend that before creating your ground control points, you recreate your DEM using the project projection in the *.pix format. To create a new DEM, open the Import & Build DEM step and select Create DEM from Raster.
Select your DEM (or DEMs) and push the down arrow button to add it to the Set of DEMs to merge box. This will make the OK button selectable.
Clicking OK will open the following dialogue. The defaults should be acceptable. Use Select to choose your file location and type your preferred file name into the Output DEM text box.
The first step is to open up the imagery that you need to orthorectify and mosaick. Select Data Input and choose the button to open new images as shown below. A dialogue will appear. Select your images and add them to the project.
To locate the fiducial marks on your image for orientation and model building purposes, select an image and click open. Two dialogue boxes will open including the Viewer image below and the fiducial marks dialogue to the right.
The second screen is the Fiducial Mark Collection dialogue. If you select the centre of the fiducial marks located in 4 locations on the image viewer, and then click set within the fiducial marks dialogue box, the appropriate mark location X and Y position will be filled in. Do this for all 4 fiducial marks. Ensure that you are using the correct location for fiducial marks (i.e corners or edges) from the project settings.
Make sure to save your project at regular intervals throughout this process!
Selecting Ground Control Points
The next step is providing GCPs. There are several ways to do this:
1) Use ground markings that have been surveyed on the ground and are obvious in the imagery (i.e. white X's).
2) Use a vector file, such as roads, to mark intersections. This is not always accurate, as roads can change over time or be inaccurately located depending on the scale of your vector file.
3) Use an already geocoded image to do a comparison GCP selection. This entails selecting a feature that is duplicated in both sets of imagery (preferably close to the ground) and assigning the point a number in both images. This assigns the unlocated point a real location. If you load a DEM, this location can use an extracted elevation. At least 4 points must be chosen to generate a model and provide accuracy points. The point type can be changed to GCPs or check points. Check points are used to test accuracy. This project did not utilize check points but only considered the residual report and visual comparison.
Select a DEM and a geocoded image. Open the image that you would like to orthorectify from the project images dialogue. Once the image is open, find a location and select the preferred point. Select the matching point on each image, and on each separate viewer select Use Point. Note that the imagery needs to be the working image and not a reference image. Once you select Use Point in both viewers, you can extract the elevations and accept the point. If you checkmark "compute model", the model will update after 4 GCPs have been selected.
After selecting 4 GCPs you can try the automatic GCP tool to add additional points and potentially increase accuracy. I did not use this tool as my accuracy was already within 50 cm.
Selecting Tie Points
Now that the imagery has been given a real world location, the images are tied to each other for a more accurate mosaicking result. This allows higher accuracy at the edges of the imagery and better correlation between images.
Points can be selected in a manner similar to GCPs, only using two aerial photography images instead of one image and a geocoded image (or other method).
In this situation I was able to use the automatic tie point tool and establish 7 tie points between the 3 images used.
Use the project overview beside the tie point buttons to look at the footprint of the imagery over the mosaicked raster.
Compute the Model
Click the compute model button as shown below.
If successful it will simply tell you that the Bundle Adjustment was successful.
Orthorectify the Images
Select the images to rectify and click the arrow button to add them to the images to process box.
Edit the settings shown below to preferred settings and select ok.
Mosaick the OrthoImages
The Define Mosaic tool will allow you to select the images that you wish to mosaic in an overview screen.
After you select the images you want to mosaic, you can run the automatic mosaic tool (the tool on the far right of the above dialogue).
This opens up the window to the right. Set up a location for your final mosaic and your demo mosaic. Explore the highlighted settings and select generate demo to find the best match before creating the mosaic.
Print and Examine Residual Report
In the Ground Control Point option select the residual report icon.
As you can see to the right, my accuracy was 0.5 m in ground units for GCPs and approximately 2 m for tie points. This makes sense, as tie points occur the furthest from nadir and are the least accurate locations. Higher accuracy might be obtained by selecting manual points.
Orthorectifying and Mosaicking in ERDAS Imagine
Open the LPS extension for ERDAS and create a new project. Step one is to set the projection.
Set the average flying height and select Edit Camera. Enter the appropriate information (focal length and fiducial number).
After the project is created, right click on the images folder and select add.
Click the Int. button on one of your images to open the following dialogue.
Click the viewer button to see the image.
The image dialogue box is shown below. Check the sensor and make sure that the fiducials are appropriate (i.e. middle or corners; the top middle should be approximately (0, 113) for X, Y). Input as much data as you have available, such as principal points, distortions, etc. Navigate to the appropriate location and choose the crosshairs to choose the location of the active fiducial. ERDAS will approximate the next location and move you there; this becomes fairly accurate. Do this for all images. Make sure the residuals are reasonable for your project.
Open the point measurement tool (edit).
This opens a window with a side-by-side viewer containing 3 views of each scene based on zoom level. In order to rectify to a geocoded image, select the identified horizontal (geocoded) and vertical (DEM) reference buttons and click "use viewer as reference". This will open the geocoded image beside one of your aerial images.
Reference Points and Tie Points
To create a reference point select the Add option on the top right of the window. This adds a point to the reference layer. Find a location in both images that match and select the cross hair to select the point in the image. You will need to select the cross hair once for each image. This will create a table below the images as shown to the right. Change the Type to Full and the Usage to Control. Repeat this process at least 4 times for each image and then select the triangle to compute the bundle adjustment.
Once complete, under right viewer select a different image and continue adding control points.
To make a tie point, use the tie point setting and create a tie point between two non-orthorectified images. I found that it works best to simply reference two images to the same point. Once this is done, compute the bundle adjustment and look at the report by selecting process –> triangulate report.
To add more tie points automatically, in the main window go to process –> automatic tie point generation and use the default settings. This will create a number of tie points. This reduced my RMSE from 22 to 11 pixels. It is assumed that additional control points could further reduce this error.
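For reference, the RMSE figure reported by the triangulation is just the root mean square of the per-point image residuals. A minimal sketch of the calculation (the residual values below are invented for illustration):

```python
import numpy as np

def rmse(residuals_x, residuals_y):
    """Root mean square error of 2D point residuals, in pixels."""
    sq = np.asarray(residuals_x) ** 2 + np.asarray(residuals_y) ** 2
    return float(np.sqrt(sq.mean()))

# Invented residuals for a handful of control/tie points.
rx = [1.0, -2.0, 0.5, 1.5]
ry = [0.0, 1.0, -0.5, 2.0]
err = rmse(rx, ry)
```

Adding well-distributed tie points lowers this number because each new point constrains the bundle adjustment, pulling the model closer to all observations at once.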
The tie point creator in ERDAS is extremely robust and allows for creation of tie points between images of one flightline as well as tie points between multiple flightlines. You can increase your accuracy by analyzing the summary reports of these actions and refining your model accordingly. These settings can be changed within the auto tie point generation properties menu. This is beyond the scope of this comparison, but there are many options within triangulation properties to aid in more accurate results.
Orthorectifying is very simple. Either select the ortho option in the viewer beside the image or go to process –> ortho rectification –> ortho resampling. Choose the appropriate save file, DEM and resampling method (cubic convolution suggested). Select OK and it will produce an orthorectified image using your model.
To mosaic the imagery, simply select process –> mosaic and select the appropriate DEM. Follow the instructions in this ERDAS mosaicking tutorial. I would recommend adjusting the seamline selection method.
Selecting the Nadir option helps remove edge of image artifacts.
Overall I found my ERDAS results less satisfactory, and I found the process more confusing than in PCI. ERDAS is definitely the choice if you need extremely detailed control and don't mind putting in extra time to achieve accuracy closer to that from PCI. As I have expressed in past posts, I much prefer the process of mosaicking in PCI and typically get better results; this example is no exception. Once again, I am sure that through its options it is possible to achieve results as good as PCI's, but PCI seems to achieve them more readily with less user input.
Results and Comparison
Overall I preferred the results achieved from PCI Geomatica. There is no question, however, that ERDAS has a better viewer interface for point selection and smarter fiducial choosing, as it automatically moves you to the next fiducial zone. It is possible that PCI has a similar function that needs to be turned on, but if so I am unaware of the tool's location.
Additionally, ERDAS has many options for tie point creation and generation and the ability to exercise excellent control over your accuracy and flightlines. Despite this, I found that I achieved better results in a more timely manner within PCI Geomatica, with fewer edge effects on my final mosaic.
Purpose and Study Area
The purpose of this assignment is to use a LiDAR DEM of the Middleton, Nova Scotia, Canada area in order to model flooding. All of the steps in this process including hydrological flattening, hydrological enforcement, watershed delineation and incremental flooding will be discussed. This is completed at various resolutions and compared to datasets with barriers.
The data used for this study was collected by the Applied Geomatics Research Group within the Nova Scotia Community College. The lab design and workflow were supplied by the Centre of Geographic Sciences in Lawrencetown, Nova Scotia. The hydrological vectors, including streams, rivers and lakes, were extracted from the Nova Scotia Geomatics Centre Base Data (1:10,000 scale), which was derived from 1:40,000 scale aerial photography spanning 1987 to 1997. The LiDAR DEM was collected on August 18, 2010 with an Optech ALTM 3100 system. The DEM was made from ground points using TIN interpolation and was transformed into orthometric heights (CGVD28).
Hydrological flattening involves assigning lakes and streams elevation values to remove "bad" points. When LiDAR is flown, the returns over water can be variable. For the purposes of watershed modelling we need the lakes to be at a lower elevation than the streams flowing into them, or inaccurate flow direction models can result. In the case of rivers we do not want to assign a single elevation but instead provide an elevation gradient derived from the DEM. Model Builder was used for many of these processes.
To get a representative water level, the first step is extracting shoreline elevations using the downloaded waterbodies file. If such a file is not available, the polygons can be drawn by hand. Use the polygon vertices (Feature Vertices To Points) to extract DEM elevations (Add Surface Information). Next, the minimum elevation is determined using Summary Statistics and Get Field Value. Essentially, Summary Statistics creates a table with all of the nodes and their DEM values, from which the minimum value is queried. Once this value is determined, the lake can be extracted from the DEM using Extract by Mask (use the lake polygon for this). The Raster Calculator can be used to assign the minimum value to anything in the extracted raster that is not null (i.e. where the lake was extracted). Once the lake has been extracted from the DEM and its value reassigned, it can be mosaicked back into the original DEM using the Mosaic tool.
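Conceptually the whole lake workflow reduces to: find the shoreline cells, take their minimum elevation, and burn that value into every lake cell. A numpy sketch of that logic, with an invented 4x4 DEM and a 2x2 "lake" of noisy water returns (this stands in for the Feature Vertices To Points / Summary Statistics / Extract by Mask chain, it is not the ArcGIS implementation):

```python
import numpy as np

def flatten_lake(dem, lake_mask):
    """Assign every lake cell the minimum elevation found on the
    shoreline ring of cells immediately surrounding the lake."""
    out = dem.astype(float).copy()
    shore = np.zeros_like(lake_mask)
    shore[1:, :] |= lake_mask[:-1, :]   # cell below a lake cell
    shore[:-1, :] |= lake_mask[1:, :]   # cell above a lake cell
    shore[:, 1:] |= lake_mask[:, :-1]   # cell right of a lake cell
    shore[:, :-1] |= lake_mask[:, 1:]   # cell left of a lake cell
    shore &= ~lake_mask                 # keep only non-lake neighbours
    out[lake_mask] = out[shore].min()
    return out

# Invented DEM: the interior "water" values (9, 8, 7, 9) are erroneous
# returns; the lowest shoreline cell sits at elevation 4.
dem = np.array([[5., 5., 5., 5.],
                [5., 9., 8., 5.],
                [4., 7., 9., 5.],
                [5., 5., 5., 5.]])
lake = np.zeros((4, 4), dtype=bool)
lake[1:3, 1:3] = True
flat = flatten_lake(dem, lake)
```

Taking the minimum (rather than the mean) guarantees the lake never ends up higher than any stream entering it, which is the property the flow direction model needs.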
River flattening is completed similarly to lake flattening, but requires that an elevation trend be applied. The river shoreline elevations are derived as in the lakes method (Feature Vertices To Points, Add Surface Information). These points are then passed to the Trend tool to interpolate a raster that covers the water body with a flat, gradually sloping surface. The surface was then clipped to the river outline and mosaicked into the DEM that had been updated with flattened lakes.
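The Trend tool's first-order surface is just a least-squares plane fitted through the shoreline elevation points. A sketch with hypothetical shoreline points along a river dropping about 1 m per kilometre (the coordinates and elevations are invented):

```python
import numpy as np

def trend_surface_coeffs(x, y, z):
    """Least-squares fit of a first-order trend z = a + b*x + c*y,
    analogous to what the Trend tool does with shoreline points."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

x = np.array([0.0, 1000.0, 2000.0, 3000.0])   # metres downstream
y = np.array([0.0, 10.0, -10.0, 0.0])         # metres across channel
z = np.array([100.0, 99.0, 98.0, 97.0])       # shoreline elevations
a, b, c = trend_surface_coeffs(x, y, z)
# The fitted plane can then be evaluated anywhere inside the river
# outline, e.g. mid-channel halfway downstream:
z_mid = a + b * 1500.0 + c * 0.0
```

The fitted plane preserves the downstream gradient while smoothing out the noisy water returns, which is exactly why rivers get a trend rather than a single flat value.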
Hydrological Flattening Discussion
This will result in flattened lakes and flattened but trending rivers. This removes irregularities and erroneous elevations that may affect water flow direction. This process could be improved by increasing the accuracy of the river and lake outlines by inputting them manually or finding a larger scale water body file. These false elevation artifacts could create false contour lines (depending on the scale). If orthorectifying an image to the dem the false elevations in the river could skew the results and affect the pixel accuracy.
Watersheds were created using the Fill, Flow Direction, Flow Accumulation and Watershed tools in ArcGIS. The pour point is placed on the area of highest accumulation. Barriers such as bridges and culverts can appear solid in LiDAR when in reality water can flow under or through this infrastructure. Locations of culverts would significantly aid in modelling water flow, especially if culvert sizes were included. Refer to the image to the right, focusing on the areas with red dots where the road crosses the river.
This process could be further improved by analyzing additional accumulation zones where there are no rivers (areas with barriers such as terrain or roads) and then using aerial photography to search for evidence of culverts and adding them into the model. More realistic models may result from larger DEMs which cover a larger catchment area.
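To make the Flow Direction step concrete, the D8 method codes each cell with the direction of its steepest downslope neighbour, using powers of two (1 = east, clockwise to 128 = north-east). A simplified sketch (the DEM values are invented; the real tool also handles edges, flats and pits, which is why skipping Fill can leave composite codes up to 255):

```python
import numpy as np

# ArcGIS D8 direction codes: powers of two, clockwise starting east.
D8 = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
      (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def flow_direction(dem):
    """Code each interior cell with its steepest-descent direction.
    Diagonal drops are divided by sqrt(2) to compare true slopes."""
    rows, cols = dem.shape
    out = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best_drop, code = 0.0, 0
            for di, dj, c in D8:
                dist = 2 ** 0.5 if di and dj else 1.0
                drop = (dem[i, j] - dem[i + di, j + dj]) / dist
                if drop > best_drop:
                    best_drop, code = drop, c
            out[i, j] = code
    return out

# Invented 3x3 DEM: the centre cell drains to the low cell to its east.
dem = np.array([[9., 9., 9.],
                [9., 5., 1.],
                [9., 9., 9.]])
fd = flow_direction(dem)
```

A false barrier in the DEM (a bridge deck over a culvert) simply redirects these codes around the obstruction, which is how a solid-looking road ends up splitting a watershed.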
The flattened LiDAR DEM with barriers removed supplied the most realistic results, followed by the 20m DEM and finally the LiDAR DEM without barriers removed. The results could be better if the DEM covered a larger area and allowed for additional watersheds to be recognized. Were this the case the 2m Flattened DEM with removed barriers may show trends more similar to the 20m DEM, assuming that the 20m DEM recognizes more regional trends.
20m DEM with Barriers, no flattening
2m DEM with barriers and flattening
2m DEM with barriers removed and flattening
Incremental flooding was emulated using a for loop to cycle through minimum and maximum flood levels with set flood increments. The flood mask was created by using the Raster Calculator to create a mask covering a DEM cell value range (i.e. <10) with a binary value for covered cells (i.e. cells below that value are given a value of 1; cells outside that range are not given a value). Areas such as water treatment plants hold water but are not connected to the flood points. To ensure that only areas with connectivity to the pour point are flooded, connectivity must be tested. To test connectivity, the Raster to Polygon tool was used to create polygons from the flood raster. Any polygons connected to the pour point were then selected using the Spatial Join tool: select the flood polygon as the Target Feature and the pour point as the Join Feature, use One-to-One with Keep All Target Features turned off, and Intersect as the spatial operation. The output will contain only the polygons that intersect the pour point.
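The loop-plus-connectivity logic can be sketched directly in numpy: threshold the DEM at each water level, then keep only the cells reachable from the pour point (a flood fill stands in for the Raster to Polygon + Spatial Join test; the DEM values are invented):

```python
import numpy as np
from collections import deque

def flood_extent(dem, pour_point, level):
    """Binary flood mask for one water level, keeping only cells with
    4-neighbour connectivity to the pour point."""
    below = dem <= level
    mask = np.zeros_like(below)
    if not below[pour_point]:
        return mask
    queue = deque([pour_point])
    mask[pour_point] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]
                    and below[ni, nj] and not mask[ni, nj]):
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask

# Invented DEM: the right-hand low cells sit behind a ridge of 9 s,
# like a water treatment plant that holds water but is not connected.
dem = np.array([[3., 9., 1.],
                [2., 9., 1.],
                [2., 9., 9.]])
extents = {level: flood_extent(dem, (1, 0), level) for level in (2, 3)}
```

Stepping `level` through the increments reproduces the for loop, and the disconnected low cells on the right never flood, no matter the level, until the ridge is overtopped.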
Most of the results are similar. The unflattened DEM flooded to a lesser extent, likely due to a false barrier created by elevation spikes in a narrow part of the river. This is visible below and is possibly even more obvious at greater flood increments. Additionally, the unflattened imagery often resulted in a strip of elevated terrain in the middle of the river; the cause of this is uncertain.
The clearest difference occurs in the hydrologically flattened images, between the barrier and removed-barrier models. The model with barriers removed penetrates the east-west highway and floods the other side before the model with barriers intact, as demonstrated below. This is an expected result. The barrier removal process was very conservative; removing larger portions of the barrier may create a larger gap between the flood results of the two models.
Overall the flooding mostly matches the watershed. The extents of flooding go somewhat beyond the watershed boundary and are less constrained by ridged terrain perpendicular to the river, since such terrain runs parallel to the flood direction.
Flooding as it crosses the barrier. Barrier present (pink) vs barrier removed (blue)
Results and Discussion
The tools used in Model Builder for the construction of this project and modelling of flood data were very effective. I ran all of the tools from within Model Builder so that I could customize a few features. For lake flattening I ran an iterator that selected the next row in a table before running the model, until the row value was null; this removed the manual selection of each lake before running the flattening model. For flood modelling I used integers instead of fractional values so that I could avoid issues with data compatibility, since the flood model toolbox accepts only integers. The flood model was set up to use metres and not centimetres; if greater accuracy were required it could be set to use centimetres. Additionally, instead of parsing the raster name and appending the water level value for the polygon output, I manually changed the base of the name and the suffix using an inline variable (i.e. DEM_%WaterLevel%).
Issues I encountered
1) Attempting to run a <8 m flood raster produced errors. The cause was that there was no data below 8 m, which resulted in no polygons; the polygon extraction does not work if the shapefile is empty.
2) I did not run the Fill tool on the raster before running the Flow Direction tool. This resulted in a black and white flow direction raster ranging from 1 to 255, and the flow accumulation analysis did not work.
The only concern with the results is that the barriers may not be representative of the width of the actual culvert or of flow under a bridge. Smaller flood increments may provide more information about the behaviour of the flooding around terrain features.
It should be noted that for the 20m DEM it would have been appropriate to place a second pour point directly north of the east side of the east west road feature in the study area.
Comparison to USGS Base Specifications
This data was processed to an acceptable level as set out by the USGS LiDAR base specifications. Breaklines included Rivers and Lakes. The breaklines were used to create flat single elevation lakes and flat but trending rivers based on the breaklines. Keep in mind that this method is for terrestrial accuracy and may not represent the accurate elevation. Coupling the information with gauge data could provide more accurate results for hydrologic purposes.
The purpose of this assignment is to use open data in ERDAS and ArcMap to create a combined mosaic. In this case the terrestrial data has a resolution of 30 arc-seconds with no bathymetry, while the global data has 1 arc-minute resolution across oceans and land. Combining these two datasets will result in one DEM whose terrestrial data is of higher resolution than the bathymetric data. For more information on arc-seconds I recommend referring to http://www.esri.com/news/arcuser/0400/wdside.html. To download the global DEM at 1 arc-minute resolution go to http://www.ngdc.noaa.gov/mgg/global/global.html. I used the ETOPO1 Bedrock: Grid Registered Geotiff.
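To put the two resolutions in perspective, a back-of-envelope conversion from angular cell size to ground distance at the equator (using an approximate equatorial circumference; actual east-west cell size shrinks with latitude):

```python
# Approximate ground size of one grid cell at the equator, comparing
# the 30 arc-second GLOBE grid with the 1 arc-minute ETOPO1 grid.
EQUATORIAL_CIRCUMFERENCE_M = 40_075_000  # approximate

def arcsec_to_metres(arcsec):
    """Metres spanned along the equator by a given angular size."""
    return EQUATORIAL_CIRCUMFERENCE_M * arcsec / (360 * 3600)

globe_cell = arcsec_to_metres(30)   # roughly 0.9 km
etopo_cell = arcsec_to_metres(60)   # roughly 1.9 km
```

So the terrestrial GLOBE data resolves features about twice as finely as the global ETOPO1 grid, which is the whole motivation for combining them.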
Next go to http://www.ngdc.noaa.gov/mgg/topo/gltiles.html and choose a continent. You will create a dem of that continent so for ease I recommend not using an area with islands. Download the tiles necessary for your area (i.e for Africa F, K, G).
Assign Location to GLOBE Data in ERDAS
The GLOBE data needs to be assigned an origin location. To do this, import it into ERDAS using the Import Data option in the manage data ribbon. Then use metadata –> change map model to adjust the origin. More information is available in this tutorial.
Mosaic the GLOBE Data
The GLOBE data is going to be clipped to the shape of your continent. There are several ways to do this, which will be discussed, but the first step is to download the world land polygons: http://openstreetmapdata.com/data/land-polygons
This data can be downloaded at whatever detail level you are interested in. For Africa I downloaded the largest polygons available. These sources were all separate polygons; were that not the case, you would use the Explode tool, which is accessed by setting your multipolygon file to be edited and then opening the advanced editing options. The file I downloaded actually required that the polygon including Africa be separated from Europe and Asia. To split the polygon, the layer was set to be edited and the Split Polygon tool from the basic editor was used. The unwanted polygons were deleted and the feature was exported as a new shapefile.
As mentioned, there are two ways to clip the globe data to a DEM. The first method occurs in ERDAS during the mosaicking step. The second occurs in ArcMap AFTER the mosaicking step.
The next step was to Mosaic the GLOBE Data in ERDAS. The process of mosaicking is discussed in this tutorial. I had best results using default mosaic settings. If you are going to mosaic in ERDAS follow the “Erdas Clip” Guide below before running the mosaic. Otherwise run as shown in the tutorial and prepare to load ArcMap for the “ArcMap Clip” option.
Next you can change a setting in Mosaic Pro, in Edit –> Output Options, to add a clip boundary. This should clip to an AOI. To create an AOI from your shapefile you must open the shapefile in ERDAS, select and copy the vector in the shapefile, then paste it into your new AOI layer (new –> 2D Map –> AOI).
The method I used was the “Extract by Mask” tool in ArcGIS Spatial Analyst. This creates a subset of your Africa Data.
Next we clip our ETOPO data to a square around the target continent. This can be done in a similar fashion in Arc, or completed in the next step which mosaics the two different topographies. If you are already using ArcMap, I would recommend clipping it in this environment by creating a new shapefile with a square polygon that denotes the area of interest, then using Extract by Mask once again but on the ETOPO layer. Export any layers created in ArcMap to *.img files by right clicking on the file, selecting Data –> Export Data, and selecting the location and *.img file type.
Mosaic the DEMs
Before the DEMs can be mosaicked in ERDAS (this step can also be done in ArcMap), the ETOPO data must be projected to geographic WGS84 lat/long. Do this in metadata –> edit –> change projection. You may also need to ensure that the new GLOBE mosaic has the same, correct projection. Recompute the statistics on both again, as shown in the tutorial above. Load them into Mosaic Pro, ensuring that the Africa GLOBE mosaic is the top layer.
The data being loaded in should look like this
Run the mosaic in ERDAS and use your created rectangle as an AOI (you could also just create this in ERDAS to avoid exporting it from ArcGIS).
Alternatively, use the Mosaic tool in ArcMap under Data Management –> Raster –> Raster Dataset –> Mosaic. Make sure that Africa (in this case) is set as your first layer.
The next step is to create a painted relief terrain image using the terrain ribbon. Additionally the DEM was loaded as a pseudocolour image. The colour ranges were set separately for land and water by querying the necessary records and applying a colour ramp using the table –> colour tool.
The chosen study area for this assignment was near Memphis, Tennessee, including three scenes running north to south along the Mississippi River. The Landsat imagery was downloaded from both libra.developmentseed.org and explorer.usgs.gov/. The images downloaded were on path 23: rows 35 (August 8, 2014), 36 (August 22, 2014) and 37 (August 22, 2014). The listed images were selected for low snow and cloud cover. Rows 35 and 37 were downloaded from Libra, and row 36 was downloaded from the USGS explorer site as the download from Libra would not work. The Mississippi River was chosen for its variety of land cover types and relatively flat terrain.
The downloaded scenes are in WGS 1984 and projected in UTM Zone 20. Data could have been reprojected in either ERDAS or PCI to NAD83, but this step was not deemed necessary for this project.
Figure 1 shows the study area near Memphis Tennessee
Figure 2 shows the location of the landsat imagery before any corrections or mosaicking.
Figure 3 shows an overview of the mosaicked landsat 8 imagery in PCI.
Figure 4 shows an overview of the results of mosaicking in ERDAS using feathering and histogram balancing.
Mosaicking in PCI
To mosaic in the PCI Geomatica software, first the Focus interface must be loaded so that the Landsat images can be imported for atmospheric correction. To import the Landsat imagery, the *.tar.gz file (or *.tar.bz file from Libra) must be extracted. The extracted bands include a *.mtl file, which can be dragged into the PCI Focus environment to open either the pan, multispectral or thermal imagery.
The ATCOR correction was then run on these files with default settings. The results were better than the originals, but only marginally (see right).
Next a project was created in OrthoEngine, the images to be mosaicked were defined, and the automatic mosaic tool was used with defaults and the overlap method to produce the new imagery. The results were nearly seamless with good colour balancing (shown in Figure 3).
Mosaicking in ERDAS
In order to mosaic in ERDAS, the Landsat imagery must be imported using the Manage Data –> Import Data tool. Once complete, the multispectral imagery is available for manipulation. Due to extension limitations, and because PCI showed that atmospheric correction was not strictly required, the ERDAS data was not corrected. Correcting it with the available licenses would have required dark pixel subtraction or a TOA correction using self-built models in the Spatial Modeler.
The 2D Mosaic Pro tool was used to mosaic the three images. The results were not seamless, and good results were not obtained despite trying many settings. Default settings versus the averaging and feathered seamline methods are shown to the right. The best results were obtained using default settings; attempting colour corrections seemed to make the results worse.
Non-default settings attempted included seamline options such as feathered edges, overlapping edges and averaging, as well as colour corrections including colour balancing, histogram balancing and illumination equalization.
Manual radiometric adjustments (i.e. adjusting the most northern image by hand) might have produced better results, but this was not done for this project.
Comparison of ERDAS to PCI
Overall, the better result was achieved with PCI Geomatica. The colour balancing was more complete and the result was more seamless. In ERDAS there was clear difficulty in making the images radiometrically similar using automatic settings.
The ERDAS workflow is slightly more intuitive with Mosaic Pro, but the difficulty with showing demos is a clear setback. ERDAS has functionality for this, but I had difficulty making it work.
PCI creates an adaptive seamline between the two images instead of a straight line as ERDAS does, which may be related to the better results. The downside with PCI is that the resultant image is darker than in ERDAS, but this is easy to fix with simple contrast and brightness settings.
It is possible that atmospheric correction may have provided an improved ERDAS result and with proper extensions this could be better tested.
Additionally, the import workflow in PCI is very user friendly and does not require exporting within the software. Using downloads from the Libra website there were no compatibility issues in PCI, while ERDAS required file conversion from .tar.bz to .tar.gz, which corrupted one file. PCI avoids this by having the user extract the data before importing it into the software.
Visual comparisons of some of the attempted mosaicking settings are shown below. PCI produced great results on the first try, so the settings were not changed, resulting in only one PCI mosaic image.
The study area is two canyons in New Mexico within the Sandia Mountains east of Albuquerque: the north canyon is Pino Canyon and the south canyon is Oso Canyon. The purpose of the study is to use NDVI to determine areas that have been affected by drought-related insect infestation and to compare the health of the two canyons. High resolution imagery of the area was downloaded from http://www.bernco.gov/download_Orthoimagery/, ASTER data was downloaded from GloVis, Landsat data was downloaded from EarthExplorer, and the study area shapefiles along with the project instructions were provided by James Norton with the Centre of Geographic Sciences. The data is projected in StatePlane NAD 1983, HARN (US Feet), New Mexico Central, FIPS 3002.
After acquiring the necessary datasets, they were loaded into ArcMap and the appropriate data frame coordinate system was selected.
The first step was to combine the ASTER bands into one composite image using the Raster Processing — Composite Bands tool.
After the composite was created, the raster was clipped to the study area using the Raster Processing — Clip tool.
Before analyzing the ASTER imagery using NDVI, control areas need to be outlined. This means analyzing the aerial photography and creating polygons around zones of dead trees and living trees. A shapefile called trees was created and a column called “status” was added. As control area polygons were drawn, they were designated as “live” or “dead”.
The composite ASTER image was then classified using the NDVI index through the Image Analysis window (Window — Image Analysis).
After the results were computed the data was compared to the original control study sites as shown to the right.
This method was not as successful as hoped because of the effects of shadows, cloud, bare ground and soil moisture. Dead trees are fairly colourless and are therefore likely to be classified with bare soil. Variable tree density also affects the NDVI, as bare ground has a similar response to dead trees; this can give a low NDVI even when the area looks green. It could potentially be useful to classify the area when there is snow cover after windy weather.
The brightest areas in the image are mostly valleys that follow river ways. These have higher NDVI values, likely due to an increase in tree density in lower-lying areas. Additional soil moisture could play a role, as could north-facing shadows which reduce reflectance.
Looking at the image to the left, it is noticeable that the shadowed areas have higher NDVI values. The dead area is the darkest spot in the image, while the live tree zone sits in the middle of the image's grayscale range.
Next, to compare the overall statistics of the two canyons, the Extract by Mask tool, which extracts raster data based on vector layers, was used. Statistics were extracted from the grayscale NDVI file individually using the Oso and Pino ROI vector files. A rough comparison can also be made visually: Pino appears darker, which would suggest it has lower tree health.
The results showed that Oso had a mean of 122.5 and Pino had a mean of 116.5. This suggests that Oso contains more healthy trees than Pino Canyon (or at least more trees!).
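Conceptually, the zonal comparison boils down to masking the NDVI grayscale raster with each canyon polygon and averaging the remaining cells. A toy numpy sketch (the arrays and values below are hypothetical, not the real study data):

```python
import numpy as np

def mean_in_mask(raster, mask):
    """Mean raster value where the boolean mask is True, mimicking
    Extract by Mask followed by zonal statistics."""
    return float(raster[mask].mean())

# Hypothetical 8-bit NDVI grayscale raster and two canyon masks:
ndvi_gray = np.array([[130, 120], [110, 125]], dtype=np.uint8)
oso_mask = np.array([[True, True], [False, False]])
pino_mask = ~oso_mask

oso_mean = mean_in_mask(ndvi_gray, oso_mask)    # 125.0
pino_mean = mean_in_mask(ndvi_gray, pino_mask)  # 117.5
```

The same masking idea generalizes to any zonal statistic (min, max, standard deviation) over the extracted cells.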
Overall, the multispectral imagery did well at determining tree density, but the NDVI analysis was not quite accurate enough to distinguish bare ground from areas with dead trees.
For fun, the area was viewed in ArcGlobe with the GPS points of a hike track within Pino Canyon plotted. The results are shown below.
Advanced Image Enhancement – Atmospheric Correction, Principal Component Analysis (PCA), Tasseled Cap Transformation (TCT), Normalized Difference Vegetation Index (NDVI)
Remote Sensing Image Analysis
Some of the most basic image analysis you can do is classification of land cover types. The first step is to enhance your image using a variety of options, then run a classification: either unsupervised (no ground control points) or supervised (using human-defined spectral signatures created from areas of interest and added to the signature editor).
Landsat imagery analysis can do a lot more than just classify land cover types. Examples of additional methods include Fourier analysis, principal component analysis (PCA) and colour space transformations. These allow the extraction of more information from an image.
The purpose of this project is to learn three new methods of image analysis including Principal Components Analysis (PCA), Tasseled Cap Transformation, and Normalized Difference Vegetation Index (NDVI) on a Landsat scene. Landsat scenes can be downloaded from earthexplorer.usgs.gov/.
If you have a version of ERDAS Imagine and would like to follow this workflow I would recommend downloading imagery in the GeoTIFF file format. This will give you one TIFF file for each spectral band. The imagery should already be orthorectified.
Top-Of Atmosphere Reflectance
Basic classification and image enhancement use DN values, which can be analyzed and used for spectral signatures. A DN represents the amount of reflected energy but does not consider incident energy, which varies with sun angle and time of year.
The conversion of DN values to TOA reflectance values can help with analysis especially when analyzing vegetation indices.
This project will use TOA reflectance, which accounts for seasonal sun angle variations. More complete sun and illumination information is available through an ATCOR3 atmospheric correction (PCI Geomatics).
During the winter months, when the hemisphere is tilted away from the sun, the incident radiation is of lower intensity. TOA reflectance accounts for this change.
Converting DN to TOA Reflectance
First you must import the Landsat 8 data; you can do this in ERDAS or PCI. To convert the DNs provided by the USGS to rescaled TOA reflectance, rescaling coefficients are provided in the metadata file. The metadata also contains the thermal constants for converting the TIRS bands to brightness temperature.
Although Landsat satellites image from nadir, sunlight arrives at an angle (the solar elevation or sun-elevation angle), and this needs to be accounted for.
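A minimal sketch of the DN-to-TOA conversion using the reflectance rescaling coefficients and sun elevation from the MTL metadata file. The DN and sun elevation below are made-up examples; pull the real REFLECTANCE_MULT_BAND_x, REFLECTANCE_ADD_BAND_x and SUN_ELEVATION values from your own scene's metadata:

```python
import math

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """Rescale a Landsat 8 DN to TOA reflectance.

    mult and add come from the *_MTL.txt metadata file; the sine of
    the sun elevation corrects for the solar angle at acquisition.
    """
    rho_prime = mult * dn + add  # reflectance without sun-angle correction
    return rho_prime / math.sin(math.radians(sun_elev_deg))

# Typical OLI rescaling coefficients are 2.0e-05 and -0.1; the DN
# and 45-degree sun elevation here are illustrative only.
rho = toa_reflectance(dn=12000, mult=2.0e-05, add=-0.1, sun_elev_deg=45.0)
```

A lower winter sun elevation makes the sine term smaller, scaling the reflectance up to compensate for the weaker incident radiation, which is exactly the seasonal correction described above.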
This can be done in both ERDAS and PCI. I do not have the proper ERDAS extension, so the Spatial Modeller was used to calculate the math manually; an example is shown in the tutorial. I did not have the best of luck with this, so I redid the atmospheric corrections in PCI. The steps for this are shown here.
The slider above demonstrates that the TOA correction did not make a large difference in the imagery. Using an ATCOR correction, or imagery with more haze issues, might produce a larger difference between the two images. Another factor limiting the correction is the wide range of land cover types. The mountains bring cloud cover, snow cover and shadowing effects, which is difficult because the bright and dark responses of these features skew the spectral range. An area with only one main type of land cover may respond differently.
Principal Components Analysis
Principal components analysis (PCA) condenses the information in a set of intercorrelated bands by collecting the majority of the variance into the first component. The remaining variance is assigned to additional components, depending on how many the user requests. This removes redundancy and decreases file size.
Although this may not seem important when dealing with only 3-7 bands, it can be a huge time saver as band counts increase on more recent satellites. For example, with hyperspectral imagery that includes 256 bands, reducing the number of bands can be very beneficial for analysis.
The results of this analysis are shown below. More information on completing this step can be found here.
The principal component analysis (PCA) was very useful for this image, as it helped remove a lot of the dark shadowing effects caused by terrain in the mountains to the west. Its strengths are in locating areas of high urban density and abnormal spectral signatures (such as clouds and snow, which appear bright red in this band combination).
This analysis method is also very good at distinguishing bare ground from vegetation, although it does not recognize vegetation health or growth stage.
This method is fast and requires less processing time than many other enhancements. Unlike supervised classification it does not require much user input, although to reduce your file size as much as possible it is advisable to examine the eigenvalues and only output as many principal components as necessary. The table created for this project is shown below. Three principal components were used, as >99% of the variance fell within these three dimensions.
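A minimal numpy sketch of the eigenvalue-based reduction described above (not the ERDAS implementation): centre each band, take the band-to-band covariance, and project onto the eigenvectors with the largest eigenvalues. The test stack below is synthetic.

```python
import numpy as np

def pca(stack, n_components):
    """PCA on an image stack shaped (bands, rows, cols).

    Returns the principal-component images and the fraction of total
    variance explained by each component (largest first).
    """
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)        # centre each band
    cov = np.cov(X)                           # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]         # sort by variance, descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T[:n_components] @ X        # project onto top components
    explained = eigvals / eigvals.sum()
    return pcs.reshape(n_components, rows, cols), explained

# Two perfectly correlated synthetic bands collapse into one component:
rng = np.random.default_rng(0)
b1 = rng.random((10, 10))
stack = np.stack([b1, 2 * b1 + 1, rng.random((10, 10))])
pcs, explained = pca(stack, 2)
```

Inspecting `explained` is the programmatic equivalent of reading the eigenvalue table: keep only as many components as it takes to pass your variance threshold (e.g. 99%).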
Tasseled Cap Transformation
The Tasseled Cap Transformation produces three components related to brightness, greenness and wetness.
The tasseled cap transform takes a linear combination of the original image bands, which is useful for vegetation mapping. Each new band is created by summing weighted input bands: for example, image band 1 x a constant + image band 2 x a constant, and so on. For more information, select “view” when running the tool and it will show you the modeller.
TCT outputs as many bands as are input, but in a Landsat image the output bands 4 and 6 are often just noise.
Band 1 is brightness, band 2 is greenness and band 3 is wetness. Depending on the combination you can focus on getting results for any of these three variables.
These bands are difficult to analyze when displayed in combination, but they can be experimented with to help distinguish the wetness, brightness and greenness of the imagery.
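The linear combination described above can be sketched as a single matrix multiply. The coefficients below are the ones commonly cited for Landsat TM reflectance data (Crist 1985); verify them against your sensor, since Landsat 8 OLI uses a different set. The flat grey test image is purely illustrative.

```python
import numpy as np

# Tasseled Cap coefficients for Landsat TM reflectance (Crist 1985);
# each row of weights produces one output component.
TCT = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],  # brightness
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],  # greenness
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],  # wetness
])

def tasseled_cap(stack):
    """Apply the TCT to a (6, rows, cols) reflectance stack.

    Each output band is a weighted sum of the inputs:
    out_i = sum_j TCT[i, j] * band_j.
    """
    bands, rows, cols = stack.shape
    out = TCT @ stack.reshape(bands, -1)
    return out.reshape(3, rows, cols)

stack = np.full((6, 4, 4), 0.2)   # flat grey test image
bgw = tasseled_cap(stack)         # brightness, greenness, wetness planes
```

Because the transform is just fixed weights, the "view" option in the tool's modeller shows essentially this same arithmetic, one weighted sum per output band.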
In this example, TCT is good at distinguishing water, urban areas and vegetation. This can be used to help determine urban development, landuse change and ground surface moisture contents.
Analysis of this enhancement seems more complex than PCA and NDVI and would require further study.
Normalized Difference Vegetation Index
The main use of NDVI classification is measuring vegetation productivity. By analyzing reflectance and absorption, plant growth and health can be determined. The index uses the equation (NIR-Red)/(NIR+Red).
The values range from -1 to 1. Values around zero are typically barren ground (rock, sand, snow). Low positive values are shrub and grassland and high positive values are quickly growing productive plants. Negative values correspond to water.
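A minimal numpy sketch of the NDVI equation, with a small epsilon guard (my addition, not part of the standard formula) against division by zero over pixels where both bands are zero:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Compute NDVI = (NIR - Red) / (NIR + Red) per pixel.

    nir and red are reflectance arrays of the same shape; eps avoids
    a zero denominator on empty pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red:
veg = ndvi(np.array([0.5]), np.array([0.05]))
# Water absorbs NIR more than red, giving a negative value:
water = ndvi(np.array([0.02]), np.array([0.05]))
```

The two sample pixels land where the value ranges above predict: the vegetation pixel well above zero, the water pixel below it.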
After the NDVI raster is created you can open it in pseudocolour and change the colours to suit your needs, as shown below.
The imagery above does a really great job of distinguishing new growth from old growth and areas with high production. It distinguishes vegetation within the mountains and can help identify areas of the city with more growth.
The TOA correction did not work using the ERDAS modeller; it did work in PCI, although it was not used for the subsequent enhancements. Each of the different enhancements provides variable results and is useful for determining different features of the landscape, such as new growth (NDVI), bare soil (PCA) and urban environments (TCT and PCA). TCT was very useful for determining soil and vegetation moisture as well as distinguishing urban landscapes from bare soil.
LiDAR – Sensor Calibration Using TerraMatch
The purpose of this study is to account for and correct various sensor misalignment issues. There are three types of sensor calibration: rigorous (sensor specific, uses range data), quasi-rigorous (error specific, no range data) and non-rigorous (purely data driven, no trajectory, etc.).
The corrections will be completed in TerraMatch, which uses a quasi-rigorous technique and mostly corrects boresight errors, including roll, pitch and heading errors.
Boresight roll error – positions shift across-track, due to angling of the laser (perpendicular to the flightline).
Boresight pitch error – the position shift increases as you move away from nadir (parallel to the flightline).
Boresight heading error – positions rotate around the mirror.
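As a rough geometric illustration (not TerraMatch's actual math), the across-track displacement produced by a small roll bias can be sketched as follows; the flying height and angles below are hypothetical:

```python
import math

def roll_shift(h, scan_deg, roll_err_deg):
    """Across-track ground displacement caused by a boresight roll bias.

    A return at scan angle theta lands at h*tan(theta) from nadir; a
    roll error delta moves it to h*tan(theta + delta). The shift grows
    with scan angle, which is why roll shows up most clearly in
    profiles drawn perpendicular to the flightline.
    """
    t = math.radians(scan_deg)
    d = math.radians(roll_err_deg)
    return h * (math.tan(t + d) - math.tan(t))

# A 0.01 degree roll bias from 1200 m flying height, at nadir vs the
# swath edge (values are illustrative only):
nadir_shift = roll_shift(1200.0, 0.0, 0.01)
edge_shift = roll_shift(1200.0, 20.0, 0.01)
```

Even a hundredth of a degree produces a shift of roughly 20 cm at this height, which is why opposing flightlines visibly separate in the perpendicular profiles discussed below.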
Within TerraMatch there are two approaches to calibration: surface matching (comparison of TINs of overlapping flightlines) and tie-line matching, which compares features like rooftops. This project focuses on the surface matching method.
The data for this study was collected by the Applied Geomatics Research Group out of Middleton Nova Scotia. The LiDAR points were collected on Julian Day 332 2007. The data being used is only a subset of the entire survey. The provided points are projected in UTM 20 NAD83 (CSRS98) while the trajectory data is not projected and has a datum of NAD83 (CSRS98). The LiDAR system used was the Optech ALTM 3100 LiDAR system.
Steps for completing misalignment corrections in TerraMatch include:
2) Open and create a TerraScan project
3) Add your area of interest as a reference layer
4) Import points into your TerraScan project (ensuring a proper projection), including the creation of blocks
5) Remove unwanted blocks and recreate the study area including only the required blocks
6) Create a subset of calibration blocks and load them into a new project. Complete the following steps on the calibration blocks
7) Add trajectory data and separate it into flightlines, either by clipping to a polygon or splitting at a laser gap of 20 seconds.
8) Deduce which flightline each point belongs to using the macro tool. Run the modify line numbering, deduce line numbers and delete by flightline macros: classify points from all lines to a default class, assign flightlines based on time stamp using the deduce flightline macro, and finally delete any unclassified points.
9) Classify the imagery by running a classification macro, processing flightlines individually.
10) Determine the main error types (roll, pitch, heading) by viewing profiles. Horizontal displacement between flightlines in a profile perpendicular to the flightpath demonstrates roll errors; horizontal displacement parallel to the flightpath suggests pitch errors. For more information about errors, look here.
11) Open TerraMatch and run the Measure Match tool. Use project points (make sure your project is pointing to the appropriate folder and has been saved) with a max triangle of 20, and use the ground class. Run the tool and save a *.txt report.
12) Run the Find Match tool on project points using the ground class, applied to the whole dataset, based on the error types determined in step 10. Apply the correction, writing to a new directory instead of overwriting the current data if preferred.
13) Point your project to your calibration blocks and run the Measure Match tool a second time on your corrected calibration blocks. Save your results and compare them to the original Measure Match report.
14) Repeat these steps for each error type, making sure to use the most recently corrected calibration blocks.
15) Run this process on your entire project once you have tested and chosen your settings.
16) Correct for z-shift using the Find Match tool. Test it using pre- and post-correction Measure Match reports.
17) If the correction has shifted the data outside of the blocks, you will need to delete and re-import the points.
As discussed briefly above, it is important to determine what types of errors your dataset is experiencing. To do this you should look at profiles of the LiDAR point cloud both parallel and perpendicular to the flight path, ideally in several locations, with the data displayed by flightline. I focused on block 5 for my misalignment analysis, setting my top view window to display points by class and my profile to show points by flightline. This was helpful for choosing misalignment profile locations; it is ideal to test in areas with peaked roofs. Only the ground and building classes were turned on to aid in this process.
Profile 1 will be the best for showing Roll errors as it is perpendicular to the flight path.
Profile 2 will show Pitch errors as it is parallel to the flight path.
Although the misalignment errors are minimal in this dataset, it is fairly clear that the points from the various flight paths line up less accurately in profile 1. This is consistent with the conclusion that roll causes the largest misalignment errors in this dataset.
Perpendicular Profile of Uncorrected Points (Profile 1)
Perpendicular Profile of Corrected Points (Profile 1)
Parallel Profile - Uncorrected Points (Profile 2)
Parallel Profile of Corrected Points (Profile 2)
Overall, the calibration decreased the magnitude of the shift between flightlines. This is obvious both visually and from the generated reports. Generally, an acceptable level of error is 15 cm, meaning that the dz RMS values from your Find Match analysis need to be lower than 15 cm. In this case, the corrections completed on the calibration blocks reduced the dz RMS value from 0.1712 to 0.0799. The RMS for the entire survey should be similar since the same correction was applied. The shift by flightline is shown in the summary tables below.
The similarity between the original and final shift magnitudes of the calibration and survey blocks shows that the calibration dataset is a good approximation.
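The 15 cm tolerance check amounts to an RMS over the vertical differences between overlapping flightline surfaces. A small sketch with hypothetical dz samples (not the report's actual values):

```python
import numpy as np

TOLERANCE_M = 0.15  # commonly used acceptable dz RMS

def dz_rms(dz):
    """Root-mean-square of vertical differences between overlapping
    flightline surfaces, as reported by a Measure Match run."""
    dz = np.asarray(dz, dtype=np.float64)
    return float(np.sqrt(np.mean(dz ** 2)))

# Hypothetical dz samples before and after a roll correction:
before = dz_rms([0.20, -0.15, 0.17, -0.16])  # fails the 0.15 m test
after = dz_rms([0.09, -0.07, 0.08, -0.08])   # passes
```

Comparing pre- and post-correction reports is just comparing these two numbers against the tolerance.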
No major issues were encountered during this project. One difficulty was displaying the LiDAR points by flightline. The flightline numbering of points was completed properly, but it was preferred that the data be displayed in swaths. The data could not be displayed this way even though it was sorted by time stamp, which should have allowed a preferential display of points rather than the inconsistency encountered.
Additionally, when running the Find Match correction tool, the inclusion of buildings did not decrease the shift as hoped, so the correction was rerun using the ground surface only. Adjusting the classification settings could have helped here.
Because it was determined that pitch errors were minimal they were not corrected for.
It is clear below that the corrections decreased the magnitude of the shift between flightlines: the average original shift of 0.2734 decreased to 0.09761. The values shown below are averaged by block, while the shifts listed above are averaged by points. The dz RMS went from 0.1712 to 0.0799.
Each step of the survey decreased the magnitude of the shift, and it is possible that a better classification could produce a larger decrease in flightline shift. The shift went from 0.2716 to 0.09712, and then to 0.08562 once the Z shift was applied. The numbers shown below are averaged by block, while the numbers just listed are averaged over all points.
LiDAR Processing Project
The purpose of the project is to learn how to manage large datasets required for large LiDAR projects including file organization, project creation, tiling, and classification of points. Each of these workflows will be discussed in detail.
The LiDAR data being studied is from the Lawrencetown area in the Annapolis Valley of Nova Scotia Canada. The dataset was supplied to the Centre of Geomatic Sciences by the Applied Geomatics Research Group out of Middleton, Nova Scotia, associated with Nova Scotia Community College. Refer to the location map below.
The suggested folder format included one folder each for the following: data, blocks, las files, reports, macros, and study area (recommended by Rob Hodder, COGS 2015). The purpose of these folders will become clearer over the course of the project.
Creating a Project
The creation of a project requires many different steps including the creation of a microstation design file, the creation of a terrascan project, creation of blocks (or tiles) over the study area, and finally the import of the LiDAR *.las files.
Creating a Microstation Design Project
Creating a new project is a process completed through the MicroStation file menu. Default settings were mostly used, with the exception of the seed setting, which was switched from 2D to 3D as appropriate for the LiDAR data import. This was completed, and a MicroStation project was born. Upon project creation, four viewers were opened in MicroStation; their purpose is to make it easier to view the data from multiple angles (top, bottom, profile, oblique, etc.). For a step-by-step walkthrough with screen grabs of this process, refer to this tutorial.
Creating Project Levels
In MicroStation, as with most GIS and photo editing software, the final product is made up of layers of files. Two levels were created for this project: one layer for the study area and one for the blocks to be created in a later step. Tasks that can be completed in the level manager include moving objects between layers, setting active layers and changing layer properties. If you want more information, click here!
Importing Reference Layers
A reference layer is anything you are going to use for your project that is not a LiDAR *.las file. In this case study, the file to be imported was the study area or area of interest. The purpose of using a study area boundary is to ensure that you aren’t analyzing points outside of the scope of your project. Eliminating these unnecessary points will decrease processing time and the file sizes associated with your project.
For step by step instructions on importing reference layers, click here.
Save your results in the microstation menu (file — save). The next step will be to create a terrascan project.
Creating a Project
Five options were considered when creating this project: project name, project directory, block size, storage method and block prefix. Most of these settings are simply user preference. The block size for this project was set to 1000 m, meaning each block is 1000 m x 1000 m (1 km x 1 km). The size you set your blocks to may depend on your computer's processing power, the overall size of your project, the density of the data gathered or, most likely, all of the above.
The purpose of creating blocks is to segment the data for loading and processing purposes. The LiDAR points in each block can be loaded and/or processed at any time without loading or changing the entire dataset. As you can imagine, if you have 200 million points over a 10x10 km study area, there is benefit in only needing to load 20 million of them at a time.
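As a sketch of the idea (not TerraScan's internal code), mapping a point to its block is just integer division on its coordinates. The origin and point coordinates below are hypothetical:

```python
def block_index(x, y, origin_x, origin_y, block_size=1000.0):
    """Map a point's easting/northing to its 1 km x 1 km block, so a
    task only needs to load the blocks it actually touches."""
    col = int((x - origin_x) // block_size)
    row = int((y - origin_y) // block_size)
    return row, col

# Two points 1.5 km apart fall into different blocks (hypothetical
# UTM-style coordinates and project origin):
a = block_index(354250.0, 4982100.0, 350000.0, 4980000.0)  # (2, 4)
b = block_index(355750.0, 4982100.0, 350000.0, 4980000.0)  # (2, 5)
```

Grouping points by this index is what lets a 200-million-point survey be processed one 1 km tile at a time.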
Blocks can be created as overlapping or non-overlapping; for this case study we used non-overlapping blocks. Overlapping blocks create a buffer between adjacent blocks, which is beneficial for the production of DEMs or contours.
During block creation the settings used were: outside blocks – create grid blocks; block overlap – no overlap; and scanner numbers – use from file. Other settings were left as default.
After the blocks were created, it was noticeable that many of them fell outside of the study area, meaning the project had points outside the study area. This could be a result of testing the hardware, navigating beyond the study area to line up flightlines, location errors, etc. It was determined that these blocks did not need to be used as they did not cover the study area, so they needed to be removed from the TerraScan project.
To remove unwanted blocks, the steps taken were as follows:
1) Delete all of the files from within your TerraScan project window
2) Delete all of the files from your blocks windows directory
3) In the viewer, delete the footprint of unwanted squares.
4) Within your project recreate the blocks based on boundaries (instead of imported points)
5) Change the settings used in block creation. Instead of create grid blocks, select ignore outside blocks.
This recreates blocks based on the remaining boundaries and ignores points outside of them. In this example there were approximately 58 million points; by redefining the project, 6.7 million were ignored.
Due to the multiple returns that result from LiDAR data collection, each point in a point cloud can represent one of many features, including ground surface, low vegetation, high vegetation and buildings/infrastructure. As a viewing aid, it is important to classify the points based on their feature type. For this project a macro was designed to distinguish between the mentioned features, specifically targeting the creation of a good DEM layer. The macro first reclassified all points into a default class; the steps that followed are described below.
1) Points were classified into the isolated points category. A point was considered isolated if there were 3 or fewer points within 1.2 m of it; if so, the point was moved from the default class to the low point class.
2) Points were classified into the low point category. A point was considered low if it was 0.5 m below the points within a 3 m radius. This was done for groups of 6 or fewer points, which were reclassified into the low point class. The routine was run again using the single point option with 0.3 m and a 3 m radius.
3) Next, the ground points were determined; all later point reclassification relies on a defined ground class. The ground routine was run twice to try to capture points from different types of terrain. The first ground reclassification used a max building size of 60, a terrain angle of 88, an iteration angle of 6 and an iteration distance of 1.4, with other settings left as default. The second round used a building size of 60, terrain angle of 60, iteration angle of 6 and an iteration distance of 1.2, in an attempt to improve the accuracy of the ground points in very flat areas.
4) Next, points from 0.3 to 200 metres above ground were classified as default. This was to minimize the data reduction from the isolation and low point routines in the vegetation foliage.
5) Not all ground points were included in the original ground class. I ran a height above ground of -0.15 to 0.25 m on the low point classification and reclassified all points in that zone as ground points. Additionally, I classified all default points from 0 to 0.3 m above ground as ground points as well. This helped the most in areas with very low point density.
6) Below-ground points were removed with a standard deviation of 3 and a z-tolerance of 0.2.
7) Low, medium and high vegetation were classified using the height above ground tool: low vegetation from 0.25 to 1 m, medium vegetation from 1 to 2 m and high vegetation from 2 to 200 m.
8) Finally, the building routine was run, moving points from high vegetation to building using a building size of 40 and a z-tolerance of 0.5.
In layman's terms, this macro first removed isolated points that did not meet the requirement of having more than 3 other points within a 1.2 m radius buffer. Next the low points were removed: the software takes the average height of all points within a radius (3 m) and removes a point if it is a set distance below that average, in this case 0.3 m. With unrepresentative points removed, the ground class was built. The parameters used to classify low and isolated points initially removed some of the foliage returns; to add those back into the analysis, all points higher than 0.3 m above the ground were reclassified to default. Next, all default or low points within a certain distance of the ground were moved to the ground class, and points below the ground surface were removed. Different vegetation levels were then classified, and buildings were separated out of the high vegetation class.
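The isolated-point rule from step 1 can be sketched as a brute-force neighbour count: flag any point with 3 or fewer other points within 1.2 m. This is a toy illustration on a handful of points; TerraScan's actual routine uses spatial indexing to handle full point clouds.

```python
import numpy as np

def classify_isolated(points, radius=1.2, max_neighbours=3):
    """Flag points with max_neighbours or fewer other points within
    radius, mirroring the isolated-points rule described above.

    Brute-force pairwise distances are fine for a sketch but scale
    quadratically with point count.
    """
    pts = np.asarray(points, dtype=np.float64)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    neighbours = (d <= radius).sum(axis=1) - 1  # exclude the point itself
    return neighbours <= max_neighbours

# A tight cluster of five points plus one stray point far away:
cloud = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0),
         (0.3, 0.3, 0.1), (0.2, 0.1, 0), (50, 50, 10)]
isolated = classify_isolated(cloud)  # only the stray point is flagged
```

The low-point rule in step 2 is the complementary idea: instead of counting neighbours, compare each point's height against the average of its neighbourhood and flag points sitting too far below it.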
Landsat view of blocks 15, 16, 19, 20 over Lawrencetown, Nova Scotia, Canada
Classified view of blocks 15, 16, 19, 20 over Lawrencetown, Nova Scotia, Canada. Legend: Orange – ground, Green – vegetation, Red – building, Pink – noise, White – unclassified.
Results and Discussion
The classification of this LiDAR point cloud was complicated slightly by the varying point density on a few of the flightlines, as discussed in the classification section. Additional difficulties included the removal of points below the surface, the retention of foliage and achieving a solid ground surface.
Points below the surface need to be removed before the ground is classified, or they will be used to define the ground class, which is often inappropriate. Removing isolated points and points lower than the buffer average makes the ground surface results more accurate. Because of the sparse point cloud in foliage, a lot of vegetation points were removed by this filter; to fix this, all points above a certain height above ground were reclassified to default.
Areas with lower point density decreased the overall DEM accuracy, as a wider buffer had to be used for the classification of isolated and low points; essentially, the classification of isolated points is dependent on the point density. In this dataset the point spacing ranged between 1.1 and 0.2 m depending on vegetative cover, flight speed, overlap and possibly atmospheric interference. The ground classification could have been more accurate in the 0.2-0.5 m areas if it had not been broadened for the lower density zones.
Roofs were best detected with building parameters of a 40 m building size and a 0.5 z-tolerance. Initially a z-tolerance of 0.2 was used, but it captured almost 50% fewer rooftops. Normal rules were applied.
For a step by step guide on how to apply macros, refer to this documentation.
The results of the classification can be seen in the two profiles below.
This is a project completed for the Centre of Geomatic Sciences LiDAR Operations course.
Supervised classification is an important skill for sorting land cover into specific cover types. It is the most appropriate classification method when ground truthing has been performed and ground control points have been collected.
While image orthorectification can be automated, these processes do not always work, making it important to know how to do it manually. This can be especially difficult when there have been a lot of infrastructure changes, as demonstrated in this poster.
This is an image that has been enhanced. I experimented with different band combinations and chose the bands that supplied the best contrast between features. After this, brightness, sharpness and contrast filters were applied.