Importing *.las Files into Global Mapper The first step is to select the “open your own data files” button or the open-folder icon. A dialogue box will then appear asking for your projection. Choose the appropriate projection and click OK. Ensure that the data is opened as a point cloud and hit […]
The import manager will open the dialogue to the right. Select the data to be imported and the export location, then click OK. The next dialogue requires the information highlighted below. For the example dataset, BIL and Signed 16 Bit should be used. For more information on what signed/unsigned bits are, look […]
MicroStation can only read PXYZ (point, easting, northing, elevation (orthometric height)) files. You must first convert your files to this format and save them as *.csv. Load in a vector (*.shp) file of control point locations; this can aid in identifying accuracy issues. Go to MicroStation –> Tools –> Toolboxes and turn on XYZ Text. Next you […]
Open up Terrascan and Read Points. To keep processing time down, it is important to open only every 100th point. The next step is to export a lattice model. A grid spacing of 500 m is acceptable because the geoid model has kilometre-long arcs. Filling gaps will prevent holes in your TIN. Three decimal places will give you mm […]
The NDVI tool displays stages of vegetation growth by normalizing the difference between the Red and Near-Infrared (NIR) bands using the equation (NIR − Red)/(NIR + Red).
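The NDVI computation above can be sketched in a few lines of Python with NumPy. This is only an illustration, not the tool itself; the band arrays and value ranges are assumed.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Returns values in [-1, 1]; cells where NIR + Red == 0 are set to 0
    to avoid division by zero.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Healthy vegetation reflects strongly in NIR, so NDVI is high;
# bare soil or water gives values near or below zero.
nir_band = np.array([[0.60, 0.10]])
red_band = np.array([[0.10, 0.10]])
print(ndvi(nir_band, red_band))  # ~0.714 for the vegetated pixel, 0.0 for the flat one
```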
Purpose The purpose of this lab is to perform a change detection analysis using Landsat imagery in ERDAS Imagine. My study area is the Tri-City area of Southern Ontario, including Kitchener/Waterloo, Cambridge, and Guelph. I am attempting to measure the change in the urban footprint from 1990 to 2013. The steps include: atmospheric correction, radiometric […]
Orthorectifying and Mosaicking in PCI Geomatica Creating a Project Select File –> New Project to create a new project. The following screen will appear. Select your folder and file name. Ensure that the settings below are chosen. Next, set the projection and pixel spacing for both the output projection and the GCP projection. Next set the […]
Purpose The purpose of this assignment is to use open data in ERDAS and ArcMap to create a combined mosaic. In this case the terrestrial data has a resolution of 30 arc-seconds with no bathymetry, while the global data has a resolution of 1 arc-minute across both oceans and land. Combining these two data sets […]
Study Area The study area is two canyons in New Mexico within the Sandia Mountains east of Albuquerque. The north canyon is Pino Canyon and the south canyon is Oso Canyon. The purpose of the study is to use NDVI to determine areas that have been affected by drought-related insect infestation and to compare […]
Advanced Image Enhancement – Atmospheric Correction, Principal Component Analysis (PCA), Tasseled Cap Transformation (TCT), Normalized Difference Vegetation Index (NDVI)
Remote Sensing Image Analysis Some of the most basic image analysis you can do is classification of land cover types. In this case the first step would be to enhance your image using a variety of options, then run a classification, either unsupervised (no training samples) or supervised (using human-defined spectral signatures using […]
LiDAR Processing Project Case Study The purpose of the project is to learn how to manage large datasets required for large LiDAR projects including file organization, project creation, tiling, and classification of points. Each of these workflows will be discussed in detail. The LiDAR data being studied is from the Lawrencetown area in the Annapolis […]
Geoprocessing and Spatial Analysis
This is a study on the potential market for an ethnic grocery store in Halifax, NS. It is not a full evaluation, as it considers only three variables. Each variable was broken into five groupings and assigned a weight. The variable weight values were summed for each sample zone, and a weighted suitability was calculated for primary, secondary, and tertiary polygons.
The purpose of this portfolio piece was to take six variables for European countries available on the World Bank Data webpage and group them using different methods, including: no spatial constraints, contiguity (edges and corners), Delaunay triangulation, k-nearest neighbours, and spatial weights.
This was an exercise in stepping through the different stages of regression in order to determine which datasets were statistically best for modelling a chosen variable (in this case food). The process included exploratory regression, OLS regression, and geographically weighted regression, with the results from each step feeding into the next. Model fit, residuals, and collinearity were just some of the factors considered during the course of this project.
The goal of this project was to use the concentric rings method to analyze the trade area around a selected business (in this case an ethnic grocery store). Proportional sums and averages were calculated to determine demographics, income, and other characteristics in the first concentric ring.
GIS and Thematic Maps
Purpose and Study Area
The purpose of this assignment is to use a LiDAR DEM of the Middleton, Nova Scotia, Canada area in order to model flooding. All of the steps in this process including hydrological flattening, hydrological enforcement, watershed delineation and incremental flooding will be discussed. This is completed at various resolutions and compared to datasets with barriers.
The data used for this study was collected by the Applied Geomatics Research Group within the Nova Scotia Community College. The lab design and workflow were supplied by the Centre of Geographic Sciences in Lawrencetown, Nova Scotia. The hydrological vectors including streams, rivers and lakes were extracted from the Nova Scotia Geomatics Centre Base Data (1:10,000 scale), which was derived from 1:40,000 scale aerial photography spanning 1987 to 1997. The LiDAR DEM was collected on August 18, 2010 with an Optech ALTM 3100 system. The DEM was made from ground points using TIN interpolation and has been transformed into orthometric heights (CGVD28).
Hydrological flattening involves assigning elevation values to lakes and streams to remove “bad” points: when LiDAR is flown, the returns over water can be highly variable. For the purposes of watershed modelling, the lakes need to be at a lower elevation than the streams flowing into them, or inaccurate flow-direction models can result. In the case of rivers we do not want to assign a single elevation but instead apply an elevation gradient derived from the DEM. Model Builder was used for many of these processes.
To get a representative water level, the first step is extracting shoreline elevations using the downloaded waterbodies file. If such a file is not available, the polygons can be drawn by hand. Use the polygon vertices (Feature Vertices To Points) to extract DEM elevations (Add Surface Information). Next, the minimum elevation is determined using Summary Statistics and Get Field Value: Summary Statistics creates a table of all the nodes and their DEM values, which is then queried for the minimum. Once this value is determined, the lake can be extracted from the DEM using Extract by Mask (with the lake polygon). The Raster Calculator can then assign the minimum value to every non-null cell in the extracted raster (i.e. where the lake was extracted). Once the lake’s values have been reassigned, it can be mosaicked back into the original DEM using the Mosaic tool.
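As a rough illustration of the logic (not the ArcGIS toolchain itself), the lake step can be sketched in NumPy, assuming the DEM is an elevation array and the lake polygon has already been rasterized to a boolean mask:

```python
import numpy as np

def flatten_lake(dem, lake_mask):
    """Assign every lake cell the minimum shoreline elevation.

    dem       : 2-D array of elevations
    lake_mask : boolean array, True inside the lake polygon
    Mirrors the ArcGIS chain (extract shoreline elevations ->
    Summary Statistics minimum -> reassign -> mosaic back).
    """
    # Shoreline cells: non-lake cells that touch a lake cell (4-neighbour check).
    padded = np.pad(lake_mask, 1, constant_values=False)
    touches = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
               padded[1:-1, :-2] | padded[1:-1, 2:])
    shoreline = touches & ~lake_mask
    min_elev = dem[shoreline].min()          # the "Summary Statistics" minimum
    out = dem.copy()
    out[lake_mask] = min_elev                # reassign, then "mosaic" back
    return out

dem = np.array([[5.0, 5.0, 5.0],
                [4.0, 9.0, 6.0],     # noisy 9.0 return over the water
                [5.0, 5.0, 5.0]])
lake = np.zeros_like(dem, dtype=bool)
lake[1, 1] = True
print(flatten_lake(dem, lake))       # centre cell becomes 4.0
```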
River flattening is completed similarly to lake flattening, but requires that an elevation trend be applied. The river shoreline elevations are derived as in the lakes method (Feature Vertices To Points, Add Surface Information). These points are then passed to the Trend tool, which interpolates a raster covering the water body with a gently sloping surface. The surface was then clipped to the river outline and mosaicked into the DEM that had been updated with the flattened lakes.
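A first-order trend surface amounts to a least-squares plane fit to the shoreline points. Here is a minimal NumPy sketch under that assumption (toy arrays, not the ArcGIS Trend tool):

```python
import numpy as np

def trend_flatten_river(dem, river_mask):
    """Replace river cells with a first-order (planar) trend surface
    fitted to the river's shoreline elevations, so the water surface
    slopes downstream instead of holding a single flat value.
    """
    # Shoreline cells: non-river cells adjacent to a river cell.
    padded = np.pad(river_mask, 1, constant_values=False)
    touches = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
               padded[1:-1, :-2] | padded[1:-1, 2:])
    shoreline = touches & ~river_mask
    ys, xs = np.nonzero(shoreline)
    # Least-squares fit of z = a*x + b*y + c (a first-order trend)
    A = np.column_stack([xs, ys, np.ones(xs.size)])
    coeff, *_ = np.linalg.lstsq(A, dem[ys, xs], rcond=None)
    ry, rx = np.nonzero(river_mask)
    out = dem.copy()
    out[ry, rx] = coeff[0] * rx + coeff[1] * ry + coeff[2]
    return out

# River running left to right, dropping 1 m per column; middle row is water.
dem = np.array([[10.0, 9.0, 8.0, 7.0],
                [20.0, 30.0, 25.0, 40.0],   # noisy water returns
                [10.0, 9.0, 8.0, 7.0]])
river = np.zeros_like(dem, dtype=bool)
river[1, :] = True
print(trend_flatten_river(dem, river)[1])   # ~[10, 9, 8, 7]
```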
Hydrological Flattening Discussion
This results in flattened lakes and flattened but trending rivers, removing irregularities and erroneous elevations that may affect water flow direction. This process could be improved by increasing the accuracy of the river and lake outlines, either by digitizing them manually or by finding a larger-scale water body file. The false elevation artifacts could otherwise create false contour lines (depending on the scale), and if orthorectifying an image to the DEM, the false elevations in the river could skew the results and affect the pixel accuracy.
Watersheds were created using the Flow Direction, Flow Accumulation, Fill, and Watershed tools in ArcGIS. The pour point is placed on the area of highest accumulation. Barriers such as bridges and culverts can appear solid in LiDAR when in reality water can flow under or through this infrastructure. Knowing the locations of culverts would significantly aid in modelling water flow, especially if culvert size were included. Refer to the image to the right, focusing on areas with red dots where the road crosses the river.
This process could be further improved by analyzing additional accumulation zones where there are no rivers (areas with barriers such as terrain or roads) and then using aerial photography to search for evidence of culverts and adding them into the model. More realistic models may result from larger DEMs which cover a larger catchment area.
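To make the Flow Direction step concrete, here is a toy D8 implementation in NumPy using the ArcGIS direction codes. It is illustrative only — edge cells are simply left as 0, and a filled (depressionless) DEM is assumed, which is why the Fill tool must run first:

```python
import numpy as np

# D8 codes in the ArcGIS convention: E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128
D8 = [( 0, 1,   1), ( 1, 1,   2), ( 1, 0,   4), ( 1, -1,   8),
      ( 0, -1, 16), (-1, -1, 32), (-1, 0,  64), (-1,  1, 128)]

def flow_direction(dem):
    """D8 flow direction: each interior cell points at the neighbour with
    the steepest distance-weighted drop. A code of 0 marks a flat cell or
    sink (which the Fill tool is meant to remove beforehand).
    """
    rows, cols = dem.shape
    fdir = np.zeros((rows, cols), dtype=np.int32)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best_slope, best_code = 0.0, 0
            for di, dj, code in D8:
                dist = np.hypot(di, dj)                    # 1 or sqrt(2)
                slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                if slope > best_slope:
                    best_slope, best_code = slope, code
            fdir[i, j] = best_code
    return fdir

dem = np.array([[9.0, 9.0, 9.0],
                [9.0, 5.0, 1.0],    # centre cell drains east
                [9.0, 9.0, 9.0]])
print(flow_direction(dem)[1, 1])    # 1 (east)
```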
The flattened LiDAR DEM with barriers removed supplied the most realistic results, followed by the 20m DEM and finally the LiDAR DEM without barriers removed. The results could be better if the DEM covered a larger area and allowed for additional watersheds to be recognized. Were this the case the 2m Flattened DEM with removed barriers may show trends more similar to the 20m DEM, assuming that the 20m DEM recognizes more regional trends.
20m DEM with Barriers, no flattening
2m DEM with barriers and flattening
2m DEM with barriers removed and flattening
Incremental flooding was emulated using a for loop to cycle through minimum and maximum flood levels at set flood increments. The flood mask was created by using the Raster Calculator to create a mask covering a DEM cell value range (e.g. <10): cells below that value are given a value of 1, and cells outside that range are given no value. Areas such as water treatment plants hold water but are not connected to the flood points, so to ensure that only areas with connectivity to the pour point are flooded, connectivity must be tested. To test connectivity, the Raster to Polygon tool was used to create a polygon from the flood raster, and any polygons connected to the pour point were selected using the Spatial Join tool: the flood polygon is the Target Feature, the pour point is the Join Feature, the join is One-to-One with “Keep all target features” turned off, and Intersect is the spatial operation. The output contains only the polygons that intersect the pour point.
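The loop-plus-connectivity idea can be sketched without ArcGIS at all. The following NumPy flood fill over a toy DEM is purely illustrative (the real workflow tests connectivity on polygons, not raster cells):

```python
import numpy as np
from collections import deque

def flood_extent(dem, pour_point, level):
    """Cells below `level` that are hydrologically connected to the pour
    point (4-neighbour flood fill). Disconnected low areas, such as a
    water-treatment basin, are excluded -- the same role the Raster to
    Polygon + Spatial Join connectivity test plays in the workflow.
    """
    below = dem < level                       # the binary flood mask
    flooded = np.zeros_like(below)
    if not below[pour_point]:
        return flooded                        # pour point itself is dry
    queue = deque([pour_point])
    flooded[pour_point] = True
    rows, cols = dem.shape
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < rows and 0 <= nj < cols and below[ni, nj] and not flooded[ni, nj]:
                flooded[ni, nj] = True
                queue.append((ni, nj))
    return flooded

dem = np.array([[2.0, 9.0, 1.0],    # top-right low spot is NOT connected
                [2.0, 9.0, 9.0],
                [2.0, 2.0, 9.0]])
pour = (2, 0)
for level in (3.0, 10.0):           # the incremental-flood loop
    print(level, int(flood_extent(dem, pour, level).sum()))
```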
Most of the results are similar. The unflattened DEM flooded to a lesser extent, likely due to a false barrier created by elevation spikes in a narrow part of the river; this is visible below and is possibly even more obvious at greater flood increments. Additionally, the unflattened imagery often resulted in a strip of elevated terrain in the middle of the river. The cause of this is uncertain.
The clearest difference occurs in the hydrologically flattened images, between the barrier and removed-barrier models. The model with barriers removed penetrates the east-west highway and floods the other side before the model with barriers intact does, as demonstrated below. This is an expected result. The barrier removal process was very conservative; removing larger portions of the barrier may produce a larger gap between the flood results of the two models.
Overall the flooding mostly matches the watershed. The extents of flooding go slightly beyond the watershed and are less constrained by ridged terrain perpendicular to the river, since that terrain lies parallel to the flood direction.
Flooding as it crosses the barrier. Barrier present (pink) vs barrier removed (blue)
Results and Discussion
The tools used in Model Builder for the construction of this project and the modelling of flood data were very effective. I ran all of the tools from within Model Builder so that I could customize a few features. For lake flattening I ran an iterator that selected the next row in a table before running the model, until the row value was null; this removed the need to manually select each lake before running the flattening model. For flood modelling I used integers instead of fractional values to avoid data-compatibility issues, since the flood model toolbox accepts only integers. The flood model was set up to use metres rather than centimetres; if greater precision were required, it could be set to use centimetres. Additionally, instead of parsing the raster name and appending the water level value for the polygon output, I manually changed the base of the name and the suffix using %title% (i.e. DEM_%WaterLevel%).
Issues I encountered
1) Attempting to run a <8 m flood raster produced errors. The cause was that there was no data below 8 m, which resulted in no polygons; the Raster to Polygon conversion does not work if the resulting shapefile is empty.
2) I did not run the Fill tool on the raster before running the Flow Direction tool. This resulted in a black-and-white flow direction raster with values ranging from 1 to 255. I am not sure why this is, but the flow accumulation analysis did not work.
The only concern with the results is that the barriers may not be a representative of the width of the actual culvert or flow under a bridge. Smaller flood increments may provide more information about the behavior of the flooding around terrain features.
It should be noted that for the 20m DEM it would have been appropriate to place a second pour point directly north of the east side of the east west road feature in the study area.
Comparison to USGS Base Specifications
This data was processed to an acceptable level as set out by the USGS LiDAR base specifications. Breaklines included rivers and lakes, and were used to create flat, single-elevation lakes and flat but trending rivers. Keep in mind that this method is designed for terrestrial accuracy and may not represent the true water surface elevation. Coupling the information with gauge data could provide more accurate results for hydrologic purposes.
The chosen study area for this assignment was near Memphis, Tennessee, including three scenes running north to south along the Mississippi River. The Landsat imagery was downloaded both from libra.developmentseed.org and explorer.usgs.gov/. The images downloaded were on path 23, rows 35 (August 8, 2014), 36 (August 22, 2014) and 37 (August 22, 2014). The listed images were selected for low snow and cloud cover. Rows 35 and 37 were downloaded from Libra, and row 36 was downloaded from the USGS explorer site as the download from Libra would not work. The Mississippi River was chosen for its variety of land cover types and relatively flat terrain.
The downloaded scenes are in WGS 1984 and projected in UTM Zone 20. Data could have been reprojected in either ERDAS or PCI to NAD83, but this step was not deemed necessary for this project.
Figure 1 shows the study area near Memphis Tennessee
Figure 2 shows the location of the landsat imagery before any corrections or mosaicking.
Figure 3 shows an overview of the mosaicked landsat 8 imagery in PCI.
Figure 4 shows an overview of the results of mosaicking in ERDAS using feathering and histogram balancing.
Mosaicking in PCI
In order to mosaic in PCI Geomatica, the Focus interface must first be loaded so that the Landsat images can be imported for atmospheric correction. To import the Landsat imagery, the *.tar.gz file (or *.tar.bz file from Libra) must be extracted. The extracted bands include a *.mtl file which can be dragged into the PCI Focus environment to open either Pan, Multispectral or Thermal imagery.
The ATCOR correction was then run on these files with default settings. The results were better than the originals, but only marginally (see right).
Next a project was created in OrthoEngine, the images to be mosaicked were defined, and the automatic mosaic tool was used with defaults and the overlap method to produce a new image. The results were nearly seamless with good colour balancing (shown in Figure 3).
Mosaicking in ERDAS
In order to mosaic in ERDAS, the Landsat imagery must be imported using the Manage Data –> Import Data tool. Once complete, the multispectral imagery is available for manipulation. Due to extension limitations, and the determination in PCI that the atmospheric correction was not absolutely required, the ERDAS data was not corrected. To accomplish this using the available licenses would have required dark pixel subtraction or TOA correction using self-built models in the Spatial Modeler.
The MosaicPro tool was used for the mosaicking of the three images. The results were not seamless, and despite trying many settings, good results were not obtained. Default settings versus averaging and feathered seamline methods are shown to the right. The best results were obtained using default settings; attempting colour corrections seemed to make the results worse.
Non-default settings attempted include seamline settings such as feathered edges, overlapping edges, and averaging, as well as colour corrections including colour balancing, histogram balancing, and illumination equalization.
Manual radiometric adjustments may aid in better results (i.e adjusting the most northern image manually) but this was not done for this project.
Comparison of ERDAS to PCI
Overall the better result was achieved through PCI Geomatica. The colour balancing was more complete and the result was more seamless. In ERDAS there was a clear difficulty with making the images radiometrically similar using automatic settings.
The ERDAS workflow is slightly more intuitive with the use of MosaicPro, but the difficulty with showing demos is a clear setback. ERDAS has functionality for this, but I had difficulty making it work.
PCI creates an adaptive seamline between the two images instead of the straight line used by ERDAS, which may be related to the better results. The downside with PCI is that the resulting image is darker than in ERDAS, but this is easy to fix with simple contrast and brightness adjustments.
It is possible that atmospheric correction may have provided an improved ERDAS result and with proper extensions this could be better tested.
Additionally, the import workflow in PCI is very user friendly and does not require an export step within the software. Using downloads from the Libra website there were no compatibility issues in PCI, while ERDAS required file conversion from .tar.bz to .tar.gz, which resulted in the corruption of one file. PCI avoids this by having the user extract the data before importing it into the software.
Visual comparisons of some attempted mosaicking settings are shown below. PCI obtained great results on the first try so the settings were not changed resulting in only one PCI mosaic image.
LiDAR – Sensor Calibration Using TerraMatch
The purpose of this study is to account for and correct various sensor misalignment issues. There are three types of sensor calibration: rigorous (sensor specific, uses range data), quasi-rigorous (error specific, no range data) and non-rigorous (purely data driven, no trajectory, etc.).
The corrections will be completed in TerraMatch, which uses a quasi-rigorous technique and mostly corrects boresight errors, including roll, pitch and heading errors.
Boresight Roll Error – positions shift across-track due to angling of the laser (perpendicular to the flightline).
Boresight Pitch Error – the position shift increases with distance from nadir (parallel to the flightline).
Boresight Heading Error – positions rotate around the mirror.
Within TerraMatch there are two approaches to calibration: surface matching (comparison of TINs of overlapping flightlines) and tie-line matching, which compares features like rooftops. This project focuses on the surface matching method.
The data for this study was collected by the Applied Geomatics Research Group out of Middleton, Nova Scotia. The LiDAR points were collected on Julian day 332, 2007. The data being used is only a subset of the entire survey. The provided points are projected in UTM Zone 20, NAD83 (CSRS98), while the trajectory data is not projected and has a datum of NAD83 (CSRS98). The LiDAR system used was the Optech ALTM 3100.
Steps for completing mis-alignment corrections in TerraMatch include:
2) Open and create a terrascan project
3) Add your area of interest as a reference layer
4) Import points to your terrascan project (ensure a proper projection) including the creation of blocks
5) Remove unwanted blocks and recreate the study area including only required blocks
6) Create a subset of calibration blocks and load them into a new project. Complete following steps on the calibration blocks
7) Add trajectory data and separate into flightlines either by clipping to polygon or splitting at a laser gap of 20 seconds.
8) Deduce which flightline each point belongs to using the macro tool. Run the modify line numbering, deduce line numbers, and delete by flightline macros: classify points from any line to a default class, assign flightlines based on time stamp using the deduce flightline macro, and finally delete any unclassified points.
9) Classify imagery by running a classification macro. Process flightlines individually.
10) Determine main error types (roll, pitch, heading) by viewing profiles. Horizontal displacement between the flightlines in a profile perpendicular to flightpath will demonstrate roll errors. Horizontal displacement parallel to flightpath suggests pitch errors. For more information about errors, look here.
11) Open TerraMatch and run the Measure Match tool. Use project points (make sure your project is pointing to the appropriate folder and has been saved) with a maximum triangle size of 20. Use the ground class. Run the tool and save a *.txt report.
12) Run the Find Match tool on project points using the ground class, over the whole dataset, for the error type determined in step 10. Apply the correction, writing to a new directory instead of overwriting the current data if preferred.
13) Point your project to your calibration blocks and run the measure match tool a second time on your corrected calibration blocks. Save your results and compare to the original measure match report
14) Repeat steps 11–13 for each error type, making sure to use the most recently corrected calibration blocks
15) Run this process on your entire project once you have tested and chosen your settings
16) Correct for z-shift using the Find Match tool. Test it using pre- and post-correction Measure Match reports.
17) If the correction has shifted the data so that it is outside of the blocks you need to delete and reimport the points.
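Step 7's rule of splitting the trajectory at a laser gap of 20 seconds can be sketched in plain Python. This is a hypothetical stand-in for the TerraScan behaviour, assuming the timestamps are sorted GPS seconds:

```python
def split_flightlines(timestamps, gap=20.0):
    """Return a flightline number for each (sorted) timestamp.

    `timestamps` are seconds along the trajectory; a pause longer than
    `gap` marks the turn between two passes, so a new line begins there.
    """
    lines, current = [], 1
    for i, t in enumerate(timestamps):
        if i > 0 and t - timestamps[i - 1] > gap:
            current += 1                 # laser gap -> new flightline
        lines.append(current)
    return lines

# Three pulses, a 90-second turn, then two more pulses on the next pass.
times = [100.0, 100.5, 101.0, 191.0, 191.5]
print(split_flightlines(times))          # [1, 1, 1, 2, 2]
```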
As discussed briefly above, it is important to determine what types of errors your dataset is experiencing. To do this you should look at profiles of the LiDAR point cloud both parallel and perpendicular to the flight path, ideally in several locations, with the data displayed by flightline. I focused on block 5 for my misalignment analysis. I set my top view window to display points by class and my profile to show points by flightline, which was helpful for choosing misalignment profile locations. It is ideal to test this in areas with peaked roofs. Only the ground and building classes were turned on to aid in this process.
Profile 1 will be the best for showing Roll errors as it is perpendicular to the flight path.
Profile 2 will show Pitch errors as it is parallel to the flight path.
Although the mis-alignment errors are minimal in this dataset, it is fairly clear that the points from various flight paths line up less accurately in profile 1. This is consistent with the conclusion that roll causes the largest mis-alignment errors in this dataset.
Perpendicular Profile of Uncorrected Points (Profile 1)
Perpendicular Profile of Corrected Points (Profile 1)
Parallel Profile - Uncorrected Points (Profile 2)
Parallel Profile of Corrected Points (Profile 2)
Overall the calibration decreased the magnitude of the shift between flightlines. This is obvious both visually and from the generated reports. Generally an acceptable level of error is 15 cm, meaning that the dz RMS values for your Find Match analysis need to be lower than 15 cm. In this case, the corrections completed on the calibration blocks reduced the RMS value from 0.1712 to 0.0799. The RMS for the entire survey should be the same, since the same correction was applied. The shift by flightline is shown in the summary tables below.
The similarity of the original and final shift magnitudes between the calibration and survey blocks shows that the calibration dataset is a good approximation.
No major issues were encountered during this project. One difficulty was displaying the LiDAR points by flightline. The flightline numbering of points was completed properly, but it was preferred that the data be displayed in swaths. The data could not be displayed this way even though it was sorted by time stamp, which should have allowed a consistent display of points by swath.
Additionally when running the find match correction tool the inclusion of buildings did not decrease the shift as was hoped. This meant that the data was rerun using the ground surface only. Adjusting classification settings could have aided in this.
Because it was determined that pitch errors were minimal they were not corrected for.
It is clear below that the corrections decreased the magnitude of the shift between flightlines. The average original shift was 0.2734 and decreased to 0.09761. The values seen below are averaged by block, while the shifts listed above are averaged by point. The dz RMS went from 0.1712 to 0.0799.
Each step of the calibration decreased the magnitude of the shift, and it is possible that a better classification could produce a larger decrease in flightline shift. The data went from 0.2716 to 0.09712, then to 0.08562 once the Z shift was applied. The numbers shown below are averaged by block, while the numbers just listed are averaged over all points.