Imaging, Photogrammetry, Surveying and GPS - GIS Data Collection for the 21st Century

By John W Allan, Leica Geosystems GIS & Mapping Division, UK (Posted June 2002)

As GIS becomes a standard management tool throughout many organisations, questions are beginning to be asked about the creation, maintenance and quality of the data that resides in the database. This paper looks at the various methods of GIS database creation with an emphasis on new update and maintenance techniques from the surveying world. It also addresses the use of new imagery data sources for data capture and discusses new capabilities for managing and updating the quality of existing databases.
Data or Information?
The information in the database goes through four distinct phases, as shown in Figure 1. Not only does the data have to be created in the first place (from which the information is gathered), but there is also the issue of updating and maintaining the information. Updating and maintenance can be carried out using a variety of methods, depending upon the scale of the data and its usage. Some of these methods are covered in later sections of this paper. However, it is important to understand exactly what is meant by updating. Is it to improve the initial quality of the digital data, or is it to add new information that was either previously unavailable or was not obvious from the initial source? The former is covered under the Quality Control section of this paper. The latter, however, can be addressed by many techniques.

For large coverage areas, imagery from satellites or high-altitude cameras can be used. This covers a large area but only contains simple information. At the opposite end of the scale, where complex, "subjective" information is required, it is GPS that provides the tool and the operator that provides the human decision-making process that results in the "complexity" of information. The coverage in this instance is normally linked to a single "point" or object, representative of the GPS receiver's accuracy. In between these extremes lie the aerial sources of data, which can be captured at many scales to suit the application. Obviously, the scale of the imagery defines the level of information that can be captured. The other two phases from the diagram, analysis/visualization and application/information usage, depend very much on the software application being used and as such will not be covered in this paper in any detail.
Acquisition & Creation of GIS databases from imagery
The era of 1-meter satellite imagery presents new and exciting opportunities for users of spatial data. With satellites from Space Imaging (IKONOS), ImageSat (EROS) and DigitalGlobe Inc (QuickBird) already in orbit, capturing imagery at up to 61 cm resolution, high-resolution imagery will add an entirely new level of geographic knowledge and detail to the intelligent maps and GIS databases that we create from imagery.

Some of the latest developments also include digital airborne sensors. These new devices can be thought of as "airborne satellites", utilizing the same digital imagery capture systems as satellites but offering the capture flexibility of aerial systems (see Figure 3). These new sensors will capture multispectral data at extremely high rates and at resolutions of the operator's choosing. When linked with data from airborne Lidar systems, which can give centimeter-accuracy DTMs, they will provide the basis for highly accurate but low-cost base mapping anywhere in the world.
Is high-resolution imagery making a difference?
There is no doubt that the GIS press has been deluged with high-resolution imagery for the last 12 months. Showing an application with an imagery backdrop provides an immediate visual cue for readers. Without the imagery backdrop, the context is lost and the basic map, comprising polygons, lines and points, becomes more difficult for the layman to interpret. It is the context or visual cues that provide the useful information, and it is this information that is the inherent value of the imagery.

The higher the resolution of the imagery, the more man-made objects can be identified. The human eye - the best image processor of all - can quickly detect and identify these objects. If the application is therefore one that just requires an operator to identify objects and manually add them into the GIS database, then the imagery is making a positive difference. It is adding a new data source for the GIS Manager to use.

However, if the imagery requires information to be extracted from it in an automated or semi-automated fashion (for example, a land cover classification), it is a different matter. If the same techniques that were developed for earlier, lower resolution satellite imagery are used on the high-resolution imagery (such as maximum likelihood classification), the results can actually create a negative impact. Whilst lower resolution imagery isn't affected greatly by artifacts such as shadows, high-resolution data can be. Lower resolution data also "smoothes" out variations across ranges of individual pixels, allowing statistical processing to create effective land cover maps. Higher resolution data doesn't do this - individual pixels can represent individual objects like manhole covers, puddles and bushes - and contiguous pixels in an image can vary dramatically, creating very mixed or "confused" classification results. There is also the issue of linear feature extraction. Lines of communication on a lower resolution image (such as roads) can be identified and extracted as a single line. However, on a high-resolution image, a road comprises the road markings, the road itself, the kerb (and its shadow) and the pavement (or sidewalk). A very different method of feature extraction is therefore needed. Figure 4 shows the range and variety of information contained in a high-resolution image and the problems caused by shadows, overhanging trees and parked cars.
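
As a point of reference, the following is a minimal sketch of the per-pixel maximum likelihood idea mentioned above, written in Python/NumPy. The class statistics and band values are invented for illustration; a real classifier would estimate them from training areas. On coarse imagery this pixel-wise logic works well, but on 1 m data it produces exactly the "confused" results described, because neighbouring pixels vary so strongly.

```python
import numpy as np

# Hypothetical per-class training statistics: mean band values and covariance.
classes = {
    "water":  (np.array([30.0, 20.0, 10.0]),    np.eye(3) * 25.0),
    "forest": (np.array([40.0, 80.0, 35.0]),    np.eye(3) * 60.0),
    "urban":  (np.array([120.0, 115.0, 110.0]), np.eye(3) * 200.0),
}

def ml_classify(pixel):
    """Assign the class whose Gaussian model gives the highest log-likelihood."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in classes.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

print(ml_classify(np.array([45.0, 85.0, 30.0])))   # -> "forest"
```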

It's not just the spatial resolution that can affect the usage of the imagery. With 11-bit imagery becoming available, the ability of the GIS to work with high spectral content imagery becomes key. 11-bit data means that up to 2048 levels of grey can be stored and viewed. If the software being used to view the imagery assumes it is 8-bit (256 levels), then it will either a) display only the information at or below level 255 (creating either a black or very poor image) or b) try to compress the 2048 levels into 256, also reducing the quality of the displayed image considerably. Having 2048 levels allows more information in shadowy areas to be extracted, as well as enabling more precise spectral signatures to be defined to aid in feature identification. However, without the correct software, this added "bonus" can easily turn into a problem.
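
To make the arithmetic concrete, here is a small NumPy sketch (with simulated image data) of the two failure modes described and of a simple contrast stretch that retains the shadow detail. Real display software will differ in detail.

```python
import numpy as np

# Simulated 11-bit image: integer values 0..2047 (placeholder data).
img11 = np.random.randint(0, 2048, size=(512, 512)).astype(np.uint16)

# (a) Software assuming 8-bit data and clipping at level 255 loses almost all
#     radiometric detail above that level.
clipped = np.clip(img11, 0, 255).astype(np.uint8)

# (b) Naively compressing 2048 levels into 256 collapses every 8 input levels
#     into 1 output level, discarding subtle differences.
compressed = (img11 // 8).astype(np.uint8)

# (c) A contrast stretch maps the occupied part of the histogram (here the
#     2nd-98th percentile) onto 0-255, keeping detail in shadow areas visible.
lo, hi = np.percentile(img11, (2, 98))
stretched = np.clip((img11 - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```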

One other area that needs to be addressed in terms of usage is the actual availability of data to the end user. Application papers tend to show us only the finished results, without giving any indication of the actual project itself and the problems that may have been encountered in the running of the project. In many instances, availability of data is limited, especially from spaceborne sensors, and users have to look elsewhere for data.

An increasingly common source of image data is therefore existing aerial survey photographs. With the massive improvement in scanning technology and orthophoto production software, these old photo archives can readily be made available to GIS users. No licensing fees are required (as the organization generally owns the photography) and the data can easily be made available internally within the organization. The only downside is the question of how recent the imagery is. Contrast this with high-resolution satellite data. If it is not archived data, then the data has to be acquired, which is dependent upon both the weather and other demands on the satellite. If it is acquired, it then has to be processed and shipped out via tape or CD/DVD (as bandwidth is limited) and, finally, its usage is limited by licensing - single user, multiple user, site usage, etc. Pricing is therefore a key issue. The message here is clear: high-resolution satellite data will not replace other sources of data - it will in fact only complement them.

Finally, the issue of digital versus analog is also being addressed in this new digital age. Old airphotos need to be scanned to convert them to a digital format. New digital airborne cameras remove this step, providing high-quality airborne imagery at any user-defined resolution. Depending upon the application and the levels of accuracy needed, cameras ranging in price from the hundreds to the millions of dollars can be used. The drop in price and increased availability of GPS units is also aiding the growth in the use of low-cost digital cameras for GIS applications. Attached to remotely controlled aircraft or helicopters, they can provide very high-resolution, targeted aerial surveys for specific applications.

Information (and its extraction) is the key element

As mentioned above, high-resolution imagery from both aerial and spaceborne sensors presents a challenge to the user community in terms of information extraction. The human eye and brain can identify objects in the image, but the computer finds it difficult. If we cannot automate this process, then we will most certainly lose out on some of the major economic benefits of the imagery.

If the human brain can do it, why can't the computer? Well, it actually can, if it uses rule- or knowledge-based processing, just as the human brain does. The brain can make a decision on an image very quickly by understanding and using context. If we see grassland in the center of an urban development, we can easily decide that it is a park, as opposed to agricultural land. To make this decision we are using knowledge and experience to create expertise, and computer-based expert systems are beginning to emerge that mimic this process.

For many years, expert systems have been used successfully for medical diagnoses and various information technology (IT) applications but only recently have they been applied successfully to GIS applications.

Statistical image processing routines, such as maximum likelihood and ISODATA classifiers, work extremely well at performing pixel-by-pixel analyses of images to identify land-cover types by common spectral signature. Expert-system technology takes the classification concept a giant step further by analyzing and identifying features based on spatial relationships with other features and their context within an image.

Expert systems contain sets of decision rules that examine spatial relationships and image context. These rules are structured like tree branches with questions, conditions and hypotheses that must be answered or satisfied. Each answer directs the analysis down a different branch to another set of questions.

The beauty of an expert system is that because the rules, also called a knowledge base, are created by true experts (such as foresters or geologists), the system can be used successfully by non-experts.

In terms of satellite images, the knowledge base identifies features by applying questions and hypotheses that examine pixel values, relationships with other features and spatial conditions, such as altitude, slope, aspect and shape. Most importantly, the knowledge base can accept inputs of multiple data types, such as digital elevation models, digital maps, GIS layers and other pre-processed thematic satellite images, to make the necessary assessments.

In forestry, for example, an expert classification might identify one stand of trees as a specific species because they grow only at certain elevations and on southwest-facing slopes of less than 30 degrees. Another region within the image having similar spectral values might be interpreted as grass because it only occurs next to roadways in suburban areas. And another category may be labeled as an orchard because the trees grow in regular patterns.
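
As a rough illustration of what such decision rules look like in practice, here is a minimal Python sketch based on the forestry example above. The class names, thresholds and input attributes are hypothetical; a real knowledge base would be authored by a domain expert and would draw on the DEM, GIS layers and imagery described earlier.

```python
def classify_cell(spectral_class, elevation_m, slope_deg, aspect_deg,
                  near_road, in_suburb, regular_planting):
    """Apply simple contextual rules to one raster cell or image object."""
    if spectral_class == "tree":
        # Rule 1: a tree-like response counts as the target species only in a
        # given elevation band, on southwest-facing slopes of less than 30 degrees.
        if 800 <= elevation_m <= 1500 and slope_deg < 30 and 180 <= aspect_deg <= 270:
            return "species_x_stand"
        # Rule 2: trees growing in a regular planting pattern are an orchard.
        if regular_planting:
            return "orchard"
        return "other_woodland"

    if spectral_class == "grass":
        # Rule 3: grass next to roads in suburban areas is verge/lawn, not agriculture.
        return "urban_grass" if (near_road and in_suburb) else "agriculture"

    return "unclassified"

# Each answer steers the analysis down a different branch of the rule tree.
print(classify_cell("tree", elevation_m=1200, slope_deg=20, aspect_deg=225,
                    near_road=False, in_suburb=False, regular_planting=False))
```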

Because many of these examples rely on information contained in data other than satellite images, it's easy to understand that expert-system technology is more of a decision-support tool than merely an image classifier. In fact, a satellite image isn't even necessary. With the help of expert-system technology, the military has already benefited from cross-country mobility knowledge bases that consider soil type, land cover, elevation data and current weather reports to determine optimal routes for a certain type of vehicle to traverse an area. The beauty of the expert system, however, is that whenever new sources of information become available, they can easily be incorporated. For example, even though the mobility analysis can be carried out without imagery, the accuracy of the analysis can be affected by the ground conditions. If a satellite image can be used to extract moisture content (i.e. the "mud" factor), then it can be added to the knowledge base and used as part of a rule. One other key element of the expert system is the "traceability" of the process. Figure 6 shows that by simply querying the resultant map, the rule that was used to create the output can be displayed and verified.
3D information
One area of high growth is the population of GIS databases with 3D data. This is becoming more common due to the increased availability of stereo imagery and the drop in cost of software that enables 3D feature extraction and model texturing. The use of 3D obviously helps in certain decision-making processes, and the speed of uptake of 3D feature extraction has been phenomenal. As computer hardware increases in capability and 3D PC games become the norm, so the GIS industry wants to look and work in a "real world" environment. Not only is simple 3D data required to give slope, aspect and a range of other environmental inputs, but it is now widely being augmented with texture-based models, based on the 3D measurements, to create flythroughs and visualizations that bring a new realism to GIS.
Quality Control of data
Capturing the spatial information, though, is just one part of the process. The quality of the information needs to be checked to ensure its accuracy. The best way of doing this is to use survey data, which is traditionally of very high accuracy. Unfortunately, integrating survey data with GIS data has been difficult in the past. New software is, however, becoming available that helps with this, allowing data from both GPS and TPS to be included in the GIS and used to assess the accuracy of the existing data, whether by linking survey points to points in the data or by applying a "quality measure" to the original data.

Most GIS databases were created through the digitizing of paper maps. Whilst the results seem relatively accurate, few are checked to ensure true absolute accuracy. In Figure 8, we can see a "typical" GIS display, showing houses and land parcels. At first glance it seems a good representation of what is on the ground. However, when we overlay ground survey data, as in Figure 9, it becomes obvious that the absolute positions of the buildings on the ground differ from their locations in the database, and we can easily see the error or offset. What the new generation of software will do is enable spatial data stored in the GIS either to be corrected using the survey data, where the actual points, lines and polygons are shifted to their correct locations, or to be "virtually" modified. This means that the database is not actually changed, but that features in it are "linked" to true survey points, enabling the correct positional information to be used in any GIS process.
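
As a rough sketch of the two options just described (with invented coordinates), the following computes the offsets between surveyed control points and their digitized counterparts, reports the RMS error, and shows both a simple block shift and the "virtual" linking alternative. Real software would use rigorous transformations and maintain explicit per-feature links.

```python
import numpy as np

# Surveyed control points (true ground positions) and the corresponding
# digitized points in the GIS database, in the same projected CRS (metres).
survey    = np.array([[1000.0, 2000.0], [1050.0, 2020.0], [1030.0, 1980.0]])
digitized = np.array([[1002.4, 1997.1], [1052.6, 2017.3], [1032.1, 1977.4]])

offsets = survey - digitized                   # per-point error vectors
print("RMS error (m):", np.sqrt((offsets ** 2).sum(axis=1).mean()))

# "Correct" option: shift the stored geometry by the mean offset.
mean_shift = offsets.mean(axis=0)
corrected = digitized + mean_shift

# "Virtual" option: leave the database unchanged and keep a link
# (feature id -> surveyed coordinate) for use in downstream processing.
links = {fid: tuple(pt) for fid, pt in enumerate(survey)}
```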
Maintenance
In addition to the quality control issue, there is the maintenance of the data. When the database is created, only the information that is to hand can be included. From imagery and paper maps, this may be very simple descriptive information that is relevant to the original map scale. What cannot be input is the more subjective information that can only be gathered by a human actually looking at the feature and making an assessment. For instance, a forester might want to include in his database the species of a tree, its health, the ground conditions and the effect of any local pollution.

This information cannot be gathered from maps or images; it must be collected on the ground using mobile GIS. Figure 10 shows a typical low-cost, GIS-oriented GPS system that enables this. These systems, capable of 1-2 m accuracy on the ground, now use common GIS software for data collection and simple PDA hardware. The ability to customize the software allows field-based data collection to be carried out simply and easily, and such systems are being widely used by utility companies, local governments and environmental organizations around the world.
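
The sketch below illustrates, under assumed field names, the kind of record such a field system might capture: a GPS-derived position plus the subjective attributes a surveyor judges on the ground. All attribute names and values are hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass
class TreeObservation:
    lat: float              # WGS84 position from the GPS receiver (1-2 m accuracy)
    lon: float
    species: str            # judged by the surveyor on the ground
    health: str             # e.g. "good", "stressed", "dead"
    ground_conditions: str
    pollution_effect: str

obs = TreeObservation(lat=52.20, lon=0.12, species="oak", health="stressed",
                      ground_conditions="waterlogged", pollution_effect="none observed")
print(asdict(obs))          # ready to load as a point feature with attributes
```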
Future trends
The remote sensing and photogrammetric industries are going through a massive change, becoming more closely integrated with a fast-growing and competitive GIS industry. The surveying industry is about to enter the same phase. What is clear is that imagery, and the technology associated with preparing it for GIS and extracting information from it, is becoming a key part of GIS systems worldwide. It is important that GIS software changes to take account of this new and extensive user requirement and that the industry as a whole begins to provide services that match the demands of these new users. What we shall see over the next 2-3 years is:

* A much broader range of imagery becoming available, based on new and existing sources of data

* More regular revisit capabilities, enabling higher frequency change detection and monitoring applications

* The growth of specialist services using new digital camera/GPS technology to provide targeted, low cost aerial surveys for specific applications

* More integrated GPS applications within GIS, imaging and photogrammetry software to improve quality and positional accuracy.

About the Author:
John W Allan
Leica Geosystems, GIS & Mapping Division
Telford House
Fulbourn
Cambridge CB1 6DY
United Kingdom.
