Friday, August 7, 2015
I received the DIRSIG image this morning! After downloading and opening it, I realized that there are only three possible ROIs I can use: trees, dirt, and grass. I would have preferred more ROIs, but for now I will have to make the best of three. Several of the images were also misaligned, so I had to rotate and stretch them so that the pixels were the same size and the images were oriented correctly. I then randomly selected 200 pixels containing trees, 200 containing dirt, and 200 containing grass, and created an ROI for each terrain type. However, when I tried to classify the NNDiffuse image, none of the classification systems would generate anything but a preview of the classified image. This means the classification is running, but the full result will not display. On Monday, I will continue troubleshooting.
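The random selection of 200 pixels per terrain class can be sketched in a few lines of numpy. This is just an illustration, not the actual workflow (which used ENVI's ROI tools); the `labels` array here is a made-up stand-in for the DIRSIG ground-truth map, and `sample_roi` is a hypothetical helper name.

```python
import numpy as np

# Hypothetical ground-truth label map standing in for the DIRSIG truth
# image: 0 = trees, 1 = dirt, 2 = grass.
rng = np.random.default_rng(42)
labels = rng.integers(0, 3, size=(512, 512))

def sample_roi(label_map, class_id, n=200, rng=None):
    """Randomly pick n (row, col) coordinates belonging to one class."""
    rng = rng or np.random.default_rng()
    rows, cols = np.nonzero(label_map == class_id)
    idx = rng.choice(len(rows), size=n, replace=False)
    return np.stack([rows[idx], cols[idx]], axis=1)

rois = {name: sample_roi(labels, cid, 200, rng)
        for cid, name in enumerate(["trees", "dirt", "grass"])}
print({k: v.shape for k, v in rois.items()})  # each ROI: (200, 2)
```

Sampling without replacement (`replace=False`) guarantees 200 distinct pixels per class.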
These past three days after vacation, I have been waiting for the DIRSIG image. I had to do some research on the properties of an image cube, then sent the necessary data to a CIS student, who is creating a synthetic image of farmland. The synthetic image is important because it has perfect ground truth, which I can use to create regions of interest (ROIs) from correctly labeled pixels. I can then use these ROIs to classify the remainder of the image, and use the same ROIs to classify the non-sharpened image. After comparing the ground truth to each classification, I can do some statistical analysis (for instance, how much more accurate the NNDiffuse-sharpened classification was than the non-sharpened one). While waiting, I finished as much of my PowerPoint presentation as I could; I am currently slightly more than halfway through the slides. I also completed the final version of my abstract, which is now posted to this blog. If all goes well, I will receive the DIRSIG image tomorrow so that I can conduct the remainder of my research.
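The comparison step described above reduces to measuring how many classified pixels agree with the ground truth. A minimal sketch of that accuracy comparison, with small made-up class maps in place of the real sharpened and unsharpened classifications:

```python
import numpy as np

def overall_accuracy(predicted, truth):
    """Fraction of pixels whose predicted class matches ground truth."""
    return float(np.mean(predicted == truth))

# Toy 3x3 class maps (0 = trees, 1 = dirt, 2 = grass); values are invented.
truth       = np.array([[0, 0, 1], [1, 2, 2], [0, 1, 2]])
sharpened   = np.array([[0, 0, 1], [1, 2, 2], [0, 2, 2]])  # 1 pixel wrong
unsharpened = np.array([[0, 1, 1], [2, 2, 2], [0, 2, 2]])  # 3 pixels wrong

print(overall_accuracy(sharpened, truth))    # 8/9 ≈ 0.889
print(overall_accuracy(unsharpened, truth))  # 6/9 ≈ 0.667
```

In practice a full confusion matrix per class would be more informative than a single overall number, but the idea is the same: perfect synthetic truth makes every pixel checkable.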
Determining the accuracy of pan-sharpening programs is rather subjective because pan-sharpening has largely been used to enhance visual analysis. However, there is some research on one such program, NNDiffuse Pan-Sharpening, that tested its expected effectiveness using the standard spectral methods of Euclidean Distance and Spectral Angle Mapper. In this project, we extend those results to test how accurate NNDiffuse is in practice through its effect on the accuracy of image classification. NNDiffuse was applied to a synthetic image where perfect truth of the scene content is known. Different strategies for identifying training and testing pixels for the unsharpened and sharpened images were defined and assessed to quantify the effects of NNDiffuse on the accuracy of image classification. Application of these strategies is expected to improve land cover classification results using pan-sharpened images.
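The two spectral metrics named in the abstract, Euclidean Distance and Spectral Angle Mapper (SAM), are easy to state concretely. A hedged sketch with invented four-band spectra (the function names and values are illustrative, not from the project):

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance between two spectra (per-band vectors)."""
    return float(np.linalg.norm(a - b))

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra.
    Insensitive to overall brightness, since scaling a spectrum
    does not change its direction."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

original  = np.array([0.10, 0.15, 0.30, 0.55])  # hypothetical 4-band spectrum
sharpened = np.array([0.12, 0.14, 0.31, 0.53])  # after pan-sharpening
brighter  = original * 2.0                      # same shape, double brightness

print(euclidean_distance(original, sharpened))
print(spectral_angle(original, sharpened))
print(spectral_angle(original, brighter))  # ~0: SAM ignores uniform scaling
```

The brightness-invariance of SAM is why it complements Euclidean distance: together they separate changes in spectral shape from changes in magnitude.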
Tuesday, August 4, 2015
Band 1: 0.45-0.52μm (blue).
Provides increased penetration of water bodies, as well as supporting analysis of land use, soil, and vegetation characteristics.
Band 2: 0.52-0.60μm (green).
This band spans the region between the blue and red chlorophyll absorption bands and therefore corresponds to the green reflectance of healthy vegetation.
Band 3: 0.63-0.69μm (red).
This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination.
Band 4: 0.76-0.90μm (reflective infrared).
This band is responsive to the amount of vegetation biomass present in the scene. It is useful for crop identification and emphasizes soil-crop and land-water contrasts.
Band 5: 1.55-1.75μm (mid-infrared).
This band is sensitive to the amount of moisture in plants and is therefore useful in crop drought and plant vigor studies.
Band 6: 10.40-12.50μm (thermal infrared).
This band measures the amount of infrared radiant flux emitted from the Earth's surface.
Band 7: 2.08-2.35μm (mid-infrared).
This is an important band for the discrimination of geologic rock formations. It is effective in identifying zones of hydrothermal alteration in rocks.
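The band notes above point out that healthy vegetation absorbs strongly in Band 3 (red) and reflects strongly in Band 4 (near-infrared). That contrast is the basis of the widely used NDVI index. A small sketch with made-up reflectance values (not from any real scene):

```python
import numpy as np

# Toy reflectance values for Band 3 (red) and Band 4 (near-infrared).
red = np.array([[0.08, 0.20], [0.05, 0.30]])
nir = np.array([[0.50, 0.22], [0.45, 0.28]])

# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation absorbs red and
# reflects NIR strongly, so it scores close to +1; bare soil or water
# scores near 0 or below.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

Pixels like the top-left one (low red, high NIR) come out near +1, consistent with the vegetation-discrimination role the band descriptions assign to Bands 3 and 4.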
Monday, August 3, 2015
I have been experimenting with Parallelepiped Classification, a supervised classification system. It takes a while to classify every pixel in the image, so I used a subset of the image (similar to cropping a photo) and processed that spatial subset; fewer pixels to classify means the classification finishes sooner. However, the Parallelepiped Classification software does not seem to work properly. It often misclassifies pixels and covers the majority of an image with a single class. It is possible that this error is caused by the ROIs I chose, which may be contaminated by mixed pixels (for instance, a pixel containing both a building and a tree, but labeled as a building).
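The idea behind parallelepiped classification is simple: each class gets a per-band min/max "box" derived from its training ROI, and a pixel is assigned to the first class whose box contains it in every band. A minimal sketch of that rule, with an invented 2x2x3 image and hypothetical box statistics (not ENVI's actual implementation):

```python
import numpy as np

def parallelepiped_classify(image, roi_stats, unclassified=-1):
    """Assign each pixel the first class whose per-band min/max box
    contains it; pixels falling outside every box stay unclassified.
    image: (rows, cols, bands); roi_stats: {class_id: (mins, maxs)}."""
    rows, cols, _ = image.shape
    out = np.full((rows, cols), unclassified, dtype=int)
    for cid, (mins, maxs) in roi_stats.items():
        inside = np.all((image >= mins) & (image <= maxs), axis=-1)
        out[(out == unclassified) & inside] = cid
    return out

# Toy 2x2 image with 3 bands; boxes built from hypothetical ROI stats.
img = np.array([[[0.1, 0.2, 0.3], [0.9, 0.8, 0.7]],
                [[0.5, 0.5, 0.5], [0.1, 0.8, 0.3]]])
stats = {0: (np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.3, 0.4])),  # e.g. trees
         1: (np.array([0.7, 0.7, 0.6]), np.array([1.0, 1.0, 1.0]))}  # e.g. dirt
print(parallelepiped_classify(img, stats))  # [[0, 1], [-1, -1]]
```

This also shows why contaminated ROIs hurt so much: a few mixed pixels stretch a class's min/max box, and an oversized box can swallow most of the image, exactly the "one class covers everything" symptom described above.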