Friday, August 7, 2015

Day 19, Aug 7, 2015

I received the DIRSIG image this morning!  After downloading it and opening it up, I realized that there are only three possible ROIs that I can use: trees, dirt, and grass.  I would have preferred more ROIs, but for now I will have to make the best of only three.  Several of the images were also misaligned, so I had to rotate and stretch them so that the pixels were the same size and the images were oriented correctly.  Then, I randomly selected 200 pixels containing trees, 200 containing dirt, and 200 containing grass, and created an ROI for each terrain type.  However, when I tried to classify the NNDiffuse image, none of the classification methods would generate anything more than a preview of the classified image.  This suggests that the classifier itself is working, but the full result will not display.  On Monday, I will continue troubleshooting.
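As a note to myself, the random sampling step amounts to the following sketch (Python/NumPy rather than ENVI; the label map and class codes are hypothetical stand-ins for the DIRSIG ground truth):

```python
import numpy as np

# Hypothetical ground-truth label map: 0 = trees, 1 = dirt, 2 = grass
truth = np.random.randint(0, 3, size=(512, 512))

rng = np.random.default_rng(42)
roi_pixels = {}
for class_id, name in enumerate(["trees", "dirt", "grass"]):
    rows, cols = np.nonzero(truth == class_id)               # every pixel of this class
    picks = rng.choice(len(rows), size=200, replace=False)   # 200 random, non-repeating indices
    roi_pixels[name] = list(zip(rows[picks], cols[picks]))   # (row, col) coordinates for the ROI

print({name: len(px) for name, px in roi_pixels.items()})    # 200 pixels per terrain type
```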

Day 15 - 18, Aug 3 - Aug 6, 2015

These past few days after vacation, I have been waiting for the DIRSIG image.  I had to do some research on the properties of an image cube, then sent the necessary data to a CIS student who is creating a synthetic image of farmland.  The synthetic image is important because it has perfect ground truth, which I can use to create regions of interest made up of correctly labeled pixels.  I can then use these ROIs to classify the rest of the sharpened image, and apply the same ROIs to classify the unsharpened image.  After comparing the ground truth to each classification, I can do some statistical analysis (such as how much more accurate the classification of the NNDiffuse image was than that of the unsharpened image).  While waiting, I finished as much of my PowerPoint presentation as I could; I am currently slightly more than halfway through the slides.  I also completed the final version of my abstract, which is now posted to this blog.  If all goes well, I will receive the DIRSIG image tomorrow so that I can conduct the remainder of my research.
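Once both classifications exist, the comparison against ground truth comes down to a per-pixel confusion matrix and an overall accuracy figure.  A minimal sketch of that bookkeeping (the NumPy label maps here are hypothetical stand-ins for the ENVI classification results):

```python
import numpy as np

def classification_accuracy(truth, predicted, n_classes=3):
    """Overall accuracy and confusion matrix from two label maps of equal shape."""
    confusion = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth.ravel(), predicted.ravel()):
        confusion[t, p] += 1                       # rows = true class, columns = predicted class
    overall = np.trace(confusion) / confusion.sum()
    return overall, confusion

# Hypothetical label maps: 0 = trees, 1 = dirt, 2 = grass
truth = np.random.randint(0, 3, size=(256, 256))
classified_sharpened = np.random.randint(0, 3, size=(256, 256))
classified_original = np.random.randint(0, 3, size=(256, 256))

acc_sharp, _ = classification_accuracy(truth, classified_sharpened)
acc_orig, _ = classification_accuracy(truth, classified_original)
print(f"NNDiffuse image accuracy: {acc_sharp:.3f}  unsharpened image accuracy: {acc_orig:.3f}")
```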

Final Abstract

Determining the accuracy of pan-sharpening programs is rather subjective because pan-sharpening has largely been used to enhance visual analysis. However, prior research on one such program--NNDiffuse Pan-Sharpening--tested its expected effectiveness using the standard spectral measures of Euclidean distance and the Spectral Angle Mapper. In this project, we extend those results to test how accurate NNDiffuse is in practice through its effect on the accuracy of image classification. NNDiffuse was applied to a synthetic image for which perfect truth of the scene content is known. Different strategies for identifying training and testing pixels in the unsharpened and sharpened images were defined and assessed to quantify the effects of NNDiffuse on the accuracy of image classification. Application of these strategies is expected to improve land cover classification results using pan-sharpened images.

Tuesday, August 4, 2015

Important Stuff I Don't Want to Forget

Band 1: 0.45-0.52μm (blue).
Provides increased penetration of water bodies, as well as supporting analysis of land use, soil, and vegetation characteristics.

Band 2: 0.52-0.60μm (green).
This band spans the region between the blue and red chlorophyll absorption bands and therefore corresponds to the green reflectance of healthy vegetation.

Band 3: 0.63-0.69μm (red).
This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination.

Band 4: 0.76-0.90μm (reflective infrared).
This band is responsive to the amount of vegetation biomass present in the scene. It is useful for crop identification and emphasizes soil-crop and land-water contrasts.

Band 5: 1.55-1.75μm (mid-infrared).
This band is sensitive to the amount of moisture in plants and is therefore useful in crop drought and plant vigor studies.

Band 6: 10.4-12.5μm (thermal infrared).
This band measures the amount of infrared radiant flux emitted from surfaces.

Band 7: 2.08-2.35μm (mid-infrared).
This is an important band for the discrimination of geologic rock formations. It is effective in identifying zones of hydrothermal alteration in rocks.
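For quick reference, the same band ranges as a small lookup table (a Python sketch; the helper function is just for convenience):

```python
# Landsat TM band ranges in micrometres, from the notes above
LANDSAT_BANDS = {
    1: (0.45, 0.52, "blue"),
    2: (0.52, 0.60, "green"),
    3: (0.63, 0.69, "red"),
    4: (0.76, 0.90, "reflective infrared"),
    5: (1.55, 1.75, "mid-infrared"),
    6: (10.40, 12.50, "thermal infrared"),
    7: (2.08, 2.35, "mid-infrared"),
}

def bands_covering(wavelength_um):
    """Return the band numbers whose range contains the given wavelength."""
    return [b for b, (lo, hi, _) in LANDSAT_BANDS.items() if lo <= wavelength_um <= hi]

print(bands_covering(0.65))   # [3] -- the red chlorophyll absorption band
```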

Monday, August 3, 2015

Day 14: Thursday, July 23, 2015

I have been experimenting with parallelepiped classification, a supervised classification method.  It takes a while to classify every pixel in the image, so I used a subset of the image (similar to cropping a photo) and processed the spatial subset--fewer pixels to classify means the classification finishes sooner.  However, the parallelepiped classifier does not seem to work properly.  It often misclassifies pixels and assigns the majority of the image to a single class.  It is possible that this error is caused by the ROIs I chose, which may be contaminated by mixed pixels (for instance, a building and a tree in the same pixel, but labeled only as a building).
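To make sure I understand what the parallelepiped rule actually does, here is a minimal sketch of the idea in Python/NumPy (the image array and ROI spectra are hypothetical): each class gets a per-band min/max box from its training pixels, and a pixel is assigned to a class only if it falls inside that box in every band.

```python
import numpy as np

def parallelepiped_classify(image, class_rois, unclassified=-1):
    """image: (rows, cols, bands) array; class_rois: {class_id: (n_train, bands) spectra}."""
    labels = np.full(image.shape[:2], unclassified, dtype=int)
    for class_id, spectra in class_rois.items():
        lo, hi = spectra.min(axis=0), spectra.max(axis=0)        # per-band bounding box
        inside = np.all((image >= lo) & (image <= hi), axis=2)   # inside the box in every band
        # Later classes overwrite earlier ones where boxes overlap --
        # one reason a single class can end up covering most of the image.
        labels[inside] = class_id
    return labels

# Hypothetical data: a 100 x 100 image with 4 bands and two training ROIs
image = np.random.rand(100, 100, 4)
rois = {0: np.random.rand(200, 4) * 0.6,           # e.g. vegetation spectra
        1: 0.4 + np.random.rand(200, 4) * 0.6}     # e.g. building spectra
labels = parallelepiped_classify(image, rois)
```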

Friday, July 24, 2015

Day 13: Wednesday, July 22, 2015

This morning I had a driver's test.  I passed!!  When I got home, my mom would not let me leave the house without making myself--and her--an omelet.  I arrived at the CIS building around 12:00.  However, I had forgotten that Wednesdays are pizza days at CIS.  I would have gladly eaten a second lunch, but just as I walked into the food lounge, pizza time was over and the lecture in the presentation room (where food is not allowed) began.  My stomach got the better of me and I snuck a slice in.  The presentation itself was very interesting: Dr. Dube spoke on the possible impacts of solar weather and the need to develop an early warning system.  He stated that, should we be struck by a large solar flare such as the Carrington Event, the largest solar storm on record, it would take societies across the globe around 10 years to fully recover.  He also mentioned that in the future, if and when we begin to colonize Mars, the thin atmosphere of the Red Planet will do very little to protect against solar storms, and X-rays from a storm could fry any life on the planet.  Protective barriers or shelters would need to be extremely thick--if a large storm were to hit Mars, around thirty feet of concrete would be minimally sufficient to protect colonists.  Dr. Dube did several things to make his presentation effective: first, he showed us how important his research area is, exploring many possible applications in depth.  Second, he did not bore us to sleep with math and astronomical jargon, but kept his speech simple enough for the audience to easily understand--after all, the speaker's objective is not to look smart, but to inform the audience.  Finally, he used gestures, changes in tone, and even some humor to keep the audience engaged.  When I begin planning and practicing my presentation, I will try to incorporate all of these techniques.

Day 12: Tuesday, July 21, 2015

Today we took a field trip to the Mees Observatory in the Bristol Hills.  I got to the CIS building at 3:00 so that I would not exceed the maximum recommended hours per day (8), and did research on which classification method would be best for my project.  So far, the only one I can figure out how to use is parallelepiped classification, which often fails to classify vegetation and city building pixels correctly.
At 6:00, we left for Mees.  We ate dinner at Amiels (free 14" subs!) and continued driving to Bristol.  When we got there, we congregated like sheep as swarms of mosquitoes ate us alive.  I should have worn long pants.  Since it was still light out, we looked at the only thing visible--the moon.  Then, several of us went for a hike through the woods on nearby trails.  Before it got too dark, we returned to the observatory and looked at stars.  When it finally did get dark enough, we could see an arm of the Milky Way, a satellite, and forming stars.  Aside from being educational, the visit was a great social event.  The summer interns exchanged phone numbers and Snapchat usernames so that we could keep in touch during and after the internship.  Too soon, however, it was time to return home, and we packed into the bus and rolled down the hill.

Day 11: Monday, July 20, 2015

Today I attempted to construct and execute a decision tree in ENVI to classify the pixels of an image.  I hoped to use the spectral data supplied in the Regions of Interest tab to filter the pixels through the classification tree, but when I executed the tree, every pixel fell into a single category.  I then did some research on decision trees in ENVI and watched several tutorials, but none of them explained my error.  Therefore, I may have to use a built-in classification method such as parallelepiped classification to identify pixels.
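For reference, the behavior I was aiming for looks roughly like the sketch below (plain Python rather than ENVI's tree dialog; the band indices and thresholds are made-up placeholders).  Each node tests a band expression and pixels flow down to a leaf class; if an expression evaluates the same way for every pixel--say, a threshold outside the data range--everything lands in one category, which is what I was seeing.

```python
import numpy as np

def toy_decision_tree(image):
    """image: (rows, cols, bands) array; returns an integer class map.

    Hypothetical two-level tree: node 1 tests a near-infrared band,
    node 2 tests a red band.  Thresholds are placeholders, not calibrated values.
    """
    nir, red = image[..., 3], image[..., 2]
    labels = np.zeros(image.shape[:2], dtype=int)
    veg = nir > 0.3                       # node 1: bright in the NIR -> vegetation branch
    labels[veg & (red < 0.2)] = 1         # leaf: healthy vegetation
    labels[veg & (red >= 0.2)] = 2        # leaf: sparse vegetation / bare soil
    labels[~veg] = 3                      # leaf: water, shadow, or built-up
    return labels

labels = toy_decision_tree(np.random.rand(64, 64, 4))
```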

Monday, July 20, 2015

Day 10: Friday, July 17, 2015

This morning I did some research on determining ground truth and turning it into usable data, but I am still not completely sure how a ground truth image can be developed; every real image has finite pixel resolution, so it cannot portray the ground truth perfectly.  When Dr. Vodecek is available, I will ask him to help me understand ground truth data collection and manipulation.  After lunch, I did more research on using the Spectral Angle Mapper tool in ENVI, which I will use to measure the accuracy of the NNDiffuse program.  Later, Dr. Vodecek visited us and changed the method and objective of my project.  Now, I will use a terrain classification method to identify sections of an image as water, forest, road, etc., and compare the results with those from the same classification applied to an NNDiffuse-sharpened version of the same image.  The new direction for this project seems less complicated than the previous one and should not require software other than ENVI.

Day 9: Thursday, July 16, 2015

I have been reconsidering fractal landscapes as a way to test the accuracy of NNDiffuse (measured as the spectral angle and Euclidean distance differences between the ground truth and the processed image) as a function of scene complexity (measured by the number of cycles the landscape-generating algorithm has run).  If I can figure out a way to create a fractal that is both spectrally and spatially fractal, then fractals could be extremely useful to my project.  Today, I did a large amount of research on fractal landscape generating software.  Many packages are expensive ($50-$1000) and do not create satellite-style imagery, but several, such as Grome, can create simulated satellite views (which can be used for NNDiffuse) as well as near-ground simulations (which can be used to ascertain the ground truth).  However, a license for Grome is around 400 euros, which is a bit over budget.
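In case I do come back to this, here is a minimal sketch of the kind of landscape-generating algorithm I have in mind (midpoint-displacement / diamond-square, in Python); the iteration count is the "number of cycles" knob for scene complexity.  Note that this only produces a spatially fractal height map--it says nothing about the spectral side.

```python
import numpy as np

def diamond_square(iterations=7, roughness=0.6, seed=0):
    """Fractal height map of size (2**iterations + 1)^2 built by midpoint displacement.

    More iterations -> finer detail, i.e. higher scene complexity.
    """
    rng = np.random.default_rng(seed)
    size = 2 ** iterations + 1
    z = np.zeros((size, size))
    z[0, 0], z[0, -1], z[-1, 0], z[-1, -1] = rng.uniform(-1, 1, 4)  # seed the corners

    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its four corners + noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                z[y, x] = (z[y - half, x - half] + z[y - half, x + half] +
                           z[y + half, x - half] + z[y + half, x + half]) / 4 \
                          + rng.uniform(-scale, scale)
        # Square step: each edge midpoint = mean of its in-bounds neighbours + noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neighbours = [z[y + dy, x + dx]
                              for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                              if 0 <= y + dy < size and 0 <= x + dx < size]
                z[y, x] = np.mean(neighbours) + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return z

terrain = diamond_square(iterations=7)   # a 129 x 129 height map
```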

Thursday, July 16, 2015

----- ABSTRACT -----



All image-sharpening programs have varying levels of accuracy, and different programs are appropriate for different scenarios.  However, the determination of the accuracy of image sharpening programs is currently rather subjective.  In this project, we hope to quantify the accuracy of one such program--NNDiffuse Pan-Sharpening--by applying it to a synthetic image and comparing the results to the ground truth, or near-surface observations.  This will allow us to develop a system that quantifies the accuracy of NNDiffuse in a variety of situations.  Once our study is complete, further research may be conducted to determine the accuracy of other image sharpening programs and to identify which software is most appropriate for a given situation.

Day 8: Wednesday, July 15, 2015

Today, I searched the internet for synthetic satellite imagery or a means of making such images.  I emailed Dr. Vodecek and asked him where I could find some.  He referred me to a local RIT student and gave me his contact information.  This student has DIRSIG (Digital Imaging and Remote Sensing Image Generation) synthetic images of farmland that I could use to study the differences between the ground truth and the NNDiffuse-sharpened image.  I cannot use the DIRSIG software myself because it requires time-consuming training and would cost $2500.  I also looked into other possible synthetic imagery software such as Mirametrics and MATLAB, but they do not have the same image generation capabilities as DIRSIG.

Day 7: Tuesday, July 14, 2015

Since yesterday afternoon, I have been thinking about how to quantify the accuracy of the NNDiffuse software.  I thought of a new approach to this challenge: if I can generate a fractal landscape, maybe I can measure the relationship between Euclidean distance and scene complexity.  Fractal landscapes could be useful for my project because I can easily control scene complexity.  However, fractal geometry cannot test spectral angle, and therefore could only provide half of the information that I would like to use for my analysis.  I have therefore concluded that, although a fractal landscape could possibly be useful, I will only attempt to use one if I have extra time before the end of the internship, since it can only provide half the data necessary to test the accuracy of the NNDiffuse algorithm.

Wednesday, July 15, 2015

Day 6: Monday, July 13, 2015

I spent this morning considering how to quantify the accuracy of the NNDiffuse Pan Sharpening algorithm.  I read and reread several scholarly articles in an attempt to understand how others have quantified the accuracy of sharpening techniques.  Many of them used the difference in spectral angle between the "ground truth" and the sharpened image (ground truth is what is actually on the ground in the area depicted by the image).  They also used the Euclidean distance between corresponding points of the ground truth and the sharpened image.  In order to determine ground truth, Dr. Vodecek suggested using synthetic imagery.  This will allow me to accurately compare features of the ground truth to those of the sharpened image.
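As a reminder to myself of what those two measures actually compute, here is a minimal sketch in Python (the pixel spectra are hypothetical; in practice they would come from corresponding pixels of the ground-truth and sharpened images):

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two pixel spectra -- the Spectral Angle Mapper measure."""
    cos_theta = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def euclidean_distance(s1, s2):
    """Straight-line distance between two pixel spectra in band space."""
    return np.linalg.norm(s1 - s2)

# Hypothetical 6-band spectra for the same pixel in the truth and sharpened images
truth_pixel = np.array([0.12, 0.15, 0.18, 0.45, 0.30, 0.22])
sharp_pixel = np.array([0.11, 0.16, 0.17, 0.43, 0.31, 0.24])

print(spectral_angle(truth_pixel, sharp_pixel))      # small angle -> spectra point the same way
print(euclidean_distance(truth_pixel, sharp_pixel))  # small distance -> similar magnitudes too
```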

Tuesday, July 14, 2015

Day 5: Friday, July 10, 2015

I spent this morning trying to find a way to run the NNDiffuse Pan Sharpening program in ENVI Classic.  After spending several hours searching for it, I asked Dr. Vodecek where to find it.  We soon came to the conclusion that ENVI Classic is too outdated to include the NNDiffuse algorithm, so I will have to use the updated version of ENVI for my project.  I then used the data I had downloaded on Wednesday from USGS to test out the image sharpening algorithm.

Friday, July 10, 2015

Day 4: Thursday, July 9, 2015

The biggest advancement we made today was figuring out how to layer bands into a single file.  This morning, Anna and I worked to learn how to do this, but were interrupted by a fire drill and eventually resorted to asking for Dr. Vodecek's help.  However, Dr. Vodecek was unfamiliar with the newest version of ENVI and decided to ask other ENVI users if they could help.  After this, he came to the conclusion that the older version of ENVI (ENVI Classic) would better suit our research.  After a brief introduction to the software (which has many of the same features and is, in my opinion, easier to use than the current version of ENVI), we began to work on our projects.  I first downloaded satellite data of New York City from USGS to test the functionality of NNDiffuse Pan Sharpening, which uses the algorithm that I will test through my research.  However, I was unable to load a color image into ENVI Classic, and therefore had to use a low-resolution (30m pixel) black and white image in conjunction with the sharper (15m pixel) black and white Band 8 image for the sharpening function.  As a result, the sharpened image seemed almost as precise as the Band 8 image.  Once I use a low-resolution color image instead of a black and white one, the additional color information should make up for the slight loss of accuracy introduced by the sharpening function.
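For my own notes, the same band-stacking idea can be sketched outside ENVI with the rasterio library (the Landsat file names here are hypothetical):

```python
import rasterio

# Hypothetical single-band Landsat GeoTIFFs to combine into one multispectral file
band_paths = ["LC8_B2.TIF", "LC8_B3.TIF", "LC8_B4.TIF", "LC8_B5.TIF"]

with rasterio.open(band_paths[0]) as first:
    profile = first.profile            # copy georeferencing, dtype, and size from the first band
profile.update(count=len(band_paths))  # the output file gets one layer per input band

with rasterio.open("stacked.tif", "w", **profile) as dst:
    for layer, path in enumerate(band_paths, start=1):
        with rasterio.open(path) as src:
            dst.write(src.read(1), layer)   # write each band into its own layer
```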

Thursday, July 9, 2015

Day 3: Wednesday, July 8, 2015

This morning I used ENVI for several hours to try to understand it.  I downloaded new satellite data from the USGS website, then practiced with several image sharpening tools.  Image sharpening algorithms take some time to process the images, so in the future, cropping the images before this stage should greatly reduce the processing time.  After lunch, I spent the remainder of the day attempting to layer the bands into a single file to facilitate data manipulation.  Although my attempts were in vain, I did figure out many of the less intuitive functions of ENVI, such as topographic mapping, band algebra, and regions of interest (and their corresponding functions).
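The cropping trick is simple to express outside ENVI as well: a spatial subset is just an array slice, so the sharpening or classification step only ever sees the cropped pixels (the NumPy array below is a hypothetical stand-in for a full scene):

```python
import numpy as np

full_scene = np.random.rand(700, 800, 7)   # hypothetical scene: (rows, cols, bands)
subset = full_scene[200:300, 400:500, :]   # spatial subset, like cropping in ENVI
print(subset.shape)                        # (100, 100, 7): roughly 56x fewer pixels to process
```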

Tuesday, July 7, 2015

Day 2: Tuesday, July 7, 2015

We began today with team-building activities in the Red Barn, where I learned the names of several other interns while helping solve challenges presented by a faculty member.  After this, I set up my RIT account so that I could download ENVI, a program that I will use for my project to analyze remote sensing data.  For the remainder of the day, we played with the program and tried to figure out how to use it.

Day 1: Monday, July 6, 2015

Today was the first day working at the Center for Imaging Science.  We first went on a tour around the building, during which we learned about Chester Carlson, the philanthropist who donated the building.  We also visited many of the labs and learned about many of the projects that go on in the Center, in addition to the equipment used for these projects.  For instance, we saw the three electron microscopes (one scanning, two transmission) and their applications to biology and nanoscience.  After the tour, we worked in the Fishbowl to create a presentation that introduced ourselves and explained the basics of ImageJ.  We also set up our blogs.  We then had lunch, after which we broke up into groups and went to our individual labs.  In Remote Sensing (my group), Dr. Vodecek assigned us our projects.  My project for this summer will be to quantify the accuracy of image sharpening using a high-resolution (15 meter pixel) black and white image and a lower resolution (30 meter) color image.  After we knew our individual projects, Dr. Vodecek gave each of us a scholarly paper relevant to our topic.  Although I did not understand a large portion of the paper (titled "Nearest-neighbor diffusion-based pan-sharpening algorithm for spectral images"), it did give me a good overview of the work that I will be doing and the resources that I can use.