Friday, July 24, 2015
This morning I had a driver's test. I passed!! When I got home, my mom would not let me leave the house without making myself--and her--an omelet. I arrived at the CIS building around 12:00. However, I had forgotten that Wednesdays are pizza days at CIS. I would have gladly eaten a second lunch, but just as I walked into the food lounge, pizza time was over and the lecture in the presentation room (where food is not allowed) began. My stomach got the better of me and I snuck a slice in.

The presentation itself was very interesting: Dr. Dube presented on the possible impacts of solar weather and the need to develop an early warning system. He stated that, should we be struck by a large solar flare such as the Carrington Event, the largest storm ever to hit Earth, it would take societies across the globe around ten years to fully recover. He also mentioned that in the future, if and when we begin to colonize Mars, the thin atmosphere of the Red Planet will do very little to protect against solar storms, and X-rays from a storm could fry any life on the planet. Protective barriers or shelters would need to be extremely thick--if a large storm were to hit Mars, around thirty feet of concrete would be minimally sufficient to protect colonists.

I think Dr. Dube's presentation was very effective. He did several things to make it so: first, he showed us how important his research area is, exploring many possible applications in depth. Second, he did not bore us to sleep with math and astronomical jargon, but rather kept his speech simple enough for the audience to easily understand--after all, the speaker's objective is not to look smart, but to inform the audience. Finally, he used gestures, changes in tone, and even some humor to keep the audience engaged. When I begin planning and practicing my own presentation, I will try to incorporate all of these techniques.
Today we took a field trip to the Mees Observatory in the Bristol Hills. I got to the CIS building at 3:00 so that I would not exceed the maximum recommended hours per day (8), and did research on which classification method would be best for my project. So far, the only one that I can figure out how to use is parallelepiped classification, which often misclassifies vegetation and urban building pixels.
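Parallelepiped classification assigns a pixel to a class only when every band value falls inside that class's per-band [min, max] box. The following is just a minimal Python/NumPy sketch of the idea (the class boxes and pixel values are invented for illustration, not taken from my data); pixels outside every box stay unclassified, which is part of why the method struggles with vegetation and urban pixels:

```python
import numpy as np

def parallelepiped_classify(pixels, class_boxes, unclassified=-1):
    """Assign each pixel to the first class whose per-band
    [lo, hi] box contains it; pixels in no box stay unclassified."""
    labels = np.full(len(pixels), unclassified)
    for label, (lo, hi) in class_boxes.items():
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == unclassified)] = label
    return labels

# Two-band toy example: class 0 = "water", class 1 = "forest"
boxes = {
    0: (np.array([0, 0]),   np.array([50, 60])),
    1: (np.array([40, 80]), np.array([120, 200])),
}
pixels = np.array([[10, 30],    # inside the water box
                   [100, 150],  # inside the forest box
                   [200, 250]]) # in neither box -> unclassified
labels = parallelepiped_classify(pixels, boxes)
print(labels)  # [ 0  1 -1]
```

Because the boxes are axis-aligned rectangles in spectral space, classes whose spectra overlap (like some vegetation and built-up materials) end up with overlapping or gapping boxes, producing exactly the misclassifications I have been seeing.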
At 6:00, we left for Mees. We ate dinner at Amiels (free 14" subs!) and continued driving to Bristol. When we got there, we congregated like sheep as swarms of mosquitoes ate us alive. I should have worn long pants. Since it was still light out, we looked at the only thing visible--the moon. Then, several of us went for a hike through the woods on nearby hiking trails. Before it got too dark, we returned to the observatory and looked at stars. When it finally did get dark enough, we could see an arm of the Milky Way, a satellite, and forming stars. Aside from being educational, the visit was a great social event. The summer interns exchanged phone numbers and Snapchat names so that we could keep in touch during and after the internship. Too soon, however, it was time to return home, and we packed into the bus and rolled down the hill.
Today I attempted to construct and execute a decision tree in ENVI to classify the pixels of an image. I hoped to use the spectral data supplied in the Regions of Interest tab to filter the pixels through the classification tree, but when I tried to execute the tree, all pixels would filter into a single category. I then did some research on decision trees in ENVI and watched several tutorials, but none of them explained my error. Therefore, I may have to use a provided image classification system such as Parallelepiped Classification to identify pixels.
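The decision-tree idea itself is simple to sketch outside ENVI. Here is a hedged Python/NumPy toy (the bands, thresholds, and class numbers are invented, not my actual tree): each node is a per-pixel boolean test, and if a node's expression happens to evaluate the same way for every pixel, the entire image pours down one branch into a single class, which is one plausible cause of the behavior I saw:

```python
import numpy as np

def decision_tree_classify(nir, red):
    """Minimal two-node decision tree over NIR and red reflectance.
    Node 1 tests NDVI for vegetation; node 2 tests NIR for water.
    A test that is true (or false) everywhere collapses the output
    into one class."""
    ndvi = (nir - red) / (nir + red)
    return np.where(ndvi > 0.3, 1,          # class 1: vegetation
                    np.where(nir < 0.1, 2,  # class 2: water
                             3))            # class 3: other

nir = np.array([0.5, 0.05, 0.3])
red = np.array([0.1, 0.04, 0.3])
print(decision_tree_classify(nir, red))  # [1 2 3]
```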
Monday, July 20, 2015
This morning I did some research on determining ground truth and incorporating it into usable data, but I am still not completely sure how a ground truth image can be developed; every image has pixels that prevent perfect resolution, and therefore cannot portray the ground truth perfectly. When Dr. Vodecek is available, I will ask him to help me understand ground truth data collection and manipulation. After lunch, I did more research on using the Spectral Angle Mapper tool in ENVI, which I will use to measure the accuracy of the NNDiffuse program. Later, Dr. Vodecek visited us and changed the method and objective of my project. Now, I will use a terrain classification system to identify sections of an image as water, forest, road, etc., and compare the results with those of the same classification system applied to an NNDiffuse-sharpened version of the image. The new direction for this project seems less complicated than the previous one and should not require software other than ENVI.
I have been reconsidering fractal landscapes as a way to test the accuracy of NNDiffuse (measured as the spectral angle difference and Euclidean distance difference between the ground truth and the processed image) as a function of scene complexity (measured as the number of cycles the landscape-generating algorithm has run). If I can figure out a way to create a landscape that is both spectrally and spatially fractal, then fractals could be extremely useful to my project. Today, I did a large amount of research on fractal landscape generating software. Many packages are expensive ($50-$1000) and do not create satellite imagery, but several, such as Grome, can create satellite-style imagery (which can be used for NNDiffuse) as well as near-ground simulations (which can be used to ascertain the ground truth). However, a license for Grome is around 400 euros, which is a bit over budget.
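The "cycles" measure of complexity can be illustrated with the classic midpoint-displacement algorithm. This is only a 1-D Python/NumPy toy under my own assumptions (it is not what Grome or any commercial terrain package actually does): each cycle doubles the number of terrain points and shrinks the random offsets, so the cycle count directly dials the scene's complexity:

```python
import numpy as np

def midpoint_displacement(cycles, roughness=0.5, seed=0):
    """1-D fractal terrain: each cycle inserts a displaced midpoint
    into every segment, doubling detail while halving the offsets."""
    rng = np.random.default_rng(seed)
    heights = np.array([0.0, 0.0])  # flat two-point baseline
    scale = 1.0
    for _ in range(cycles):
        mids = (heights[:-1] + heights[1:]) / 2
        mids += rng.uniform(-scale, scale, size=mids.size)
        out = np.empty(heights.size + mids.size)
        out[0::2], out[1::2] = heights, mids  # interleave old and new
        heights = out
        scale *= roughness
    return heights

terrain = midpoint_displacement(cycles=5)
print(terrain.size)  # 2**5 + 1 = 33 points
```

After n cycles the terrain has 2**n + 1 points, so complexity grows geometrically with the cycle count, which is exactly the independent variable I would want to sweep.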
Thursday, July 16, 2015
All image-sharpening programs have varying levels of accuracy, and different programs are appropriate for different scenarios. However, determining the accuracy of image sharpening programs is currently rather subjective. In this project, we hope to quantify the accuracy of one such program--NNDiffuse Pan Sharpening--by applying it to a synthetic image and comparing the results to the ground truth, or near-surface observations. This will allow us to develop a system that quantifies the accuracy of NNDiffuse in a variety of situations. Once our study is complete, further research may be conducted to measure the accuracy of other image sharpening programs and to identify which software is the most appropriate for a given situation.
Today, I searched the internet for synthetic satellite imagery or a means to make such images. I emailed Dr. Vodecek and asked him to tell me where to find one. He referred me to a local RIT student and gave me his contact information. This student has DIRSIG (Digital Imagery and Remote Sensing Image Generation) synthetic images of farmland that I could use to study the differences between ground truth and the NNDiffuse satellite image. I cannot use the DIRSIG software myself because it requires time-consuming training and would cost $2500. I also looked into other possible synthetic imagery software such as Mirametrics and MATLAB, but they do not have the same image generation capabilities as DIRSIG.
Since yesterday afternoon, I have been thinking about how to quantify the accuracy of the NNDiffuse software. I thought of a new approach to this challenge: if I can generate a fractal landscape, maybe I can measure the relationship between Euclidean distance and scene complexity. Fractal landscapes could be useful for my project because I can easily control scene complexity. However, fractal geometry cannot test spectral angle, and therefore could only provide half of the information that I would like to use for my analysis. I have therefore concluded that, although fractal landscapes could be useful, I will only attempt to use them if I have extra time before the end of the internship, since they can only provide half the data necessary to test the accuracy of the NNDiffuse algorithm.
Wednesday, July 15, 2015
I spent this morning considering how to quantify the accuracy of the NNDiffuse Pan Sharpening algorithm. I read and reread several scholarly articles in an attempt to understand how others have quantified the accuracy of sharpening techniques. Many of them used the difference between the spectral angles of the "ground truth" and the sharpened image (ground truth is what is actually on the ground in the area depicted by the image). They also used the difference in Euclidean distance between two random points of the ground truth and of the sharpened image. In order to determine ground truth, Dr. Vodecek suggested using synthetic imagery. This will allow me to accurately compare features of the ground truth to those of the sharpened image.
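Treating each pixel as a spectral vector, the two metrics from the articles can be sketched in a few lines of Python/NumPy (the example spectra below are invented): spectral angle ignores overall brightness and responds only to spectral shape, while Euclidean distance responds to both.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; 0 means identical shape,
    regardless of brightness."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding

def euclidean_distance(a, b):
    """Straight-line distance between two spectra; sensitive to both
    brightness and shape differences."""
    return np.linalg.norm(a - b)

truth = np.array([0.2, 0.4, 0.6])
sharpened = np.array([0.4, 0.8, 1.2])  # same shape, doubled brightness
print(spectral_angle(truth, sharpened))      # ~0.0 radians
print(euclidean_distance(truth, sharpened))  # ~0.748
```

The doubled-brightness example shows why the articles report both numbers: the angle says the sharpened spectrum is a perfect shape match, while the distance flags the large radiometric error.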
Tuesday, July 14, 2015
I spent this morning trying to find a way to run the NNDiffuse Pan Sharpening program in ENVI Classic. After spending several hours searching for it, I asked Dr. Vodecek where to find it. We soon came to the conclusion that ENVI Classic is too outdated to include the NNDiffuse algorithm, so I will have to use the updated version of ENVI for my project. I then used the data I had downloaded on Wednesday from USGS to test out the image sharpening algorithm.
Friday, July 10, 2015
The biggest advancement we made today was figuring out how to layer bands into a single file. This morning, Anna and I worked to learn how to do this, but were interrupted by a fire drill and eventually resorted to asking for Dr. Vodecek's help. However, Dr. Vodecek was unfamiliar with the newest version of ENVI and decided to ask other ENVI users if they could help. After helping us, he came to the conclusion that the older version of ENVI (ENVI Classic) would better suit the purposes of our research. After a brief introduction to the software (which has many of the same features and is, in my opinion, easier to use than the current version of ENVI), we began to work on our projects. I first downloaded satellite data of New York City from USGS to test the functionality of NNDiffuse Pan Sharpening, which uses the algorithm that I will test through my research. However, I was unable to load a color image into ENVI Classic, and therefore had to use a low-resolution (30 m pixel) black-and-white image in conjunction with the sharper (15 m pixel) black-and-white Band 8 image for the image sharpening function. As a result, the sharpened image seemed almost as precise as the Band 8 image. When I use a low-resolution color image instead of a black-and-white one, the data displayed through the color should make up for the slight loss in accuracy due to the sharpening function.
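Conceptually, layer stacking just combines co-registered single-band rasters into one multi-band cube. A rough Python/NumPy sketch of the idea, under the assumption that the bands already share one grid and pixel size (this is not ENVI's actual implementation):

```python
import numpy as np

def layer_stack(*bands):
    """Stack co-registered single-band arrays into one
    (rows, cols, bands) cube; every band must lie on the same grid."""
    shape = bands[0].shape
    if any(b.shape != shape for b in bands):
        raise ValueError("all bands must be co-registered to one grid")
    return np.dstack(bands)

# Three toy 2x2 "bands" standing in for red, green, and blue
red = np.zeros((2, 2))
green = np.ones((2, 2))
blue = np.full((2, 2), 2.0)
cube = layer_stack(red, green, blue)
print(cube.shape)  # (2, 2, 3)
```

The shape check matters in practice: Landsat's 15 m panchromatic band and 30 m multispectral bands have different grids, so they must be resampled to a common pixel size before any such stack.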
Thursday, July 9, 2015
This morning I used ENVI for several hours to try to understand it. I downloaded new satellite data from the USGS website, then practiced with several image sharpening systems. Image sharpening algorithms take some time to process images, so in the future, cropping the images before this stage should greatly reduce the processing time. After lunch, I spent the remainder of the day attempting to layer the bands into a single file to facilitate data manipulation. Although my attempts were in vain, I did figure out many of the less intuitive functions of ENVI, such as topographic mapping, band algebra, and regions of interest (and their corresponding functions).
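The savings from cropping are easy to quantify: a spatial subset is just an array slice, and a sharpening algorithm run on the subset only touches the pixels inside the window. A small Python/NumPy illustration (the scene size and window coordinates here are arbitrary):

```python
import numpy as np

# Stand-in for a full 1000 x 1000 pixel single-band scene
scene = np.arange(1000 * 1000).reshape(1000, 1000)

# Crop a 200 x 200 window before any expensive processing step
window = scene[200:400, 500:700]

print(window.size / scene.size)  # 0.04 -> 96% fewer pixels to process
```

If processing time scales roughly with pixel count, this crop alone would cut the runtime by about a factor of 25.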
Tuesday, July 7, 2015
We began today with team-building activities in the Red Barn, where I learned the names of several other interns while helping solve challenges presented by a faculty member. After this, I set up my RIT account so that I could download ENVI, a program that I will use for my project to analyze remote sensing data. For the remainder of the day, we played with the program and tried to figure out how to use it.