This article contains the following executables: EES.WK3
Casimir C. "Casey" Klimasauskas is the founder of NeuralWare Inc., a supplier of neural-network development systems and services. Prior to that he worked extensively in machine vision and robotics. He can be reached at Penn Center West IV-227, Pittsburgh, PA 15276; 412-787-8222.
One of the key problems facing the machine vision industry is how to detect specific features in an image. It turns out that even finding a simple feature such as an edge can be difficult, if not impossible. Even though a person looking at a video camera image on a monitor can readily see the boundary between two objects, it may not be so easy to find it with an algorithm. Researchers studying how the eye preprocesses information for the brain use the term "early vision" for the function of the eye that assists in pattern recognition. We can use insights from research in early vision to solve the problem of edge detection by computer.
This article presents an engineering approximation of early vision, written from the perspective of an engineer investigating useful applications of neurally inspired technology. Although the techniques discussed here were suggested by the processes of the human eye, they are not intended to be biologically accurate, nor is the solution intended to be biologically plausible. The architecture of the edge detection system presented here is the empirical result of exploring many blind alleys and dead ends. For this reason, some of the assumptions and function values used here may seem somewhat arbitrary. Their only justification is that they worked.
The edge enhancement system presented here can be implemented in various ways, using different technologies. This article presents two implementations in software (one using the C language and the other using Lotus 1-2-3) and also describes a third implementation using commercially available image processing hardware and software.
You might think of the receptive surface of the eye as an array or grid of photoreceptive elements. Light from the outside world impinges on this photoreceptive array and provokes output from each of the array elements. The output of each of these photoreceptors is passed on to another layer of corresponding neurons that work together to enhance the image.
For purposes of this article, we will call our two-layer network the edge enhancement system (EES). Figure 1 shows the effect of one of the EES processing elements. The connections are shown only from the processing element in the center of the array. This processing element excites its nearest neighbors (shown by "+" near the processing elements) and inhibits those a little further away (shown by "-" near the processing elements). The actual strength of the excitation or inhibition, as a function of distance from the center, is shown in Figure 2. When plotted in three dimensions, with the magnitude of the excitation or inhibition as the Z-axis, the resulting shape looks like a Mexican hat. For this reason, it is sometimes called a "Mexican hat function" (MHF) or "on-center off-surround" function. The effect of the Mexican hat function is similar to that of a standard image processing filter known as a "difference of Gaussians."
The connections are shown only for the center processing element in Figure 1; all the other processing elements are connected in a similar fashion.
The EES processing element (shown in Figure 3) computes an internal activation value as the weighted sum of its neighbors' outputs, using the weights connecting them. This internal activation value is then transformed by a nonlinear transfer function (such as the clamped linear one shown) to produce an actual output. The clamped linear transfer function was chosen after sigmoid and hyperbolic tangent transfer functions were tried and found not to work. Notice that the current output of a processing element is fed back onto itself as part of the input for computing its internal activation.
Readers familiar with neural-network types will recognize the EES array of processing elements described as a kind of feedback neural net (similar to a Hopfield network, but with a fixed pattern of interconnections). The connections are such that each processing element is trying to decide if it is on an edge or not. When this constraint is satisfied, the processing elements reach a stable output state.
In operation, the outputs of the receptor array are passed on to the EES. The initial values of each of the elements in the EES are equal to their corresponding values in the receptor array. After initialization, the EES goes through several iterations. During each iteration the processing elements obtain inputs from their neighbors (either excitatory or inhibitory) as well as from their current state. From these inputs, they compute a new output transformed through some nonlinear function. In the eye, these processes evolve as a dynamical system obeying a set of continuous differential equations defined by the synapses connecting them.
To develop a good engineering approximation, we need to be able to implement the EES inexpensively and efficiently. This section looks at techniques for accomplishing this with readily available off-the-shelf image processing hardware and software. The two principal image processing techniques discussed here are convolution and look-up tables.
Convolution is a common and powerful technique for filtering images. Very simply, a convolution is a specially designed matrix (or filter) that is combined together with a portion of an image to compute a transformed pixel value. The filter is centered at each pixel in the initial image and the "convolution" of the filter and the image beneath it is computed. The result is the transformed value of the center pixel. The matrix is then moved one pixel to the right and the transformed value of the next pixel is computed. When the filter has been applied, centered at each pixel in the initial image, the resulting transformed image is complete. This is shown in Figure 4.
The convolution of filter and image is arrived at by computing the pairwise product of corresponding elements of the filter and the underlying portion of the image and summing them together. Notice that this is the same as computing the internal activation of the EES processing element shown in Figure 3. This means we can implement the EES neural net by using standard image processing hardware that supports convolution.
Image filtering by use of convolutions is one of the cornerstones of machine vision. By properly selecting the coefficients of the filter, you can detect edges, create high- or low-pass filters, grow or shrink light regions, and perform quite a variety of other functions. You'll find more information on digital image filtering in the references at the end of this article. In practice, implementing an edge detector using a convolution filter is not difficult. The problem that arises is finding good filter coefficients that do an effective job of finding the edges rather than losing or obscuring them.
A second commonly used technique in image processing is called a "look-up table." Just as the name implies, the value of a pixel is applied to the input of a look-up table (usually the address lines of a static RAM array) and a "transformed" value is produced at the output (the contents of that memory location). The mapping function is typically arbitrary and can be defined by the user.
Look-up tables are used to enhance contrast, convert images to black and white (from gray or color), and produce special effects. The Cherry Coke commercials use this technique to render the can of Cherry Coke in color while everything else is black and white. In our case, a look-up table can implement the clamped linear transfer function. To do so in an 8-bit system, set the mapping RAM to output zero whenever an input in the range 0x80 through 0xff (negative values) is applied. For locations 0x00 through 0x7f, set the mapping RAM to output the same value as the input.
Both the convolution and look-up table techniques are such common tools that both are included in most commercial image processing systems. Together with a pair of frame buffers (also common), we can actually implement a very fast and moderately priced edge enhancement system. Companies that supply suitable hardware and software include Imaging Technologies (ITI), DataCube, Data Translation, and Matrox.
A block diagram of the hardware to implement the EES is shown in Figure 5. To set up the system, we load the block shown as "Filter Coefficients" with the coefficients from the MHF, and the look-up table "Transfer Function" with the values for a clamped linear transfer function. A 7 x 7 convolution is the minimum size to use for the MHF; some of the systems mentioned also support 9 x 9 and larger convolutions.
The sequence of processing is as follows: the digitized image is loaded into the first frame buffer; the convolver applies the MHF filter coefficients to it; the result passes through the transfer-function look-up table into the second frame buffer; the frame buffers are then swapped and the cycle is repeated for several iterations.
Because most systems are designed to work with small integers, it will be necessary to make the appropriate translations. This is an example of how neural-network technology can be grafted into existing technology to enhance its performance. With a little thought, it is possible to apply similar techniques to a variety of other problems.
When I began doing research on these filters for a project we are working on, I wanted something that would be easy to work with and would let me quickly try out a variety of parameters. After a little thought, I decided to try out my new copy of Lotus 1-2-3. The spreadsheet described in this section is the result of those efforts. Though I used Lotus 1-2-3, Release 3.0, it should be possible to implement this with most spreadsheet packages and computers that support a graphing option.
As it turns out, a variety of other techniques could also have been used to do this research. Listing One, page 114, shows a C program that implements the same functions as the spreadsheet, but without the nice graphics or the ability to change data as easily as with the spreadsheet. Both the C language implementation and the spreadsheet implementation deal with the more limited problem of a one-dimensional data stream rather than the two-dimensional image processing we have been discussing. Later in this article, I'll discuss how to extend the one-dimensional model to two dimensions.
Listing Two, page 114, shows the spreadsheet constructed. The numbers to the right are the row numbers. The letters along the bottom represent the column numbers. The Graph capability of Lotus 1-2-3 is used to display the results from processing the one-dimensional signal or data stream. Although not every aspect of the spreadsheet is discussed here, the entire spreadsheet is available on-line or on disk from DDJ.
The first step in constructing the spreadsheet is to set up the "static" data. This consists of all titles, the "Bias" (cell D7), "Low Pass Filter" (range C20..C28), "MHF Filter" (range D20..D28), and "Raw Input Data" (range E16..E124). Everything else in the spreadsheet is computed. This static data is entered exactly as shown. For the Raw Input Data, 0.00 represents "black" and 1.00 represents "white." Intermediate values may be used. Be careful to put everything in the cell locations shown. After the spreadsheet is constructed, you can move things around to suit your taste.
The calculations for the Low Pass Output data are as follows, assuming that you have entered the static data in the rows and columns shown. Enter the following equation in cell B20:
+$C$20*E16+$C$21*E17+$C$22*E18+$C$23*E19+$C$24*E20+$C$25*E21+$C$26*E22+$C$27*E23+$C$28*E24

or with Lotus 1-2-3, Release 3:

@SUMPRODUCT($C$20..$C$28,E16..E24)
Then replicate cell B20 throughout the range B21..B120. This column is labeled as "Graph A" as a reminder of which graph range to use to display it. (Line 14 of the spreadsheet.)
Calculations for the neural-network filter are done in a single step, computing the internal activation and the transfer function together. Enter the following equation in cell F20:
@MAX(0.0,@MIN(1.0,$D$20*E16+$D$21*E17+$D$22*E18+$D$23*E19+$D$24*E20+$D$25*E21+$D$26*E22+$D$27*E23+$D$28*E24-$D$7))
or with Lotus 1-2-3, Release 3:
@MAX(0.0, @MIN(1.0,@SUMPRODUCT($D$20..$D$28,E16..E24)-$D$7))
Then replicate cell F20 throughout the range F21..M120. The @MAX(0,..) clamps the output so it can never go below zero; @MIN(1,..) clamps it so it can never go above one. The sum of the pairwise products (or @SUMPRODUCT) computes the effect of the neighboring processing elements on the current one, and includes feedback of the current state. The -$D$7 subtracts the bias from the internal activation.
The first four and the last four cells in columns F through M simply copy the nearest computed value, which handles the boundaries of the data. To replicate the values at the top of the columns, enter:
Cell F16: +F$20
Then replicate it throughout the range F16..M19. To replicate the values at the bottom of the columns, enter:
Cell F121: +F$120
Then replicate it throughout the range F121..M124. The computation portion of the spreadsheet is now complete. Use the graphing feature of your spreadsheet to construct the graphs described in Figure 6. These two graphs will be used to display the processing effects of various types of inputs and filters on the output data.
EES (Edge Enhancement System):
Format: Lines only
Graph Range Contents
--------------------------------------------------------
B E16..E124 input data
C F16..F124 1st iteration
D H16..H124 3rd iteration
E J16..J124 5th iteration
F M16..M124 8th iteration
HIGHPASS (High Pass Filter):
Format: Lines only
Graph Range Contents
---------------------------------------------
A B20..B120 Low-pass filtered data
B E20..E120 input data
Having constructed the spreadsheet just described, the graph EES should look like the one in Figure 7a. Figure 7b is the same graph with the input range (Range B) reset, so it shows only the output of the network as it evolves. Figure 7c shows the input data and the final (eighth) iteration of the network with intermediate ranges reset (Ranges C, D, E).
The edge data for this experiment was selected to show profiles of two kinds of edges often found in images. In the first kind, light shines on a curved or rounded edge, resulting in a gradation in intensities. The gradually changing light intensities on the left side of the graph are typical of this kind of edge. The second kind of edge is a ragged edge such as from torn metal. This type shows wide variations in gray level due to specular reflectivity as well as sharp variations in the curvature of the material. This is shown as the very noisy edge to the right of center of the diagram. Notice that the EES does a very nice job of sharpening both edges.
To the far right is a small "blip" in intensities. This blip is of the same magnitude as the one in the center of the main pulse. Notice that the EES was able to pick this out, because of its contrast to the background, while ignoring the noise on the top of the pulse. A little experimentation will show that this is quite a powerful technique. The bias value (in cell D7) can be changed to alter the sensitivity to various features. Changing the shape of the MHF also changes the nature of edges detected. Figure 8 shows the MHF used in the filter.
As it turns out, both edges used in this test tend to be difficult to find using standard image processing techniques. Figure 9 shows what happens when a simple Sobel operator is applied to the input. The resulting derivative function does not provide much information about where the edges might be. The problem is not that such a filter is difficult to implement, but that finding a set of coefficients and a filter length that enhance the edges rather than missing or obscuring them is a highly heuristic and often frustrating task. My own experience is that it is sometimes impossible.
There are a variety of experiments you can do with the edge enhancement system. One of the things you will discover is that the system can be sensitive to the shape of the MHF as well as the bias. In some ranges, the detected edges actually set up standing waves that emanate from the edges!
As I mentioned at the beginning of this article, the EES is an engineering approach to image or signal processing based on biological insights. It was developed heuristically starting with a biological model and studying it until the mechanics of its operation were well understood. As such, there is no formal theory of operation.
Functionally what happens is that the MHF acts as a difference of Gaussians filter. The transfer function clips the negative part of the output, leaving only the positive center peak when the filter is directly over the edge. When the MHF is applied again to the resulting output, it will tend to enhance single peaks but reduce plateaus. As such, lots of noise in the vicinity of an edge will be ignored. However, a single substantial variation against a constant background will be significantly enhanced. Iterating on this process eventually results in groups of saturated processing elements, at most the width of the excitatory part of the MHF. All other processing elements are turned off.
The same principles used in developing the one-dimensional EES apply to the two-dimensional version. Instead of using a single-dimensional vector, a two-dimensional matrix is used. One example of a 9 x 9 MHF is shown in Example 1.
The process of computing the convolution (sum of pairwise products) with the corresponding portion of a pixel array is the same. Likewise, the clamped linear transfer function uses the same equation used in the spreadsheet. Implementing this on an image processing system will require converting everything to work with small integers, but the process is quite straightforward.
Insights from the operation of the human eye can be used to build improved image enhancement systems, particularly edge enhancement systems. The basic mechanisms involved are capable of turning an image (or one-dimensional signal) with fuzzy and noisy edges into a sharp clean edge-enhanced image.
This technology can enhance the solution of a variety of problems including character recognition, part tracking, part inspection, printed circuit board inspection, ultrasonic image interpretation, target recognition, and so on. Enough similarities exist with traditional image processing techniques that these neural networks can be implemented with traditional image processing hardware and software systems.
NEURAL NETWORKS AND IMAGE PROCESSING
by Casimir C. Klimasauskas
[LISTING ONE]
/* eesc.c -- edge enhancement system in C */
#include <stdio.h>
#define ASize(x) (sizeof(x)/sizeof(x[0])) /* length of array */
/************************************************************************
* PrintGraph() - print out graph of an array of numbers *
*************************************************************************/
FILE *PFOutFp = {stdout};
int PrintGraph( PFarray, ILen, Iny )
float *PFarray; /* pointer to floating point array */
int ILen; /* length of the array */
int Iny; /* # of points along the y-axis */
{
float FMin, FMax; /* minimum and maximum values */
float FSc, FOff; /* scale & offset */
int Iwx; /* work index */
int Ilx; /* line index */
int ITx; /* temp index */
int IpTx; /* prior line index */
int Ich; /* character to display */
/* --- check that all parameters are "reasonable" --- */
if ( PFarray == (float *)0 || ILen <= 0 || Iny <= 1 )
return( -1 );
/* --- compute minimum and maximum values for array --- */
FMin = PFarray[0];
FMax = PFarray[0];
for( Iwx = 1; Iwx < ILen; Iwx++ ) {
if ( FMin > PFarray[Iwx] ) FMin = PFarray[Iwx];
if ( PFarray[Iwx] > FMax ) FMax = PFarray[Iwx];
}
if ( FMin > 0.0 ) FMin = 0.0;
/* --- from minimum and maximum, compute scale and offset --- */
if ( (FMax - FMin) < .0001 ) {
/* --- assume that all values are the same --- */
FSc = 1.0;
FOff = -FMin;
} else {
FSc = Iny / (FMax - FMin);
FOff = -FSc * FMin;
}
IpTx = 0;
fputc( '\n', PFOutFp );
for( Ilx = Iny; Ilx >= 0; Ilx-- ) {
for( Iwx = 0; Iwx < ILen; Iwx++ ) {
ITx = FSc * PFarray[Iwx] + FOff;
if ( ITx < 0 ) ITx = 0;
if ( ITx > Iny ) ITx = Iny;
if ( Iwx == 0 ) IpTx = ITx;
if ( (IpTx < Ilx && Ilx < ITx) ||
(ITx < Ilx && Ilx < IpTx) ||
(ITx == Ilx) ) Ich = 'x';
else Ich = ' ';
fputc( Ich, PFOutFp );
IpTx = ITx;
}
fputc( '\n', PFOutFp );
}
return( 0 );
}
/************************************************************************
* Convolve() - Convolve a filter with a one-dimensional signal *
*************************************************************************/
int Convolve( PFilter, IFLen, PFInVec, PFResVec, ILen )
float *PFilter; /* pointer to filter coefficients */
int IFLen; /* number of coefficients in filter */
float *PFInVec; /* input signal vector */
float *PFResVec; /* output result vector */
int ILen; /* length of input & result vectors */
{
int IFx; /* filter index */
int IResX; /* result index */
int IResXLast; /* index of last result item */
int IResXFirst; /* index of first result item */
double DRv; /* result value */
/* --- check for things which do not make sense --- */
if ( IFLen <= 0 || ILen <= IFLen ) return( -1 );
if ( PFilter == (float *)0 ||
PFInVec == (float *)0 || PFResVec == (float *)0 ) return( -1 );
/* --- convolve the filter with the signal --- */
IResXFirst = IFLen / 2;
IResXLast = ILen - (IFLen-1)/2;
for( IResX = IResXFirst; IResX < IResXLast; IResX++ ) {
DRv = 0.0;
for( IFx = 0; IFx < IFLen; IFx++ )
DRv += PFilter[IFx] * PFInVec[IResX-IResXFirst+IFx];
PFResVec[IResX] = DRv;
}
/* --- handle left edge specially --- */
DRv = PFResVec[IResXFirst];
for( IResX = 0; IResX < IResXFirst; IResX++ ) PFResVec[IResX] = DRv;
/* --- likewise right edge --- */
DRv = PFResVec[IResXLast-1];
for( IResX = IResXLast; IResX < ILen; IResX++ ) PFResVec[IResX] = DRv;
/* --- we are done --- */
return( 0 );
}
/************************************************************************
* NNCycle() - perform one iteration with Neural Network *
*************************************************************************/
int NNCycle( Bias, PFilter, IFLen, PFInVec, PFResVec, ILen )
float Bias; /* bias for PE */
float *PFilter; /* pointer to filter coefficients */
int IFLen; /* number of coefficients in filter */
float *PFInVec; /* input signal vector */
float *PFResVec; /* output result vector */
int ILen; /* length of input & result vectors */
{
int IFx; /* filter index */
int IResX; /* result index */
int IResXLast; /* index of last result item */
int IResXFirst; /* index of first result item */
double DRv; /* result value */
/* --- check for things which do not make sense --- */
if ( IFLen <= 0 || ILen <= IFLen ) return( -1 );
if ( PFilter == (float *)0 ||
PFInVec == (float *)0 || PFResVec == (float *)0 ) return( -1 );
/* --- convolve the filter with the signal --- */
IResXFirst = IFLen / 2;
IResXLast = ILen - (IFLen-1)/2;
for( IResX = IResXFirst; IResX < IResXLast; IResX++ ) {
DRv = -Bias; /* NN special */
for( IFx = 0; IFx < IFLen; IFx++ )
DRv += PFilter[IFx] * PFInVec[IResX-IResXFirst+IFx];
/* --- apply clamped linear transfer function to output --- */
if ( DRv < 0.0 ) DRv = 0.0; /* NN special */
else if ( DRv > 1.0 ) DRv = 1.0; /* NN special */
PFResVec[IResX] = DRv;
}
/* --- handle left edge specially --- */
DRv = PFResVec[IResXFirst];
for( IResX = 0; IResX < IResXFirst; IResX++ ) PFResVec[IResX] = DRv;
/* --- likewise right edge --- */
DRv = PFResVec[IResXLast-1];
for( IResX = IResXLast; IResX < ILen; IResX++ ) PFResVec[IResX] = DRv;
/* --- we are done --- */
return( 0 );
}
/************************************************************************
* main() - main driver routine *
*************************************************************************/
/* --- Input Signal --- */
float FSignal[] = {
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.15, 0.20, 0.25,
0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75,
0.80, 0.80, 0.80, 0.80, 0.80, 0.80, 0.83, 0.80, 0.70, 0.90,
0.80, 0.80, 0.60, 0.90, 0.40, 0.60, 0.30, 0.10, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.10, 0.25, 0.30, 0.10, 0.20, 0.20, 0.20,
0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20, 0.20 };
/* --- Result Signal --- */
float FResult1[ ASize(FSignal) ] = {0};
float FResult2[ ASize(FSignal) ] = {0};
/* --- Convolver for Neural Network --- */
float FMHF[] = {
-0.10, -0.60, -0.30, 0.50, 1.10, 0.50, -0.30, -0.60, -0.10 };
/* --- Standard (sobel) edge detector --- */
float FSobel[] = { -1.0, 0.0, 1.0 };
int main()
{
int Iwx;
float *PFResA, *PFResB, *PFSwap;
PrintGraph( &FSignal[0], ASize(FSignal), 40 );
fputs( "\n--- Original Signal ---\n\n", PFOutFp );
Convolve( &FSobel[0], ASize(FSobel),
&FSignal[0], &FResult1[0], ASize(FSignal) );
PrintGraph( &FResult1[0], ASize(FResult1), 40 );
fputs( "\n--- Result of applying sobel edge detector to image---\n\n",
PFOutFp );
PrintGraph( &FSignal[0], ASize(FSignal), 40 );
fputs( "\n--- Original Signal ---\n\n", PFOutFp );
PFResA = &FSignal[0];
PFResB = &FResult1[0];
PFSwap = &FResult2[0];
for( Iwx = 1; Iwx <= 8; Iwx++ ) {
NNCycle( .02, &FMHF[0], ASize(FMHF), PFResA, PFResB, ASize(FSignal) );
PrintGraph( PFResB, ASize(FResult1), 40 );
fprintf( PFOutFp, "\n--- Cycle number %d ---\n\n", Iwx );
PFResA = PFResB; /* swap result pointers */
PFResB = PFSwap;
PFSwap = PFResA; /* next ResB */
}
return( 0 );
}
[LISTING TWO]
Neural Network Based "Edge Enhancement System" 1
Written by: Casimir C. "Casey" Klimasauskas 2
January 6, 1990 3
Lotus 1-2-3 version 3.0 spreadsheet 4
5
6
0.020 Bias 7
8
Low Low Raw 9
Pass Pass MHF Input 10
Output Filter Filter Data 11
12
Iteration 0 1 2 3 4 5 6 7 8 13
Graph A B C D E F 14
15
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 19
0.00 0.00 -0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20
0.00 0.00 -0.60 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 21
0.00 0.00 -0.30 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 22
0.00 -1.00 0.50 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23
0.00 0.00 1.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 24
0.00 1.00 0.50 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25
0.00 0.00 -0.30 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 26
0.00 0.00 -0.60 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 27
0.00 0.00 -0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 28
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 29
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 30
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 31
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 32
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 33
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 34
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 35
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 36
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 37
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 38
0.00 0.20 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 39
0.00 0.20 0.03 0.02 0.00 0.00 0.00 0.00 0.00 0.00 40
0.00 0.20 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 41
-0.05 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 42
0.00 0.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 43
0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 44
0.10 0.25 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 45
0.10 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 46
0.10 0.35 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 47
0.10 0.40 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48
0.10 0.45 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 49
0.10 0.50 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00 50
0.10 0.55 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00 51
0.10 0.60 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 52
0.10 0.65 0.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00 53
0.10 0.70 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00 54
0.10 0.75 0.15 0.12 0.18 0.28 0.46 0.76 1.00 1.00 55
0.05 0.80 0.17 0.20 0.31 0.49 0.79 1.00 1.00 1.00 56
0.00 0.80 0.15 0.12 0.16 0.26 0.43 0.71 1.00 1.00 57
0.00 0.80 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 58
0.00 0.80 0.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00 59
0.00 0.80 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00 60
0.03 0.80 0.13 0.06 0.03 0.00 0.00 0.00 0.00 0.00 61
0.00 0.83 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00 62
-0.13 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 63
0.10 0.70 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 64
0.10 0.90 0.21 0.08 0.06 0.01 0.00 0.00 0.00 0.00 65
-0.10 0.80 0.18 0.14 0.00 0.00 0.00 0.00 0.00 0.00 66
-0.20 0.80 0.22 0.05 0.00 0.00 0.00 0.00 0.00 0.00 67
0.10 0.60 0.13 0.05 0.00 0.00 0.00 0.00 0.00 0.00 68
-0.20 0.90 0.29 0.27 0.32 0.50 0.78 1.00 1.00 1.00 69
-0.30 0.40 0.26 0.28 0.38 0.58 0.93 1.00 1.00 1.00 70
-0.10 0.60 0.11 0.04 0.05 0.13 0.26 0.50 0.73 0.99 71
-0.50 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 72
-0.10 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 73
0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 74
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 75
0.00 0.20 0.05 0.04 0.03 0.02 0.01 0.00 0.00 0.00 76
0.00 0.20 0.01 0.02 0.02 0.02 0.01 0.00 0.00 0.00 77
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 78
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 79
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 80
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 81
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 82
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 83
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 84
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 85
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 86
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 87
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 88
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 89
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 90
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 91
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 92
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 93
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 94
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 95
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 96
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 97
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 98
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 99
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 101
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 102
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 103
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 104
0.00 0.20 0.01 0.02 0.02 0.01 0.00 0.00 0.00 0.00 105
0.00 0.20 0.06 0.04 0.02 0.00 0.00 0.00 0.00 0.00 106
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 107
-0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 108
0.05 0.10 0.00 0.00 0.00 0.00 0.02 0.05 0.08 0.12 109
0.20 0.25 0.08 0.13 0.19 0.28 0.43 0.67 1.00 1.00 110
-0.15 0.30 0.12 0.14 0.20 0.29 0.45 0.71 1.00 1.00 111
-0.10 0.10 0.00 0.00 0.00 0.02 0.06 0.13 0.25 0.41 112
0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 113
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 114
0.00 0.20 0.05 0.03 0.00 0.00 0.00 0.00 0.00 0.00 115
0.00 0.20 0.01 0.02 0.01 0.00 0.00 0.00 0.00 0.00 116
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 117
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 118
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 119
0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 120
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 121
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 122
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 123
0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 124
Row
A B C D E F G H I J K L M Column