Supervised neural network classification of pre-sliced cooked pork ham images using quaternionic singular values


Meat Science 84 (2010) 422–430


Nektarios A. Valous a, Fernando Mendoza a, Da-Wen Sun a,*, Paul Allen b

a FRCFT Group, Biosystems Engineering, Agriculture and Food Science Centre, School of Agriculture Food Science and Veterinary Medicine, University College Dublin, Belfield, Dublin 4, Ireland
b Ashtown Food Research Centre, Teagasc, Dublin 15, Ireland

Article history: Received 27 April 2009; received in revised form 14 September 2009; accepted 17 September 2009

Keywords: Computer vision; Pork ham slice; Supervised classification; Quaternionic singular value decomposition; Quaternionic singular values; Mahalanobis distance; Artificial neural network; Multilayer perceptron

Abstract

The quaternionic singular value decomposition is a technique for decomposing a quaternion matrix (the representation of a colour image) into quaternion singular vector and singular value component matrices, exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), all having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test sets was 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values.

© 2009 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +353 17167342; fax: +353 17167493. E-mail address: [email protected] (D.-W. Sun). URLs: http://www.ucd.ie/refrig, http://www.ucd.ie/sun (D.-W. Sun). doi:10.1016/j.meatsci.2009.09.011

1. Introduction

Computer vision has been implemented for quality assessment of meats and meat products, overcoming most of the drawbacks of traditional methods such as human inspection and instrumental techniques (Kumar & Mittal, 2009; Quevedo & Aguilera, 2009; Quevedo, Aguilera, & Pedreschi, 2009). For quality grading purposes, image analysis techniques need to take into account the high variability in colour and visual texture of pork hams. Although gray level images can be quite satisfactory from the pattern recognition perspective, colour images are perceptually richer; colour contains very useful information that can help to improve the accuracy of pattern recognition systems (Villegas & Paredes, 2007). Unlike gray level images, colour data representations are usually ternary, e.g. ham slice colour image data from a digital camera in the RGB colour space. Over the last decade, growth has been witnessed in both the diversity of techniques and the range of applications regarding colour image analysis (Sangwine & Horne, 1998; Kaya, Ko, & Gunasekaran, 2008; Fathi, Mohebbi, & Razavis, 2009). There are operations in image processing which have been known and understood for many years; however, they have not been generalized considerably to colour images (Sangwine & Ell, 1999). Several approaches have been followed to deal with colour images. A typical one is to process each colour channel separately (Mendoza et al., 2009; Valous, Mendoza, Sun, & Allen, 2009a). A more recent approach is to encode the three channel components on the three imaginary parts of a quaternion (Denis, Carre, & Fernandez-Maloigne, 2007). There is a growing interest in the applications of quaternion numbers to colour image processing (Smolka & Venetsanopoulos, 2006; Trémeau, Tominaga, & Plataniotis, 2008), and many problems in the area of quaternion-based image processing are still open (Cai & Mitra, 2000). In general, quaternions are an extension of complex numbers to four dimensions and play an important role in colour image processing. They can be considered as complex numbers with a vector imaginary part consisting of three mutually orthogonal components. In Cartesian form, quaternion numbers can be represented as follows:

q = a + ib + jc + kd,    (1)

where a, b, c and d are real numbers, and i, j and k are orthogonal imaginary operators. A pure quaternion has a zero real part and a full quaternion has a non-zero real part. Thus, a quaternion number


is a complex number with real and imaginary parts, hence the term hypercomplex (Kantor & Solodovnikov, 1989; Sangwine, 1996). An RGB colour image of size (m,n) may be converted to a quaternion matrix by placing the three colour components into the three quaternionic imaginary parts, leaving the real part zero such that the image function Aq(m,n) is given by the following representation (Moxey, Sangwine, & Ell, 2003):

Aq(m,n) = AR(m,n)i + AG(m,n)j + AB(m,n)k,    (2)

where AR(m,n), AG(m,n), and AB(m,n) are the R, G, and B components, respectively, at the pixel coordinates of the image. In this way, the colour image is represented as a matrix of size m × n whose elements are pure quaternions (Ell & Sangwine, 2007). Although real and complex number systems provide arithmetic operations for 1D and 2D data, quaternions can handle algebraic operations on ternary numbers, expressing colour data directly (Pei & Cheng, 1997). A thorough introduction to quaternions can be found in Ward (1997) and Gürlebeck and Sprößig (1997). The quaternionic singular value decomposition (SVD) is a technique to decompose a quaternion matrix (the representation of a colour image) into several component matrices, exposing useful properties of the original matrix. SVD has been exploited generally in what is known as reduced-rank signal processing, where the idea is to extract the significant parts of a signal (Sangwine & Le Bihan, 2006). In image pattern recognition, identifying and extracting effective features is an important step in successfully completing the task of classification. There are several kinds of image features for recognition, such as visual features, algebraic features, statistical moments and transform coefficients (Hong, 1991). Algebraic features represent intrinsic attributes of the image, so various transforms or decompositions can be used to extract them. The quaternionic SVD is an effective algebraic feature extraction method for any colour image (Hong, 1991). Since the quaternionic matrix designates a mathematical representation of a colour image, the computed singular value feature vectors are unique for the colour image and can be used for pattern recognition purposes. SVD has many generalizations and refinements depending on the broad range of its applications in science and engineering. Hong (1991) proved some important properties of the singular value feature vector and used it to recognize greyscale facial images.

Recently, Shnayderman, Gusev, and Eskicioglu (2006) explored the feasibility of SVD in developing a new measure that can express the quality of distorted images. Philips, Watson, Wynne, and Blinn (2009) presented an SVD-based feature reduction method to reduce the dimensions of remotely sensed satellite imagery prior to classification. A general review of potential applications of SVD in various aspects of digital image processing can be found in Andrews and Patterson (1976). The SVD of the ham image, expressed with quaternion arithmetic, produces unique singular values (descriptors) that are identified as algebraic features with which to recognize and classify ham colour images robustly. These descriptors should allow analysis and interpretation with precision and objectivity for the quality grading of hams. In the context of classification, neural networks can be viewed as powerful, fault tolerant and reasonable alternatives to traditional classifiers (Kulkarni, 2001). Multilayer perceptron (MLP) networks with one hidden layer and sigmoidal hidden layer activation functions are capable of approximating any decision boundary to arbitrary accuracy (Li, Rad, & Peng, 1999), and so can be employed to learn a given mapping from a finite singular value dataset. MLPs are used due to their popularity and enhanced ability for generalization, which is related to the accurate prediction of data that are not part of the training dataset. A comprehensive description of the salient features of an MLP neural network can be found in Bishop (1995) and Foody (2004). The objective of this study is to classify four qualities of cooked pork ham typically consumed in Ireland, with a supervised MLP neural network, using a small portion of informative and uncorrelated singular values computed from the quaternionic SVD of digital colour images as robust and stable features. To our knowledge this is the first reported use of quaternions and quaternionic singular values in food image analysis, and in particular for the classification and quality grading of images of food surfaces.

2. Materials and methods

2.1. Pork ham samples preparation

Four cooked pork ham qualities were manufactured at the agriculture and food development authority Teagasc (Co. Dublin, Ireland) using Silverside pork leg muscles (M. biceps femoris) without membranes and sinew ends. The muscles were injected with different percentages of brine solutions (wet curing by injection). The resulting product qualities were: premier quality or low yield ham (A1), medium quality hams (A2 and A3), and low quality or high yield ham (A4). The brine formulation, injection level and processing conditions used for the production of each ham quality are summarized in Table 1, along with the averaged moisture content (ham slice) per quality and its standard deviation. The moisture content was determined in quadruplicate by the official AOAC method 950.46 for the determination of moisture content in meat and meat products (AOAC, 1998). Pre-forming tumbling was carried out for 30 min at 6 rpm for all hams. The injected muscles were vacuum tumbled for 6 to 20 h depending on the quality. The injected and tumbled pork muscles were formed, vacuum packed and pressed into shape using pressure moulds before steam cooking. The hams were cooked at 82 °C to a core temperature of 74 °C. All pork ham samples were chilled to 4 °C before slicing (slice width ~2.0 mm). Images were acquired immediately after slicing (90 slices per quality).

2.2. Image acquisition and processing

A colour calibrated computer vision system (CVS), as described by Valous et al. (2009a), was used for image acquisition (spatial resolution of 0.0826 mm/pixel). The software package MATLAB v7.4 (MathWorks, USA), along with the open-source quaternion toolbox (Sangwine & Le Bihan, 2005), was used for image processing and singular value extraction. The toolbox allows computations with

Table 1. Injection levels, brine formulation, processing conditions and averaged ham slice moisture content for each quality.

Pork ham qualities                      A1       A2       A3       A4
Injection level (%)                     10       20       30       40
Brine formulation (%)
  Nitrite salt                          22.0     12.0     8.60     7.0
  Red brook M20                         –        12.0     –        –
  Dextrose                              2.70     –        1.08     –
  Red brook HYH                         –        –        –        12.25
  STPP                                  4.40     –        1.70     –
  Carrageenan                           –        –        1.44     –
  Sodium ascorbate                      0.22     –        0.09     –
  Water                                 70.7     76.0     88.5     80.8
Processing conditions
  Tumbling time (h)                     6        9        12       20
  Tumbling speed (rpm)                  6        6        6        6
  Resting time (min)                    30       30       30       30
End product slice moisture content (%)  71.8a    72.6a    74.9b    74.8b
                                        ±0.4     ±0.5     ±0.6     ±0.5

a–b: Moisture content values with different letters are significantly different (P < 0.05).


quaternion matrices in almost the same way as with matrices of complex numbers. A polynomial transform for calibrating colour signals (Valous et al., 2009a) was used to map the RGB primaries to sRGB (IEC, 1999), ensuring reproducible colour images. Due to the differences in size and shape among the four qualities and the considerable computational delays associated with the quaternionic SVD, the acquired colour images were subsequently cropped in the central region (1024 × 1024 pixels; equivalent to 7154.1 mm²) to produce sixteen 256 × 256 pixel sub-images (equivalent to 447.1 mm²) per ham slice image and per quality (Fig. 1). Cropping allowed better scrutiny and interpretation (all analyzed images had the same spatial pixel dimensions) and also kept computation times manageable. The use of SVD on large images is not practical because of the memory requirements and the long time needed to perform the computation (Fish, Grochmalicki, & Pike, 1996; Shnayderman et al., 2006). Thus, the colour calibrated sub-images, expressed as quaternion matrices, were the direct input to the quaternionic SVD algorithm for the extraction of singular values.

2.3. Singular value extraction from quaternionic SVD

In general, SVD is related to the diagonalization of a matrix and is a well-used tool in the theory of linear algebra. It has been demonstrated that this decomposition exists for colour images, based on quaternion matrix algebra, without separating the image into three separate colour channel images (Zhang, 1997). Specifically, in quaternionic SVD, for any arbitrary m × n quaternion matrix Aq there exist two quaternion unitary matrices U and V and a diagonal matrix Σ such that the following factorization exists (Le Bihan & Mars, 2004):

Aq = U Σ VT,   with Σ = [Σr 0; 0 0],    (3)

where U denotes an m × m unitary quaternion matrix, Σ is an m × n diagonal matrix with non-negative real numbers on the diagonal, and VT is the conjugate transpose of the n × n unitary quaternion matrix V. The diagonal entries of Σ, called singular values σr, can be arranged in order of decreasing magnitude, and the columns of U and V are called the left and right quaternion singular vectors of Aq, respectively. The quaternionic SVD can be rewritten as a singular value spectrum decomposition; the matrix Aq with rank r has a Fourier expansion, in terms of the singular values and the outer products of the columns of the U and V matrices, of the following form (Gentle, 2007):

Aq = u1 v1T σ1 + u2 v2T σ2 + ... + ur vrT σr,    (4)

where un are the left singular vectors (columns of U), vnT the right singular vectors (columns of V), and σn the real singular values. Eq. (4) shows that this quaternionic factorization decomposes the matrix into a sum of r rank-1 quaternion matrices; it is therefore said to be rank revealing, since the number of non-null singular values equals the rank of the matrix (Le Bihan & Sangwine, 2007). The method for obtaining the SVD of colour images expressed as quaternion matrices can be found in Le Bihan and Mars (2004). From the algorithmic development perspective, deriving the SVD using the Jacobi algorithm produces more accurate results than any other known algorithm (Le Bihan & Sangwine, 2007). A much faster algorithm, based on the transformation of a quaternion matrix to bidiagonal form using quaternionic Householder transformations, has been developed in a previous study (Sangwine & Le Bihan, 2006). This algorithm is less accurate than the Jacobi algorithm, but has significant computational speed advantages. Moreover, it is the default quaternionic SVD algorithm implemented in the quaternion MATLAB toolbox used in this study.

2.4. Feature space reduction

Fig. 1. Representative images of the evaluated pork ham qualities: (a) RGB images of the central region (1024 × 1024 pixels), and (b) sixteen 256 × 256 pixel sub-images cropped from A1 and used for the SVD computation.

An operation such as classification that would have been performed on the colour image can now be equivalently performed on the real non-negative diagonal elements σr. Thus, the 256 singular values computed for each of the sixteen sub-images (forming the central region of each ham slice) were linearly rescaled (Barcala et al., 2004) using min–max normalization (values from 0 to 1) and averaged to obtain an array of 256 features representing each of the 90 ham slices. The final result was a matrix of 256 pre-processed (averaged and normalized) features (F1, F2, F3, ..., F256) per image and per quality. In this way, the pork ham slice was sampled along the vertical and horizontal dimensions, yielding a set of sixteen sub-images whose averaged singular values represented the initial square image. The average coefficients of variation (expressed as percentages), which describe the standard deviations as a percentage of the averaged singular values computed from the sub-images, were (over all features) 10.4% for A1, 11.7% for A2, 10.2% for A3 and 9.2% for A4, indicating that there were no substantial variations within the square slice images (1024 × 1024 pixels). Multi-dimensional data may be represented approximately in fewer dimensions due to data redundancies arising from the natural correlations that occur in singular values computed from images (Philips et al., 2009). This enables the singular values to be represented in a feature-reduced data space. Mahalanobis distances and Pearson product moment correlations were used as dimensionality reduction tools for feature selection. The Mahalanobis distance is a distance metric based on feature correlations among ham qualities and is useful as a means of determining similarity between pairs of groups (Mendoza et al., 2009). Specifically, the Mahalanobis distance computation in MATLAB, among singular values (σr), was carried out across the six ham quality pairs (A1–A2, A1–A3, A1–A4, A2–A3, A2–A4, and A3–A4) as follows (Tay, Zurada, Wong, & Xu, 2007):

D(p, q) = (p − q)T Cov(p, q)^-1 (p − q),    (5)

where D is the Mahalanobis distance, p and q are the sets of singular values for any two ham quality pairs, Cov(p, q)^-1 is the inverse of the covariance matrix, and, given the n × d matrix of the dataset (n data vectors of dimensionality d = 256), the covariance of the ith feature xi (mean μi) and the jth feature xj (mean μj) is computed as:

Cov(xi, xj) = (1/n) Σ_{k=1}^{n} (xki − μi)(xkj − μj).    (6)
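Eqs. (5) and (6) can be sketched in Python with NumPy (the study itself used MATLAB). The helper name is illustrative and the two feature groups below are synthetic stand-ins for the per-quality singular value matrices; the pooled covariance of the two groups is one reasonable reading of Cov(p, q):

```python
import numpy as np

def mahalanobis_pair(P, Q):
    """Mahalanobis distance between the mean feature vectors of two
    ham-quality groups P and Q (rows = images, columns = features),
    in the squared form of Eq. (5), with the covariance of Eq. (6)
    taken as the pooled covariance of both groups."""
    p, q = P.mean(axis=0), Q.mean(axis=0)
    X = np.vstack([P, Q])
    cov = np.cov(X, rowvar=False, bias=True)   # Eq. (6): 1/n normalisation
    d = p - q
    # pseudo-inverse for numerical stability when cov is near-singular
    return float(d @ np.linalg.pinv(cov) @ d)

rng = np.random.default_rng(0)
A1 = rng.normal(0.0, 1.0, size=(90, 6))   # 90 images, 6 features (synthetic)
A2 = rng.normal(0.5, 1.0, size=(90, 6))
print(mahalanobis_pair(A1, A2))
```

A larger distance for a quality pair indicates greater separability of that pair in the selected feature space, which is how the five most separating features per pair were chosen.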

Eighteen features (F1, F2, F5, F7, F8, F9, F10, F11, F12, F13, F14, F15, F18, F21, F89, F245, F251 and F252) were selected by choosing, among the 256, the features that exhibited the five largest distances between each pair of ham qualities (denoting maximum separability). This metric was used as an initial scrutiny tool to reduce data dimensionality (feature set S1 = 18). However, some of the selected features were highly correlated, so Pearson product moment correlation coefficients (r) were computed to measure the strength of the linear relationships among the previously selected feature set. This procedure defined a mapping from a typically higher dimensional data space to a space of reduced dimension (feature set S2 = 6; F1, F2, F5, F7, F89 and F252), maintaining key data properties and resulting in a less correlated dataset with correlations in the range −0.6 to 0.6. The Pearson r coefficients among the feature set S2 were: F1–F2 (0.47), F1–F5 (0.44), F1–F7 (0.34), F1–F89 (0.37), F1–F252 (0.25), F2–F5 (0.59), F2–F7 (0.56), F2–F89 (0.37), F2–F252 (0.21), F5–F7 (0.60), F5–F89 (0.33), F5–F252 (0.26), F7–F89 (0.39), F7–F252 (0.28), and F89–F252 (0.60). Hence, only a handful of potentially highly discriminating features were used as input to train the neural network.

2.5. Supervised artificial neural network classification

A supervised MLP neural network was employed for classification. The MLP classifier consists of a set of simple processing units


arranged in a layered architecture. The MLP partitions the feature space and finds class memberships, determined by the categorical levels of the four ham qualities for the given input set S2 (6 features) of σr, with the goal of achieving high classification rates. Using the software package STATISTICA 8.0 (StatSoft, USA), the weights connecting the inputs to the hidden neurons and the hidden neurons to the output neurons were adjusted so that the network could be trained by approximating the underlying functional relationship between the training set (60% randomly selected data; 54 images per quality) and the target ham classes (Rohwer, Wynne-Jones, & Wysotzki, 1994). Given that the size of the training set can have a significant effect on classification accuracy, more data were used for training than for validation and testing (Kavzoglu, 2009). The cross entropy penalty (error) function, which is well suited to classification problems, evaluated the performance of the MLP during training, measuring how close the network predictions were to the targets and how much weight adjustment should be applied by the algorithm in each iteration (Fontenla-Romero, Guijarro-Berdiñas, Alonso-Betanzos, & Moret-Bonillo, 2005). To assess performance during training, a randomly selected unseen validation dataset (20%; 18 images per quality) was used to check how well the network was progressing in modelling the input–target relationship. Such an assessment (cross-validation) is necessary to avoid overtraining (overfitting), which causes displacement of the decision boundaries. Regularization using weight decay was also considered for improving the generalization of the MLP; it adds a term to the error function that penalizes large weight values. This form of regularization can lead to significant improvements in network generalization (Lerouge & Van Huffel, 1999).
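The cross-entropy error with an added weight-decay penalty can be sketched as follows. This is a generic textbook formulation, not STATISTICA's internal implementation, and all names and values are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def penalised_cross_entropy(logits, targets_onehot, weights, decay=1e-3):
    """Mean cross-entropy over a batch plus an L2 weight-decay term that
    penalises large weight values (generic form; the exact penalty used
    by the training software may differ)."""
    probs = np.clip(softmax(logits), 1e-12, 1.0)
    ce = -np.mean(np.sum(targets_onehot * np.log(probs), axis=1))
    l2 = decay * sum(np.sum(W ** 2) for W in weights)
    return ce + l2

# toy check: confident, correct predictions give near-zero cross entropy
logits = np.array([[10.0, -10.0], [-10.0, 10.0]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
print(penalised_cross_entropy(logits, targets, weights=[], decay=0.0))
```

During training, the error gradient of this penalised objective drives the weight adjustments; the decay term shrinks weights toward zero, which is what improves generalization.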
The remaining images (18 images per quality) were used as the test set to measure the classification accuracy of the neural network on unseen data. The data selection process for the training, validation and test sets was carried out using MATLAB's random integer number generator function 'randi'.
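The 60/20/20 random partition per quality can be sketched with NumPy. A permutation is used here as a stand-in for MATLAB's 'randi'; the paper's exact sampling procedure may differ:

```python
import numpy as np

def split_indices(n_images=90, seed=0):
    """Randomly partition the images of one ham quality into training (60%),
    validation (20%) and test (20%) index sets, mirroring the 54/18/18
    per-quality split reported in the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)        # shuffle all image indices once
    n_train = int(0.6 * n_images)          # 54
    n_val = int(0.2 * n_images)            # 18
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])         # remaining 18 form the test set

train, val, test = split_indices()
print(len(train), len(val), len(test))
```

Shuffling once and slicing guarantees the three sets are disjoint and jointly cover all 90 slices of a quality.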

3. Results and discussion

3.1. Appearance of ham slices and assessment of ham qualities

Fig. 1 shows representative images of the central 1024 × 1024 pixel region of the four evaluated pork ham qualities, as well as an example of the sixteen 256 × 256 pixel sub-images cropped from the A1 quality and used as input to the quaternionic SVD computation. The colour rendition of the sliced ham images reveals two main topographical structures that define texture appearance. Fat-connective tissue appears as brighter regions due to increased intensity levels, while pores/defects appear as darker regions. Both structures are distributed randomly, without following specific textural patterns. In general, pork ham slices have complex and inhomogeneous colour surfaces and consist of structures at both coarse and fine resolutions (Mendoza et al., 2009). Simple visual features such as colour and texture, as well as spatial features such as the shape and distribution of structures, contribute to the complexity of the texture appearance. More specifically, inhomogeneities can be attributed mainly to the presence of pores/defects and fat-connective tissue, and to colour variations (Valous et al., 2009a). Differences can be perceived in the spatial distribution of pores/defects and fat-connective tissue of the studied samples, but it is difficult to ascertain a definitive pattern of visual texture heterogeneity. A previous study (Valous, Mendoza, Sun, & Allen, 2009b) showed more comprehensive trends that better defined the degree of heterogeneity and complexity of the observed visual texture. In that case, the pork ham samples derived from different muscles, incorporating


more discriminant visual characteristics than the ham qualities under scrutiny in this work. The studied samples were manufactured entirely from Silverside cuts and therefore appear visually more similar. Nevertheless, pork ham images share randomness in their texture appearance as a common feature, which is difficult to characterize and describe. If appropriate descriptors are defined, classification and quality grading can be achieved successfully. The perceived dissimilarities among qualities emerge mainly from the brine composition and processing conditions, which include the percentage of brine injection and the type and duration of mechanical treatment (tumbling), since the raw material (muscles) and cooking conditions were the same. The four hams were structured from several pieces of the same muscle type, which reproduced the entire ham when the pieces were pressed into shape before steam cooking. The trimming of fat-connective tissue (to obtain a leaner appearance) facilitated the extraction of salt-soluble proteins, increasing the binding of muscles and improving the cohesiveness of the slices (Arboix, 2004). The injection of brine ensured a uniform distribution of the constituent ingredients and additives that are necessary to achieve the desired colour and texture pertinent to the quality specifications (Casiraghi, Alamprese, & Pompei, 2007). The brine injection level and the ingredients used are characteristic of each product and determine the final quality of the cooked ham. A1 and A2 hams were manufactured with a lower level of brine injection, so their end products (sliced hams) are considered of higher quality than A3 and A4. Moreover, in the driest ham (highest quality, A1) the brine injection was more precise, as indicated by its lower variability in water content among prepared ham pieces.
The increased duration of vacuum tumbling towards the lower quality hams distributed the brine evenly inside the muscle and caused the extraction of salt-soluble proteins from the muscle fibres, resulting in considerable cellular damage. This was important for the higher yield ham products (A2, and especially A3 and A4) in order to bind the individual muscles together during cooking, in which the extracted proteins are denatured, thus making the ham slices more compact. The A1 ham was tumbled for only 6 h, keeping the meat's cellular and fibrous structure, as well as its natural texture appearance, as intact as possible. The moisture content in the final products showed a tendency to correlate with the brine injection level, although statistical differences (P < 0.05) were only evident between the first two and last two ham qualities (Table 1). The colour images of ham surfaces reveal rich and multi-dimensional information that can be used for pattern recognition. In general, colour is a powerful descriptor that often simplifies identification and characterization. However, the evaluated ham surfaces show a pattern of scattered intensities and chromaticities exhibiting high variations that are difficult to characterize and describe with single colour measurements. The magnitude of the singular values, as intrinsic descriptors, could be exploited to differentiate and classify pork ham qualities that share visually quite similar textural patterns. The SVD provides a numerical result of unique real non-negative values for each image, suggesting that they may have the necessary analytical capability to discern and quantify subtle differences in texture appearance.

3.2. Analysis of quaternionic singular values

Fig. 2 depicts the typical linear and log–log spectra of the 256 raw (not rescaled) singular values computed for each ham quality (average of 90 images per quality). The embedded graph in Fig. 2a shows the inward-curving of the spectrum.
In the embedded graph, the first singular value was not included due to its large magnitude, providing a better visualization of the falling trend (255 singular values).

Fig. 2. Spectrum of the averaged 256 raw (not rescaled) singular values computed for each quality: (a) linear, and (b) log–log axes.

Fig. 2a shows that the σr curves are practically overlapping. This is indicative of the very small variability that the singular values exhibit among the four qualities. Furthermore, experiments in a previous research study demonstrated that seemingly similar textural patterns have closer corresponding singular values than dissimilar textures (Luo & Chen, 1994). This is especially true for this work, due to the similar textural patterns that the pork ham qualities share. Nonetheless, singular values and their distributions carry useful information on the correlation content of the image elements and their interrelationships (Cannon, Gustafson, & Leonard, 1994). It is well known that the singular values of a greyscale intensity image have a spectrum that rapidly falls close to zero (Rangarajan, 2001). From Fig. 2a, it is apparent that the singular values decrease in magnitude along the diagonal (better observed in the embedded plot), and the curve falls off considerably. This decay, manifested after the first singular value, provides a relative measure of the structure in the data (σ1 ≥ σ2 ≥ σ3 ≥ ... ≥ σ256) and hence in the quaternionic matrix (image). From this, it is evident that the spatial frequencies tend to vary in inverse proportion to the σr, with higher frequencies having smaller σr and vice versa. In Fig. 2b, a linear region is evident stretching from σ2 to around σ100. The singular values with relatively larger magnitudes (at the foreside of the spatial distribution) represent most of the information in the image. If the singular values exhibited similar magnitudes throughout the spectrum, or if the first singular value were dominant while all the


others were almost zero, the significance of the decomposition would be lower, since the images would have been isotropic and smoother, with no textural variations (Mees, Rapp, & Jennings, 1987). The magnitude of the first singular value is significantly larger than the subsequent values: σ1 varied among qualities from 301.6 to 310.2, while σ2 varied from 4.9 to 6.3, and σ3 from 3.6 to 4.2. σ1 roughly corresponds to an average representation of the image and is thus closely related to the spectral features, while all the other singular values provide detailed information about the spatial content of the image, which relates to the textural features. More specifically, singular values of larger magnitude encapsulate most of the colour and textural information (Ramakrishnan & Selvan, 2006). In this sense, the information carried by the singular values is explicit, with the larger ones having greater information capacity, whereas the rest contribute smaller variation terms. This deduction is also important for the ensuing classification, since the identification of suitable metrics affects image recognition performance. The four features (F1, F2, F5 and F7) selected to train the neural network were among the largest singular values. These singular values, which are a latent data structure representation (in semantic terms) of the original ham image, were masked by noisier dimensions but were revealed by the quaternionic SVD. These latent features therefore carry important structural and colour frequencies (Li & Park, 2007). The remaining two features (F89 and F252) have significantly smaller magnitudes in comparison and may appear insignificant, or the result of a noisy perturbation of the reduced-rank matrix (Konstantinides & Yao, 1988). These remaining features correspond to the non-deterministic intrinsic components of the image, carrying a much smaller information load (Todoriki, Nagayoshi, & Suzuki, 2005).
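For readers without a quaternion toolbox, singular value spectra like those discussed in this section can be computed through the standard complex-adjoint construction: writing a quaternion matrix as A = A1 + A2·j (here A1 = iR and A2 = G + iB for a pure-quaternion RGB image), the complex matrix [[A1, A2], [−conj(A2), conj(A1)]] has each quaternionic singular value of A exactly twice. A minimal NumPy sketch on a synthetic patch follows; the helper name and data are illustrative, not from the paper:

```python
import numpy as np

def quaternion_singular_values(R, G, B):
    """Singular values of the pure-quaternion image A = iR + jG + kB,
    via the complex adjoint: every quaternionic singular value appears
    twice in the adjoint's spectrum, so one copy of each pair is kept."""
    A1 = 1j * R                       # complex part a + bi with a = 0, b = R
    A2 = G + 1j * B                   # "j-part"     c + di with c = G, d = B
    adjoint = np.block([[A1, A2],
                        [-np.conj(A2), np.conj(A1)]])
    s = np.linalg.svd(adjoint, compute_uv=False)   # sorted descending
    return s[::2]                     # drop the duplicated copies

rng = np.random.default_rng(1)
R, G, B = rng.random((3, 64, 64))     # synthetic 64 x 64 RGB patch in [0, 1]
sigma = quaternion_singular_values(R, G, B)
print(sigma[0] / sigma[1])            # sigma_1 dominates the spectrum
```

The resulting spectrum is non-increasing, with σ1 far larger than the rest, mirroring the rapid decay described for the ham images.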
Nevertheless, they appear important as stable, uncorrelated and discriminating features pertinent to the recognition of the ham images. An additional observation is that the singular value matrix is full rank, meaning that no σr equals zero, even when the noisy dimensions have small values, which occurs in most practical situations (Le Bihan & Sangwine, 2003). The fact that the image is full rank signifies that the energy of ham images, described as having random textural content, spreads over nearly all singular values, or at least over those selected that can successfully recognize the different qualities. In general, since a pork ham image and its quaternionic SVD have a unique corresponding relationship, the extracted singular values can be regarded as robust features of these images, as they are measures of their energy. In this way, singular values provide the energy information of the image as well as the knowledge of how the energy is distributed. A previous study (Hong, 1991) demonstrated the robustness of SVD feature vectors regarding invariance to algebraic and geometric transforms such as orthogonal transforms, rotation, translation and mirror transforms. These properties are very useful for describing and recognizing images. According to the theoretical background in image recognition, singular values extracted from images have very good stability; they are not invariant to systemic distortions (scaling, lightness variations, etc.) but are proportionally sensitive to them (Pan, Zhang, Zhou, Cheng, & Zhang, 2003).

3.3. Supervised pattern recognition

Due to the lack of a concrete rule for choosing the optimum number of neurons for the hidden layer, preliminary trial and error tests were carried out to determine the number of hidden neurons in order to build the neural classifier.
In general, the more neurons the hidden layer contains, the more flexible the network becomes, increasing the classification accuracy for the training data but decreasing the accuracy for the test set (Bishop, 1995). More specifically, after a


certain threshold of neurons has been reached, increasing their number beyond that threshold has a marginal effect on the resulting performance of the classifier (Teoh, Tan, & Xiang, 2006). In these tests, a small number of hidden layer nodes (2–5) produced high training and generalization errors due to underfitting and high statistical bias, while a larger number of hidden nodes (7–20) increased the training classification performance but produced a higher generalization error due to overfitting and high variance. Consequently, the selection of the neural network architecture was based on reaching a compromise between too many and too few neurons in the hidden layer. The best generalizing neural network is not necessarily the one with the fewest hidden neurons (Kinser, 2000). In addition, the number of hidden units determines the total number of weights in the network, and thus there should not be more weights than the total number of training points in the dataset (Duda, Hart, & Stork, 2001). Fig. 3 shows a schematic of the classifier architecture used in this study. The optimal configuration was found to be a single hidden layer of 6 neurons (6–6–4 architecture; 6 features as input) with symmetric sigmoid (hyperbolic tangent) activation functions, and 4 output neurons corresponding to the ham qualities with softmax activation functions, whose outputs are interpretable as posterior probabilities for the categorical target variables. Using softmax activation functions in the output layer provides a measure of certainty, while classification accuracy is improved (Dunne, 2007). An adaptive supervised feedforward multilayer perceptron classifier, using a variant of the quasi-Newton method, namely the BFGS/BP (Broyden–Fletcher–Goldfarb–Shanno/Back Propagation) learning algorithm, was employed to obtain a suitable mapping from the input dataset.
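The 6–6–4 forward pass just described can be sketched in a few lines of numpy. The weights below are random placeholders standing in for the BFGS-trained ones, so this shows only the shape of the computation, not the study's fitted network:

```python
import numpy as np

rng = np.random.default_rng(0)

# 6-6-4 architecture: 6 inputs (selected singular values), one hidden
# layer of 6 tanh units, 4 softmax outputs (posterior probabilities,
# one per ham quality A1..A4). Weights are random placeholders here.
W1, b1 = rng.normal(size=(6, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(4, 6)), np.zeros(4)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    h = np.tanh(W1 @ x + b1)        # symmetric sigmoid hidden layer
    return softmax(W2 @ h + b2)     # class posteriors for A1..A4

p = forward(rng.normal(size=6))     # one 6-feature input vector
print(p.sum())                      # ≈ 1.0: posteriors sum to one
```

The softmax output layer is what makes the four activations directly interpretable as posterior class probabilities, i.e. as the "measure of certainty" mentioned above.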
This algorithm performs better in terms of training speed and accuracy (Hui, Lam, & Chea, 1997); it requires more computation and storage per iteration, but has a fast convergence rate (Demuth, Beale, & Hagan, 2009). The network was trained for 58 epochs (passes through the entire training set). A convenient rule of thumb is that when both the validation and test datasets produce good and consistent classification results, it can be assumed that the network generalizes well on unseen data. The results of the classification are presented in Table 2 as confusion matrices. The overall correct classification performance for the training, validation and test sets of singular values was 90.3%, 94.4% and 86.1%, respectively. The validation and test (generalization) errors are virtually always higher than the training error (Duda et al., 2001). From the results, it can be seen that the generalization capability of the neural network is satisfactory for the relatively small dataset of 90 images per quality. The training inputs to the neural classifier contained sufficient information pertaining to the target qualities. In spite of the high variability and complexity of the studied ham samples, the results showed the capacity of σr to provide valuable information in discerning among different qualities. Using a larger number of uncorrelated features would most probably increase the classification accuracy for the test set. Nevertheless, preliminary classification tests using the ten largest (in magnitude) singular values (σ1–σ10) as input features to the neural classifier showed that the recognition errors were greater than those presented in Table 2. This can most likely be attributed to the fact that σ3, σ4, σ6, σ8, σ9 and σ10 are highly correlated among themselves and with the already selected uncorrelated features σ1, σ2, σ5 and σ7, thus lowering the classification performance (Kavzoglu & Mather, 2002).
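For clarity, the overall percentages reported in Table 2 are simply the diagonal mass of each confusion matrix divided by the total sample count. A minimal pure-Python illustration with made-up labels (not the study's data):

```python
def confusion_matrix(actual, predicted, classes):
    """cm[i][j] = number of samples of actual class i predicted as class j."""
    idx = {c: k for k, c in enumerate(classes)}
    cm = [[0] * len(classes) for _ in classes]
    for a, p in zip(actual, predicted):
        cm[idx[a]][idx[p]] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the diagonal (correctly classified)."""
    correct = sum(cm[k][k] for k in range(len(cm)))
    return correct / sum(map(sum, cm))

classes   = ["A1", "A2", "A3", "A4"]
actual    = ["A1", "A1", "A2", "A3", "A4", "A4"]   # toy labels
predicted = ["A1", "A3", "A2", "A3", "A4", "A1"]
cm = confusion_matrix(actual, predicted, classes)
print(round(100 * overall_accuracy(cm), 1))  # → 66.7
```

The off-diagonal cells are what the discussion below reads as inter-quality confusions (e.g. A1 slices assigned to A3).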
In relation to this deduction, the principle of parsimony states that the smallest possible number of features should be used so as to give an adequate and uncorrelated representation of the feature space (Chatfield, 1996). The results indicate that the classifier performs well on unseen data (test set). This deduction is based on the experimental results of the MLP-based neural network; therefore it cannot be



Fig. 3. Schematic of the MLP neural network classifier architecture (6–6–4) used for pattern recognition (W are weights and b are biases).

Table 2
Neural classifier output in the form of confusion matrices for the training, validation and test sets of quaternionic singular values. Rows are the predicted classes; columns are the actual image sets.

Training set
Predicted    A1    A2    A3    A4
A1           50a    3     8     4
A2            1    50a    0     0
A3            3     1    46     1
A4            0     0     0    49a
Image set    54    54    54    54
Overall classification performance: 90.3%

Validation set
Predicted    A1    A2    A3    A4
A1           17a    0     1     1
A2            0    18a    0     0
A3            1     0    17a    1
A4            0     0     0    16
Image set    18    18    18    18
Overall classification performance: 94.4%

Test set
Predicted    A1    A2    A3    A4
A1           14     0     1     0
A2            0    16     0     0
A3            4     2    15     1
A4            0     0     2    17a
Image set    18    18    18    18
Overall classification performance: 86.1%

a Denotes classification errors of ≤10%.

generalized to other classification techniques. While Fig. 2 illustrated that there are no clear differences in the spatial distribution of σr among the four qualities, the results demonstrated that the neural network classifier was capable of handling overlapping σr distributions, using a small definitive set of features as descriptors of discrimination for hams that share visually quite similar textural patterns. Neural networks are considered powerful classification tools because of their non-linear properties and the fact that they make no a priori assumptions about the distribution of the data. As a consequence, better results can be expected with neural networks when no simple model exists to accurately describe the underlying process that determines the data distribution (Subramanian, Gat, Sheffield, Barhen, & Toomarian, 1997; Venugopal & Baets, 1994). The analysis of the confusion matrices for the test set yields interesting insights into the differences in texture appearance and the global properties of the neural classifier. Because pork hams come from muscles that exhibit normal biological variations and are subject to industrial processing/storage conditions that are not exactly replicated, it is unrealistic to expect 100% accurate classification at all times. Therefore, it will be necessary to set a cut-off for acceptance of a correctly classified ham slice. A consistent classification rate of more than 80–85% on new samples can serve as such a threshold, given that a good artificial neural network model has an accuracy of more than 80% (Hanif, Lan, Daud, & Ahmad, 2009; Ott, Araníbar, Singh, & Stockton, 2003). The misclassification rate tends to decrease towards the lower quality hams, with the exception of an increase in A3. Results also show that the A1 ham, in the test set, had the worst classification

performance (classification error: 22%) compared with the error in the other qualities (≤16%). This erroneous assignment to a class membership (A3) other than the correct one can probably be attributed to the lack of a sufficiently diverse training set of singular values for this ham quality. Another interesting deduction is that the driest samples (A1) were more difficult to classify. Even though the generalization for this quality was weak (less than 80%), there is still a significant dependency between σr and the neural network output, given the results in the training and validation sets. A3 apparently shared some of the underlying visual texture characteristics of the A1 quality, and the reduced generalization performance of the classification in this case seems to support the hypothesis that singular values preserve important topological properties of the input images. On the other hand, the lowest quality A4 ham produced singular values that had optimal discriminating properties, with only one misclassified image. The A4 pork ham was manufactured with the highest level of brine injection and an increased duration of tumbling, which resulted in a wetter surface appearance and an intermediate degree of visual roughness (Valous et al., 2009b). Regardless of this textural complexity, singular values captured relevant information that provided a good level of differentiation. The overall satisfactory classification results also derive from the stepwise representation of the input data in a smaller-dimension feature space. Correlated variables are truly redundant in the sense that no additional information is gained by adding them to the feature space (Guyon & Elisseeff, 2003).
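Screening out such redundant variables can be done with a simple greedy pass over Pearson correlations, keeping a feature only if it is sufficiently decorrelated from those already kept. This is a sketch with hypothetical toy columns (not the study's σ values), and the 0.9 threshold is an arbitrary illustrative choice:

```python
import math

def pearson(x, y):
    """Pearson product moment correlation between two equal-length columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def prune_correlated(features, threshold=0.9):
    """Greedily keep a feature only if its |r| with every feature kept so
    far stays below the threshold (principle of parsimony)."""
    kept = []
    for name, col in features:
        if all(abs(pearson(col, kcol)) < threshold for _, kcol in kept):
            kept.append((name, col))
    return [name for name, _ in kept]

# Toy columns: f2 is (almost) a rescaled copy of f1, so it is dropped.
f1 = [1.0, 2.0, 3.0, 4.0, 5.0]
f2 = [2.1, 4.0, 6.2, 8.1, 9.9]   # ~ 2 * f1  -> highly correlated with f1
f3 = [5.0, 1.0, 4.0, 2.0, 3.0]   # weakly related ordering
print(prune_correlated([("s1", f1), ("s2", f2), ("s3", f3)]))  # → ['s1', 's3']
```

Dropping s2 loses essentially no information, which is exactly the redundancy argument made above.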
In addition, data rescaling has a positive effect on classification, because it preserves all data relationships exactly, compresses the normal range if any outliers exist, and reduces the estimation errors and the computation time needed in the training process (Sola & Sevilla, 1997). Classification errors could be reduced further if the number of images used for training were increased (Kavzoglu, 2009). Nevertheless, given that the problem is readily classifiable, the number of training samples used was enough to provide acceptable performance levels at a reasonable computational cost (Crowther & Cox, 2006). Singular values, as algebraic features, have revealed a good deal of information about the different qualities of pork hams. The differences among ham qualities are encapsulated as spatial variations in geometry and spectral characteristics that occur on a smaller scale. More specifically, these variations could be due to spatial dissimilarities in directional or spectral scattering characteristics and to distributions of structures such as pores/defects and fat-connective tissue. The magnitude and spatial location of singular values are considered, since they play important roles in classifying image contents and conveying semantic meanings. The accuracy of the classification is a good measure of the reliability of the features. The results have demonstrated that singular values extracted from the quaternionic SVD bear the distinction between different ham qualities and can be used as robust, reliable and stable features for classification and quality grading. The procedure seems better suited to establishing a pattern of textural appearance heterogeneity from visually non-discriminating food colour images. The quaternionic singular values


capture important information regarding the appearance of ham images and could be used in combination with other image features to improve the classification accuracy. Moreover, in the context of classification, singular values could quite robustly provide a quantitative measure of the level of brine injection and of the type and duration of mechanical treatment (tumbling) in pork hams. For applications related to other foods, either fresh produce or processed, the robustness of the quaternionic SVD technique needs to be explored further.

4. Conclusions

Simple visual features such as colour and texture, as well as spatial features such as shape and distribution of structures (fat-connective tissue and pores/defects), contribute to the complexity of texture appearance. The quaternionic representation of ham images, treating the RGB colour components as a single unit instead of as separate components, is very effective. The advantage of using quaternion arithmetic is that colour images, which are perceptually richer, can be represented and analyzed as a single entity, improving the accuracy of pattern recognition models. Algebraic features represent intrinsic attributes of an image. The quaternionic SVD is an effective method of extracting algebraic features from ham images. Singular values describe completely and univocally the intrinsic information of a quaternionic matrix, and hence can be used as features for the classification of cooked pork ham slices. An adaptive MLP classifier was successfully employed for the classification of four ham qualities with similar appearances, using a reduced feature space of singular values. The dimensionality reduction procedure excluded atypical features and discarded redundant information. The overall correct classification performance for the test set was 86.1%. The results confirm that the classification performance was satisfactory.
Considering the complexity of texture appearance, it is difficult to achieve perfect classification rates using neural networks based on certain selected features. Nonetheless, accurately extracting and selecting the most informative features as inputs to the MLP classifier led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values.

Acknowledgements

The authors gratefully acknowledge the Food Institutional Research Measure (FIRM) strategic research initiative, as administered by the Irish Department of Agriculture, Fisheries and Food, for the financial support.

References

Andrew, H. C., & Patterson, C. L. (1976). Singular value decompositions and digital image processing. IEEE Transactions on Acoustics, Speech and Signal Processing, 24(1), 26–53.
AOAC (1998) (16th ed.). Official methods of analysis of AOAC international (Vol. 2). Gaithersburg, MD: Association of Official Analytical Chemists [39.1.02].
Arboix, A. A. (2004). Ham production: Cooked ham. In W. Jensen, C. Devine, & M. Dikeman (Eds.), Encyclopedia of meat sciences (pp. 562–567). Oxford: Elsevier.
Barcala, J. M., Fernández, J. L., Alberdi, J., Jiménez, J., Lázaro, J. C., Navarrete, J. J., et al. (2004). Identification of plastics using wavelets and quaternion numbers. Measurement Science and Technology, 15, 371–376.
Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford: Clarendon Press [pp. 116–164].
Cai, C., & Mitra, S. K. (2000). A normalized color difference edge detector based on quaternion representation. In Proceedings of the IEEE international conference on image processing (pp. 816–819), 10–13 September 2000, Vancouver, Canada.
Cannon, D. M., Gustafson, S. C., & Leonard, J. D. (1994). Natural scene feature extraction using singular value decomposition. In Proceedings of SPIE visual information processing III (Vol. 2239, pp. 80–91), 4–5 April 1994, Orlando, USA.
Casiraghi, E., Alamprese, C., & Pompei, C. (2007). Cooked ham classification on the basis of brine injection level and pork breeding country. LWT – Food Science and Technology, 40, 164–169.


Chatfield, C. (1996). Model uncertainty and forecast accuracy. Journal of Forecasting, 15(7), 495–508.
Crowther, P. S., & Cox, R. J. (2006). Accuracy of neural network classifiers as a property of the size of the data set. In Proceedings of the tenth international conference on knowledge-based intelligent information and engineering systems (Part III, pp. 1143–1149), 9–11 October 2006, Bournemouth, UK.
Demuth, H., Beale, M., & Hagan, M. (2009). Neural network MATLAB toolbox 6: User's guide. Natick: The Math Works Inc.
Denis, P., Carre, P., & Fernandez-Maloigne, C. (2007). Spatial and spectral quaternionic approaches for colour images. Computer Vision and Image Understanding, 107, 74–87.
Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern classification (2nd ed.). New York: John Wiley and Sons [pp. 282–349].
Dunne, R. A. (2007). A statistical approach to neural networks for pattern recognition. New Jersey: John Wiley and Sons [pp. 35–51].
Ell, T. A., & Sangwine, S. J. (2007). Hypercomplex Fourier transforms of color images. IEEE Transactions on Image Processing, 16(1), 22–35.
Fathi, M., Mohebbi, M., & Razavi, S. M. A. (2009). Application of image analysis and artificial neural network to predict mass transfer kinetics and color changes of osmotically dehydrated kiwifruit. Food and Bioprocess Technology. doi:10.1007/s11947-009-0222-y.
Fish, D. A., Grochmalicki, J., & Pike, E. R. (1996). Scanning singular-value-decomposition method for restoration of images with space-variant blur. Journal of the Optical Society of America A, 13(3), 464–469.
Fontenla-Romero, O., Guijarro-Berdiñas, B., Alonso-Betanzos, A., & Moret-Bonillo, V. (2005). A new method for sleep apnea classification using wavelets and feedforward neural networks. Artificial Intelligence in Medicine, 34(1), 65–76.
Foody, G. M. (2004). Supervised image classification by MLP and RBF neural networks with and without an exhaustively defined set of classes. International Journal of Remote Sensing, 25(15), 3091–3104.
Gentle, J. E. (2007). Matrix algebra: Theory, computations, and applications in statistics. New York: Springer [pp. 41–144].
Gürlebeck, K., & Sprößig, W. (1997). Quaternionic and Clifford calculus for physicists and engineers. Chichester: John Wiley and Sons [pp. 1–371].
Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3, 1157–1182.
Hanif, N. H. H. M., Lan, W. H., Daud, H. B., & Ahmad, J. (2009). Classification of control measures for asthma using artificial neural networks. In Proceedings of the ninth IASTED international conference on artificial intelligence and applications (pp. 639–069), 17–18 February, Innsbruck, Austria.
Hong, Z.-Q. (1991). Algebraic features extraction of image for recognition. Pattern Recognition, 24(3), 211–219.
Hui, L. C. K., Lam, K.-Y., & Chea, C. W. (1997). Global optimisation in neural network training. Neural Computing and Applications, 5, 58–64.
IEC (1999). IEC 61966-2-1: Multimedia systems and equipment – Colour measurements and management – Part 2-1: Colour management – Default RGB color space – sRGB. Geneva: International Electrotechnical Commission (IEC).
Kantor, I. L., & Solodovnikov, A. S. (1989). Hypercomplex numbers: An elementary introduction to algebras. New York: Springer-Verlag [pp. 15–24].
Kavzoglu, T. (2009). Increasing the accuracy of neural network classification using refined training data. Environmental Modelling and Software, 24(7), 850–858.
Kavzoglu, T., & Mather, P. M. (2002). The role of feature selection in artificial neural network applications. International Journal of Remote Sensing, 23(15), 2919–2937.
Kaya, A., Ko, S., & Gunasekaran, S. (2008). Viscosity and color change during in situ solidification of grape pekmez. Food and Bioprocess Technology. doi:10.1007/s11947-008-0169-4.
Kinser, J. M. (2000). The minimum number of hidden neurons does not necessarily provide the best generalization. In Proceedings of SPIE applications and science of computational intelligence III (Vol. 4055, pp. 11–17), 24–27 April 2000, Orlando, USA.
Konstantinides, K., & Yao, K. (1988). Statistical analysis of effective singular values in matrix rank determination. IEEE Transactions on Acoustics, Speech and Signal Processing, 36(5), 757–763.
Kulkarni, A. D. (2001). Computer vision and fuzzy-neural systems. New Jersey: Prentice-Hall PTR [pp. 227–280].
Kumar, S., & Mittal, G. S. (2009). Rapid detection of microorganisms using image processing parameters and neural network. Food and Bioprocess Technology. doi:10.1007/s11947-008-0122-6.
Le Bihan, N., & Sangwine, S. J. (2003). Color image decomposition using quaternion singular value decomposition. In Proceedings of the international conference on visual information engineering (pp. 113–116), 7–9 July 2003, Saint Martin d'Heres, France.
Le Bihan, N., & Mars, J. (2004). Singular value decomposition of quaternion matrices: A new tool for vector-sensor signal processing. Signal Processing, 84, 1177–1199.
Le Bihan, N., & Sangwine, S. J. (2007). Jacobi method for quaternion matrix singular value decomposition. Applied Mathematics and Computation, 187, 1265–1271.
Lerouge, E., & Van Huffel, S. (1999). Generalization capacity of neural networks for the classification of ovarium tumours. In Proceedings of the twentieth symposium on information theory in the Benelux (pp. 149–156), 27–28 May 1999, Haasrode, Belgium.
Li, C. H., & Park, S. C. (2007). Neural network for text classification based on singular value decomposition. In Proceedings of the seventh international conference on computer and information technology (pp. 47–52), 16–19 October 2007, Fukushima, Japan.



Li, Y., Rad, A. B., & Peng, W. (1999). An enhanced training algorithm for multilayer neural networks based on reference output of hidden layer. Neural Computing and Applications, 8, 218–225.
Luo, J.-H., & Chen, C.-C. (1994). Singular value decomposition for texture analysis. In Proceedings of SPIE applications of digital image processing XVII (Vol. 2298, pp. 407–418), 26–29 July 1994, San Diego, USA.
Mees, A. I., Rapp, P. E., & Jennings, L. S. (1987). Singular value decomposition and embedding dimension. Physical Review A, 36(1), 340–346.
Mendoza, F., Valous, N. A., Allen, P., Kenny, T. A., Ward, P., & Sun, D.-W. (2009). Analysis and classification of commercial ham slice images using directional fractal dimension features. Meat Science, 81(2), 313–320.
Moxey, C. E., Sangwine, S. J., & Ell, T. A. (2003). Hypercomplex correlation techniques for vector images. IEEE Transactions on Signal Processing, 51(7), 1941–1953.
Ott, K.-H., Araníbar, N., Singh, B., & Stockton, G. W. (2003). Metabonomics classifies pathways affected by bioactive compounds: Artificial neural network classification of NMR spectra of plant extracts. Phytochemistry, 62(6), 971–985.
Pan, Q., Zhang, M.-G., Zhou, D.-L., Cheng, Y.-M., & Zhang, H.-C. (2003). Face recognition based on singular-value feature vectors. Optical Engineering, 42(8), 2368–2374.
Pei, S.-C., & Cheng, C.-M. (1997). A novel block truncation coding of color images using a quaternion-moment-preserving principle. IEEE Transactions on Communications, 45(5), 583–595.
Philips, R. D., Watson, L. T., Wynne, R. H., & Blinn, C. E. (2009). Feature reduction using a singular value decomposition for the iterative guided spectral class rejection hybrid classifier. ISPRS Journal of Photogrammetry and Remote Sensing, 64, 107–116.
Quevedo, R. A., & Aguilera, J. M. (2009). Computer vision and stereoscopy for estimating firmness in the salmon (Salmo salar) fillets. Food and Bioprocess Technology. doi:10.1007/s11947-008-0097-3.
Quevedo, R. A., Aguilera, J. M., & Pedreschi, F. (2009). Color of salmon fillets by computer vision and sensory panel. Food and Bioprocess Technology. doi:10.1007/s11947-008-0106-6.
Ramakrishnan, S., & Selvan, S. (2006). Image texture classification using exponential curve fitting of wavelet domain singular values. In Proceedings of the IET international conference on visual information engineering (pp. 505–510), 26–28 September 2006, Bangalore, India.
Rangarajan, A. (2001). Learning matrix space image representations. In Proceedings of the third international workshop on energy minimization methods in computer vision and pattern recognition (pp. 153–168), 3–5 September 2001, Sophia Antipolis, France.
Rohwer, R., Wynne-Jones, M., & Wysotzki, F. (1994). Neural networks. In D. Michie, D. J. Spiegelhalter, & C. Taylor (Eds.), Machine learning, neural and statistical classification (pp. 84–106). Hertfordshire: Ellis Horwood.
Sangwine, S. J. (1996). Fourier transforms of colour images using quaternion, or hypercomplex, numbers. Electronics Letters, 32(21), 1979–1980.
Sangwine, S. J., & Ell, T. A. (1999). Hypercomplex auto- and cross-correlation of color images. In Proceedings of the IEEE international conference on image processing (pp. 319–322), 24–28 October 1999, Kobe, Japan.
Sangwine, S. J., & Le Bihan, N. (2005). Quaternion toolbox for MATLAB. Software library, licensed under the GNU General Public License. Accessed 13.02.09.
Sangwine, S. J., & Horne, R. E. N. (1998). The present state and the future of colour image processing. In S. J. Sangwine & R. E. N. Horne (Eds.), The colour image processing handbook (pp. 1–3). London: Chapman and Hall.
Sangwine, S. J., & Le Bihan, N. (2006). Quaternion singular value decomposition based on bidiagonalization to a real or complex matrix using quaternion Householder transformations. Applied Mathematics and Computation, 182, 727–738.
Shnayderman, A., Gusev, A., & Eskicioglu, A. M. (2006). An SVD-based greyscale image quality measure for local and global assessment. IEEE Transactions on Image Processing, 15(2), 422–429.
Smolka, B., & Venetsanopoulos, A. N. (2006). Noise reduction and edge detection in color images. In R. Lukac & K. N. Plataniotis (Eds.), Color image processing: Methods and applications (pp. 75–102). Boca Raton, FL: CRC Press/Taylor and Francis.
Sola, J., & Sevilla, J. (1997). Importance of input data normalization for the application of neural networks to complex industrial problems. IEEE Transactions on Nuclear Science, 44(3), 1464–1468.
Subramanian, S., Gat, N., Sheffield, S., Barhen, J., & Toomarian, N. (1997). Methodology for hyperspectral image classification using novel neural network. In Proceedings of SPIE algorithms for multispectral and hyperspectral imagery III (Vol. 3071, pp. 128–137), 22–23 April 1997, Orlando, USA.
Tay, A. L. P., Zurada, J. M., Wong, L.-P., & Xu, J. (2007). The hierarchical fast learning artificial neural network (HieFLANN) – An autonomous platform for hierarchical neural network construction. IEEE Transactions on Neural Networks, 18(6), 1645–1657.
Teoh, E. J., Tan, K. C., & Xiang, C. (2006). Estimating the number of hidden neurons in a feedforward network using the singular value decomposition. IEEE Transactions on Neural Networks, 17(6), 1623–1629.
Todoriki, M., Nagayoshi, H., & Suzuki, A. (2005). Temporal fluctuation of singular values caused by dynamical noise in chaos. Physical Review E, 72, 036207.
Trémeau, A., Tominaga, S., & Plataniotis, K. N. (2008). Color in image and video processing: Most recent trends and future research directions. EURASIP Journal on Image and Video Processing, 581371. doi:10.1155/2008/581371.
Valous, N. A., Mendoza, F., Sun, D.-W., & Allen, P. (2009a). Colour calibration of a laboratory computer vision system for quality evaluation of pre-sliced hams. Meat Science, 81(1), 132–141.
Valous, N. A., Mendoza, F., Sun, D.-W., & Allen, P. (2009b). Texture appearance characterization of pre-sliced pork ham images using fractal metrics: Fourier analysis dimension and lacunarity. Food Research International, 42(3), 353–362.
Venugopal, V., & Baets, W. (1994). Neural networks and statistical techniques in marketing research: A conceptual comparison. Marketing Intelligence & Planning, 12(7), 30–38.
Villegas, M., & Paredes, R. (2007). Face recognition in color using complex and hypercomplex representations. In Proceedings of the third iberian conference on pattern recognition and image analysis, part I (pp. 217–224), 6–8 June 2007, Girona, Spain.
Ward, J. P. (1997). Quaternions and Cayley numbers: Algebra and applications. Dordrecht: Kluwer Academic [pp. 1–163].
Zhang, F. (1997). Quaternions and matrices of quaternions. Linear Algebra and Its Applications, 251, 21–57.
