
4.9 Adapting PLS to a specific measurement application

4.9.2 Calibration and optimization

Since the measurement equipment was modified after the data used in this study had been collected, the results obtained cannot be used to calibrate or optimize any model. The routines can, however, be reused once new data becomes available. The setup at the process plant will most likely be used for active acoustic measurements, and further investigations will then be required to determine the design of the prediction models.

Chapter 5

Discussion

Due to unforeseen circumstances, active acoustic spectroscopy could not be evaluated within this thesis project. Sufficient data was not available in time. The availability of data has proven to be a very important factor in determining which methods can be used. The measurements on which the reference material is based are expensive, which limits how complex the models can reasonably be.

In theory, an artificial neural network can predict the target quantities better than PLS, since it also accounts for non-linear relationships. Training a network of sufficient size, however, requires a much larger dataset, since the network tends to model noise unless its comparatively large number of weights and the loading matrix can be calibrated adequately. If large amounts of calibration data are available, neural networks could be used; but if data is that easy to obtain, the benefit of acoustic spectroscopy is probably small.

The results of this work apply only to the measurements with passive acoustic spectroscopy that were examined. They can serve as guidance when the methods in question are to be evaluated for the use of acoustic spectroscopy in similar situations, with respect to the design of the measurement system, the measurement application, and the availability of reference data.

It is possible that active acoustic spectroscopy would introduce more non-linear relationships between the acoustic signals and the target quantities, which an artificial neural network could then predict better than linear PLS. Active measurements introduce additional input variables in the form of the emitted sound. If this sound is kept constant, these variables are eliminated and the number of variables remains the same as for passive measurements. The problem with the number of calibration points remains, however, since the complexity of the system increases when acoustic vibrations are introduced into the pipe and fluid. It is therefore not reasonable to expect that fewer calibration points would be needed to reach the same prediction quality.

With good access to calibration data, the chance of obtaining a better prediction increases with the use of active acoustic spectroscopy, since frequency-dependent absorption and similar effects should become more significant in the analysis. This possibility must, however, be weighed against the potentially increased need for calibration data to build a model. An active acoustic measurement setup with very good access to calibration data could employ feedback schemes that vary the emitted sound and dynamically analyze frequency spectra, phase shifts and impulse responses. The complexity of such a setup, however, increases dramatically, and the potential gains in predictive ability are uncertain. It would therefore be interesting to perform measurements on a well-known system where calibration data is generated continuously, in order to investigate questions such as how the need for calibration data changes when measurements are made with active instead of passive acoustic spectroscopy.

The division of the dataset may have affected the results to some extent. The duplex method yields a split with good mathematical properties for creating validation sets. The hope was that the influence of any time correlation would thereby be reduced. When the technique is put into production, no such splits will be made; all available information will be used to build a model. Cross-validation may possibly be used to determine the number of latent variables. That scenario will resemble a block split, where the time correlation cannot be ignored. The reason a block split was not used in this work is that the goal was to compare the methods in general, not their robustness against "new" data. A comparison against a block split would nonetheless be interesting from an implementation point of view, to examine the influence of the split on prediction errors and residuals.
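A duplex-style split can be sketched as follows. This is a minimal Python illustration of the idea (the thesis's own split is implemented in the MATLAB function divideDATA.m): the mutually most distant pair of remaining samples is assigned alternately to the calibration and validation sets, so that both sets cover the data space evenly.

```python
import numpy as np

def duplex_split(X, n_val):
    """Duplex-style split: alternately hand the mutually most distant
    remaining pair of samples to calibration and validation. Returns
    index arrays (cal_idx, val_idx); the validation set may overshoot
    n_val by one sample since points are assigned in pairs."""
    remaining = list(range(len(X)))
    cal, val = [], []
    turn = 0  # 0 -> next pair to calibration, 1 -> to validation
    while remaining:
        if len(val) >= n_val:        # validation full: rest to calibration
            cal += remaining
            break
        if len(remaining) == 1:      # odd leftover
            cal.append(remaining.pop())
            break
        pts = X[remaining]
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        pair = {remaining[i], remaining[j]}
        (cal if turn == 0 else val).extend(pair)
        remaining = [k for k in remaining if k not in pair]
        turn ^= 1
    return np.array(cal), np.array(val)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))          # 20 samples, 5 variables (toy data)
cal_idx, val_idx = duplex_split(X, n_val=7)
print(len(cal_idx), len(val_idx))
```

Because extreme points are deliberately spread over both sets, the split has good coverage properties for validation, but, as noted above, it does not reflect the time-ordered block structure that a deployed model will face.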

The use of RMSEP and the division into calibration and validation sets can also be questioned, since they will not be used to design models for final deployment. Again, this way of working was chosen because the goal was a general evaluation of the methods, not an evaluation for one specific application. It is possible that artificial neural networks or the hybrid model have lower prediction errors than PLS for precisely the dataset size and target variable used here; this cannot be ruled out without a larger body of data. What the investigations show is only that the calibration set is not large enough to train a neural network to better predictive ability than a PLS model based on the same number of observations. The threshold at which artificial neural networks start giving better predictions could lie between the number of calibration points and the total number of observations in the dataset, so more observations would be needed to determine whether the threshold really lies there. This reasoning, however, leads to a stepwise growing need for data until the dataset is large enough to show that the network outperforms PLS, if that ever happens.

The investigations performed with temperature as the target variable were assumed to be sufficiently similar to those for the other target variables that the conclusions from the temperature models can be applied to the others as well. If this assumption holds, the threshold for the number of observations needed when using artificial neural networks can be raised considerably compared to the size of the calibration sets. As with other open questions, the similarity between predicting temperature and predicting other target variables could be investigated further given more data. Should future measurement applications open up the possibility of using artificial neural networks, there is a basis for quickly evaluating whether this is appropriate or not. Furthermore, the structures and functions developed here can be used to compare alternative methods for multivariate analysis of acoustic spectroscopy data with relatively little effort.
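For reference, RMSEP as used in these comparisons is simply the root mean square error of prediction over the validation set; a minimal sketch (in Python here, rather than the thesis's MATLAB):

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root Mean Square Error of Prediction over a validation set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmsep([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.155
```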

Based on the results obtained, PLS and similar methods appear best suited for applications like those examined. For example, "Partial Robust M-Regression" (PRM) [29] or "Wavelet Transform-Multi Resolution Spectra" (WT-MRS) [6] could be investigated with methods similar to those used in this work. The choice of analysis model can only influence the performance of the measurement setup as a whole to a limited degree. It is therefore important that the instrument as a whole is evaluated and improved according to which changes yield the largest performance gains, rather than focusing too heavily on the multivariate statistical analysis.

Chapter 6

Conclusions

PLS proved better suited than all other methods examined for predicting the target properties from acoustic spectroscopy data. Bayesian networks turned out to be unsuitable for the purpose and were therefore not examined further. For further development of the technique, it is therefore recommended that PLS or similar methods such as PRM [29] or WT-MRS [6] be used.

Bibliography

[1] Interview with Dr. Anders Björk, Göteborg, 26/11 2009.

[2] Hervé Abdi. Encyclopedia of Measurement and Statistics, chapter Partial Least Square Regression PLS-Regression. A Sage reference publication. SAGE, Thousand Oaks, California, USA, 2007. ISBN 978-1-412-91611-0.

[3] Yvonne Aitomäki. Towards a Measurement of Paper Pulp Quality: Ultrasonic Spectroscopy of Fiber Suspensions. Licentiate thesis, Luleå Tekniska Universitet, 2006.

[4] Irad Ben-Gal. Bayesian networks. Encyclopedia of Statistics in Quality and Reliability, 2007.

[5] Anders Björk. Chemometric and signal processing methods for real time monitoring and modeling: Applications in the pulp and paper industry. PhD thesis, KTH, Chemistry, 2007.

[6] Anders Björk and Lars-Göran Danielsson. Spectra of wavelet scale coefficients from process acoustic measurements as input for PLS modeling of pulp quality. Journal of Chemometrics, 16:521–528, 2002.

[7] Oscar Cardfeldt. Passive Acoustic Spectroscopy as a detection method of viscosity and other process parameters. Master's thesis, Chalmers tekniska högskola, 2009.

[8] Eugene Charniak. Bayesian networks without tears. AI Magazine, 12:50–63, 1991.

[9] Barry W. Connors. Medical Physiology, chapter 11 Physiology of Neurons, pages 280–294. Elsevier Saunders, 2005.

[10] S. de Jong. SIMPLS: an alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18:251–263, 1993.

[11] Frédéric Despagne and D. Luc Massart. Neural networks in multivariate calibration. The Analyst, 123:157R–178R, 1998.

[12] Erdal Dinç, Fatma Demirkaya, Dumitru Baleanu, Yücel Kadioglu, and Ekrem Kadioglu. New approach for simultaneous spectral analysis of a complex mixture using the fractional wavelet transform. Communications in Nonlinear Science and Numerical Simulation, 15(4):812–818, 2010. ISSN 1007-5704. doi: 10.1016/j.cnsns.2009.05.021. URL http://www.sciencedirect.com/science/article/B6X3D-4W8VW2X-D/2/6eff6999108a76b716997344d559557d.

[13] Michal Daszykowski et al. TOMCAT: A MATLAB toolbox for multivariate calibration techniques. Chemometrics and Intelligent Laboratory Systems, 85(2):269–277, 2007. ISSN 0169-7439. doi: 10.1016/j.chemolab.2006.03.006. URL http://www.sciencedirect.com/science/article/B6TFP-4JX9V38-1/2/00ff2babbcd6ff49edfd761b3a61a217.

[14] T. Hill and P. Lewicki. STATISTICS: Methods and Applications. StatSoft, 2007.

[15] Lydia E. Kavraki. Dimensionality reduction methods for molecular motion, May 2010. URL http://cnx.org/content/m11461/1.10.

[16] Andriy Kupyna, Elling-Olav Rukke, Reidar Barfod Schüller, Håkon Helland, and Tomas Isaksson. Partial least square regression on frequency shift applied to passive acoustic emission spectra. Journal of Chemometrics, 21(3-4):108–116, 2007.

[17] Andriy Kupyna, Elling-Olav Rukke, Reidar Barfod Schüller, and Tomas Isaksson. The effect of flow rate, accelerometer location and temperature in acoustic chemometrics on liquid flow: Spectral changes and robustness of the prediction models. Chemometrics and Intelligent Laboratory Systems, 93(1):87–97, 2008. ISSN 0169-7439. doi: 10.1016/j.chemolab.2008.04.007. URL http://www.sciencedirect.com/science/article/B6TFP-4SCD9WK-1/2/3e75d19a7c32a1ffe105b6bbd6a218f7.

[18] Thomas Liljenberg, Stefan Backa, Lennart Thegel, and Mats Åbom. Active acoustic spectroscopy. United States Patent No. 20040006409, January 2004. URL http://www.freepatentsonline.com/20040006409.html.

[19] Tobias Lindgren and Sven Hamp. Biomass monitoring using acoustic spectroscopy. IEEE Sensors Journal, 6:1068–1075, 2006.

[20] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter Appendix A, pages 285–315. NIR Publications, 2004.

[21] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter 2 Introduction, pages 5–9. NIR Publications, 2004.

[22] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter 5 Data compression by PCR and PLS, pages 27–38. NIR Publications, 2004.

[23] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter 12 Other methods to solve non-linearity problems, pages 137–154. NIR Publications, 2004.

[24] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter 13 Validation, pages 138–175. NIR Publications, 2004.

[25] Tormod Næs, Tomas Isaksson, Tom Fearn, and Tony Davies. Multivariate Calibration and Classification, chapter 6 Interpreting PCR and PLS solutions, pages 39–54. NIR Publications, 2004.

[26] John Noble and Timo Koski. Bayesian Networks: An Introduction. John Wiley & Sons Ltd, Chichester, UK, 2009.

[27] John Noble and Timo Koski. Interview with Dr. John Noble, Linköping, 7/12 2009.

[28] R. Schaefer and P. Hauptmann. Acoustic Impedance Measurement using PLSR based Analysis of Ultrasonic Signals. Ultrasonics Symposium, IEEE, 1:178–181, 2005.

[29] Sven Serneels, Christophe Croux, Peter Filzmoser, and Pierre J. Van Espen. Partial robust M-regression. Chemometrics and Intelligent Laboratory Systems, 79:55–64, 2005.

[30] James R. Thompson and Jacek Koronacki, editors. Statistical Process Control: The Deming Paradigm and Beyond, chapter Multivariate Approaches, pages 289–320. Chapman & Hall/CRC, second edition, 2002.

[31] Marc Valente, Riccardo Leardi, Guy Self, Giorgio Luciano, and Jean Pierre Pain. Multivariate calibration of mango firmness using Vis/NIR spectroscopy and acoustic impulse method. Journal of Food Engineering, 94(1):7–13, 2009. ISSN 0260-8774. doi: 10.1016/j.jfoodeng.2009.02.020. URL http://www.sciencedirect.com/science/article/B6T8J-4VRP1YT-2/2/a05bf2848ea340bdd32fa61eb518d533.

[32] Mattias Wahde. An Introduction to Adaptive Algorithms and Intelligent Machines, chapter 2 Architectures for adaptive systems, pages 4–13. Bibliotekets Reproservice, Chalmers University of Technology, 5th edition, 2006.

[33] Mattias Wahde. An Introduction to Adaptive Algorithms and Intelligent Machines, chapter 3 Methods for adaptation and learning, pages 18–94. Bibliotekets Reproservice, Chalmers University of Technology, 5th edition, 2006.

[34] Svante Wold, Henrik Antti, Fredrik Lindgren, and Jerker Öhman. Orthogonal signal correction of near-infrared spectra. Chemometrics and Intelligent Laboratory Systems, 44(1-2):175–185, 1998. ISSN 0169-7439. doi: 10.1016/S0169-7439(98)00109-9. URL http://www.sciencedirect.com/science/article/B6TFP-3VF9V1R-F/2/d952cea6aa6147e3b50790fff891c0e3.

[35] Svante Wold, Michael Sjöström, and Lennart Eriksson. PLS-regression: a basic tool of chemometrics. Chemometrics and Intelligent Laboratory Systems, 58(2):109–130, 2001. ISSN 0169-7439. doi: 10.1016/S0169-7439(01)00155-1. URL http://www.sciencedirect.com/science/article/B6TFP-44B4XN8-6/2/902049f55bd33375bb5ae90aac740e74.

[36] J. Yang and G. A. Dumont. Classification of acoustic emission signals via Hebbian feature extraction. Proceedings, IJCNN-91-Seattle: International Joint Conference on Neural Networks, pages 113–118, 1991.

Appendix A

Overview of functions

MLR.m
% ---
% function: DATA=MLR(DATA,exponents)
% ---
% Aim:
% Perform an MLR based on DATA.X to predict DATA.Y
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% exponents, exponents for x; exponents=[0 1 2] means [1 x x^2]

PCR.m
% ---
% function: DATA=PCR(DATA,h,exponents)
% ---
% Aim:
% Perform a Principal Component Regression based on DATA.X
% to predict DATA.Y
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% h, number of principal components
% exponents, exponents for x; exponents=[0 1 2] means [1 x x^2]

PLS.m
% ---
% function: DATA=PLS(DATA,manualCrossvalidation,maxNumberOfPCs)
% ---
% Aim:
% Perform a PLS based on DATA.X to predict DATA.Y
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% manualCrossvalidation, perform a manual crossvalidation
% (select the number of latent variables manually)
% maxNumberOfPCs, maximum number of latent variables to use
% in crossvalidation

addNoise.m
% ---
% Function: X=addNoise(X,percentage)
% ---
% Aim:
% Add normally distributed random numbers with
% std=(percentage of (max-min)) and mean=0 for each variable
% ---
% Input:
% X, data matrix
% percentage, percentage of (max-min) to use as std for noise
% ---
% Output:
% X, data matrix

autoscale.m
% ---
% Function: DATA=autoscale(DATA)
% ---
% Aim:
% Autoscales (mean=0, std=1) predictors and predictand
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m

createMonitoring.m
% ---
% Function: DATA=createMonitoring(DATA)
% ---
% Aim:
% Use a fourth of the calibration observations for
% monitoring of training
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m

divideDATA.m
% ---
% Function: DATA=divideDATA(DATA,fractionToValidation,setMonitoring)
% ---
% Aim:
% Divide the data into calibration-, prediction- and
% optionally a monitoring set
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% fractionToValidation, fraction of the material used for validation
% ---
% Output:
% DATA, information container as described in initializeDATA.m

hybrid.m
% ---
% function: [DATA NN]=hybrid(DATA,latentVariables,numberOfPCs,sizeOfNN)
% ---
% Aim:
% Prediction of DATA.Y from DATA.X based on a combination
% of PLS and ANN
% ---
% Input:
% DATA, as described in initializeDATA.m
% latentVariables, number of latent variables to be used
% numberOfPCs, number of principal components to be used
% sizeOfNN, number of nodes in hidden layer of ANN
% ---
% Output:
% DATA, as described in initializeDATA.m
% NN, neural net settings including MATLAB neural net object NN.net

initializeDATA.m
% ---
% Function: NN=initializeNN(DATA,sizeOfNN)
% ---
% Aim:
% This function initializes/resets information-container DATA
% to default values
% ---
% Output:
% DATA, information container as described in initializeDATA.m

initializeNN.m
% ---
% Function: NN=initializeNN(DATA,sizeOfNN)
% ---
% Aim:
% Initializes/resets neural net
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% sizeOfNN, size of hidden layer
% ---
% Output:
% NN, information container for neural network

nnPCA.m
% Perform a neural network prediction based on PCA

optimizeHybrid.m
% ---
% Function: [DATA,NN]=optimizeNN(DATA,NNsize,nPC)
% ---
% Aim:
% Optimize structure of hybrid model
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% NNsize, initial guess for size of hidden layer
% nPC, initial guess for number of principal components
% lv, initial guess for number of latent variables
% ---
% Output:
% DATA, information container as described in initializeDATA.m
% NN, information container for neural network as described in initializeNN

optimizeNN.m
% ---
% Function: [DATA,NN]=optimizeNN(DATA,NNsize,nPC)
% ---
% Aim:
% Optimize structure of NN
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% NNsize, initial guess for size of hidden layer
% nPC, initial guess for number of principal components
% ---
% Output:
% DATA, information container as described in initializeDATA.m
% NN, information container for neural network as described in initializeNN

optimizePCR.m
% ---
% Function: DATA=optimizePCR(DATA,exponents)
% ---
% Aim:
% Optimize structure of PCR model
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% exponents, exponents for x; exponents=[0 1 2] means [1 x x^2]
% ---
% Output:
% DATA, information container as described in initializeDATA.m

optimizePLS.m
% ---
% Function: [DATA ERRORR]=optimizePLS(DATA)
% ---
% Aim:
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m
% ERRORR, vector containing RMSEP values for all tested
% numbers of latent variables

optimizePRM.m

performPCA.m
% ---
% function: DATA=performPCA(DATA)
% ---
% Aim:
% Perform a PCA on the X-matrix
% ---
% Input:
% DATA, information container as described in initializeDATA.m

plotIteration.m
% ---
% function: plotIteration(newCoord,oldCoord)
% ---
% Aim:
% Visualize the latest optimization iteration through a plot
% ---
% Input:
% newCoord, the new coordinate to plot
% oldCoord, the old coordinate to plot

plotYhatY.m
% ---
% function: plotYhatY(DATA,NN,hybrid)
% ---
% Aim:
% Plot predicted y versus observed y for the different models
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% NN, information container for neural network as described in initializeNN
% hybrid, boolean determining whether to plot the hybrid model or not

plotYhatYPCR.m
% ---
% function: plotYhatYPCR(DATA,NN,hybrid)
% ---
% Aim:
% Plot predicted y versus observed y for PCR model
% ---
% Input:
% DATA, information container as described in initializeDATA.m

projectOntoPCs.m
% ---
% Function: DATA=projectOntoPCs(DATA)
% ---
% Aim:
% Projects the X-data onto principal components for PCA model
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m

projectOntoPCsHybrid.m
% ---
% Function: DATA=projectOntoPCsHybrid(DATA)
% ---
% Aim:
% Projects the X-data onto principal components for hybrid model
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m

projectOntoPCsPCR.m
% ---
% Function: DATA=projectOntoPCsPCR(DATA)
% ---
% Aim:
% Projects the X-data onto principal components for PCR model
% ---
% Input:
% DATA, information container as described in initializeDATA.m
% ---
% Output:
% DATA, information container as described in initializeDATA.m

setDATA.m
% ---
% function: DATA=setDATA(source,factor,whichOne,noise)
% ---
% Aim:
% Load data from file according to inputs
% ---
% Input:
% source, either 'anders' or 'EKA' depending on which data to load
% factor, if specified, every factor-th observation is used;
% if not specified for EKA, manual measurements are used
% whichOne, specifies lower boundary of particle-size interval to read
% noise, if true, noise will be added to X-matrix, see addNoise.m
% ---
% Output:
% DATA, information is stored in the container DATA, see initializeDATA.m
% for details
