A new issue of this journal has just been published. To see abstracts of the papers it contains (with links through to the full papers) click here:
Selected papers from the latest issue:
Hierarchical Multivariate Regression-based Sensitivity Analysis Reveals Complex Parameter Interaction Patterns in Dynamic Models
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems
Kristin Tøndel, Jon Olav Vik, Harald Martens, Ulf G. Indahl, Nicolas Smith, Stig W. Omholt
Dynamic models of biological systems often possess complex and multivariate mappings between input parameters and output state variables, posing challenges for comprehensive sensitivity analysis across the biologically relevant parameter space. In particular, more efficient and robust ways to understand how the sensitivity to each parameter depends on the values of the other parameters are sorely needed. We report a new methodology for global sensitivity analysis based on Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) approximations (metamodelling) of the input–output mappings of dynamic models, which we expect to be generic, efficient and robust, even for systems with highly nonlinear input–output relationships. The two-step HC-PLSR metamodelling automatically separates the observations (here corresponding to different combinations of input parameter values) into groups based on the dynamic model behaviour, then analyses each group separately with Partial Least Squares Regression (PLSR). This produces one global regression model comprising all observations, as well as regional regression models within each group, where the regression coefficients can be used as sensitivity measures. A more accurate description of complex interactions between inputs to the dynamic model can thereby be revealed through analysis of how a certain level of one input parameter affects the model sensitivity to the other inputs. We illustrate the usefulness of the HC-PLSR approach on a dynamic model of a mouse heart muscle cell and demonstrate how it reveals interaction patterns of probable biological significance that are not easily identifiable by a global regression-based sensitivity analysis alone. When applied to this complex, high-dimensional dynamic model, the two-step HC-PLSR analysis identified several interactions between input parameters that could not be detected in the single-step global analysis. Hence, our approach has the potential to reveal new biological insight through the identification of complex parameter interaction patterns. The complexity of the HC-PLSR metamodel can be adjusted to the complexity of the input–output mapping of the analysed dynamic model by adjusting the number of regional regression models included. This facilitates sensitivity analysis of dynamic models of varying complexity.
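The two-step idea (cluster the sampled parameter combinations by the behaviour they produce, then fit one global and several regional PLSR metamodels whose regression coefficients act as sensitivity measures) can be sketched in a few lines. Below is a minimal illustration only, with synthetic data, scikit-learn, and k-means on the global PLSR scores as a stand-in for the paper's clustering step; it is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 6))                  # sampled input-parameter combinations
Y = np.column_stack([np.sin(3 * X[:, 0]) * X[:, 1],    # stand-in for dynamic-model outputs
                     X[:, 2] ** 2 + 0.1 * X[:, 3]])

global_pls = PLSRegression(n_components=4).fit(X, Y)   # step 1: global metamodel
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(global_pls.transform(X))

for k in range(3):                                     # step 2: regional metamodels per cluster
    mask = labels == k
    regional = PLSRegression(n_components=4).fit(X[mask], Y[mask])
    print(f"cluster {k}: regression coefficients (sensitivities)\n", regional.coef_.round(2))
```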
Highlights
► The behaviour of a nonlinear dynamic model of the mouse heart cell was studied.
► Hierarchical Cluster-based PLSR was used for multivariate sensitivity analysis.
► The observations were clustered, and local PLSR models were calibrated.
► The global and local regression coefficients were used as sensitivity measures.
► Complex interaction patterns between the input parameters were detected.
Bootstrap based Confidence Limits in Principal Component Analysis – a case study
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems
Hamid Babamoradi, Frans van den Berg, Åsmund Rinnan
Principal Component Analysis (PCA) is widely used as a tool in (exploratory) data investigations across many research areas, such as analytical chemistry, food and pharmaceutical research, and Multivariate Statistical Process Control. Despite its popularity, few results have been reported so far on how to calculate reliable confidence limits for PCA estimates. Yet, like all other data analysis tasks, results of PCA are not complete without reasonable estimates of the parameter uncertainties, especially in the case of predictive model objectives. In this paper we present a case study on how to calculate confidence limits based on bootstrap re-sampling. Two NIR datasets are used to build bootstrap confidence limits. The first dataset shows the effect of outliers on bootstrap confidence limits, while the second shows the bootstrap confidence limits when the data have a binomial distribution. The different steps and choices that have to be made for the algorithm to perform correctly are presented. The bootstrap-based confidence limits are compared with the corresponding asymptotic confidence limits. We conclude that the confidence limits based on the bootstrap method give more meaningful answers and are to be preferred over their asymptotic counterparts.
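For readers who want to try the approach, here is a minimal sketch of one reasonable bootstrap scheme: resample observations, refit PCA, align each set of loadings to the reference solution with an orthogonal Procrustes rotation, and take percentile limits. The data, the number of resamples and the alignment choice are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 20)) @ rng.normal(size=(20, 20))   # stand-in for an NIR data matrix
X -= X.mean(axis=0)

n_comp, n_boot = 2, 500
ref = PCA(n_comp).fit(X).components_.T                      # reference loadings (20 x 2)
boot_loadings = np.empty((n_boot, X.shape[1], n_comp))
for b in range(n_boot):
    Xb = X[rng.integers(0, len(X), len(X))]                 # resample observations with replacement
    P = PCA(n_comp).fit(Xb - Xb.mean(axis=0)).components_.T
    R, _ = orthogonal_procrustes(P, ref)                    # fix sign/rotation ambiguity
    boot_loadings[b] = P @ R

lower, upper = np.percentile(boot_loadings, [2.5, 97.5], axis=0)  # 95% limits per loading element
```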
Highlights
► Bootstrap confidence limits are superior to asymptotic ones since they are data-based.
► Bootstrapping is useful for outlier detection.
► Bootstrapping algorithm in PCA is illustrated step-by-step.
► Modification for Orthogonal Procrustes rotation step is presented.
► Bootstrapping can be useful in MSPC.
Calibration transfer in model based analysis of second order consecutive reactions
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems
Maryam Khoshkam, Mohsen Kompany-Zareh
In the present work, UV–VIS spectroscopic data from a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), proceeding through one intermediate, were studied. Since o-ABA is not an absorbing species in the visible region of interest, the closure rank deficiency problem did not exist. Analysis of simulated and experimental data shows that, in the presence of variations between the spectra of the pure species in different data matrices, applying model-based methods to augmented datasets leads to inaccurate results. Applying a calibration transfer method as an additional step in the hard-modelling procedure improves the precision of the results, and accurate estimates of the reaction rate constants are obtained. The effects of different types of spectral variation, including intensity changes, shifts and broadening, were tested on simulated data. The proposed method is compared to the Local Spectra Mode of Analysis (LSMA) proposed by Puxty et al. A comparison of the results shows that the proposed method is more efficient than LSMA and leads to less uncertainty in the estimated rate constants and a lower percent error in the relative residuals.
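The hard-modelling core of such an analysis (concentration profiles from the rate law, pure spectra by projection, rate constants by nonlinear least squares) can be sketched as below. This toy example uses a simulated A + B -> I -> P mechanism and noiseless synthetic spectra; it shows the baseline hard-model fit only, not the calibration transfer step that is the paper's contribution.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0, 60, 120)

def profiles(k1, k2, a0=1.0, b0=1.2):
    def rhs(_, c):
        a, b, i, p = c
        return [-k1 * a * b, -k1 * a * b, k1 * a * b - k2 * i, k2 * i]
    sol = solve_ivp(rhs, (t[0], t[-1]), [a0, b0, 0.0, 0.0], t_eval=t)
    return sol.y.T                                        # (n_times x 4 species)

true_S = np.abs(np.random.default_rng(2).normal(size=(4, 50)))   # stand-in pure spectra
D = profiles(0.15, 0.05) @ true_S                         # simulated data matrix (time x wavelength)

def residuals(k):
    C = profiles(*k)
    S = np.linalg.pinv(C) @ D                             # hard-model step: spectra by projection
    return (D - C @ S).ravel()

fit = least_squares(residuals, x0=[0.05, 0.02], bounds=(1e-6, 10.0))
print("estimated rate constants:", fit.x)                 # should approach (0.15, 0.05)
```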
Highlights
► Calibration transfer method is used in a second order consecutive reaction.
► The data were analyzed based on hard modelling methods.
► Calibration transfer was used as an extra step inside the procedure.
► The considered system is very sensitive to intensity changes.
► Calibration transfer in hard modelling methods improves the results.
Fault diagnosis of Tennessee Eastman process with multi-scale PCA and ANFIS
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems
C.K. Lau, Kaushik Ghosh, M.A. Hussain, C.R. Che Hassan
Fault diagnosis in industrial processes is a challenging task that demands effective and timely decision-making procedures under the extreme conditions of noisy measurements, highly interrelated data, a large number of inputs and complex interactions between symptoms and faults. The purpose of this study is to develop an online fault diagnosis framework for a dynamical process incorporating multi-scale principal component analysis (MSPCA) for feature extraction and an adaptive neuro-fuzzy inference system (ANFIS) for learning the fault–symptom correlation from historical process data. The features extracted from the raw measured data sets using MSPCA are partitioned into score space and residual space, which are then fed into multiple ANFIS classifiers in order to diagnose different faults. This data-driven method extracts the fault–symptom correlation from the data, eliminating the need for a process model. The use of multiple ANFIS classifiers, each dedicated to one specific fault, reduces the computational load and provides an expandable framework for incorporating new faults identified in the process. In addition, the use of MSPCA enables the detection of small changes occurring in the measured variables, and the proficiency of the system is improved by monitoring the subspace that is most sensitive to the faults. The proposed MSPCA-ANFIS framework is tested on the Tennessee Eastman (TE) process, and the results for the selected fault cases, particularly those which exhibit highly non-linear characteristics, show improvement over the conventional multivariate PCA as well as the conventional PCA-ANFIS based methods.
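The MSPCA feature-extraction step can be sketched roughly as follows: wavelet-decompose the measurements, fit one PCA per scale, and keep score-space and residual-space (SPE) features. ANFIS has no standard Python implementation, so no classifier is shown; the wavelet, depth and data below are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(1024, 12))                     # stand-in for TE process measurements

features = []
coeffs = pywt.wavedec(X, 'db4', level=3, axis=0)    # multi-scale decomposition along time
for c in coeffs:                                    # one PCA per scale, as in MSPCA
    pca = PCA(n_components=3).fit(c)
    scores = pca.transform(c)                       # score-space features
    spe = ((c - pca.inverse_transform(scores)) ** 2).sum(axis=1)   # residual-space features (SPE)
    features.append((scores, spe))
```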
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
J.M. Amigo, A. Gredilla, S. Fdez-Ortiz de Vallejuelo, A. de Diego, J.M. Madariaga
Understanding metal behaviour in estuaries is a difficult task. It involves highly dynamic systems continuously subjected to fast changes in environmental conditions governed by alternating physico-chemical parameters and human activity. In order to identify the most important environmental factors that determine the behaviour of trace elements in a polluted estuary, water (surface and deep, at low and high tides) and sediment samples were collected along the Nerbioi–Ibaizabal River estuary (Basque Country, Spain) every three months for six years. The environmental dataset consisted of the concentrations of trace elements (Al, As, Cd, Co, Cu, Cr, Fe, Mg, Mn, Ni, Pb, Sn, V and Zn in sediment samples and eight of them in water samples) and physico-chemical properties (temperature, redox potential, dissolved oxygen percentage, conductivity and pH for water samples, plus the content of carbonates, total organic carbon (TOC), humic acids (HA) and fulvic acids (FA) in sediment samples). The study of these datasets with canonical correlation analysis revealed the existence of a strong link between different physico-chemical properties and the concentration of certain metals in the estuary. In sediments, for example, strong correlations were found between the carbonate content and some of the studied trace elements. In the water, on the other hand, clear relationships were found between the salinity and the pollutants.
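A minimal sketch of the kind of canonical correlation analysis described, using invented toy blocks rather than the Nerbioi–Ibaizabal data: one block of physico-chemical variables and one block of metal concentrations, linked through their canonical variates.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
latent = rng.normal(size=(96, 2))                       # shared environmental driver (e.g. salinity)
physchem = latent @ rng.normal(size=(2, 5)) + 0.3 * rng.normal(size=(96, 5))
metals = latent @ rng.normal(size=(2, 8)) + 0.3 * rng.normal(size=(96, 8))

cca = CCA(n_components=2).fit(physchem, metals)
U, V = cca.transform(physchem, metals)                  # canonical variates for each block
print("canonical correlations:", [np.corrcoef(U[:, i], V[:, i])[0, 1].round(2) for i in range(2)])
```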
Highlights
► Deep understanding of the metal behaviour in estuaries.
► CCA for combining the chemical information with environmental variables.
► Monitoring of two main matrices: water and sediments.
► Existence of a linkage between environmental properties and the metal content.
Chemical processes monitoring based on weighted principal component analysis and its application
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
Qingchao Jiang, Xuefeng Yan
Conventional principal component analysis (PCA)-based methods employ the first several principal components (PCs), which capture the largest share of variance in normal observations, for process monitoring. Nevertheless, fault information has no definite mapping to any particular PC, and useful information may be submerged under the retained PCs. A new version of weighted PCA (WPCA) for process monitoring is proposed to deal with this situation and to reduce the missed detection rate of the T² statistic. The main idea of WPCA is to build a conventional PCA model, use the change rate of the T² statistic along every PC to capture the most useful information in the process, and set different weights for the PCs to highlight the useful information during online monitoring. Case studies on the Tennessee Eastman process demonstrate the effectiveness of the proposed scheme, and the monitoring results are compared with the conventional PCA method.
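A simplified reading of the weighted-T² idea can be sketched as follows: compare per-component contributions t_i²/λ_i with their normal-operation level and give larger weights to the components that change most. The weighting rule below is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X_normal = rng.normal(size=(500, 10))                  # normal operating data
pca = PCA(n_components=6).fit(X_normal)
lam = pca.explained_variance_                          # eigenvalues of retained PCs

def t2_per_pc(x):
    t = pca.transform(x.reshape(1, -1)).ravel()
    return t ** 2 / lam                                # contribution of each PC to T2

baseline = np.mean([t2_per_pc(x) for x in X_normal], axis=0)

x_new = X_normal[0] + np.array([0, 0, 3, 0, 0, 0, 0, 0, 0, 0])   # fault along one direction
change = t2_per_pc(x_new) / baseline                   # change rate along every PC
w = change / change.sum()                              # assumed weighting: emphasise changed PCs
print("weighted T2:", float(w @ t2_per_pc(x_new)), "plain T2:", float(t2_per_pc(x_new).sum()))
```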
Highlights
► Weighted PCA is proposed to highlight the useful information for process monitoring.
► The situation of useful information being submerged is analyzed.
► The change of the T² statistic along each principal component is examined.
► Fault information is taken into consideration in a timely manner during online monitoring.
► The monitoring result of the T² statistic for both fault detection and diagnosis is improved.
A tutorial on the Lasso approach to sparse modeling
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
Morten Arendt Rasmussen, Rasmus Bro
In applied research, data are often collected from sources with a high-dimensional multivariate output. Analysis of such data typically involves the extraction and characterization of underlying patterns, often with the aim of finding a small subset of significant variables or features. Variable and feature selection is well established in the area of regression, whereas for other types of models this seems more difficult. Penalization with the L1 norm provides an interesting avenue for such problems, as it produces a sparse solution and hence embeds variable selection. In this paper a brief introduction to the mathematical properties of the L1 norm used as a penalty is given. Examples of models extended with L1 norm penalties/constraints are presented. The examples include PCA modeling with sparse loadings, which enhances interpretability of single components. Sparse inverse covariance matrix estimation is used to unravel which variables affect each other, and a modified PCA to model data with (piecewise) constant responses, e.g. in process monitoring, is shown. All examples are demonstrated on real or synthetic data. The results indicate that sparse solutions, when appropriate, can enhance model interpretability.
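A small illustration in the spirit of the tutorial, on toy data with library defaults assumed: an L1-penalised regression that drives most coefficients exactly to zero, plus a sparse-loading PCA.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 30))
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=100)   # only two truly relevant variables

lasso = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.flatnonzero(lasso.coef_))   # variable selection for free

spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
print("non-zeros per sparse loading:", (spca.components_ != 0).sum(axis=1))
```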
Combining infrared spectroscopy with chemometric analysis for the characterization of proteinaceous binders in medieval paints
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
Catarina Miguel, João A. Lopes, Mark Clarke, Maria João Melo
This work describes the application of infrared spectroscopy combined with chemometric tools for the characterization of the proteinaceous binding media used in medieval paints. Historically accurate reconstructions of the most common binders and binder mixtures used in medieval illuminations (egg white, egg yolk, parchment glue and casein glue) were made. Red and blue colors based on vermilion and lapis lazuli were selected as reference paint samples; these two colors were very widely used in medieval illuminations. Different chemometric methods (supervised and unsupervised), applied to infrared spectral data, were evaluated in terms of their accuracy in the characterization and quantification of complex binding media formulations of the red and blue paints. Principal component analysis and hierarchical cluster analysis revealed that the CH stretching absorption region (3000–2840 cm−1) and the ester-amide region (1760–1495 cm−1) were the best wavenumber regions for discriminating the different proteinaceous binders. A regression analysis using classical least squares and partial least squares regression allowed the quantification of the binder composition in the red and blue paint reconstructions. Restriction to the ester-amide absorption region gave the best spectral reconstruction error and the lowest binder composition reconstruction error.
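Region-restricted PLS quantification of this kind can be sketched as below, with synthetic spectra and an assumed wavenumber grid and mixture design; only the ester-amide window (1760–1495 cm−1) enters the regression.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

wn = np.arange(4000, 600, -4.0)                         # assumed wavenumber axis, cm-1
rng = np.random.default_rng(7)
pure = np.abs(rng.normal(size=(3, wn.size)))            # stand-in spectra of three binders
comp = rng.dirichlet(np.ones(3), size=40)               # binder fractions in 40 mock paints
spectra = comp @ pure + 0.01 * rng.normal(size=(40, wn.size))

window = (wn <= 1760) & (wn >= 1495)                    # ester-amide region used in the paper
pls = PLSRegression(n_components=3).fit(spectra[:, window], comp)
print("R2 on the restricted region:", round(pls.score(spectra[:, window], comp), 3))
```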
Highlights
► FTIRS & chemometrics characterized proteinaceous binders in medieval paints.
► Unsupervised chemometric methods discriminated proteinaceous binders.
► PLSR applied to the FTIR ester-amide region gave the best binder predictions.
CORAL: Models of toxicity of binary mixtures
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
Alla P. Toropova, Andrey A. Toropov, Emilio Benfenati, Giuseppina Gini, Danuta Leszczynska, Jerzy Leszczynski
Quantitative structure–activity relationships (QSAR) for the toxicity of binary mixtures towards Photobacterium phosphoreum, expressed as pEC50 (i.e. log[1/EC50], the logarithm of the inverse of the effective concentration required to bring about a 50% decrease in light emission), have been developed. The simplified molecular input-line entry system (SMILES) was used as the representation of the molecular structure of the components of the binary mixtures. The SMILES-based optimal descriptors were calculated using the Monte Carlo technique. One-variable correlations between the optimal descriptors and the toxicity of the binary mixtures were analyzed to develop a predictive model. Six random splits of the data into sub-training, calibration, and test sets were tested. A satisfactory statistical quality of the model was achieved for each of these splits.
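A toy version of the CORAL idea, with invented SMILES strings and activities (the real software does considerably more): each SMILES symbol receives a correlation weight, the descriptor is the sum of those weights, and a crude Monte Carlo search tunes the weights to maximise the correlation with the endpoint.

```python
import numpy as np

smiles = ["CCO", "CCCO", "c1ccccc1O", "CC(=O)O", "CCN", "CCCCN"]      # hypothetical training set
pec50 = np.array([1.2, 1.6, 2.9, 1.1, 1.4, 2.0])                       # hypothetical endpoints
symbols = sorted(set("".join(smiles)))
rng = np.random.default_rng(8)
w = {s: 0.0 for s in symbols}

def dcw(s):                                            # descriptor = sum of symbol weights
    return sum(w[ch] for ch in s)

best = -np.inf
for _ in range(3000):                                  # crude Monte Carlo optimisation
    sym = rng.choice(symbols)
    old, w[sym] = w[sym], w[sym] + rng.normal(scale=0.1)
    r = abs(np.corrcoef([dcw(s) for s in smiles], pec50)[0, 1])
    if r > best:
        best = r                                       # accept the move
    else:
        w[sym] = old                                   # reject the move
print("final |correlation| between DCW and pEC50:", round(best, 3))
```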
Highlights
► The quantitative model for toxicity of binary mixtures is suggested.
► The molecular structure is represented by SMILES.
► The model of toxicity is built up by the CORAL software available on the Internet.
Multivariate extension of classical equations for the study of electrochemically irreversible systems
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems, Volume 119
Mojtaba Kooshki, José Manuel Díaz-Cruz, Hamid Abdollahi, Cristina Ariño, Miquel Esteban
A new approach is presented to apply the classical equations of direct current and normal pulse polarography of irreversible processes to the multivariate data generated in voltammetric titrations of mixtures of complexes which are irreversibly reduced. For this purpose, the well-known complexes of Pb(II) and Cd(II) with nitrilotriacetate (NTA) are considered. The proposed methodology is based on least-squares fitting and can be applied at a fixed time for different metal-to-ligand ratios, or in a time-dependent way, which involves the fitting of current vs. potential vs. time matrices. In both cases, consistent and realistic results are obtained, suggesting the potential usefulness of this approach for more involved systems.
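The matrix-wise fitting itself can be sketched generically: all elements of a current vs. potential vs. time matrix enter a single nonlinear least-squares problem. The wave shape below is a placeholder sigmoid with an arbitrary time decay, not the classical irreversible-process equations used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(-0.9, -0.3, 60)                        # potentials (V), assumed grid
t = np.linspace(0, 10, 15)                             # time points, assumed

def model(theta, E, t):
    i_lim, e_half, slope, decay = theta
    wave = i_lim / (1 + np.exp((E - e_half) / slope))  # placeholder reduction wave
    return np.outer(np.exp(-decay * t), wave)          # (time x potential) current matrix

true = (2.0, -0.6, 0.03, 0.08)
I_obs = model(true, E, t) + 0.01 * np.random.default_rng(9).normal(size=(t.size, E.size))

fit = least_squares(lambda th: (model(th, E, t) - I_obs).ravel(),
                    x0=[1.0, -0.5, 0.05, 0.05])
print("fitted parameters:", fit.x.round(3))
```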
Highlights
► Multivariate extension of classical equations for irreversible voltammograms.
► It allows the determination of charge transfer coefficients.
► It allows the fitting of current versus potential and time matrices.
Deflation strategies for multi-block principal component analysis revisited
07 November 2012, 12:39:46
Publication year: 2012
Source: Chemometrics and Intelligent Laboratory Systems
Sahar Hassani, Mohamed Hanafi, El Mostafa Qannari, Achim Kohler
Within the framework of multi-block data sets, multi-block principal component analysis has been successfully used as a tool to investigate the structure of spectroscopy, -omics and sensory data. The determination of the successive principal components involves a deflation procedure which can be performed according to several strategies. We discuss the respective merits of these strategies and show orthogonality properties related to the vectors of loadings or to the scores. Reconstruction formulas for the data blocks are established for each deflation strategy. Interpretational aspects of the different deflation strategies are discussed and illustrated on a real and a simulated data set.
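One of the deflation strategies (deflating every block with the common global score) can be sketched compactly; block sizes and data below are invented, and the paper discusses several alternatives.

```python
import numpy as np

rng = np.random.default_rng(10)
blocks = [rng.normal(size=(50, 8)), rng.normal(size=(50, 5))]   # e.g. spectral and sensory blocks
blocks = [B - B.mean(axis=0) for B in blocks]

n_comp = 3
scores = []
for _ in range(n_comp):
    X = np.hstack(blocks)
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    tvec = u[:, 0] * s[0]                              # global score of the current component
    scores.append(tvec)
    blocks = [B - np.outer(tvec, tvec @ B) / (tvec @ tvec) for B in blocks]   # deflate each block

T = np.column_stack(scores)
print("scores are mutually orthogonal:", np.allclose(T.T @ T, np.diag(np.diag(T.T @ T))))
```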