A decomposition of pairwise continuity via ideals
Directory of Open Access Journals (Sweden)
Mahes Wari
2016-02-01
In this paper, we introduce and study the notions of (i, j)-regular-ℐ-closed sets, (i, j)-Aℐ-sets, (i, j)-ℐ-locally closed sets, p-Aℐ-continuous functions and p-ℐ-LC-continuous functions in ideal bitopological spaces and investigate some of their properties. Also, a new decomposition of pairwise continuity is obtained using these sets.
Pairwise harmonics for shape analysis
Zheng, Youyi
2013-07-01
This paper introduces a simple yet effective shape analysis mechanism for geometry processing. Unlike traditional shape analysis techniques which compute descriptors per surface point up to certain neighborhoods, we introduce a shape analysis framework in which the descriptors are based on pairs of surface points. Such a pairwise analysis approach leads to a new class of shape descriptors that are more global, discriminative, and can effectively capture the variations in the underlying geometry. Specifically, we introduce new shape descriptors based on the isocurves of harmonic functions whose global maximum and minimum occur at the point pair. We show that these shape descriptors can infer shape structures and consistently lead to simpler and more efficient algorithms than the state-of-the-art methods for three applications: intrinsic reflectional symmetry axis computation, matching shape extremities, and simultaneous surface segmentation and skeletonization. © 2012 IEEE.
Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang
2016-01-01
In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data have been accumulated. Instead of investigating them independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of these ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
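The pairwise interaction tensor (PIT) idea can be illustrated with a minimal synthetic sketch: a three-mode tensor whose entries are sums of pairwise inner products of latent factors, fitted to the observed entries by plain gradient descent and used to predict held-out entries. This is not the ChIP-PIT implementation (which uses a coordinate-descent solver); the dimensions, learning rate, and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nc, nf, ng, d = 6, 5, 8, 3   # cell types, TFs, genes, latent dimension (all invented)

def pit(A, B, C):
    """Pairwise interaction tensor: X[i,j,k] = <a_i,b_j> + <a_i,c_k> + <b_j,c_k>."""
    return (np.einsum('id,jd->ij', A, B)[:, :, None]
            + np.einsum('id,kd->ik', A, C)[:, None, :]
            + np.einsum('jd,kd->jk', B, C)[None, :, :])

# Ground-truth tensor; 30% of entries are held out as "unperformed experiments".
A, B, C = (rng.normal(size=s) for s in [(nc, d), (nf, d), (ng, d)])
X = pit(A, B, C)
mask = rng.random(X.shape) < 0.7

# Fit latent factors to the observed entries by gradient descent.
Ah, Bh, Ch = (0.1 * rng.normal(size=s) for s in [(nc, d), (nf, d), (ng, d)])
n_obs, lr = mask.sum(), 0.5

mse_init = ((pit(Ah, Bh, Ch) - X)[mask] ** 2).mean()
for _ in range(3000):
    R = (pit(Ah, Bh, Ch) - X) * mask / n_obs        # residual on observed entries
    gA = np.einsum('ijk,jd->id', R, Bh) + np.einsum('ijk,kd->id', R, Ch)
    gB = np.einsum('ijk,id->jd', R, Ah) + np.einsum('ijk,kd->jd', R, Ch)
    gC = np.einsum('ijk,id->kd', R, Ah) + np.einsum('ijk,jd->kd', R, Bh)
    Ah, Bh, Ch = Ah - lr * gA, Bh - lr * gB, Ch - lr * gC
mse_fit = ((pit(Ah, Bh, Ch) - X)[mask] ** 2).mean()

print(f"observed-entry MSE: {mse_init:.3f} -> {mse_fit:.4f}")
```

After fitting, `pit(Ah, Bh, Ch)[~mask]` gives predictions for the held-out entries, which is the tensor-completion reading of "predicting unperformed experiments".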
Pairwise conjoint analysis of activity engagement choice
Wang, Donggen; Oppewal, H.; Timmermans, H.J.P.
2000-01-01
Information overload is a well-known problem of conjoint choice models when respondents have to evaluate a large number of attributes and/or attribute levels. In this paper we develop an alternative conjoint modelling approach, called pairwise conjoint analysis. It differs from conventional conjoint
BETASCAN: probable beta-amyloids identified by pairwise probabilistic analysis.
Directory of Open Access Journals (Sweden)
Allen W Bryan
2009-03-01
Amyloids and prion proteins are clinically and biologically important beta-structures, whose supersecondary structures are difficult to determine by standard experimental or computational means. In addition, significant conformational heterogeneity is known or suspected to exist in many amyloid fibrils. Recent work has indicated the utility of pairwise probabilistic statistics in beta-structure prediction. We develop here a new strategy for beta-structure prediction, emphasizing the determination of beta-strands and pairs of beta-strands as fundamental units of beta-structure. Our program, BETASCAN, calculates likelihood scores for potential beta-strands and strand-pairs based on correlations observed in parallel beta-sheets. The program then determines the strands and pairs with the greatest local likelihood for all of the sequence's potential beta-structures. BETASCAN suggests multiple alternate folding patterns and assigns relative a priori probabilities based solely on amino acid sequence, probability tables, and pre-chosen parameters. The algorithm compares favorably with the results of previous algorithms (BETAPRO, PASTA, SALSA, TANGO, and Zyggregator) in beta-structure prediction and amyloid propensity prediction. Accurate prediction is demonstrated for experimentally determined amyloid beta-structures, for a set of known beta-aggregates, and for the parallel beta-strands of beta-helices and amyloid-like globular proteins. BETASCAN is able both to detect beta-strands with higher sensitivity and to detect the edges of beta-strands in a richly beta-like sequence. For two proteins (Abeta and Het-s), there exist multiple sets of experimental data implying contradictory structures; BETASCAN is able to detect each competing structure as a potential structure variant. The ability to correlate multiple alternate beta-structures to experiment opens the possibility of computational investigation of prion strains and structural heterogeneity of amyloid
Directory of Open Access Journals (Sweden)
Panthip Tue-ngeun
2013-01-01
Computational approaches have been used to evaluate and define important residues for protein-protein interactions, especially antigen-antibody complexes. In our previous study, pairwise decomposition of residue interaction energies of single-chain Fv with HIV-1 p17 epitope variants indicated the key specific residues in the complementarity determining regions (CDRs) of scFv anti-p17. In the present investigation, in order to determine whether a specific side-chain group of a residue in the CDRs plays an important role in bioactivity, computational alanine scanning has been applied. Molecular dynamics simulations were performed on several complexes of the original scFv anti-p17 and scFv anti-p17 mutants with HIV-1 p17 epitope variants, with production runs of up to 10 ns. Combining the pairwise decomposition of residue interaction energies with the alanine scanning calculations, a point mutation was initially selected at position MET100 to improve the residue binding affinity. The calculated docking interaction energy for a single mutation from methionine to either arginine or glycine showed improved binding affinity relative to the wild type, contributed by a favourable negative electrostatic interaction energy. The theoretical calculations agreed well with the peptide ELISA results.
Analysis of Geographic and Pairwise Distances among Chinese Cashmere Goat Populations
Liu, Jian-Bin; Wang, Fan; Lang, Xia; Zha, Xi; Sun, Xiao-Ping; Yue, Yao-Jing; Feng, Rui-Lin; Yang, Bo-Hui; Guo, Jian
2013-01-01
This study investigated the geographic and pairwise distances of nine Chinese local Cashmere goat populations through the analysis of 20 microsatellite DNA markers. Fluorescence PCR was used to identify the markers, which were selected based on their significance as identified by the Food and Agriculture Organization of the United Nations (FAO) and the International Society for Animal Genetics (ISAG). In total, 206 alleles were detected; the average allele number was 10.30; the polymorphism i...
DEFF Research Database (Denmark)
Boetker, Johan P.; Koradia, Vishal; Rades, Thomas
2012-01-01
was subjected to quench cooling thereby creating an amorphous form of the drug from both starting materials. The milled and quench cooled samples were, together with the crystalline starting materials, analyzed with X-ray powder diffraction (XRPD), Raman spectroscopy and atomic pair-wise distribution function...... (PDF) analysis of the XRPD pattern. When compared to XRPD and Raman spectroscopy, the PDF analysis was superior in displaying the difference between the amorphous samples prepared by milling and quench cooling approaches of the two starting materials....
Extraction of tacit knowledge from large ADME data sets via pairwise analysis.
Keefer, Christopher E; Chang, George; Kauffman, Gregory W
2011-06-15
Pharmaceutical companies routinely collect data across multiple projects for common ADME endpoints. Although at the time of collection the data is intended for use in decision making within a specific project, knowledge can be gained by data mining the entire cross-project data set for patterns of structure-activity relationships (SAR) that may be applied to any project. One such data mining method is pairwise analysis. This method has the advantage of being able to identify small structural changes that lead to significant changes in activity. In this paper, we describe the process for full pairwise analysis of our high-throughput ADME assays routinely used for compound discovery efforts at Pfizer (microsomal clearance, passive membrane permeability, P-gp efflux, and lipophilicity). We also describe multiple strategies for the application of these transforms in a prospective manner during compound design. Finally, a detailed analysis of the activity patterns in pairs of compounds that share the same molecular transformation reveals multiple types of transforms from an SAR perspective. These include bioisosteres, additives, multiplicatives, and a type we call switches as they act to either turn on or turn off an activity. Copyright © 2011 Elsevier Ltd. All rights reserved.
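The pairwise (matched molecular pair) analysis described above can be sketched in a few lines: compounds sharing a scaffold and differing in a single R-group are paired, and the activity deltas of each transformation are pooled. All compound names, scaffolds, and activity values below are invented for illustration; this is not the authors' pipeline.

```python
from collections import defaultdict
from itertools import combinations

# Toy dataset: (compound_id, scaffold, R-group, measured activity). Hypothetical values.
compounds = [
    ("C1", "scaffold-A", "H",   2.1),
    ("C2", "scaffold-A", "F",   2.3),
    ("C3", "scaffold-A", "OMe", 1.4),
    ("C4", "scaffold-B", "H",   3.0),
    ("C5", "scaffold-B", "F",   3.2),
]

# Group compounds by scaffold; each pair differing only in the R-group
# defines one molecular transformation with an associated activity delta.
by_scaffold = defaultdict(list)
for cid, scaf, r, act in compounds:
    by_scaffold[scaf].append((cid, r, act))

transforms = defaultdict(list)
for members in by_scaffold.values():
    for (c1, r1, a1), (c2, r2, a2) in combinations(members, 2):
        key = tuple(sorted([r1, r2]))
        # orient the delta as key[0] -> key[1]
        delta = a2 - a1 if (r1, r2) == key else a1 - a2
        transforms[key].append(delta)

for key, deltas in sorted(transforms.items()):
    mean = sum(deltas) / len(deltas)
    print(f"{key[0]} -> {key[1]}: n={len(deltas)}, mean delta = {mean:+.2f}")
```

Transforms observed repeatedly across scaffolds with a consistent sign (like F -> H here) are the "tacit knowledge" candidates; real analyses also track variance and sample size per transform.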
Directory of Open Access Journals (Sweden)
Wingender Edgar
2008-05-01
Abstract Background Currently, there is a gap between purely theoretical studies of the topology of large bioregulatory networks and the practical traditions and interests of experimentalists. While the theoretical approaches emphasize the global characterization of regulatory systems, the practical approaches focus on the role of distinct molecules and genes in regulation. To bridge the gap between these opposite approaches, one needs to combine 'general' with 'particular' properties and translate abstract topological features of large systems into testable functional characteristics of individual components. Here, we propose a new topological parameter, the pairwise disconnectivity index of a network's element, that is capable of such bridging. Results The pairwise disconnectivity index quantifies how crucial an individual element is for sustaining the communication ability between connected pairs of vertices in a network that is displayed as a directed graph. Such an element might be a vertex (i.e., molecules, genes), an edge (i.e., reactions, interactions), as well as a group of vertices and/or edges. The index can be viewed as a measure of topological redundancy of regulatory paths which connect different parts of a given network and as a measure of sensitivity (robustness) of this network to the presence (absence) of each individual element. Accordingly, we introduce the notion of a path-degree of a vertex in terms of its corresponding incoming, outgoing and mediated paths, respectively. The pairwise disconnectivity index has been applied to the analysis of several regulatory networks from various organisms. The importance of an individual vertex or edge for the coherence of the network is determined by the particular position of the given element in the whole network. Conclusion Our approach makes it possible to evaluate the effect of removing each element (i.e., vertex, edge, or their combinations) from a network. The greatest potential value of
Analysis of Geographic and Pairwise Distances among Chinese Cashmere Goat Populations
Directory of Open Access Journals (Sweden)
Jian-Bin Liu
2013-03-01
This study investigated the geographic and pairwise distances of nine Chinese local Cashmere goat populations through the analysis of 20 microsatellite DNA markers. Fluorescence PCR was used to identify the markers, which were selected based on their significance as identified by the Food and Agriculture Organization of the United Nations (FAO) and the International Society for Animal Genetics (ISAG). In total, 206 alleles were detected; the average allele number was 10.30; the polymorphism information content of loci ranged from 0.5213 to 0.7582; the number of effective alleles ranged from 4.0484 to 4.6178; the observed heterozygosity ranged from 0.5023 to 0.5602 for the practical sample; the expected heterozygosity ranged from 0.5783 to 0.6464; and allelic richness ranged from 4.7551 to 8.0693. These results indicated that Chinese Cashmere goat populations exhibit rich genetic diversity. Further, Wright's F-statistic of subpopulation within total (FST) was 0.1184; the genetic differentiation coefficient (GST) was 0.0940; and the average gene flow (Nm) was 2.0415. All pairwise FST values among the populations were highly significant (p<0.01 or p<0.001), suggesting that the populations studied should all be considered separate breeds. Finally, the clustering analysis divided the Chinese Cashmere goat populations into at least four clusters, with the Hexi and Yashan goat populations alone in one cluster. These results provide useful, practical, and important information for the future of Chinese Cashmere goat breeding.
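The pairwise FST statistic at the heart of such analyses can be sketched in its simple Nei (GST-style) form for biallelic loci; the study itself uses multi-allelic microsatellites and more elaborate estimators, and the allele frequencies below are invented.

```python
def pairwise_fst(freqs_a, freqs_b):
    """Nei-style FST between two populations from per-locus allele
    frequencies (biallelic loci; p = frequency of one allele)."""
    num = den = 0.0
    for pa, pb in zip(freqs_a, freqs_b):
        p_bar = (pa + pb) / 2
        ht = 2 * p_bar * (1 - p_bar)                      # pooled expected heterozygosity
        hs = (2 * pa * (1 - pa) + 2 * pb * (1 - pb)) / 2  # mean within-population
        num += ht - hs
        den += ht
    return num / den

# Identical frequencies -> FST = 0; diverged frequencies -> FST > 0.
print(round(pairwise_fst([0.1, 0.5, 0.9], [0.6, 0.5, 0.4]), 3))
```

Computing this for every pair of populations yields the pairwise FST matrix whose entries are then tested for significance (by permutation in practice).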
Multilevel index decomposition analysis: Approaches and application
International Nuclear Information System (INIS)
Xu, X.Y.; Ang, B.W.
2014-01-01
With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate
Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia
2018-02-28
To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on a side, or node, splitting test. Meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. The significance level was set at α=5%, with power of 90% (β=10%) and an anticipated treatment effect to detect equal to the final estimate from the network meta-analysis. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided
Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias
2018-01-01
Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on a side, or node, splitting test. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing
Gene ontology analysis of pairwise genetic associations in two genome-wide studies of sporadic ALS
Directory of Open Access Journals (Sweden)
Kim Nora
2012-07-01
Abstract Background It is increasingly clear that common human diseases have a complex genetic architecture characterized by both additive and nonadditive genetic effects. The goal of the present study was to determine whether patterns of both additive and nonadditive genetic associations aggregate in specific functional groups as defined by the Gene Ontology (GO). Results We first estimated all pairwise additive and nonadditive genetic effects using the multifactor dimensionality reduction (MDR) method, which makes few assumptions about the underlying genetic model. Statistical significance was evaluated using permutation testing in two genome-wide association studies of ALS. The detection data consisted of 276 subjects with ALS and 271 healthy controls, while the replication data consisted of 221 subjects with ALS and 211 healthy controls. Both studies included genotypes from approximately 550,000 single-nucleotide polymorphisms (SNPs). Each SNP was mapped to a gene if it was within 500 kb of the start or end. Each SNP was assigned a p-value based on its strongest joint effect with the other SNPs. We then used the Exploratory Visual Analysis (EVA) method and software to assign a p-value to each gene based on the overabundance of significant SNPs at the α = 0.05 level in the gene. We also used EVA to assign p-values to each GO group based on the overabundance of significant genes at the α = 0.05 level. A GO category was determined to replicate if that category was significant at the α = 0.05 level in both studies. We found two GO categories that replicated in both studies. The first, ‘Regulation of Cellular Component Organization and Biogenesis’, a GO Biological Process, had p-values of 0.010 and 0.014 in the detection and replication studies, respectively. The second, ‘Actin Cytoskeleton’, a GO Cellular Component, had p-values of 0.040 and 0.046 in the detection and replication studies, respectively. Conclusions Pathway
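The core MDR step, labelling each multi-locus genotype cell high-risk or low-risk by its case:control ratio and scoring how well that labelling separates cases from controls, can be sketched as follows. Real MDR adds cross-validation and permutation testing; the tiny genotype dataset here is invented.

```python
from collections import Counter
from itertools import combinations

# Toy case/control genotypes at 3 SNPs (0/1/2 = minor-allele count). Hypothetical.
cases    = [(0, 1, 2), (0, 1, 0), (2, 1, 1), (0, 1, 2), (1, 1, 0)]
controls = [(0, 0, 2), (1, 0, 0), (2, 0, 1), (1, 2, 2), (2, 2, 0)]

def mdr_accuracy(pair):
    """Label each two-locus genotype cell high-risk when its case:control
    ratio exceeds the overall ratio, then score the resulting classifier."""
    i, j = pair
    case_n = Counter((g[i], g[j]) for g in cases)
    ctrl_n = Counter((g[i], g[j]) for g in controls)
    threshold = len(cases) / len(controls)
    cells = set(case_n) | set(ctrl_n)
    high = {c for c in cells if case_n[c] > threshold * ctrl_n[c]}
    tp = sum(case_n[c] for c in high)            # cases in high-risk cells
    tn = sum(ctrl_n[c] for c in cells - high)    # controls in low-risk cells
    return (tp + tn) / (len(cases) + len(controls))

best = max(combinations(range(3), 2), key=mdr_accuracy)
print(best, mdr_accuracy(best))
```

In the study, the accuracy of each SNP pair (assessed against permuted data) is what feeds the per-SNP and then per-gene p-values.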
Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.
Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi
2015-09-01
The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and for parentage testing. The conventional system provides insufficient power for kinship analyses such as sibship testing because of the small number of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, which comprise more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from the joint probabilities for relatives and non-relatives. The combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved separation between relatives and non-relatives. However, these systems extended less effectively to inference for first cousins. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. Moreover, the additional loci may give rise to two major issues: more frequent mutational events and several pairs of linked loci on the same chromosome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
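The product-rule combination of per-locus likelihood ratios mentioned above is straightforward to sketch. The per-locus values below are illustrative stand-ins; real LRs are computed from the joint genotype probabilities for the claimed relationship versus unrelated individuals at each STR locus.

```python
import math

# Illustrative per-locus likelihood ratios (claimed relationship vs. unrelated).
locus_lrs = [3.2, 0.8, 5.1, 1.9, 0.6, 4.4, 2.7, 1.1]

combined = math.prod(locus_lrs)       # product rule across independent loci
log10_lr = math.log10(combined)

print(f"combined LR = {combined:.1f} (log10 LR = {log10_lr:.2f})")
```

Each extra informative locus multiplies the combined LR, which is why panels with more than 20 loci separate relatives from non-relatives better, and why linked loci are a concern: the independence assumption behind the product rule no longer holds for them.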
Pharmacological treatments in asthma-affected horses: A pair-wise and network meta-analysis.
Calzetta, L; Roncada, P; di Cave, D; Bonizzi, L; Urbani, A; Pistocchini, E; Rogliani, P; Matera, M G
2017-11-01
Equine asthma is a disease characterised by reversible airflow obstruction, bronchial hyper-responsiveness and airway inflammation following exposure of susceptible horses to specific airborne agents. Although clinical remission can be achieved in a low-airborne-dust environment, repeated exacerbations may lead to irreversible airway remodelling. The available data on the pharmacotherapy of equine asthma result from several small studies, and no head-to-head clinical trials have been conducted among the available medications. To assess the impact of the pharmacological interventions in equine asthma and compare the effect of different classes of drugs on lung function. Pair-wise and network meta-analysis. Literature searches for clinical trials on the pharmacotherapy of equine asthma were performed. The risk of publication bias was assessed by funnel plots and Egger's test. Changes in maximum transpulmonary or pleural pressure, pulmonary resistance and dynamic lung compliance vs. control were analysed via random-effects models and Bayesian networks. The results obtained from 319 equine asthma-affected horses were extracted from 32 studies. Bronchodilators, corticosteroids and chromones improved maximum transpulmonary or pleural pressure (range: -8.0 to -21.4 cmH2O). Long-term treatments were more effective than short-term treatments. Weak publication bias was detected. This study demonstrates that long-term treatments with inhaled corticosteroids and long-acting β2-AR agonists may represent the first choice for treating equine asthma. Further high-quality clinical trials are needed to clarify whether inhaled bronchodilators should be preferred to inhaled corticosteroids or vice versa, and to investigate the potential superiority of combination therapy in equine asthma. © 2017 EVJ Ltd.
Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura
2016-05-01
Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.
Comparing structural decomposition analysis and index
International Nuclear Information System (INIS)
Hoekstra, Rutger; Van den Bergh, Jeroen C.J.M.
2003-01-01
To analyze and understand historical changes in economic, environmental, employment or other socio-economic indicators, it is useful to assess the driving forces or determinants that underlie these changes. Two techniques for decomposing indicator changes at the sector level are structural decomposition analysis (SDA) and index decomposition analysis (IDA). For example, SDA and IDA have been used to analyze changes in indicators such as energy use, CO2 emissions, labor demand and value added. The changes in these variables are decomposed into determinants such as technological, demand, and structural effects. SDA uses information from input-output tables while IDA uses aggregate data at the sector-level. The two methods have developed quite independently, which has resulted in each method being characterized by specific, unique techniques and approaches. This paper has three aims. First, the similarities and differences between the two approaches are summarized. Second, the possibility of transferring specific techniques and indices is explored. Finally, a numerical example is used to illustrate differences between the two approaches
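A minimal IDA example, assuming the widely used LMDI-I formula (logarithmic-mean weights), decomposes a change in total energy use into activity, structure, and intensity effects. The two-sector numbers are invented; a key property of LMDI-I is that the decomposition is exact, with no residual.

```python
from math import log

def lmean(a, b):
    """Logarithmic mean, the weighting function of LMDI-I."""
    return a if a == b else (a - b) / (log(a) - log(b))

# Two sectors; Q = total activity, S = sector shares, I = energy intensities.
# All numbers are invented for illustration.
Q0, Q1 = 100.0, 130.0
S0, S1 = [0.6, 0.4], [0.5, 0.5]
I0, I1 = [2.0, 1.0], [1.8, 0.9]

E0 = [Q0 * s * i for s, i in zip(S0, I0)]   # sector energy use, period 0
E1 = [Q1 * s * i for s, i in zip(S1, I1)]   # sector energy use, period T

w = [lmean(e1, e0) for e1, e0 in zip(E1, E0)]
act    = sum(wi * log(Q1 / Q0) for wi in w)                       # activity effect
struct = sum(wi * log(s1 / s0) for wi, s1, s0 in zip(w, S1, S0))  # structure effect
inten  = sum(wi * log(i1 / i0) for wi, i1, i0 in zip(w, I1, I0))  # intensity effect

total = sum(E1) - sum(E0)
# LMDI-I is exact: the three effects sum to the total change.
print(f"dE = {total:.1f}; activity {act:.1f}, structure {struct:.1f}, intensity {inten:.1f}")
```

SDA would start instead from an input-output table and decompose the Leontief structure, but the additive bookkeeping, effects summing to the total change, is the same idea.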
Global sensitivity analysis by polynomial dimensional decomposition
Energy Technology Data Exchange (ETDEWEB)
Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2011-07-15
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
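The key property used here, that variance and sensitivity indices fall out of the squared expansion coefficients in an orthonormal polynomial basis, can be checked on a small two-variable polynomial with known Sobol indices. This is a sketch of the principle, not the paper's PDD code; the test function and basis order are chosen for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Test function on [0,1]^2; its exact Sobol indices are known analytically.
f = lambda x1, x2: x1 + 2.0 * x2 + x1 * x2

def psi(n, x):
    """Orthonormal shifted Legendre polynomial of degree n on [0,1]."""
    t = 2.0 * x - 1.0
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * np.polynomial.legendre.legval(t, c)

# Gauss-Legendre quadrature mapped from [-1,1] to [0,1]
pts, wts = leggauss(8)
x = 0.5 * (pts + 1.0); w = 0.5 * wts

order = 3
coef = np.zeros((order + 1, order + 1))
for j in range(order + 1):
    for k in range(order + 1):
        # c_jk = double integral of f(x1,x2) * psi_j(x1) * psi_k(x2)
        coef[j, k] = sum(
            w[a] * w[b] * f(x[a], x[b]) * psi(j, x[a]) * psi(k, x[b])
            for a in range(len(x)) for b in range(len(x)))

var = (coef ** 2).sum() - coef[0, 0] ** 2      # total variance from coefficients
S1 = (coef[1:, 0] ** 2).sum() / var            # main-effect index of x1
S2 = (coef[0, 1:] ** 2).sum() / var            # main-effect index of x2
S12 = (coef[1:, 1:] ** 2).sum() / var          # interaction index
print(round(S1, 4), round(S2, 4), round(S12, 4))
```

Because the expansion mirrors the ANOVA decomposition, no extra sampling is needed once the coefficients are known, which is the "simple and direct calculation" the abstract refers to.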
Energy Technology Data Exchange (ETDEWEB)
Leibfarth, Sara; Moennich, David; Thorwarth, Daniela [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); Simoncic, Urban [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (Slovenia); Jozef Stefan Institute, Ljubljana (Slovenia); Welz, Stefan; Zips, Daniel [University Hospital Tuebingen, Department of Radiation Oncology, Tuebingen (Germany); Schmidt, Holger; Schwenzer, Nina [University Hospital Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)
2016-07-15
The aim of this pilot study was to explore simultaneous functional PET/MR for biological characterization of tumors and potential future treatment adaptations. To investigate the extent of complementarity between different PET/MR-based functional datasets, a pairwise correlation analysis was performed. Functional datasets of N=15 head and neck (HN) cancer patients were evaluated. For patients of group A (N=7), combined PET/MR datasets including FDG-PET and ADC maps were available. Patients of group B (N=8) had FMISO-PET, DCE-MRI and ADC maps from combined PET/MRI, an additional dynamic FMISO-PET/CT acquired directly after FMISO tracer injection, as well as an FDG-PET/CT acquired a few days earlier. From DCE-MR, parameter maps K^trans, v_e and v_p were obtained with the extended Tofts model. Moreover, parameter maps of mean DCE enhancement, ΔS_DCE, and mean FMISO signal 0-4 min p.i., Ā_FMISO, were derived. Pairwise correlations were quantified using the Spearman correlation coefficient (r) on both a voxel and a regional level within the gross tumor volume. Between some pairs of functional imaging modalities, moderate correlations were observed with respect to the median over all patient datasets, whereas distinct correlations were only present on an individual basis. The highest inter-modality median correlations on the voxel level were obtained for FDG/FMISO (r = 0.56), FDG/Ā_FMISO (r = 0.55), Ā_FMISO/ΔS_DCE (r = 0.46), and FDG/ADC (r = -0.39). Correlations on the regional level showed comparable results. The results of this study suggest that the examined functional datasets provide complementary information. However, only pairwise correlations were examined, and correlations could still exist between combinations of three or more datasets. These results might contribute to the future design of individually adapted treatment approaches based on multiparametric functional imaging.
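Voxel-level pairwise correlation of two parameter maps within a mask, as performed in the study, reduces to a rank correlation over the masked voxels. Here is a self-contained sketch with synthetic maps; the "tumour" pattern, noise level, and mask are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks
    (tie handling omitted; fine for continuous data)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Synthetic voxel-wise parameter maps: a shared underlying pattern plus
# independent noise, restricted to a "gross tumour volume" mask.
pattern = rng.normal(size=(20, 20, 10))
map_a = pattern + 0.5 * rng.normal(size=pattern.shape)        # e.g. an FDG-like map
map_b = 0.7 * pattern + 0.5 * rng.normal(size=pattern.shape)  # e.g. an FMISO-like map
mask = rng.random(pattern.shape) < 0.3

r = spearman(map_a[mask], map_b[mask])
print(f"Spearman r over {mask.sum()} voxels: {r:.2f}")
```

The regional-level analysis in the paper is the same computation after averaging voxels within subregions; low pairwise r between modalities is what supports the "complementary information" conclusion.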
Singular Value Decomposition and Ligand Binding Analysis
Directory of Open Access Journals (Sweden)
André Luiz Galo
2013-01-01
Full Text Available Singular value decomposition (SVD) is one of the most important computations in linear algebra because of its vast applications in data analysis. It is particularly useful for resolving problems involving least-squares minimization, the determination of matrix rank, and the solution of certain problems involving Euclidean norms. Such problems arise in the spectral analysis of ligand binding to macromolecules. Here, we present a spectral data analysis method using SVD (SVD analysis) and nonlinear fitting to determine the binding characteristics of intercalating drugs to DNA. This methodology reduces noise and identifies distinct spectral species, similar to traditional principal component analysis, as well as fitting nonlinear binding parameters. We applied SVD analysis to investigate the interaction of actinomycin D and daunomycin with native DNA. This methodology does not require prior knowledge of ligand molar extinction coefficients (free and bound), which otherwise potentially limits binding analysis. The data are analyzed simply by reconstructing the experimental data and adjusting the product of the deconvoluted matrices and the matrix of model coefficients determined by the Scatchard and McGhee-von Hippel equations.
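The rank-determination step of such an SVD analysis can be sketched in a few lines: build a wavelength-by-titration matrix from two mixing spectral species and inspect the singular values. The Gaussian band shapes below are illustrative assumptions, not measured spectra.

```python
import numpy as np

# Rows = wavelengths, columns = titration points. Two "spectral species"
# (free and bound ligand, hypothetical band shapes) mix in varying
# proportions across the titration; SVD recovers their number.
wl = np.linspace(400, 600, 201)
free = np.exp(-((wl - 480) / 30) ** 2)       # assumed free-ligand spectrum
bound = np.exp(-((wl - 520) / 25) ** 2)      # assumed bound-ligand spectrum
frac_bound = np.linspace(0.0, 0.9, 12)       # binding progress per titration point
A = np.outer(free, 1 - frac_bound) + np.outer(bound, frac_bound)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print("leading singular values:", np.round(s[:4], 3))
# Only two singular values are significant -> two distinct spectral species,
# matching the free/bound model used in the nonlinear fitting step.
```

With noisy data the trailing singular values would be small but nonzero, and truncating them is what provides the noise reduction mentioned above.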
Analysis of Pairwise Interactions in a Maximum Likelihood Sense to Identify Leaders in a Group
Directory of Open Access Journals (Sweden)
Violet Mwaffo
2017-07-01
Full Text Available Collective motion in animal groups manifests itself in the form of highly coordinated maneuvers determined by local interactions among individuals. A particularly critical question in understanding the mechanisms behind such interactions is to detect and classify leader–follower relationships within the group. In the technical literature of coupled dynamical systems, several methods have been proposed to reconstruct interaction networks, including linear correlation analysis, transfer entropy, and event synchronization. While these analyses have been helpful in reconstructing network models from neuroscience to public health, rules on the most appropriate method to use for a specific dataset are lacking. Here, we demonstrate the possibility of detecting leaders in a group from raw positional data in a model-free approach that combines multiple methods in a maximum likelihood sense. We test our framework on synthetic data of groups of self-propelled Vicsek particles, where a single agent acts as a leader and both the size of the interaction region and the level of inherent noise are systematically varied. To assess the feasibility of detecting leaders in real-world applications, we study a synthetic dataset of fish shoaling, generated by using a recent data-driven model for social behavior, and an experimental dataset of pharmacologically treated zebrafish. Not only does our approach offer a robust strategy to detect leaders in synthetic data but it also allows for exploring the role of psychoactive compounds on leader–follower relationships.
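One ingredient such a framework could combine is time-lagged correlation of the candidates' heading series: if one individual's turns precede another's, the lag that maximizes correlation points to the leader. The sketch below uses a toy two-agent dataset with a known lag; the data generation and lag range are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic headings: the leader turns randomly; the follower copies the
# leader's heading a few time steps later, plus noise (a toy Vicsek-like pair).
T, lag_true = 2000, 5
leader = np.cumsum(rng.normal(0, 0.1, T))
follower = np.roll(leader, lag_true) + rng.normal(0, 0.05, T)
follower[:lag_true] = leader[:lag_true]

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t+lag); positive lag means x precedes y."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

best = max(range(0, 20), key=lambda L: lagged_corr(leader, follower, L))
print("best lag:", best)   # a positive lag: the 'leader' series precedes
```

A maximum-likelihood combination, as described above, would weigh several such pairwise statistics (correlation, transfer entropy, event synchronization) rather than relying on one.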
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
Datta, Sumona; Shah, Lena; Gilman, Robert H; Evans, Carlton A
2017-08-01
The performance of laboratory tests to diagnose pulmonary tuberculosis is dependent on the quality of the sputum sample tested. The relative merits of sputum collection methods to improve tuberculosis diagnosis are poorly characterised. We therefore aimed to investigate the effects of sputum collection methods on tuberculosis diagnosis. We did a systematic review and meta-analysis to investigate whether non-invasive sputum collection methods in people aged at least 12 years improve the diagnostic performance of laboratory testing for pulmonary tuberculosis. We searched PubMed, Google Scholar, ProQuest, Web of Science, CINAHL, and Embase up to April 14, 2017, to identify relevant experimental, case-control, or cohort studies. We analysed data by pairwise meta-analyses with a random-effects model and by network meta-analysis. All diagnostic performance data were calculated at the sputum-sample level, except where authors only reported data at the individual patient-level. Heterogeneity was assessed, with potential causes identified by logistic meta-regression. We identified 23 eligible studies published between 1959 and 2017, involving 8967 participants who provided 19 252 sputum samples. Brief, on-demand spot sputum collection was the main reference standard. Pooled sputum collection increased tuberculosis diagnosis by microscopy (odds ratio [OR] 1·6, 95% CI 1·3-1·9). Network meta-analysis confirmed these findings, and revealed that both pooled and instructed spot sputum collections were similarly effective techniques for increasing the diagnostic performance of microscopy. Tuberculosis diagnoses were substantially increased by either pooled collection or by providing instruction on how to produce a sputum sample taken at any time of the day. Both interventions had a similar effect to that reported for the introduction of new, expensive laboratory tests, and therefore warrant further exploration in the drive to end the global tuberculosis epidemic. Funding: Wellcome Trust.
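The pairwise random-effects pooling used in such a meta-analysis can be sketched with the DerSimonian-Laird method, a standard choice for random-effects models. The per-study odds ratios and confidence intervals below are invented for illustration, not the review's actual data.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of log odds ratios.
or_ = np.array([1.4, 1.8, 1.5])            # illustrative per-study ORs
lo = np.array([1.1, 1.2, 1.0])             # lower 95% CI bounds
hi = np.array([1.8, 2.7, 2.2])             # upper 95% CI bounds
y = np.log(or_)                            # per-study effect (log OR)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
w = 1 / se**2                              # fixed-effect (inverse-variance) weights

# Between-study heterogeneity (tau^2) by the method of moments:
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                  # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled OR = {np.exp(y_re):.2f}, "
      f"95% CI {np.exp(y_re - 1.96*se_re):.2f}-{np.exp(y_re + 1.96*se_re):.2f}")
```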
Decomposition analysis of differential dose volume histograms
International Nuclear Information System (INIS)
Heuvel, Frank van den
2006-01-01
Dose volume histograms are a common tool to assess the value of a treatment plan for various forms of radiation therapy treatment. The purpose of this work is to introduce, validate, and apply a set of tools to analyze differential dose volume histograms by decomposing them into physically and clinically meaningful normal distributions. A weighted sum of the decomposed normal distributions (e.g., weighted dose) is proposed as a new measure of target dose, rather than the more unstable point dose. The method and its theory are presented and validated using simulated distributions. Additional validation is performed by analyzing simple four field box techniques encompassing a predefined target, using different treatment energies inside a water phantom. Furthermore, two clinical situations are analyzed using this methodology to illustrate practical usefulness. A treatment plan for a breast patient using a tangential field setup with wedges is compared to a comparable geometry using dose compensators. Finally, a normal tissue complication probability (NTCP) calculation is refined using this decomposition. The NTCP calculation is performed on a liver as organ at risk in a treatment of a mesothelioma patient with involvement of the right lung. The comparison of the wedged breast treatment versus the compensator technique yields comparable classical dose parameters (e.g., conformity index ≅1 and equal dose at the ICRU dose point). The methodology proposed here shows a 4% difference in weighted dose outlining the difference in treatment using a single parameter instead of at least two in a classical analysis (e.g., mean dose, and maximal dose, or total dose variance). NTCP-calculations for the mesothelioma case are generated automatically and show a 3% decrease with respect to the classical calculation. The decrease is slightly dependent on the fractionation and on the α/β-value utilized. In conclusion, this method is able to distinguish clinically
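The core decomposition step can be sketched as fitting a differential DVH with a sum of normal components and reporting the weighted dose. The two-component DVH below is a toy curve, not a clinical plan, and the fitting routine is one plausible implementation of the idea.

```python
import numpy as np
from scipy.optimize import curve_fit

# Decompose a differential DVH into two normal components and report the
# weighted dose sum(w_i * mu_i) / sum(w_i).
dose = np.linspace(40, 70, 300)

def two_gauss(d, w1, m1, s1, w2, m2, s2):
    g = lambda m, s: np.exp(-0.5 * ((d - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return w1 * g(m1, s1) + w2 * g(m2, s2)

true = (0.7, 60.0, 1.5, 0.3, 55.0, 3.0)    # hypothetical plan: main peak + tail
dvh = two_gauss(dose, *true)

p0 = (0.5, 58, 2, 0.5, 52, 2)              # rough initial guess
popt, _ = curve_fit(two_gauss, dose, dvh, p0=p0, maxfev=20000)
w1, m1, s1, w2, m2, s2 = popt
weighted_dose = (w1 * m1 + w2 * m2) / (w1 + w2)
print(f"weighted dose = {weighted_dose:.1f} Gy")
```

The weighted dose is invariant under swapping the fitted components, which makes it a more stable summary than any single peak position.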
Gas hydrates forming and decomposition conditions analysis
Directory of Open Access Journals (Sweden)
А. М. Павленко
2017-07-01
Full Text Available The concept of gas hydrates has been defined; their brief description has been given; factors that affect the formation and decomposition of the hydrates have been reported; their distribution, structure and the thermodynamic conditions governing gas hydrate formation in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs from pipelines have been analyzed, and the necessity of their further study has been argued. In addition to their negative impact on gas extraction, the properties of hydrates suggest the following possible fields of industrial use: obtaining ultrahigh pressures in confined spaces through hydrate decomposition; separating hydrocarbon mixtures by successively transferring individual components into the hydrate state under a given regime; obtaining cold through the heat absorbed during hydrate decomposition; killing an open gas blowout by forming hydrate plugs in the borehole of the gushing well; seawater desalination, based on the ability of hydrates to bind only water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high-temperature fog and clouds by means of hydrates; injection of water-hydrate emulsions into productive strata to raise the oil recovery factor; obtaining cold in gas processing to cool the gas, etc.
Ragain, Stephen; Ugander, Johan
2016-01-01
As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...
Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.
Harris, Arlo D.; Kalbus, Lee H.
1979-01-01
Describes an improved experiment on the thermal dehydration of copper(II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. The data suffice to demonstrate sequential gravimetric analysis. (Author/SA)
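The theoretical mass-loss figures that such a sequential gravimetric analysis should reproduce follow directly from molar masses: CuSO4·5H2O loses its water of hydration first, then decomposes to the thermally stable oxide CuO.

```python
# Theoretical residue percentages for CuSO4·5H2O -> CuSO4 -> CuO,
# computed from standard atomic masses.
M = {"Cu": 63.55, "S": 32.07, "O": 16.00, "H": 1.008}
m_H2O = 2 * M["H"] + M["O"]                 # 18.02 g/mol
m_CuSO4 = M["Cu"] + M["S"] + 4 * M["O"]     # anhydrous salt
m_penta = m_CuSO4 + 5 * m_H2O               # pentahydrate
m_CuO = M["Cu"] + M["O"]                    # thermally stable oxide

pct_anhydrous = 100 * m_CuSO4 / m_penta
pct_oxide = 100 * m_CuO / m_penta
print(f"residue after dehydration: {pct_anhydrous:.1f}% of starting mass")
print(f"residue after decomposition to oxide: {pct_oxide:.1f}%")
```

A student's weighed residues at each plateau can be checked against these two percentages (about 63.9% and 31.9%).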
Qualitative Functional Decomposition Analysis of Evolved Neuromorphic Flight Controllers
Directory of Open Access Journals (Sweden)
Sanjay K. Boddhu
2012-01-01
Full Text Available In previous work, it was demonstrated that one can effectively employ the CTRNN-EH methodology (a neuromorphic variant of the EH method) to evolve neuromorphic flight controllers for a flapping wing robot. This paper describes a novel frequency grouping-based analysis technique, developed to qualitatively decompose the evolved controllers into explainable functional control blocks. A summary of the previous work related to evolving flight controllers for two categories of controllers, called autonomous and nonautonomous controllers, is provided, and the applicability of the newly developed decomposition analysis to both controller categories is demonstrated. The paper concludes with a discussion of ongoing work and implications for possible future work related to employing the CTRNN-EH methodology and the decomposition analysis techniques presented in this paper.
Task Decomposition in Human Reliability Analysis
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory
2014-06-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Approximate modal analysis using Fourier decomposition
International Nuclear Information System (INIS)
Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana
2010-01-01
The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on a Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After the calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated so that parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that transforms the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.
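One plausible reading of the whole-matrix variant can be sketched as follows: transform a symmetric matrix with a unitary DFT (a similarity transform, so the spectrum is unchanged), truncate to the low-frequency block, solve the reduced eigenproblem, and expand the eigenvectors back. The test matrix and the choice of retained frequencies are assumptions for illustration.

```python
import numpy as np

n, k = 64, 16
# A smooth symmetric test matrix (stand-in for a discretized plate operator).
x = np.linspace(0, 1, n)
A = np.exp(-10 * (x[:, None] - x[None, :]) ** 2)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
B = F @ A @ F.conj().T                        # similarity transform: same spectrum

# Keep the k lowest frequencies (first k/2 and last k/2 DFT bins).
idx = np.r_[0:k//2, n - k//2:n]
Bk = B[np.ix_(idx, idx)]
vals_k, vecs_k = np.linalg.eigh((Bk + Bk.conj().T) / 2)

# Expand the leading reduced eigenvector back into the original domain.
v = (F.conj().T[:, idx] @ vecs_k[:, -1]).real
print("largest eigenvalue, reduced vs full:",
      round(vals_k[-1], 3), round(np.linalg.eigvalsh(A)[-1], 3))
```

By Cauchy interlacing the reduced eigenvalue can never exceed the full one, and for smooth operators dominated by low frequencies the gap is small.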
Tensor decompositions for the analysis of atomic resolution electron energy loss spectra
Energy Technology Data Exchange (ETDEWEB)
Spiegelberg, Jakob; Rusz, Ján [Department of Physics and Astronomy, Uppsala University, Box 516, S-751 20 Uppsala (Sweden); Pelckmans, Kristiaan [Department of Information Technology, Uppsala University, Box 337, S-751 05 Uppsala (Sweden)
2017-04-15
A selection of tensor decomposition techniques is presented for the detection of weak signals in electron energy loss spectroscopy (EELS) data. The focus of the analysis lies on the correct representation of the simulated spatial structure. An analysis scheme for EEL spectra combining two-dimensional and n-way decomposition methods is proposed. In particular, the performance of robust principal component analysis (ROBPCA), Tucker Decompositions using orthogonality constraints (Multilinear Singular Value Decomposition (MLSVD)) and Tucker decomposition without imposed constraints, canonical polyadic decomposition (CPD) and block term decompositions (BTD) on synthetic as well as experimental data is examined. - Highlights: • A scheme for compression and analysis of EELS or EDX data is proposed. • Several tensor decomposition techniques are presented for BSS on hyperspectral data. • Robust PCA and MLSVD are discussed for denoising of raw data.
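The MLSVD (higher-order SVD) named above can be sketched in plain numpy: compute orthonormal factor matrices from the SVD of each mode unfolding and contract them against the tensor to obtain the core. The random datacube stands in for an x-y-energy EELS dataset.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(U, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def hosvd(T):
    """Multilinear SVD: orthonormal factors from each unfolding + core tensor."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
          for m in range(T.ndim)]
    core = T
    for m, U in enumerate(Us):
        core = mode_dot(core, U.T, m)
    return core, Us

rng = np.random.default_rng(0)
T = rng.random((6, 5, 4))            # stand-in for an x-y-energy EELS datacube
core, Us = hosvd(T)

# With full ranks the MLSVD is exact: re-applying the factors reconstructs T.
R = core
for m, U in enumerate(Us):
    R = mode_dot(R, U, m)
print("max reconstruction error:", np.abs(R - T).max())
```

Denoising, as discussed in the highlights, corresponds to truncating the columns of each factor matrix before reconstructing.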
Image decomposition as a tool for validating stress analysis models
Directory of Open Access Journals (Sweden)
Mottershead J.
2010-06-01
Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions of strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
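The Fourier-transform variant of this idea can be sketched by keeping only the low-frequency 2D Fourier coefficients of a strain map as its descriptor vector. The "experimental" and "model" fields below are a synthetic hot-spot with and without noise, purely for illustration.

```python
import numpy as np

def fourier_descriptors(field, k=5):
    """Low-frequency 2D Fourier magnitudes as a compact image descriptor."""
    F = np.fft.fftshift(np.fft.fft2(field))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    block = F[cy - k:cy + k + 1, cx - k:cx + k + 1]
    return np.abs(block).ravel()

# Hypothetical "model" and "experimental" strain maps: the same underlying
# field, with measurement noise added to the experiment.
y, x = np.mgrid[0:64, 0:64] / 64.0
model = np.exp(-30 * ((x - 0.5) ** 2 + (y - 0.4) ** 2))   # strain hot-spot
rng = np.random.default_rng(2)
experiment = model + rng.normal(0, 0.01, model.shape)

d_model = fourier_descriptors(model)       # 121 descriptors vs 4096 pixels
d_exp = fourier_descriptors(experiment)
agreement = np.corrcoef(d_model, d_exp)[0, 1]
print(f"descriptor correlation: {agreement:.3f}")
```

The statistical comparison described above then operates on these ~10² descriptors rather than the full pixel field.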
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
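The dissimilarity-based alternative described above can be sketched as classical multidimensional scaling: eigendecompose the double-centred squared-dissimilarity matrix instead of the covariance matrix. The two synthetic "spectral" groups below stand in for the cumin and non-cumin preparations.

```python
import numpy as np

rng = np.random.default_rng(3)
group_a = rng.normal(0.0, 0.1, (10, 50))    # toy stand-in for cumin spectra
group_b = rng.normal(0.5, 0.1, (10, 50))    # toy stand-in for non-cumin spectra
X = np.vstack([group_a, group_b])

# Pairwise dissimilarity matrix (Euclidean), then double centring.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

vals, vecs = np.linalg.eigh(B)              # ascending eigenvalues
pc1 = vecs[:, -1] * np.sqrt(vals[-1])       # leading coordinate

# The two groups separate along the first coordinate.
print("group means on PC1:", round(pc1[:10].mean(), 2), round(pc1[10:].mean(), 2))
```

Because the eigendecomposition acts on between-sample dissimilarities rather than overall variance, between-group differences dominate the leading components, which is the class-separation advantage the abstract reports.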
DEFF Research Database (Denmark)
Chieng, Norman; Trnka, Hjalte; Boetker, Johan
2013-01-01
The purpose of this study is to investigate the use of multivariate data analysis for powder X-ray diffraction-pair-wise distribution function (PXRD-PDF) data to detect phase separation in freeze-dried binary amorphous systems. Polymer-polymer and polymer-sugar binary systems at various ratios were … freeze-dried. All samples were analyzed by PXRD, transformed to PDF and analyzed by principal component analysis (PCA). These results were validated by differential scanning calorimetry (DSC) through characterization of the glass transition of the maximally freeze-concentrated solute (Tg'). Analysis of PXRD-PDF data using PCA provides a clearer 'miscible' or 'phase separated' interpretation through the distribution pattern of samples on a score plot presentation compared to the residual plot method. In a phase-separated system, samples were found to be evenly distributed around the theoretical PDF profile…
Iterative analysis of cerebrovascular reactivity dynamic response by temporal decomposition.
van Niftrik, Christiaan Hendrik Bas; Piccirelli, Marco; Bozinov, Oliver; Pangalu, Athina; Fisher, Joseph A; Valavanis, Antonios; Luft, Andreas R; Weller, Michael; Regli, Luca; Fierstra, Jorn
2017-09-01
To improve quantitative cerebrovascular reactivity (CVR) measurements and CO2 arrival times, we present an iterative analysis capable of decomposing different temporal components of the dynamic carbon dioxide-Blood Oxygen Level Dependent (CO2-BOLD) relationship. Decomposition of the dynamic parameters included a redefinition of the voxel-wise CO2 arrival time, and a separation of the vascular response to a stepwise increase in CO2 (Delay To signal Plateau, DTP) from the response to a decrease in CO2 (Delay To signal Baseline, DTB). Twenty-five (normal) datasets, obtained from BOLD MRI combined with a standardized pseudo-square-wave CO2 change, were co-registered to generate reference atlases for the aforementioned dynamic processes, scoring the voxel-by-voxel probability of deviation from the normal range. This analysis is further illustrated in two subjects with unilateral carotid artery occlusion using these reference atlases. We found that our redefined CO2 arrival time resulted in the best data fit. Additionally, excluding both dynamic BOLD phases (DTP and DTB) resulted in a static CVR, that is, the maximal response, defined as CVR calculated only over the normocapnic and hypercapnic calibrated plateaus. Decomposition and novel iterative modeling of the different temporal components of the dynamic CO2-BOLD relationship improves quantitative CVR measurements.
Analysis of large fault trees based on functional decomposition
International Nuclear Information System (INIS)
Contini, Sergio; Matuzas, Vaidas
2011-01-01
With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
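The minimal cut sets that the first decomposition method targets can be illustrated with a small top-down (MOCUS-style) search over a coherent AND/OR fault tree. This is a sketch of the MCS concept the method feeds, not the paper's BDD machinery, and the tree encoding is an assumption for illustration.

```python
from itertools import product

def cut_sets(node):
    """All cut sets of a coherent fault tree given as nested tuples:
    ("event", name) | ("or", name, children) | ("and", name, children)."""
    kind = node[0]
    if kind == "event":
        return [frozenset([node[1]])]
    children = [cut_sets(c) for c in node[2]]
    if kind == "or":                  # union of the children's cut sets
        return [cs for group in children for cs in group]
    # AND gate: every combination of one cut set per child, merged
    return [frozenset().union(*combo) for combo in product(*children)]

def minimal(cuts):
    """Drop any cut set that strictly contains another (keep minimal ones)."""
    mcs = {c for c in cuts if not any(o < c for o in cuts)}
    return sorted(sorted(c) for c in mcs)

top = ("and", "TOP", [
    ("or", "G1", [("event", "A"), ("event", "B")]),
    ("or", "G2", [("event", "A"), ("event", "C")]),
])
print(minimal(cut_sets(top)))   # [['A'], ['B', 'C']]
```

On large trees this enumeration explodes combinatorially, which is exactly why the paper decomposes the tree into simpler disjoint subtrees before analysis.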
Structural decomposition analysis of Australia's greenhouse gas emissions
International Nuclear Information System (INIS)
Wood, Richard
2009-01-01
A complex system of production links our greenhouse gas emissions to our consumer demands. Whilst progress may be made in improving efficiency, other changes in the production structure may easily annul global improvements. Utilising a structural decomposition analysis, a comparative-static technique of input-output analysis, over a time period of around 30 years, net greenhouse emissions are decomposed in this study into the effects, due to changes in industrial efficiency, forward linkages, inter-industry structure, backward linkages, type of final demand, cause of final demand, population affluence, population size, and mix and level of exports. Historically, significant competing forces at both the whole of economy and industrial scale have been mitigating potential improvements. Key sectors and structural influences are identified that have historically shown the greatest potential for change, and would likely have the greatest net impact. Results clearly reinforce that the current dichotomy of growth and exports are the key problems in need of address.
Pairwise harmonics for shape analysis
Zheng, Youyi; Tai, Chiewlan; Zhang, Eugene; Xu, Pengfei
2013-01-01
efficient algorithms than the state-of-the-art methods for three applications: intrinsic reflectional symmetry axis computation, matching shape extremities, and simultaneous surface segmentation and skeletonization. © 2012 IEEE.
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
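The Pareto-frontier filtering at the heart of this approach can be sketched with a simple dominance check: an alignment stays on the frontier if no other alignment scores at least as well on every objective and strictly better on one. The score tuples below are illustrative, not ce_ref data.

```python
def pareto_frontier(points):
    """Keep points not weakly dominated by any distinct point
    (higher is better in every objective)."""
    front = []
    for p in points:
        dominated = any(all(q[i] >= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical alignments scored by two profile-profile objectives.
alignments = [(10, 3), (8, 8), (6, 9), (7, 7), (10, 1)]
print(pareto_frontier(alignments))   # [(10, 3), (8, 8), (6, 9)]
```

The paper's dynamic-programming heuristic builds such frontiers over partial alignments rather than filtering complete ones, exploiting the optimal substructure property mentioned above.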
South Africa's electricity consumption: A sectoral decomposition analysis
International Nuclear Information System (INIS)
Inglesi-Lotz, Roula; Blignaut, James N.
2011-01-01
Highlights: → We conduct a decomposition exercise of South African electricity consumption. → The increase in electricity consumption was due to output and structural changes. → Electricity intensity, increasing only at a low rate, acted as a decreasing factor on consumption. → Increases in production contributed to the rising trend in all sectors. → Only 5 sectors' consumption was reduced by efficiency improvements. -- Abstract: South Africa's electricity consumption has shown a sharp increase since the early 1990s. Here we conduct a sectoral decomposition analysis of electricity consumption for the period 1993-2006 to determine the main drivers responsible for this increase. The results show that the increase was mainly due to output or production related factors, with structural changes playing a secondary role. While there is some evidence of efficiency improvements, indicated here as a slowdown in the rate of increase of electricity intensity, it was not nearly sufficient to offset the combined production and structural effects that propelled electricity consumption forward. This general economy-wide statement, however, can be misleading since the results, in essence, are very sector-specific and the inter-sectoral differences are substantial. Increases in production contributed to the rising trend in all sectors. However, only five of fourteen sectors were affected by efficiency improvements, while structural changes affected the sectors' electricity consumption in different ways. These differences concerning the production, structural and efficiency effects indicate the need for a sectoral approach in the energy policy-making of the country rather than a blanket, unilateral economy-wide approach.
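A decomposition of this kind can be sketched with the additive LMDI index-decomposition method, which splits the change in electricity use exactly into activity, structure and intensity effects (the paper's exact technique may differ; the two-sector numbers below are toy values, not South African data).

```python
import numpy as np

def lmdi(Q0, s0, i0, Q1, s1, i1):
    """Additive LMDI-I: E = Q * share * intensity per sector, summed."""
    E0, E1 = Q0 * s0 * i0, Q1 * s1 * i1          # sectoral electricity use
    L = (E1 - E0) / (np.log(E1) - np.log(E0))    # logarithmic mean weights
    act = np.sum(L * np.log(Q1 / Q0))            # activity (output) effect
    struct = np.sum(L * np.log(s1 / s0))         # structural effect
    intens = np.sum(L * np.log(i1 / i0))         # intensity (efficiency) effect
    return act, struct, intens, E1.sum() - E0.sum()

Q0, Q1 = 100.0, 130.0                                 # total output
s0, s1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])   # sector output shares
i0, i1 = np.array([2.0, 1.0]), np.array([1.8, 1.1])   # electricity intensity
act, struct, intens, total = lmdi(Q0, s0, i0, Q1, s1, i1)
print(f"activity {act:.1f} + structure {struct:.1f} "
      f"+ intensity {intens:.1f} = total {total:.1f}")
```

A useful property of LMDI is that the three effects sum to the total change with no residual, which makes the sector-level attribution discussed above unambiguous.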
Selecting numerical scales for pairwise comparisons
International Nuclear Information System (INIS)
Elliott, Michael A.
2010-01-01
It is often desirable in decision analysis problems to elicit from an individual the rankings of a population of attributes according to the individual's preference and to understand the degree to which each attribute is preferred to the others. A common method for obtaining this information involves the use of pairwise comparisons, which allows an analyst to convert subjective expressions of preference between two attributes into numerical values indicating preferences across the entire population of attributes. Key to the use of pairwise comparisons is the underlying numerical scale that is used to convert subjective linguistic expressions of preference into numerical values. This scale represents the psychological manner in which individuals perceive increments of preference among abstract attributes and it has important implications about the distribution and consistency of an individual's preferences. Three popular scale types, the traditional integer scales, balanced scales and power scales are examined. Results of a study of 64 individuals responding to a hypothetical decision problem show that none of these scales can accurately capture the preferences of all individuals. A study of three individuals working on an actual engineering decision problem involving the design of a decay heat removal system for a nuclear fission reactor show that the choice of scale can affect the preferred decision. It is concluded that applications of pairwise comparisons would benefit from permitting participants to choose the scale that best models their own particular way of thinking about the relative preference of attributes.
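The conversion from a pairwise-comparison matrix to numerical preference values can be sketched with the standard principal-eigenvector procedure; the matrix below uses the traditional integer scale and illustrative judgments.

```python
import numpy as np

# Pairwise-comparison matrix: entry (i, j) is how strongly attribute i is
# preferred to attribute j on the traditional 1-9 integer scale (reciprocal
# entries below the diagonal).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(np.real(vals))
w = np.real(vecs[:, k])
w = w / w.sum()                                  # normalized priority weights
ci = (np.real(vals[k]) - len(A)) / (len(A) - 1)  # consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```

The consistency index measures how far the stated judgments are from perfectly transitive ones; a perfectly consistent matrix gives CI = 0, and the choice of underlying scale shifts both the weights and the consistency, which is the phenomenon the study examines.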
[Effects decomposition in mediation analysis: a numerical example].
Zugna, Daniela; Richiardi, Lorenzo
2018-01-01
Mediation analysis aims to decompose the total effect of the exposure on the outcome into a direct effect (unmediated) and an indirect effect (mediated by a mediator). When the interest also lies in understanding whether the exposure effect differs across sub-groups of the study population or under different scenarios, mediation analysis needs to be integrated with interaction analysis. In this setting it is necessary to decompose the total effect not only into two components, the direct and indirect effects, but also into two further components linked to interaction. The interaction between the exposure and the mediator in their effect on the outcome can indeed act through the effect of the exposure on the mediator, or through the mediator when the mediator is not totally explained by the exposure. We describe options proposed in the literature for decomposing the total effect, and we illustrate them through a hypothetical example of the effect of age at diagnosis of cancer on survival, mediated and unmediated by the therapeutic approach, and a numerical example.
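In the simplest linear setting without exposure-mediator interaction, the direct/indirect split reduces to the product-of-coefficients rule; the coefficients below are invented purely for illustration (the decompositions discussed in the abstract, with interaction components, are more general).

```python
# Linear mediation sketch: exposure A -> mediator M -> outcome Y, no interaction.
# All coefficients are hypothetical, chosen only to illustrate the decomposition.
a = 0.5        # effect of exposure on mediator   (M = a*A + error)
b = 0.8        # effect of mediator on outcome    (Y = b*M + c_dir*A + error)
c_dir = 0.3    # direct effect of exposure on outcome

indirect = a * b           # mediated (indirect) effect = 0.4
total = c_dir + indirect   # total effect decomposes exactly: 0.3 + 0.4
```

With interaction, two extra components would be added to this sum, as the abstract describes.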
Cluster analysis by optimal decomposition of induced fuzzy sets
Energy Technology Data Exchange (ETDEWEB)
Backer, E
1978-01-01
Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) with additional information, supplied by the pattern class membership values, beyond the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)
Kinetic analysis of the thermal decomposition of Li4Ti5O12 pellets
Directory of Open Access Journals (Sweden)
Hugo A. Mosqueda
2011-12-01
Full Text Available A single dynamic kinetic analysis, describing the surface decomposition of Li4Ti5O12 pellets, has been performed. Samples were analyzed by X-ray diffraction and scanning electron microscopy. The analyses were performed between 1000 and 1100°C for different times, revealing the decomposition of Li4Ti5O12 to Li2Ti3O7 with a loss of lithium. As expected, more rapid decomposition behaviour was found at higher temperatures. Finally, the activation energy for the decomposition of Li4Ti5O12 to Li2Ti3O7 was estimated to be 383 kJ/mol.
Xingyan Huang; Cornelis F. De Hoop; Jiulong Xie; Chung-Yun Hse; Jinqiu Qi; Yuzhu Chen; Feng Li
2017-01-01
The thermal decomposition characteristics of microwave liquefied rape straw residues with respect to liquefaction condition and pyrolysis conversion were investigated using a thermogravimetric (TG) analyzer at heating rates of 5, 20, and 50 °C min-1. The hemicellulose decomposition peak was absent in the derivative thermogravimetric analysis (DTG...
Thermal decomposition kinetics of sorghum straw via thermogravimetric analysis.
Dhyani, Vaibhav; Kumar, Jitendra; Bhaskar, Thallada
2017-12-01
The thermal decomposition of sorghum straw was investigated by non-isothermal thermogravimetric analysis, where the determination of the kinetic triplet (activation energy, pre-exponential factor, and reaction model) was the key objective. The activation energy was determined using different isoconversional methods: Friedman, Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS), Starink, the iterative method of Chai & Chen, the Vyazovkin AIC method, and the Li & Tang equation. The pre-exponential factor was calculated using Kissinger's equation, while the reaction model was predicted by comparison of the z-master plot obtained from experimental values with the theoretical plots. The values of activation energy obtained from isoconversional methods were further used for evaluation of the thermodynamic parameters: enthalpy, entropy and Gibbs free energy. Results showed three zones of pyrolysis having average activation energy values of 151.21 kJ/mol, 116.15 kJ/mol, and 136.65 kJ/mol respectively. The data fitted well to the two-dimensional 'Valensi' model for conversion values from 0 to 0.4, with a coefficient of determination (R²) of 0.988, and to a third-order reaction model for values from 0.4 to 0.9, with an R² of 0.843. Copyright © 2017 Elsevier Ltd. All rights reserved.
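As one concrete example of the isoconversional methods listed above, the Kissinger-Akahira-Sunose estimate at a fixed conversion level comes from the slope of ln(β/T²) versus 1/T across heating rates. The data below are synthetic, generated to be exactly consistent with an assumed Ea of 150 kJ/mol; they are not the sorghum-straw measurements.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kas_activation_energy(betas, T_alpha):
    """KAS method at one conversion level: ln(beta/T^2) vs 1/T has slope -Ea/R."""
    y = np.log(np.asarray(betas) / np.asarray(T_alpha) ** 2)
    x = 1.0 / np.asarray(T_alpha)
    slope = np.polyfit(x, y, 1)[0]
    return -slope * R  # Ea in J/mol

# Self-consistent synthetic data: beta = T^2 * exp(C - Ea/(R*T)), Ea = 150 kJ/mol
Ea_true, C = 150e3, 10.0
T = np.array([600.0, 620.0, 640.0])           # K, temperatures at fixed conversion
beta = T ** 2 * np.exp(C - Ea_true / (R * T)) # implied heating rates

Ea_est = kas_activation_energy(beta, T)
```

Because the synthetic points lie exactly on the KAS line, the fit recovers the assumed activation energy.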
Multi-country comparisons of energy performance: The index decomposition analysis approach
International Nuclear Information System (INIS)
Ang, B.W.; Xu, X.Y.; Su, Bin
2015-01-01
Index decomposition analysis (IDA) is a popular tool for studying changes in energy consumption over time in a country or region. This specific application of IDA, which may be called temporal decomposition analysis, has been extended by researchers and analysts to study variations in energy consumption or energy efficiency between countries or regions, i.e. spatial decomposition analysis. In spatial decomposition analysis, the main objective is often to understand the relative contributions of overall activity level, activity structure, and energy intensity in explaining differences in total energy consumption between two countries or regions. We review the literature of spatial decomposition analysis, investigate the methodological issues, and propose a spatial decomposition analysis framework for multi-region comparisons. A key feature of the proposed framework is that it passes the circularity test and provides consistent results for multi-region comparisons. A case study in which 30 regions in China are compared and ranked based on their performance in energy consumption is presented. - Highlights: • We conducted cross-regional comparisons of energy consumption using IDA. • We proposed two criteria for IDA method selection in spatial decomposition analysis. • We proposed a new model for regional comparison that passes the circularity test. • Features of the new model are illustrated using the data of 30 regions in China
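A minimal additive LMDI-I sketch for the activity/structure/intensity factorization E = Σ_i Q·S_i·I_i used in such analyses; the two-sector numbers are invented, and the property checked is that the three effects sum exactly to the total change (the circularity test for multi-region comparisons is not addressed here).

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, the LMDI weight."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi(Q0, Q1, S0, S1, I0, I1):
    """Additive LMDI-I effects for E_i = Q * S_i * I_i (activity, structure, intensity)."""
    E0, E1 = Q0 * S0 * I0, Q1 * S1 * I1
    w = np.array([logmean(e1, e0) for e0, e1 in zip(E0, E1)])
    act = np.sum(w * np.log(Q1 / Q0))
    stru = np.sum(w * np.log(S1 / S0))
    inten = np.sum(w * np.log(I1 / I0))
    return act, stru, inten, E1.sum() - E0.sum()

# Two hypothetical sectors: activity Q, shares S and intensities I in two years
Q0, Q1 = 100.0, 120.0
S0, S1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
I0, I1 = np.array([2.0, 1.0]), np.array([1.8, 1.1])

act, stru, inten, dE = lmdi(Q0, Q1, S0, S1, I0, I1)
```

The exact additivity (no residual) is the main attraction of LMDI-I among index decomposition methods.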
Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis
Directory of Open Access Journals (Sweden)
Jacques Duchêne
2008-05-01
Full Text Available The aim of this paper was to compare empirical mode decomposition (EMD and two new extended methods of EMD named complex empirical mode decomposition (complex-EMD and bivariate empirical mode decomposition (bivariate-EMD. All methods were used to analyze stabilogram center of pressure (COP time series. The two new methods are suitable to be applied to complex time series to extract complex intrinsic mode functions (IMFs before the Hilbert transform is subsequently applied on the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.
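The two indicators named above (circle area and average rotation frequency of an analytic IMF) can be sketched as follows. The analytic signal is built with an FFT-based Hilbert transform, and a pure tone stands in for one intrinsic mode function; the sampling choices are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero out the negative-frequency half."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def rotation_indicators(imf, fs):
    """Mean rotation frequency (Hz) and area of the circular trace in the complex plane."""
    z = analytic_signal(imf)
    phase = np.unwrap(np.angle(z))
    mean_freq = np.mean(np.diff(phase)) * fs / (2.0 * np.pi)
    area = np.pi * np.mean(np.abs(z) ** 2)   # ~ pi * r^2 for a circular trace
    return mean_freq, area

# A 2 Hz tone sampled at 100 Hz stands in for one IMF of a COP signal
fs = 100.0
t = np.arange(0.0, 5.0, 1.0 / fs)
f, area = rotation_indicators(np.sin(2.0 * np.pi * 2.0 * t), fs)
```

For this tone the trace is the unit circle, so the mean rotation frequency is 2 Hz and the area is close to π.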
Ferrocene Orientation Determined Intramolecular Interactions Using Energy Decomposition Analysis
Directory of Open Access Journals (Sweden)
Feng Wang
2015-11-01
Full Text Available Two very different quantum mechanically based energy decomposition analysis (EDA) schemes are employed to study the dominant energy differences between the eclipsed and staggered ferrocene conformers. One is the extended transition state (ETS) scheme based on the Amsterdam Density Functional (ADF) package and the other is natural EDA (NEDA) based in the General Atomic and Molecular Electronic Structure System (GAMESS) package. It is revealed that, in addition to the model (theory and basis set), the fragmentation channels more significantly affect the interaction energy terms (ΔE) between the conformers. It is discovered that such an interaction energy can be absorbed into the pre-partitioned fragment channels so as to affect the interaction energies in a particular conformer of Fc. To avoid this, the present study employs a complete fragment channel—the fragments of ferrocene are individual neutral atoms. It thereby finds that the major difference between the ferrocene conformers is due to the quantum mechanical Pauli repulsive energy and the orbital attractive energy, making the eclipsed ferrocene the energetically preferred structure. The NEDA scheme further indicates that the sum of the attractive (negative) polarization (POL) and charge transfer (CT) energies prefers the eclipsed ferrocene. The repulsive (positive) deformation (DEF) energy, which is dominated by the cyclopentadienyl (Cp) rings, prefers the staggered ferrocene. Again, the cancellation results in a small energy residue in favour of the eclipsed ferrocene, in agreement with the ETS scheme. Further Natural Bond Orbital (NBO) analysis indicates that all NBO energies, total Lewis (no Fe) and lone pair (LP) deletion, prefer the eclipsed Fc conformer. The most significant energy preferring the eclipsed ferrocene without cancellation is the interactions between the donor lone pairs (LP) of the Fe atom and the acceptor antibond (BD*) NBOs of all C–C and C–H bonds in the ligand, LP(Fe-BD*(C–C & C
Gender Differences in Mental Well-Being: A Decomposition Analysis
Madden, David
2010-01-01
The General Health Questionnaire (GHQ) is frequently used as a measure of mental well-being. A consistent pattern across countries is that women report lower levels of mental well-being, as measured by the GHQ. This paper applies decomposition techniques to Irish data for 1994 and 2000 to examine the factors lying behind the gender differences in…
Trends in air pollution in Ireland : A decomposition analysis
Tol, Richard S.J.
2016-01-01
Trends in the emissions to air of sulphur dioxide, nitrogen oxides, carbon monoxide, volatile organic compounds and ammonia in Ireland are analysed with a logarithmic mean Divisia index decomposition for the period of 1990-2009. Emissions fell for four of the five pollutants, with ammonia being
International Nuclear Information System (INIS)
Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K.; Guérold, François
2016-01-01
Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 that reported the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effects. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but sample size was low. Considering metal mine drainage, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition, but sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates. - Highlights: • A meta-analysis was done to assess the effects of heavy metals on litter decomposition. • Heavy metals significantly and strongly inhibited litter decomposition in streams.
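The pooled effect in such a meta-analysis is typically an inverse-variance-weighted mean of per-case effect sizes, e.g. log response ratios of decomposition rates. The five effect sizes below are invented for illustration, and a simple fixed-effect pooling is used for brevity (a random-effects model would add a between-case variance component).

```python
import numpy as np

# Hypothetical per-case log response ratios ln(k_contaminated / k_reference)
# and their sampling variances (negative values = inhibited decomposition).
lnRR = np.array([-0.40, -0.15, -0.55, -0.05, -0.30])
var = np.array([0.04, 0.02, 0.05, 0.03, 0.02])

w = 1.0 / var                              # inverse-variance weights
pooled = np.sum(w * lnRR) / np.sum(w)      # pooled effect size
se = np.sqrt(1.0 / np.sum(w))              # standard error of the pooled effect
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

A 95% confidence interval entirely below zero corresponds to the overall significant inhibition reported in the abstract.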
Parallel algorithms for nuclear reactor analysis via domain decomposition method
International Nuclear Information System (INIS)
Kim, Yong Hee
1995-02-01
In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and is solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides a convergence analysis and improvements to the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case and various underlying features are explored. The analysis shows that the convergence rate of the algorithm highly depends on the pseudo-boundary conditions and that the theoretically best one is the mixed boundary conditions (MM conditions). Also it is shown that there may exist a significant discrepancy between continuous model analysis and discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation in the pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
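The two-subdomain Schwarz alternation with Dirichlet (DD) pseudo-boundary conditions can be sketched, serially rather than on transputers, on a 1D Poisson model problem; the grid, overlap and test problem are illustrative assumptions, not the thesis's diffusion setup.

```python
import numpy as np

def solve_dirichlet(x, f, ua, ub):
    """Finite-difference solve of -u'' = f on grid x with u(x[0])=ua, u(x[-1])=ub."""
    n, h = len(x), x[1] - x[0]
    A = (np.diag(np.full(n - 2, 2.0))
         - np.diag(np.ones(n - 3), 1) - np.diag(np.ones(n - 3), -1))
    b = h * h * f[1:-1].copy()
    b[0] += ua
    b[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, b)
    return u

# Model problem: -u'' = pi^2 sin(pi x) on [0,1], exact solution u = sin(pi x)
n = 51
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)                # global iterate, zero initial guess
i1, i2 = 35, 15                # subdomain 1 = [0, x[i1]], subdomain 2 = [x[i2], 1]

for _ in range(30):            # alternating Schwarz sweeps with Dirichlet data
    u[:i1 + 1] = solve_dirichlet(x[:i1 + 1], f[:i1 + 1], 0.0, u[i1])
    u[i2:] = solve_dirichlet(x[i2:], f[i2:], u[i2], 0.0)

err = np.max(np.abs(u - np.sin(np.pi * x)))  # dominated by O(h^2) discretization error
```

Each subdomain solve takes its pseudo-boundary value from the latest global iterate; shrinking the overlap (moving i1 toward i2) slows the iteration, which is the sensitivity the thesis analyzes.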
A posteriori error analysis of multiscale operator decomposition methods for multiphysics models
International Nuclear Information System (INIS)
Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T
2008-01-01
Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples
Theory of pairwise lesion interaction
International Nuclear Information System (INIS)
Harder, Dietrich; Virsik-Peuckert, Patricia; Bartels, Ernst
1992-01-01
A comparison between repair time constants measured both at the molecular and cellular levels has shown that the DNA double strand break is the molecular change of key importance in the causation of cellular effects such as chromosome aberrations and cell inactivation. Cell fusion experiments provided the evidence that it is the pairwise interaction between two double strand breaks - or more exactly between the two "repair sites" arising from them in the course of enzymatic repair - that produces the faulty chromatin crosslink which leads to cytogenetic and cytolethal effects. These modern experiments have confirmed the classical assumption of pairwise lesion interaction (PLI) on which the models of Lea and Neary were based. It seems worthwhile to continue and complete the mathematical treatment of their proposed mechanism in order to show in quantitative terms that the well-known fractionation, protraction and linear energy transfer (LET) irradiation effects are consequences of, or can at least be partly attributed to, PLI. Arithmetic treatment of PLI - a second order reaction - also has the advantage of providing a prerequisite for further investigations into the stages of development of misrepair products such as chromatin crosslinks. It has been possible to formulate a completely arithmetic theory of PLI by consistently applying three biophysically permitted approximations - pure first order lesion repair kinetics, dose-independent repair time constants and low yield of the ionization/lesion conversion. The mathematical approach will be summarized here, including several formulae not elaborated at the time of previous publications. We will also study an application which sheds light on the chain of events involved in PLI. (author)
Analysis of benzoquinone decomposition in solution plasma process
International Nuclear Information System (INIS)
Bratescu, M.A.; Saito, N.
2016-01-01
The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS) by monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule, during and after SPP. Just in the beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of plasma. The results show that plasma influences the p-BQ molecules in two ways: (i) plasma produces a polarization and an orientation of the molecules in the local electric field of plasma and (ii) the gas phase plasma supplies, in the liquid phase, hydrogen and hydroxyl radicals, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography
Singular value decomposition methods for wave propagation analysis
Czech Academy of Sciences Publication Activity Database
Santolík, Ondřej; Parrot, M.; Lefeuvre, F.
2003-01-01
Roč. 38, č. 1 (2003), s. 10-1-10-13 ISSN 0048-6604 R&D Projects: GA ČR GA205/01/1064 Grant - others:Barrande(CZ) 98039/98055 Institutional research plan: CEZ:AV0Z3042911; CEZ:MSM 113200004 Keywords : wave propagation * singular value decomposition Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.832, year: 2003
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.
Bonan, G. B.; Wieder, W. R.
2012-12-01
litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveal additional, but here different, inferences about model performance.
Václav URUBA
2010-01-01
Separation of the turbulent boundary layer (BL) on a flat plate under an adverse pressure gradient was studied experimentally using the Time-Resolved PIV technique. The results of a spatio-temporal analysis of the flow field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) and its extension BOD (Bi-Orthogonal Decomposition) techniques are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes...
1985-06-01
12. It was stated that analysis of the gaseous products showed that they consisted of N2O, NO, N2, CO, CO2, F^CO and traces of N,* The products of...IR, UV and mass spectrometry. These were (yields summarized in Table 1) as follows: No 1 N2O, NO, CO2, CO, HCN, CH2O, and H2O. NO2 and a trace ...Ramirez, "Reaction of Gem-Nitronitroso Compounds with Triethyl Phosphite," Tetrahedron, Vol. 29, p. 4195, 1973. J. Jappy and P.N. Preston
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as the nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated, algorithm. Here we investigate an approach which makes use of neural networks appropriately trained on the results of a Monte Carlo system reliability/availability evaluation to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project
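The variance decomposition referred to above assigns each input a first-order index S_i = Var(E[Y|X_i])/Var(Y). The sketch below estimates these indices by brute-force double-loop sampling, with a cheap linear toy model standing in for the trained neural-network surrogate; the model and sample sizes are illustrative assumptions (its exact indices are 0.2 and 0.8).

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    """Stand-in for a trained neural-network surrogate of the system model."""
    return x[:, 0] + 2.0 * x[:, 1]   # known first-order indices: 0.2 and 0.8

def first_order_indices(model, d, n_outer=400, n_inner=200):
    """S_i = Var(E[Y|X_i]) / Var(Y), estimated by double-loop Monte Carlo."""
    var_y = model(rng.random((20000, d))).var()
    S = np.empty(d)
    for i in range(d):
        cond_means = np.empty(n_outer)
        for k in range(n_outer):
            x = rng.random((n_inner, d))
            x[:, i] = rng.random()          # fix X_i, average over the rest
            cond_means[k] = model(x).mean()
        S[i] = cond_means.var() / var_y
    return S

S = first_order_indices(surrogate, 2)
```

The huge number of model calls in the double loop is exactly why the paper replaces the Monte Carlo system model with a fast surrogate.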
Decomposition method for analysis of closed queuing networks
Directory of Open Access Journals (Sweden)
Yu. G. Nesterov
2014-01-01
Full Text Available This article deals with a method to estimate the average residence time in the nodes of closed queuing networks with priorities and a wide range of conservative service disciplines. The method is based on a decomposition of the entire closed queuing network into a set of simple basic queuing systems of the M|GI|m|N type, one for each node. The unknown average residence times in the network nodes are interrelated through a system of nonlinear equations, and the existence of a solution of this system has been proved. An iterative procedure based on the Newton-Kantorovich method is proposed for finding the solution of such a system. This procedure provides fast convergence to the solution. At present, the applicability of the proposed method is limited by the known analytical solutions for simple basic queuing systems of the M|GI|m|N type.
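The Newton-Kantorovich step for such a system of interrelated residence-time equations can be sketched on a toy two-node stand-in; the equations below are invented for illustration (in the actual method, the residence-time formulas of the M|GI|m|N basic systems would take their place).

```python
import numpy as np

# Toy stand-in for the interrelated residence-time equations, written as F(t) = 0.
def F(t):
    return np.array([t[0] - 1.0 - 0.5 * np.log(1.0 + t[1]),
                     t[1] - 2.0 - 0.25 * np.sqrt(t[0])])

def J(t):
    """Jacobian of F, used by the Newton-Kantorovich iteration."""
    return np.array([[1.0, -0.5 / (1.0 + t[1])],
                     [-0.125 / np.sqrt(t[0]), 1.0]])

t = np.array([1.0, 1.0])                 # initial guess for the residence times
for _ in range(20):
    t = t - np.linalg.solve(J(t), F(t))  # Newton step
```

Because the system is smooth and diagonally dominant near the solution, the iteration converges in a handful of steps, illustrating the fast convergence claimed in the abstract.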
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Zilong Zhang; Xingpeng Chen; Peter Heck
2014-01-01
Integrated analysis on socio-economic metabolism could provide a basis for understanding and optimizing regional sustainability. The paper conducted socio-economic metabolism analysis by means of the emergy accounting method coupled with data envelopment analysis and decomposition analysis techniques to assess the sustainability of Qingyang city and its eight sub-region system, as well as to identify the major driving factors of performance change during 2000–2007, to serve as the basis for f...
Analysis of Siderite Thermal Decomposition by Differential Scanning Calorimetry
Bell, M. S.; Lin, I.-C.; McKay, D. S.
2000-01-01
Characterization of carbonate devolatilization has important implications for atmospheric interactions and climatic effects related to large meteorite impacts in platform sediments. On a smaller scale, meteorites contain carbonates which have witnessed shock metamorphic events and may record pressure/temperature histories of impact(s). The ALH84001 meteorite contains zoned Ca-Mg-Fe-carbonates which formed on Mars. Magnetite crystals are found in the rims and cores of these carbonates and some are associated with void spaces, leading to the suggestion by Brearley et al. that the crystals were produced by thermal decomposition of the carbonate at high temperature, possibly by incipient shock melting or devolatilization. Golden et al. recently synthesized spherical Mg-Fe-Ca-carbonates from solution under mild hydrothermal conditions that have carbonate compositional zoning similar to that of ALH84001. They have shown experimental evidence that the carbonate-sulfide-magnetite assemblage in ALH84001 can result from a multistep inorganic process involving heating, possibly due to shock events. Experimental shock studies on calcium carbonate prove its stability to approx. 60 GPa, well in excess of the approx. 45 GPa peak pressures indicated by other shock features in ALH84001. In addition, Raman spectroscopy of carbonate globules in ALH84001 indicates no presence of CaO and MgO. Such oxide phases should be found associated with the magnetites in voids if these magnetites are high temperature shock products, the voids resulting from devolatilization of CO2 from calcium or magnesium carbonate. However, if the starting material was siderite (FeCO3), thermal breakdown of the ALH84001 carbonate at 470 °C would produce iron oxide + CO2. As no documentation of shock effects in siderite exists, we have begun shock experiments to determine whether or not magnetite is produced by the decomposition of siderite within the < 45 GPa pressure window and by the resultant thermal pulse to approx
Determinants of sovereign debt yield spreads under EMU: Pairwise approach
Fazlioglu, S.
2013-01-01
This study aims at providing an empirical analysis of the long-term determinants of sovereign debt yield spreads under the European EMU (Economic and Monetary Union) through a pairwise approach within a panel framework. Panel gravity models are increasingly used in the cross-market correlation literature while
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the size of the overlap between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
Energy Technology Data Exchange (ETDEWEB)
Comsa, D.C. E-mail: comsadc@mcmaster.ca; Prestwich, W.V.; McNeill, F.E.; Byun, S.H
2004-12-01
The toxic effects of aluminum are cumulative and result in painful forms of renal osteodystrophy, most notably adynamic bone disease and osteomalacia, but also other forms of disease. The Trace Element Group at McMaster University has developed an accelerator-based in vivo procedure for detecting aluminum body burden by neutron activation analysis (NAA). Further refinement of the method was necessary to increase its sensitivity. In this context, the present study proposes an improved algorithm for data analysis, based on spectral decomposition. A new minimum detectable limit (MDL) of (0.7 ± 0.1) mg Al was reached for a local dose of (20 ± 1) mSv. The study also addresses the feasibility of a new data acquisition technique, the electronic rejection of the coincident events detected by a NaI(Tl) system. It is expected that the application of this technique, together with spectral decomposition analysis, would provide an acceptable MDL for the method to be valuable in a clinical setting.
Decomposition analysis of CO2 emissions from passenger cars: The cases of Greece and Denmark
International Nuclear Information System (INIS)
Papagiannaki, Katerina; Diakoulaki, Danae
2009-01-01
The paper presents a decomposition analysis of the changes in carbon dioxide (CO2) emissions from passenger cars in Denmark and Greece for the period 1990-2005. A time series analysis has been applied based on the logarithmic mean Divisia index I (LMDI I) methodology, which belongs to the wider family of index decomposition approaches. The feature of road transport that justifies an in-depth analysis is its remarkably rapid growth during recent decades, accompanied by a corresponding increase in emissions. Denmark and Greece have been selected based on the striking differences in the socio-economic characteristics of these two small EU countries, as well as on the availability of the detailed data used in the analysis. In both countries, passenger cars are responsible for half of the emissions from road transport as well as for their upward trend, which motivates a decomposition analysis focusing exactly on this segment of road transport. The factors examined in the present decomposition analysis are related to vehicle ownership, fuel mix, annual mileage, engine capacity and technology of cars. The comparison of the results discloses the differences in the transportation profiles of the two countries and reveals how they affect the trend of CO2 emissions.
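The additive LMDI I scheme has a simple closed form: each factor's effect is a log-mean-weighted sum over sub-sectors, and the effects sum exactly to the total change. A sketch under the assumption of a multiplicative identity C = Σᵢ Πₖ x₍ₖ,ᵢ₎ (the factor arrays and numbers below are illustrative, not the paper's data):

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_effects(factors0, factorsT):
    """Additive LMDI I: one effect per factor. factors0/factorsT are lists of
    arrays (one array of sub-sector values per factor) for period 0 and T.
    The returned effects sum exactly to the change in V = sum_i prod_k x_k,i."""
    v0 = np.prod(factors0, axis=0)
    vT = np.prod(factorsT, axis=0)
    w = logmean(vT, v0)                    # sub-sector log-mean weights
    return [float(np.sum(w * np.log(xT / x0)))
            for x0, xT in zip(factors0, factorsT)]
```

The perfect-decomposition property (no unexplained residual) is the defining feature of LMDI I within the index decomposition family.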
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two-step mass-loss process comprising the decomposition of ADN and the subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves, by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using computation methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step, considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
Residential energy consumption in urban China: A decomposition analysis
International Nuclear Information System (INIS)
Zhao Xiaoli; Li Na; Ma, Chunbo
2012-01-01
Residential energy consumption (REC) is the second largest energy use category (10%) in China, and urban residents account for 63% of the REC. Understanding the underlying drivers of variations in urban REC thus helps to identify challenges and opportunities and to provide advice for future policy measures. This paper applies the LMDI method to a decomposition of China's urban REC during the period 1998–2007 at a disaggregated product/activity level, using data collected from a wide range of sources. Our results show an extensive structural change towards a more energy-intensive household consumption structure as well as an intensive structural change towards high-quality and cleaner energy such as electricity, oil, and natural gas, which reflects a changing lifestyle and consumption mode in pursuit of a higher level of comfort, convenience and environmental protection. We have also found that China's price reforms in the energy sector have contributed to a reduction of REC, while scale factors including increased urban population and income levels have played a key role in the rapid growth of REC. We suggest further deregulation of energy prices and the promotion of regulatory as well as voluntary energy efficiency and conservation policies in the residential sector. - Highlights: ► We examine China's residential energy consumption (REC) at detailed product level. ► Results show significant extensive and intensive structural changes. ► Price deregulation in the energy sector has contributed to a reduction of REC. ► Growth of population and income played a key role in the rapid growth of REC. ► We provide policy suggestions to promote REC saving.
Doctoral Program Selection Using Pairwise Comparisons.
Tadisina, Suresh K.; Bhasin, Vijay
1989-01-01
The application of a pairwise comparison methodology (Saaty's Analytic Hierarchy Process) to the doctoral program selection process is illustrated. A hierarchy for structuring and facilitating the doctoral program selection decision is described. (Author/MLW)
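Saaty's Analytic Hierarchy Process derives priority weights as the principal eigenvector of the reciprocal pairwise comparison matrix, with a consistency index flagging incoherent judgments. A minimal sketch (the function name and any example matrix are illustrative):

```python
import numpy as np

def ahp_priorities(M):
    """Priority weights and consistency index from a pairwise comparison
    matrix M (reciprocal: M[i, j] is the preference of option i over j
    on Saaty's 1-9 scale, and M[j, i] = 1 / M[i, j])."""
    M = np.asarray(M, float)
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)              # principal eigenvalue lambda_max
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                          # normalised priority vector
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)     # consistency index (lambda_max - n)/(n - 1)
    return w, ci
```

For a perfectly consistent matrix (M[i, j] = wᵢ/wⱼ) the recovered weights match w exactly and the consistency index is zero; Saaty's consistency ratio divides `ci` by a tabulated random-index value for the given n.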
Directory of Open Access Journals (Sweden)
Batakliev Todor
2014-06-01
Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order with respect to ozone. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
Directory of Open Access Journals (Sweden)
Václav URUBA
2010-12-01
Full Text Available Separation of the turbulent boundary layer (BL) on a flat plate under adverse pressure gradient was studied experimentally using the Time-Resolved PIV technique. The results of spatio-temporal analysis of the flow-field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) and its extension BOD (Bi-Orthogonal Decomposition) techniques are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes to understanding the physical mechanisms of the boundary layer separation process. The acquired information could be used to improve strategies of boundary layer separation control.
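The snapshot POD used in such analyses reduces to an SVD of the mean-subtracted snapshot matrix; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def pod(snapshots, n_modes):
    """Snapshot POD: each column of `snapshots` is one flow-field sample.
    Returns energy-ranked spatial modes, their temporal coefficients, and
    the fraction of fluctuation energy captured by each mode."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)   # subtract mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :n_modes]                                  # spatial structures
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]            # temporal dynamics
    energy = s**2 / np.sum(s**2)                            # energy fraction per mode
    return modes, coeffs, energy
```

BOD extends this by treating the spatial and temporal eigenfunctions (topos and chronos) on an equal footing, and POP analysis instead fits a linear evolution operator to the time series.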
Vreugdenhil, Andrew J.; Brienne, Stephane H. R.; Markwell, Ross D.; Butler, Ian S.; Finch, James A.
1997-03-01
The O-ethyldithiocarbonate (ethyl xanthate, CH3CH2OCS2-) anion is a widely used reagent in mineral processing for the separation of sulphide minerals by froth flotation. Ethyl xanthate interacts with mineral powders to produce a hydrophobic layer on the mineral surface. A novel infrared technique, headspace analysis gas-phase infrared spectroscopy (HAGIS), has been used to study the in situ thermal decomposition products of ethyl xanthate on mineral surfaces. These products include CS2, COS, CO2, CH4, SO2, and higher molecular weight alkyl-containing species. Decomposition pathways have been proposed, with some information determined from 2H- and 13C-isotope labelling experiments.
Statistical physics of pairwise probability models
Directory of Open Access Journals (Sweden)
Yasser Roudi
2009-11-01
Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
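One standard approximate inference scheme for such pairwise (maximum entropy/Ising) models is naive mean-field inversion, which uses exactly the sufficient statistics named above: means and pairwise correlations. A sketch of the textbook form (not the authors' own code; `nmf_fit` is an illustrative name):

```python
import numpy as np

def nmf_fit(spins):
    """Naive mean-field inversion for a pairwise (Ising) model from ±1 data.
    Couplings: J = -C^{-1} with zeroed diagonal, where C is the connected
    correlation matrix; fields: h_i = atanh(m_i) - sum_j J_ij m_j."""
    m = spins.mean(axis=0)                 # magnetisations <s_i>
    C = np.cov(spins.T)                    # connected correlations
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)               # no self-couplings
    h = np.arctanh(m) - J @ m
    return h, J
```

Finer approximations discussed in this literature (TAP, independent-pair, susceptibility propagation) correct this estimate at stronger couplings; for independent units the inferred couplings vanish and the fields reduce to atanh of the means.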
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
DEFF Research Database (Denmark)
Han, Xixuan; Clemmensen, Line Katrine Harder
2015-01-01
We propose a general technique for obtaining sparse solutions to generalized eigenvalue problems, and call it Regularized Generalized Eigen-Decomposition (RGED). For decades, Fisher's discriminant criterion has been applied in supervised feature extraction and discriminant analysis, and it is for...
Dynamics of pairwise motions in the Cosmic Web
Hellwing, Wojciech A.
2016-10-01
We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, uniquely identifying each as one of the four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities v12, as well as their spatial dependence, together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids, when compared to the much denser environments of filaments and nodes. We discuss our results, showing that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
Sugiura, Shinji; Ikeda, Hiroshi
2014-03-01
The decomposition of vertebrate carcasses is an important ecosystem function. Soft tissues of dead vertebrates are rapidly decomposed by diverse animals. However, decomposition of hard tissues such as hairs and feathers is much slower, because only a few animals can digest keratin, a protein that is concentrated in hairs and feathers. Although beetles of the family Trogidae are considered keratin feeders, their ecological function has rarely been explored. Here, we investigated the keratin-decomposition function of trogid beetles in heron-breeding colonies where keratin was frequently supplied as feathers. Three trogid species were collected from the colonies and observed feeding on heron feathers under laboratory conditions. We also measured the nitrogen (δ15N) and carbon (δ13C) stable isotope ratios of two trogid species that were maintained on a constant diet (feathers from one heron individual) for 70 days under laboratory conditions. We compared the isotopic signatures of the trogids with those of the feathers to investigate isotopic shifts from the feathers to the consumers for δ15N and δ13C. We used mixing models (MixSIR and SIAR) to estimate the main diets of individual field-collected trogid beetles. The analysis indicated that heron feathers were more important as food for trogid beetles than were soft tissues under field conditions. Together, the feeding experiment and stable isotope analysis provided strong evidence of keratin decomposition by trogid beetles.
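Bayesian mixing models such as MixSIR and SIAR elaborate on a simple linear mixing identity. For two sources and a single isotope, the deterministic core can be sketched as follows (the trophic discrimination factor arguments are an illustrative assumption, not values from this study):

```python
def two_source_fraction(d_mix, d_a, d_b, tdf_a=0.0, tdf_b=0.0):
    """Fraction of source A in a consumer's diet from one isotope ratio,
    after shifting each source signature by its trophic discrimination
    factor (TDF). This linear two-source model is the deterministic core
    that Bayesian mixing models (MixSIR, SIAR) place priors around."""
    a, b = d_a + tdf_a, d_b + tdf_b       # TDF-corrected source signatures
    f = (d_mix - b) / (a - b)             # linear mixing: d_mix = f*a + (1-f)*b
    return min(1.0, max(0.0, f))          # clamp to a valid proportion
```

The Bayesian versions additionally propagate the variability of source signatures and TDFs into a posterior distribution over the diet proportions, rather than returning a point estimate.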
Statistical pairwise interaction model of stock market
Bury, Thomas
2013-03-01
Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics; alternatively, agent-based models rely on rules designed to recover certain empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions, in which the product (output) of one reaction serves as the substrate (input) of another. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
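The PRK construction itself is transducer-based, but the underlying idea of a pairwise kernel is easy to state: one standard construction takes the tensor product of a base kernel with itself, so that a pair resembles another pair when the corresponding members resemble each other. A sketch of that baseline construction (not the PRK algorithm):

```python
import numpy as np

def tensor_pairwise_gram(K):
    """Tensor-product pairwise kernel: K_pair((a, b), (c, d)) = K[a, c] * K[b, d].
    Given the Gram matrix K of a base kernel over n entities, the Gram matrix
    over all n*n ordered pairs is the Kronecker product K ⊗ K (pair (a, b)
    maps to row/column index a*n + b). Positive semidefiniteness of K is
    preserved, so the result is a valid kernel for a pairwise SVM."""
    return np.kron(K, K)
```

The storage cost (n² × n² Gram matrix) illustrates why the abstract stresses processing time and storage as the bottleneck that transducer-based kernels aim to ease.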
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
2018-04-30
Monthly Progress Report. Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques. Abstract: The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition.
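Koopman mode analysis of snapshot data is commonly approximated by dynamic mode decomposition (DMD); a minimal sketch of exact DMD under that standard approximation (the truncation rank is a user choice, and the function name is illustrative):

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition, a finite-dimensional approximation of
    the Koopman mode decomposition. X is a snapshot matrix with columns ordered
    in time; returns eigenvalues and modes of the best-fit linear map X2 ≈ A X1."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]                 # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vt.conj().T / s         # operator in POD coordinates
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vt.conj().T @ (W / s[:, None])        # exact DMD modes
    return eigvals, modes
```

For data generated by a linear map x_{k+1} = A x_k, the returned eigenvalues coincide with those of A; for nonlinear systems such as sea ice dynamics, they approximate Koopman eigenvalues, with |λ| separating growing, neutral and decaying modes.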
Ionic liquid thermal stabilities: decomposition mechanisms and analysis tools.
Maton, Cedric; De Vos, Nils; Stevens, Christian V
2013-07-07
The increasing number of papers published on ionic liquids generates an extensive quantity of data. The thermal stability data of divergent ionic liquids are collected in this paper with attention to the experimental set-up. The influence and importance of the latter parameters are broadly addressed. Both ramped-temperature and isothermal thermogravimetric analysis are discussed, along with state-of-the-art methods such as TGA-MS and pyrolysis-GC. The strengths and weaknesses of the different methodologies known to date demonstrate that analysis methods should be in line with the application. The combination of data from advanced analysis methods allows us to obtain in-depth information on the degradation processes. Aided by computational methods, the kinetics and thermodynamics of thermal degradation are revealed piece by piece. The better understanding of the behaviour of ionic liquids at high temperature allows selective and application-driven design, as well as mathematical prediction for engineering purposes.
Distributed Robustness Analysis of Interconnected Uncertain Systems Using Chordal Decomposition
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
Large-scale interconnected uncertain systems commonly have large state and uncertainty dimensions. Aside from the heavy computational cost of performing robust stability analysis in a centralized manner, privacy requirements in the network can also introduce further issues. In this paper, we util...
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Multistage principal component analysis based method for abdominal ECG decomposition
International Nuclear Information System (INIS)
Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas
2015-01-01
Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of cardiocycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of forming their truncated representation and reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method on the PhysioNet Challenge 2013 open data set were: average scores of 341.503 bpm² and 32.81 ms. (paper)
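The first stage, truncated PCA representation and subtraction of the maternal cardiocycles, can be sketched as follows (an illustrative reading of the described step, not the authors' exact implementation; cycle alignment and detection are assumed done upstream):

```python
import numpy as np

def truncated_pca_subtract(cycles, k):
    """Represent stacked, time-aligned maternal cardiocycles (one per row) by
    their first k principal components and subtract that truncated
    reconstruction. The residual retains components (such as the fetal ECG)
    that do not lie in the maternal-cycle subspace."""
    mean = cycles.mean(axis=0)
    Xc = cycles - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    recon = (U[:, :k] * s[:k]) @ Vt[:k] + mean     # truncated representation
    return cycles - recon                           # residual signal
```

Because maternal beats are highly repeatable, a small k captures nearly all of their energy, so the residual's relative fetal-ECG content grows, which is the premise for the second-stage PCA.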
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation of the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D case is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
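For reference, the algorithmic sifting process that the PDE formulation replaces can be sketched in 1-D. This simplified version uses linear-interpolation envelopes and a fixed iteration count in place of Huang's cubic splines and stopping criteria, so it is an illustration of the mechanism only:

```python
import numpy as np

def sift(x, n_iter=10):
    """One simplified EMD sifting pass: estimate upper and lower envelopes
    through local extrema and repeatedly subtract their mean. Linear
    interpolation stands in for the cubic splines of the original algorithm."""
    h = np.asarray(x, float).copy()
    t = np.arange(len(h))
    for _ in range(n_iter):
        # indices of interior maxima/minima, anchored at both endpoints
        up = np.r_[0, np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1, len(h) - 1]
        lo = np.r_[0, np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1, len(h) - 1]
        if len(up) < 3 or len(lo) < 3:
            break                          # too few extrema to define envelopes
        env_mean = (np.interp(t, up, h[up]) + np.interp(t, lo, h[lo])) / 2
        h = h - env_mean                   # candidate intrinsic mode function
    return h
```

Applied to a fast oscillation riding on a slow trend, `sift` leaves an approximately zero-mean oscillatory component (the candidate IMF); the PDE-based approach replaces the envelope-averaging step with a nonlinear diffusion filter that solves the mean envelope estimation problem directly.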
Preference Learning and Ranking by Pairwise Comparison
Fürnkranz, Johannes; Hüllermeier, Eyke
This chapter provides an overview of recent work on preference learning and ranking via pairwise classification. The learning by pairwise comparison (LPC) paradigm is the natural machine learning counterpart to the relational approach to preference modeling and decision making. From a machine learning point of view, LPC is especially appealing as it decomposes a possibly complex prediction problem into a certain number of learning problems of the simplest type, namely binary classification. We explain how to approach different preference learning problems, such as label and instance ranking, within the framework of LPC. We primarily focus on methodological aspects, but also address theoretical questions as well as algorithmic and complexity issues.
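At prediction time, LPC aggregates the trained binary models' outputs into a ranking. A minimal voting sketch, where `predict` is a hypothetical mapping from label pairs to precomputed binary decisions, standing in for the trained classifiers evaluated on one instance:

```python
def rank_by_pairwise_voting(label_pairs, predict):
    """Learning-by-pairwise-comparison prediction step: each binary model
    votes for the label it prefers, and labels are ranked by vote count
    (Borda-style aggregation). predict[(a, b)] is 1 if label a is preferred
    to label b for the current instance, else 0."""
    labels = sorted({l for pair in label_pairs for l in pair})
    votes = {l: 0 for l in labels}
    for a, b in label_pairs:
        winner = a if predict[(a, b)] == 1 else b
        votes[winner] += 1
    return sorted(labels, key=lambda l: -votes[l])
```

This illustrates the decomposition the chapter describes: a ranking problem over k labels becomes k(k-1)/2 binary classification problems plus a simple aggregation step.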
Statistical physics of pairwise probability models
DEFF Research Database (Denmark)
Roudi, Yasser; Aurell, Erik; Hertz, John
2009-01-01
(Danish abstract not available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying...
Sharaf, Mesbah Fathy; Rashad, Ahmed Shoukry
2016-01-01
There is substantial evidence that on average, urban children have better health outcomes than rural children. This paper investigates the underlying factors that account for the regional disparities in child malnutrition in three Arab countries, namely; Egypt, Jordan, and Yemen. We use data on a nationally representative sample from the most recent rounds of the Demographic and Health Survey. A Blinder-Oaxaca decomposition analysis is conducted to decompose the rural-urban differences in chi...
Li, Guohui; Zhang, Songling; Yang, Hong
2017-01-01
To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed ...
International Nuclear Information System (INIS)
Cansino, José M.; Román, Rocío; Ordóñez, Manuel
2016-01-01
The aim of this paper is the analysis of structural decomposition of changes in CO_2 emissions in Spain by using an enhanced Structural Decomposition Analysis (SDA) supported by detailed Input–Output tables from the World Input–Output Database (2013) (WIOD) for the period 1995–2009. The decomposition of changes in CO_2 emissions at the sectoral level is broken down into six effects: carbonization, energy intensity, technology, structural demand, consumption pattern and scale. The results are interesting not only for researchers but also for utility companies and policy-makers, insofar as past and current political mitigation measures are analyzed in line with such results. The results allow us to conclude that the implementation of the Kyoto Protocol, together with European Directives related to the promotion of RES, seems to have had a positive impact on CO_2 emissions trends in Spain. After reviewing the current mitigation measures in Spain, one policy recommendation is suggested to avoid the rebound effect and to enhance the fight against climate change: tax benefits for those companies that prove reductions in their energy intensity ratios. - Highlights: • The Kyoto Protocol and European Directives acted against CO_2 emissions in Spain. • Changes in the primary energy mix acted against increasing CO_2 emissions. • Energy efficiency seems to have improved. • Historical analysis gives support for most mitigation measures currently in force.
European CO2 emission trends: A decomposition analysis for water and aviation transport sectors
International Nuclear Information System (INIS)
Andreoni, V.; Galmarini, S.
2012-01-01
A decomposition analysis is used to investigate the main factors influencing the CO2 emissions of European transport activities for the period 2001–2008. The decomposition method developed by Sun has been used to investigate the carbon dioxide emissions intensity, the energy intensity, the structural changes and the economic activity growth effects for the water and aviation transport sectors. The analysis is based on Eurostat data and results are presented for 14 Member States, Norway and EU27. Results indicate that economic growth has been the main factor behind the carbon dioxide emissions increase in EU27 both for water and aviation transport activities. -- Highlights: ► Decomposition analysis is used to investigate factors that influenced the energy-related CO2 emissions of European transport. ► Economic growth has been the main factor affecting the energy-related CO2 emissions increases. ► Investigating the CO2 emissions drivers is the first step to define energy efficiency policies and emission reduction strategies.
Calzetta, Luigino; Ora, Josuel; Cavalli, Francesco; Rogliani, Paola; O'Donnell, Denis E; Cazzola, Mario
2017-08-01
The ability to exercise is an important clinical outcome in COPD, and the improvement in exercise capacity is recognized to be an important goal in the management of COPD. Therefore, since the current interest in the use of bronchodilators in COPD is gradually shifting towards the dual bronchodilation, we carried out a meta-analysis to evaluate the impact of LABA/LAMA combination on exercise capacity and lung hyperinflation in COPD. RCTs were identified after a search in different databases of published and unpublished trials. The aim of this study was to assess the influence of LABA/LAMA combinations on endurance time (ET) and inspiratory capacity (IC), vs. monocomponents. Eight RCTs including 1632 COPD patients were meta-analysed. LABA/LAMA combinations were significantly (P meta-analysis. This meta-analysis clearly demonstrates that if the goal of the therapy is to enhance exercise capacity in patients with COPD, LABA/LAMA combinations consistently meet the putative clinically meaningful differences for both ET and IC and, in this respect, are superior to their monocomponents. Copyright © 2017 Elsevier Ltd. All rights reserved.
Unjamming in models with analytic pairwise potentials
Kooij, S.; Lerner, E.
Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the
PAIRWISE BLENDING OF HIGH LEVEL WASTE
International Nuclear Information System (INIS)
CERTA, P.J.
2006-01-01
The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending
Thermal Analysis of the Decomposition of Ammonium Uranyl Carbonate (AUC) in Different Atmospheres
DEFF Research Database (Denmark)
Hälldahl, L.; Sørensen, Ole Toft
1979-01-01
The intermediate products formed during thermal decomposition of ammonium uranyl carbonate (AUC) in different atmospheres (air, helium and hydrogen) have been determined by thermal analysis (TG and DTA) and X-ray analysis. The end products observed are U3O8 and UO2 in air/He and hydrogen, respectively. Several intermediate products were observed in all atmospheres; X-ray diffraction analysis showed that these phases were amorphous.
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji
2018-04-01
We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.
s-core network decomposition: A generalization of k-core analysis to weighted networks
Eidsaa, Marius; Almaas, Eivind
2013-12-01
A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
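The s-core procedure lends itself to a compact illustration. The following sketch (ours, not the authors' code) generalizes the usual k-core peeling by thresholding node strength, i.e. the sum of incident link weights, instead of degree.

```python
def s_core(adj, s):
    """Return the s-core of a weighted undirected graph.

    adj: {node: {neighbor: weight}}. Iteratively remove nodes whose
    strength (sum of incident weights to surviving nodes) is below s,
    recomputing strengths until no further removals occur.
    """
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            strength = sum(w for u, w in adj[v].items() if u in nodes)
            if strength < s:
                nodes.remove(v)
                changed = True
    return nodes

# toy network: a tightly weighted triangle plus a weakly attached node
adj = {
    "a": {"b": 2.0, "c": 2.0},
    "b": {"a": 2.0, "c": 2.0},
    "c": {"a": 2.0, "b": 2.0, "d": 0.5},
    "d": {"c": 0.5},
}
print(sorted(s_core(adj, s=3.0)))  # ['a', 'b', 'c']
```

With integer weights and s equal to an integer threshold, this reduces to the familiar k-core peeling, which is the sense in which s-core analysis generalizes it.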
Why did China's energy intensity increase during 1998-2006. Decomposition and policy analysis
International Nuclear Information System (INIS)
Zhao, Xiaoli; Ma, Chunbo; Hong, Dongyue
2010-01-01
Despite the fact that China's energy intensity continuously decreased during the 1980s and most of the 1990s, the decreasing trend has reversed since 1998 and the past few years have witnessed a rapid increase in China's energy intensity. We first conduct an index decomposition analysis to identify the key forces behind the increase. It is found that: (1) the high energy demand in industrial sectors is mainly attributed to expansion of production scale, especially in energy-intensive industries; (2) energy saving mainly comes from efficiency improvement, with energy-intensive sectors making the largest contribution; and (3) a heavier industrial structure also contributes to the increase. This study also makes the first attempt to bridge the quantitative decomposition analysis with qualitative policy analyses and fill the gap between decomposition results and policy relevance in previous work. We argue that: (1) energy efficiency improvement in energy-intensive sectors is mainly due to the industrial policies that have been implemented in the past few years; (2) low energy prices have directly contributed to high industrial energy consumption and indirectly to the heavy industrial structure. We provide policy suggestions in the end. (author)
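Index decomposition analyses of this kind are commonly carried out with the LMDI method. As a hedged illustration (generic additive LMDI-I, not the paper's actual computation; the two-factor numbers are invented), note the key property that the factor effects sum exactly to the total change:

```python
from math import log

def logmean(a, b):
    """Logarithmic mean, L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_effects(E0, E1, factors0, factors1):
    """Additive LMDI-I decomposition of E = x_1 * x_2 * ... * x_K:
    effect_k = L(E1, E0) * ln(x1_k / x0_k); effects sum to E1 - E0."""
    L = logmean(E1, E0)
    return [L * log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

# hypothetical two-factor example: energy E = activity * intensity
E0, E1 = 200.0 * 0.5, 400.0 * 0.35          # 100.0 -> 140.0
effects = lmdi_effects(E0, E1, [200.0, 0.5], [400.0, 0.35])
assert abs(sum(effects) - (E1 - E0)) < 1e-9  # exact additive decomposition
print(effects[0] > 0, effects[1] < 0)        # activity pushes up, intensity down
```

The same template extends to the three-factor (scale, structure, intensity) decompositions used in studies like this one: each factor contributes one logarithmic term per sector.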
Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.
Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan
2018-03-01
The electric power sector is one of the primary sources of CO2 emissions. Analyzing the influential factors that result in CO2 emissions from the power sector would provide valuable information to reduce the world's CO2 emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO2 emissions from the power sector from 11 countries, which account for 67% of the world's emissions from 1990 to 2013. We decompose the influential factors for CO2 emissions into seven areas: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition analysis results show that economic activity, population, and the emission coefficient have positive roles in increasing CO2 emissions, and their contribution rates are 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO2 emissions and their contribution rates are 17.2, 15.7, 7.7, and 2.8%, respectively. Through decomposition analysis for each country, economic activity and population are the major factors responsible for increasing CO2 emissions from the power sector. However, the other factors from developed countries can offset the growth in CO2 emissions due to economic activities.
Energy Technology Data Exchange (ETDEWEB)
O' Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.; Morrison, Samuel S.; Corbey, Jordan F.; Grate, Jay W.
2017-09-01
Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for its applications in breaking down minerals and ores in order to extract useful components. It has been more recently applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF sample treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: duration of molten ABF treatment and ABF reagent mass to sample mass ratio. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary to enact complete fluorination of the sample types. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.
Energy Technology Data Exchange (ETDEWEB)
Huang, Cunping; T-Raissi, Ali [Central Florida Univ., Florida Solar Energy Center, Cocoa, FL (United States)
2005-05-01
The sulfur-iodine (S-I) thermochemical water splitting cycle is one of the most studied cycles for hydrogen (H{sub 2}) production. The S-I cycle consists of four sections: (I) acid production and separation and oxygen purification, (II) sulfuric acid concentration and decomposition, (III) hydroiodic acid (HI) concentration, and (IV) HI decomposition and H{sub 2} purification. Section II of the cycle is an endothermic reaction driven by heat input from a high temperature source. Analyses of the S-I cycle over the past thirty years have focused mostly on the utilization of nuclear power as the high temperature heat source for the sulfuric acid decomposition step. Thermodynamic as well as kinetic considerations indicate that both the extent and rate of sulfuric acid decomposition can be improved at very high temperatures (in excess of 1000 deg C) available only from solar concentrators. The beneficial effect of high temperature solar heat on the decomposition of sulfuric acid in the S-I cycle is described in this paper. We used Aspen Technology's HYSYS chemical process simulator (CPS) to develop flowsheets for sulfuric acid (H{sub 2}SO{sub 4}) decomposition that include all mass and heat balances. Based on the HYSYS analyses, two new process flowsheets were developed. These new sulfuric acid decomposition processes are simpler and more stable than previous processes and yield higher conversion efficiencies for sulfuric acid decomposition and sulfur dioxide and oxygen formation. (Author)
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
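The feature-extraction step such pipelines rely on is a wavelet decomposition of each image row or column. Below is a minimal sketch of one Haar analysis/synthesis level (illustrative only; a real system would use a wavelet library and feed the approximation coefficients to PCA):

```python
def haar_step(x):
    """One level of the (unnormalized) Haar wavelet transform:
    pairwise averages (approximation) and differences (detail),
    each half the input length."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from approximation and detail coefficients."""
    x = []
    for a, d in zip(approx, detail):
        x += [a + d, a - d]
    return x

# a hypothetical row of pixel intensities
row = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 5.0, 1.0]
a, d = haar_step(row)
assert haar_inverse(a, d) == row  # the transform loses nothing
print(a)  # [5.0, 11.0, 8.0, 3.0]
```

The dimensionality reduction comes from keeping only the approximation half (and iterating), which is why the subsequent PCA sees a much smaller, smoother input.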
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. Firstly, the VMD model is applied to the original signal (the crude oil price) to decompose its mode functions adaptively. Secondly, independent components are separated by ICA, and their effect on the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
Kabanov, Dmitry; Kasimov, Aslan R.
2018-01-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
Spectral decomposition in advection-diffusion analysis by finite element methods
International Nuclear Information System (INIS)
Nickell, R.E.; Gartling, D.K.; Strang, G.
1978-01-01
In a recent study of the convergence properties of finite element methods in nonlinear fluid mechanics, an indirect approach was taken. A two-dimensional example with a known exact solution was chosen as the vehicle for the study, and various mesh refinements were tested in an attempt to extract information on the effect of the local Reynolds number. However, more direct approaches are usually preferred. In this study one such direct approach is followed, based upon the spectral decomposition of the solution operator. Spectral decomposition is widely employed as a solution technique for linear structural dynamics problems and can be applied readily to linear, transient heat transfer analysis; in this case, the extension to nonlinear problems is of interest. It was shown previously that spectral techniques were applicable to stiff systems of rate equations, while recent studies of geometrically and materially nonlinear structural dynamics have demonstrated the increased information content of the numerical results. The use of spectral decomposition in nonlinear problems of heat and mass transfer would be expected to yield equally increased flow of information to the analyst, and this information could include a quantitative comparison of various solution strategies, meshes, and element hierarchies
Analysis of Neuronal Sequences Using Pairwise Biases
2015-08-27
semantic memory (knowledge of facts) and implicit memory (e.g., how to ride a bike). Evidence for the participation of the hippocampus in the formation of... hippocampal formation in an attempt to be cured of severe epileptic seizures. Although the surgery was successful in regards to reducing the frequency and... very different from each other in many ways, including duration and number of spikes. Still, these sequences share a similar trend in the general order
Supplier Evaluation Process by Pairwise Comparisons
Directory of Open Access Journals (Sweden)
Arkadiusz Kawa
2015-01-01
Full Text Available We propose to assess suppliers by using consistency-driven pairwise comparisons for tangible and intangible criteria. The tangible criteria are simpler to compare (e.g., the price of a service is lower than that of another service with identical characteristics). Intangible criteria are more difficult to assess. The proposed model combines assessments of both types of criteria. The main contribution of this paper is the presentation of an extension framework for the selection of suppliers in a procurement process. The final weights are computed from relative pairwise comparisons. For the needs of the paper, surveys were conducted among Polish managers dealing with cooperation with suppliers in their enterprises. The Polish practice and restricted bidding are discussed, too.
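Computing final weights from relative pairwise comparisons is classically done with the row geometric-mean (AHP-style) method. The sketch below is a generic illustration under that assumption, not the paper's exact model; the Saaty-scale judgments are invented:

```python
from math import prod

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via row geometric
    means (the common AHP shortcut). M[i][j] says how strongly criterion i
    is preferred over j, with M[j][i] == 1 / M[i][j] when consistent."""
    n = len(M)
    gms = [prod(M[i]) ** (1.0 / n) for i in range(n)]
    total = sum(gms)
    return [g / total for g in gms]

# invented Saaty-scale judgments for three supplier criteria
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(M)
assert abs(sum(w) - 1.0) < 1e-12
assert w[0] > w[1] > w[2]  # criterion 1 dominates, as the judgments say
```

Consistency-driven approaches like the one in the paper additionally check (and repair) violations of the transitivity condition M[i][j] * M[j][k] ≈ M[i][k] before extracting weights.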
Harmonic analysis of traction power supply system based on wavelet decomposition
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed rail and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. This calls for timely monitoring, assessment and mitigation of the power quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis and it rests on a rigorous theoretical model. It inherits and develops the localization idea of the Gabor transform while overcoming its disadvantages, such as the fixed window and the lack of discrete orthogonality, making it a widely studied spectral analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part so as to focus on any detail of the signal being analyzed, thereby comprehensively analyzing the harmonics of the traction power supply system, while the pyramid algorithm is used to speed up the wavelet decomposition. MATLAB simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
Reznick, Julia; Friedmann, Naama
2015-01-01
This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159
CO2 emissions embodied in China's exports from 2002 to 2008: A structural decomposition analysis
International Nuclear Information System (INIS)
Xu Ming; Li Ran; Crittenden, John C.; Chen Yongsheng
2011-01-01
This study examines the annual CO2 emissions embodied in China's exports from 2002 to 2008 using environmental input-output analysis. Four driving forces, including emission intensity, economic production structure, export composition, and total export volume, are compared for their contributions to the increase of embodied CO2 emissions using a structural decomposition analysis (SDA) technique. Although offset by the decrease in emission intensity, the increase of embodied CO2 emissions was driven by changes of the other three factors. In particular, the change of the export composition was the largest driver, primarily due to the increasing fraction of metal products in China's total export. Relevant policy implications and future research directions are discussed at the end of the paper. - Highlights: → We investigate annual CO2 emissions embodied in China's exports from 2002 to 2008 using environmental input-output analysis. → We conduct a structural decomposition analysis to measure contributions from different driving forces. → Change of export composition was the largest driver for the increase of CO2 emissions embodied in China's exports. → Increasing fraction of metal products in exports is the key change in export composition.
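The SDA logic of splitting an emissions change into factor contributions can be shown with a two-factor toy version that averages the two polar decompositions, a common SDA convention; the numbers are invented and not the study's data:

```python
def sda_two_factor(I0, I1, s0, s1):
    """Decompose the change in E = I * s into an intensity effect and a
    scale/structure effect, averaging the two polar decompositions so the
    effects sum exactly to E1 - E0."""
    dI, ds = I1 - I0, s1 - s0
    intensity_effect = 0.5 * dI * (s0 + s1)
    structure_effect = 0.5 * (I0 + I1) * ds
    return intensity_effect, structure_effect

# invented numbers: emission intensity falls, export volume grows
I0, I1 = 2.0, 1.5
s0, s1 = 50.0, 80.0
eff_I, eff_s = sda_two_factor(I0, I1, s0, s1)
assert abs((eff_I + eff_s) - (I1 * s1 - I0 * s0)) < 1e-9
print(eff_I, eff_s)  # -32.5 52.5
```

With four factors, as in the study, the same idea applies but the number of polar decompositions grows, which is why SDA papers typically average over the polar (or all) orderings.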
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found, and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-...
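A bare-bones sketch of the PERMANOVA computation on a pairwise distance matrix (our illustration, not the authors' implementation; the tiny distance matrix is contrived so the pseudo-F is easy to check by hand):

```python
import random

def permanova_f(D, groups):
    """PERMANOVA pseudo-F from a matrix of pairwise distances.

    SS_total is the sum of squared distances over all pairs divided by N;
    SS_within is the analogous sum restricted to within-group pairs;
    F = (SS_between / (a - 1)) / (SS_within / (N - a)) for a groups.
    """
    N, labels = len(D), sorted(set(groups))
    a = len(labels)
    ss_total = sum(D[i][j] ** 2 for i in range(N) for j in range(i + 1, N)) / N
    ss_within = 0.0
    for g in labels:
        idx = [i for i in range(N) if groups[i] == g]
        ss_within += sum(D[i][j] ** 2 for i in idx for j in idx if i < j) / len(idx)
    ss_between = ss_total - ss_within
    return (ss_between / (a - 1)) / (ss_within / (N - a))

def permanova_p(D, groups, n_perm=999, seed=0):
    """Permutation p-value: shuffle labels, recompute F, count exceedances."""
    rng = random.Random(seed)
    f_obs = permanova_f(D, groups)
    hits = sum(permanova_f(D, rng.sample(groups, len(groups))) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# contrived example: two tight groups far apart
D = [[0, 1, 4, 4],
     [1, 0, 4, 4],
     [4, 4, 0, 1],
     [4, 4, 1, 0]]
groups = ["x", "x", "y", "y"]
print(permanova_f(D, groups))  # 31.0
```

Because only the labels are permuted, the procedure works for any distance measure, which is exactly the property that makes it attractive for presence-absence as well as abundance-based beta-diversity metrics.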
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of a zero-phase filter bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparisons are made with empirical mode decomposition algorithms.
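The zero-phase filter bank idea can be imitated in a few lines: zero out the DFT bins outside each band and invert. This is a generic Fourier band-splitting sketch, not the authors' FDM algorithm (which chooses the cut-offs adaptively); the test signal and band edges are invented.

```python
import cmath
from math import pi, sin

def dft(x):
    """Naive DFT of a real sequence (fine for the tiny N used here)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def fourier_bands(x, edges):
    """Zero-phase Fourier filter bank: for each band [lo, hi), keep the DFT
    bins whose mirror-symmetric frequency index min(k, N-k) falls in the band
    and zero the rest. If the bands tile [0, N//2], the components sum back
    to the original signal."""
    N = len(x)
    X = dft(x)
    return [idft([X[k] if lo <= min(k, N - k) < hi else 0j for k in range(N)])
            for lo, hi in edges]

N = 8
x = [sin(2 * pi * n / N) + 0.5 * sin(2 * pi * 3 * n / N) for n in range(N)]
low, high = fourier_bands(x, edges=[(0, 2), (2, N // 2 + 1)])
residual = max(abs(low[n] + high[n] - x[n]) for n in range(N))
assert residual < 1e-9  # the band components reconstruct the signal
```

Because the filtering is done by masking bins (no phase response at all), each band component is automatically zero-phase, which is the property the MFDM exploits.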
Changes in carbon intensity in China's industrial sector: Decomposition and attribution analysis
International Nuclear Information System (INIS)
Liu, Nan; Ma, Zujun; Kang, Jidong
2015-01-01
The industrial sector accounts for 70% of the total energy-related CO_2 emissions in China. To gain a better understanding of the changes in carbon intensity in China's industrial sector, this study first utilized logarithmic mean Divisia index (LMDI) decomposition analysis to disentangle the carbon intensity into three influencing factors, including the emission coefficient effect, the energy intensity effect, and the structure effect. Then, the analysis was furthered to explore the contributions of individual industrial sub-sectors to each factor by using an extension of the decomposition method proposed in Choi and Ang (2012). The results indicate that from 1996 to 2012, the energy intensity effect was the dominant factor in reducing carbon intensity, of which chemicals, iron and steel, metal and machinery, and cement and ceramics were the most representative sub-sectors. The structure effect did not show a strong impact on carbon intensity. The emission coefficient effect gradually increased the carbon intensity, mainly due to the expansion of electricity consumption, particularly in the metal and machinery and chemicals sub-sectors. The findings suggest that differentiated policies and measures should be considered for various industrial sub-sectors to maximize the energy efficiency potential. Moreover, readjusting the industrial structure and promoting clean and renewable energy is also urgently required to further reduce carbon intensity in China's industrial sector. - Highlights: • The study analyzed the changes in carbon intensity in China's industrial sector. • An extension of the Divisia index decomposition methodology was utilized. • Energy efficiency improvement was the dominant factor reducing carbon intensity. • The sub-sector contributions to the energy efficiency improvement varied markedly. • Emission coefficient growth can be mainly due to the expansion of electricity.
Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling
DEFF Research Database (Denmark)
Habib, Tufail
2014-01-01
Design structure matrix (DSM) modeling in complex system design supports defining the physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural understanding of the system. Since product architecture has broad implications in relation to product life cycle issues, in this paper, a mechatronic product is decomposed into subsystems and components, and a DSM model is developed to examine the extent of modularity in the system and to manage multiple interactions across subsystems and components. For this purpose, the Cambridge Advanced Modeller (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering and partitioning as well as structure analysis of the system. The DSM analysis is helpful...
Directory of Open Access Journals (Sweden)
Daniele Cavalli
2016-09-01
Full Text Available Two features distinguishing soil organic matter simulation models are the type of kinetics used to calculate pool decomposition rates, and the algorithm used to handle the effects of nitrogen (N) shortage on carbon (C) decomposition. Compared to widely used first-order kinetics, Monod kinetics represent organic matter decomposition more realistically, because they relate decomposition to both substrate and decomposer size. Most models impose a fixed C to N ratio for microbial biomass. When the N required by microbial biomass to decompose a given amount of substrate-C is larger than soil available N, carbon decomposition rates are limited proportionally to the N deficit (N inhibition hypothesis). Alternatively, C-overflow was proposed as a way of disposing of excess C by allocating it to a storage pool of polysaccharides. We built six models to compare the combinations of three decomposition kinetics (first-order, Monod, and reverse Monod) and two ways to simulate the effect of N shortage on C decomposition (N inhibition and C-overflow). We conducted a sensitivity analysis to identify the model parameters that most affected CO2 emissions and soil mineral N during a simulated 189-day laboratory incubation, assuming constant water content and temperature. We evaluated the sensitivity of model outputs at different stages of organic matter decomposition in a soil amended with three inputs of increasing C to N ratio: liquid manure, solid manure, and low-N crop residue. Only a few model parameters and their interactions were responsible for consistent variations of CO2 and soil mineral N. These parameters were mostly related to microbial biomass and to the partitioning of applied C among input pools, as well as their decomposition constants. In addition, in models with Monod kinetics, CO2 was also sensitive to variation of the half-saturation constants. C-overflow enhanced pool decomposition compared to the N inhibition hypothesis when N shortage occurred. Accumulated C in the
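The difference between the two kinetics is compact enough to state directly. The sketch below uses illustrative (uncalibrated) parameter values; `first_order_rate` depends only on substrate C, while `monod_rate` also depends on decomposer biomass B:

```python
# Decomposition rate of a substrate pool C by microbial biomass B:
#   first-order: dC/dt = -k * C                  (independent of decomposer size)
#   Monod:       dC/dt = -mu_max * B * C / (Ks + C)
# Parameter values are illustrative, not calibrated to any soil.

def first_order_rate(C, k=0.05):
    return k * C

def monod_rate(C, B, mu_max=0.5, Ks=100.0):
    return mu_max * B * C / (Ks + C)

C, B = 200.0, 10.0
print(first_order_rate(C))   # 10.0
print(monod_rate(C, B))      # ≈ 3.33; saturates toward mu_max * B as C >> Ks
```

Note the saturation behaviour: as substrate becomes abundant, the Monod rate is capped by decomposer size, which is what makes it the more realistic choice cited in the abstract.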
Unjamming in models with analytic pairwise potentials
Kooij, Stefan; Lerner, Edan
2017-06-01
Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the coordination number z , which has an unambiguous definition in these cases. Pairwise potentials without a sharp cutoff in the interaction range have not been studied in this context, but should in fact be considered to understand the relevance of the unjamming phenomenology in systems where such a cutoff is not present. In this work we explore two systems with such interactions: an inverse power law and an exponentially decaying pairwise potential, with the control parameters being the exponent (of the inverse power law) for the former and the number density for the latter. Both systems are shown to exhibit the characteristic features of the unjamming transition, among which are the vanishing of the shear-to-bulk modulus ratio and the emergence of an excess of low-frequency vibrational modes. We establish a relation between the pressure-to-bulk modulus ratio and the distance to unjamming in each of our model systems. This allows us to predict the dependence of other key observables on the distance to unjamming. Our results provide the means for a quantitative estimation of the proximity of generic glass-forming models to the unjamming transition in the absence of a clear-cut definition of the coordination number and highlight the general irrelevance of nonaffine contributions to the bulk modulus.
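The two interactions studied can be written down directly. In the sketch below the parameters are illustrative; the point is that both potentials are analytic and strictly positive at every finite distance, so no cutoff cleanly separates "contacting" from "non-contacting" pairs and the coordination number z has no unambiguous definition:

```python
import math

def ipl(r, eps=1.0, sigma=1.0, beta=10.0):
    """Inverse-power-law pairwise potential (no cutoff): eps * (sigma/r)**beta."""
    return eps * (sigma / r) ** beta

def exp_pot(r, eps=1.0, xi=1.0):
    """Exponentially decaying pairwise potential (no cutoff): eps * exp(-r/xi)."""
    return eps * math.exp(-r / xi)

# Both decay smoothly but never vanish at finite r.
print(ipl(2.0), exp_pot(2.0))   # ≈ 0.00098 and ≈ 0.135
```

In the paper the control parameters are the exponent beta (inverse power law) and the number density (exponential potential); the values above are placeholders.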
Socioeconomic Inequalities in Adult Obesity Prevalence in South Africa: A Decomposition Analysis
Alaba, Olufunke; Chola, Lumbwe
2014-01-01
In recent years, there has been a dramatic increase in obesity in low and middle income countries. However, there is limited research in these countries showing the prevalence and determinants of obesity. In this study, we examine the socioeconomic inequalities in obesity among South African adults. We use nationally representative data from the South Africa National Income Dynamic Survey of 2008 to: (1) construct an asset index using multiple correspondence analyses (MCA) as a proxy for socioeconomic status; (2) estimate concentration indices (CI) to measure socioeconomic inequalities in obesity; and (3) perform a decomposition analysis to determine the factors that contribute to socioeconomic related inequalities. Consistent with other studies, we find that women are more obese than men. The findings show that obesity inequalities exist in South Africa. Rich men are more likely to be obese than their poorer counterparts with a concentration index of 0.27. Women on the other hand have similar obesity patterns, regardless of socioeconomic status with CI of 0.07. The results of the decomposition analysis suggest that asset index contributes positively and highly to socio-economic inequality in obesity among females; physical exercise contributes negatively to the socio-economic inequality. In the case of males, educational attainment and asset index contributed more to socio-economic inequalities in obesity. Our findings suggest that focusing on economically well-off men and all women across socioeconomic status is one way to address the obesity problem in South Africa. PMID:24662998
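A concentration index of the kind reported above can be computed from the covariance between the health outcome and the fractional socioeconomic rank. The sketch below uses toy data, not the survey data:

```python
def concentration_index(outcome, ses):
    """Concentration index CI = 2*cov(h, fractional SES rank)/mean(h).
    Positive values indicate the outcome is concentrated among the better-off."""
    n = len(outcome)
    order = sorted(range(n), key=lambda i: ses[i])
    rank = [0.0] * n
    for pos, i in enumerate(order):
        rank[i] = (pos + 0.5) / n           # fractional rank, poorest first
    mu = sum(outcome) / n
    rbar = sum(rank) / n                    # equals 0.5
    cov = sum((outcome[i] - mu) * (rank[i] - rbar) for i in range(n)) / n
    return 2 * cov / mu

# Toy data: obesity (1/0) concentrated among richer individuals -> positive CI
obese = [0, 0, 0, 1, 1, 1]
wealth = [1, 2, 3, 4, 5, 6]
print(concentration_index(obese, wealth))   # -> 0.5
```

A CI of 0 means the outcome is equally distributed across socioeconomic status, matching the near-zero female CI reported above relative to the male CI of 0.27.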
International Nuclear Information System (INIS)
Zhao Xiaojun; Shang Pengjian; Zhao Chuang; Wang Jing; Tao Rui
2012-01-01
Highlights: ► Investigate the effects of linear, exponential and periodic trends on DCCA. ► Apply empirical mode decomposition to extract the trend term. ► Strong and monotonic trends are successfully eliminated. ► Get the cross-correlation exponent in a persistent behavior without crossover. - Abstract: Detrended cross-correlation analysis (DCCA) is a scaling method commonly used to estimate long-range power-law cross-correlation in non-stationary signals. However, the susceptibility of DCCA to trends makes the scaling results difficult to analyze due to spurious crossovers. We artificially generate long-range cross-correlated signals and systematically investigate the effects of linear, exponential and periodic trends. To address the crossovers raised by trends, we apply the empirical mode decomposition method, which decomposes the underlying signals into several intrinsic mode functions (IMF) and a residual trend. After removal of the residual term, strong and monotonic trends such as linear and exponential trends are successfully eliminated. The periodic trend, however, cannot be separated out under the IMF criterion; it can instead be eliminated by a Fourier transform. As a special case of DCCA, detrended fluctuation analysis presents similar results.
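For readers unfamiliar with DCCA, a minimal sketch of the fluctuation computation for a single box size is shown below (first-order detrending, non-overlapping boxes); the full method scans box sizes n and fits F(n) ~ n^λ to estimate the cross-correlation exponent:

```python
import numpy as np

def dcca_fluctuation(x, y, n):
    """Detrended cross-correlation fluctuation F(n) for one box size n:
    integrate both signals, fit a linear trend in each box, and average
    the detrended covariances (a minimal sketch of DCCA)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    f2 = []
    for start in range(0, len(X) - n + 1, n):
        t = np.arange(n)
        xs, ys = X[start:start + n], Y[start:start + n]
        px, py = np.polyfit(t, xs, 1), np.polyfit(t, ys, 1)
        f2.append(np.mean((xs - np.polyval(px, t)) * (ys - np.polyval(py, t))))
    return np.sqrt(np.abs(np.mean(f2)))

rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
print(dcca_fluctuation(s, s, 20))  # identical signals reduce DCCA to DFA
```

As the last line notes, feeding the same series to both inputs recovers detrended fluctuation analysis, which is the "special case" mentioned at the end of the abstract.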
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Explaining time changes in oral health-related quality of life in England: a decomposition analysis.
Tsakos, Georgios; Guarnizo-Herreño, Carol C; O'Connor, Rhiannon; Wildman, John; Steele, Jimmy G; Allen, Patrick Finbarr
2017-12-01
Oral diseases are highly prevalent and impact on oral health-related quality of life (OHRQoL). However, time changes in OHRQoL have been scarcely investigated in the current context of general improvement in clinical oral health. This study aims to examine changes in OHRQoL between 1998 and 2009 among adults in England, and to analyse the contribution of demographics, socioeconomic characteristics and clinical oral health measures. Using data from two nationally representative surveys in England, we assessed changes in the Oral Health Impact Profile-14 (OHIP-14), both in the overall sample (n=12 027) and by quasi-cohorts. We calculated the prevalence and extent of oral impacts and summary OHIP-14 scores. An Oaxaca-Blinder type decomposition analysis was used to assess the contribution of demographics (age, gender, marital status), socioeconomic position (education, occupation) and clinical measures (presence of decay, number of missing teeth, having advanced periodontitis). There were significant improvements in OHRQoL, predominantly among those that experienced oral impacts occasionally, but no difference in the proportion with frequent oral impacts. The decomposition model showed that 43% (-4.07/-9.47) of the decrease in prevalence of oral impacts reported occasionally or more often was accounted for by the model explanatory variables. Improvements in clinical oral health and the effect of ageing itself accounted for most of the explained change in OHRQoL, but the effect of these factors varied substantially across the lifecourse and quasi-cohorts. These decomposition findings indicate that broader determinants could be primarily targeted to influence OHRQoL in different age groups or across different adult cohorts. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
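A two-fold Oaxaca-Blinder type decomposition of a mean gap can be sketched with ordinary least squares. The synthetic data and the choice of group B's coefficients as the reference are illustrative assumptions, not the study's specification:

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def oaxaca_blinder(XA, yA, XB, yB):
    """Two-fold Oaxaca-Blinder decomposition of mean(yA) - mean(yB),
    using group B's coefficients as the reference weighting."""
    XA1 = np.column_stack([np.ones(len(XA)), XA])
    XB1 = np.column_stack([np.ones(len(XB)), XB])
    bA, bB = ols(XA1, yA), ols(XB1, yB)
    dX = XA1.mean(axis=0) - XB1.mean(axis=0)
    explained = dX @ bB                          # covariate (endowment) effect
    unexplained = XA1.mean(axis=0) @ (bA - bB)   # coefficient effect
    return explained, unexplained

rng = np.random.default_rng(1)
XA = rng.normal(1.0, 1.0, (200, 1)); yA = 2.0 + 1.5 * XA[:, 0] + rng.normal(0, 0.1, 200)
XB = rng.normal(0.0, 1.0, (200, 1)); yB = 1.0 + 1.5 * XB[:, 0] + rng.normal(0, 0.1, 200)
exp_, unexp = oaxaca_blinder(XA, yA, XB, yB)
print(exp_, unexp)   # the two parts sum exactly to the raw mean gap
```

Because OLS with an intercept reproduces group means exactly, the explained and unexplained parts always sum to the raw gap, which is the property the "43% accounted for" figure above relies on.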
Thermal decomposition of dolomite under CO2: insights from TGA and in situ XRD analysis.
Valverde, Jose Manuel; Perejon, Antonio; Medina, Santiago; Perez-Maqueda, Luis A
2015-11-28
Thermal decomposition of dolomite in the presence of CO2 in a calcination environment is investigated by means of in situ X-ray diffraction (XRD) and thermogravimetric analysis (TGA). The in situ XRD results suggest that dolomite decomposes directly at a temperature around 700 °C into MgO and CaO. Immediate carbonation of nascent CaO crystals leads to the formation of calcite as an intermediate product of decomposition. Subsequently, decarbonation of this poorly crystalline calcite occurs when the reaction is thermodynamically favorable and sufficiently fast at a temperature depending on the CO2 partial pressure in the calcination atmosphere. Decarbonation of this dolomitic calcite occurs at a lower temperature than limestone decarbonation due to the relatively low crystallinity of the former. Full decomposition of dolomite leads also to a relatively low crystalline CaO, which exhibits a high reactivity as compared to limestone derived CaO. Under CO2 capture conditions in the Calcium-Looping (CaL) process, MgO grains remain inert yet favor the carbonation reactivity of dolomitic CaO especially in the solid-state diffusion controlled phase. The fundamental mechanism that drives the crystallographic transformation of dolomite in the presence of CO2 is thus responsible for its fast calcination kinetics and the high carbonation reactivity of dolomitic CaO, which makes natural dolomite a potentially advantageous alternative to limestone for CO2 capture in the CaL technology as well as SO2 in situ removal in oxy-combustion fluidized bed reactors.
The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.
Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E
2018-05-01
In landmark-based Shape Analysis, size is measured in most cases with Centroid Size. Changes in shape are decomposed into affine and non-affine components. Furthermore, the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations, involving large changes in size. The cardiac example analyzed in the present paper shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, Thin Plate Spline, and related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as change in m-Volume rather than change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. This new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively healthy subjects from patients affected by Hypertrophic Cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.
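Centroid Size is simple to compute from a landmark configuration, and a uniform scaling makes the paper's point concrete: Centroid Size grows linearly with scale, while m-Volume grows with its third power in 3D, so the two size measures diverge for large deformations. A minimal sketch:

```python
import numpy as np

def centroid_size(L):
    """Centroid Size of a landmark configuration: square root of the sum of
    squared distances of the landmarks from their centroid."""
    c = L.mean(axis=0)
    return np.sqrt(((L - c) ** 2).sum())

# Unit tetrahedron landmarks (3D). Scaling by s multiplies Centroid Size by s,
# while the enclosed volume is multiplied by s**3.
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(centroid_size(tet))       # 1.5
print(centroid_size(2 * tet))   # 3.0, whereas the volume grows 8-fold
```

This linear-versus-cubic scaling gap is what motivates the m-Volume-based Riemannian metric proposed in the paper.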
Sharaf, Mesbah Fathy; Rashad, Ahmed Shoukry
2016-12-01
There is substantial evidence that on average, urban children have better health outcomes than rural children. This paper investigates the underlying factors that account for the regional disparities in child malnutrition in three Arab countries, namely Egypt, Jordan, and Yemen. We use data on a nationally representative sample from the most recent rounds of the Demographic and Health Survey. A Blinder-Oaxaca decomposition analysis is conducted to decompose the rural-urban differences in child nutrition outcomes into two components: one that is explained by regional differences in the level of the determinants (covariate effects), and another that is explained by differences in the effect of the determinants on the child nutritional status (coefficient effects). Results show that the under-five stunting rates are 20 % in Egypt, 46.5 % in Yemen, and 7.7 % in Jordan. The rural-urban gap in child malnutrition was minor in the case of Egypt (2.3 %) and Jordan (1.5 %), while the regional gap was significant in the case of Yemen (17.7 %). Results of the Blinder-Oaxaca decomposition show that the covariate effect is dominant in the case of Yemen while the coefficients effect dominates in the case of Jordan. Income inequality between urban and rural households explains most of the malnutrition gap. Results were robust to the different decomposition weighting schemes. By identifying the underlying factors behind the rural-urban health disparities, the findings of this paper help in designing effective intervention measures aimed at reducing regional inequalities and improving population health outcomes.
Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min
2016-04-13
In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
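The PCA/EOF compression idea can be sketched with an SVD: keep only the k leading spatial patterns (EOFs) and their time coefficients (principal components). The synthetic field below stands in for a climate dataset and is not the paper's data:

```python
import numpy as np

def pca_compress(data, k):
    """Lossy compression of a (time x space) field: keep the k leading
    principal components and EOFs. Returns factors and a reconstruction."""
    mean = data.mean(axis=0)
    A = data - mean
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    pcs, eofs = U[:, :k] * s[:k], Vt[:k]   # k time series + k spatial patterns
    return pcs, eofs, mean, pcs @ eofs + mean

rng = np.random.default_rng(2)
# Synthetic field dominated by 3 coherent spatial modes plus weak noise
t = rng.standard_normal((500, 3)); patterns = rng.standard_normal((3, 1000))
field = t @ patterns + 0.01 * rng.standard_normal((500, 1000))
pcs, eofs, mean, recon = pca_compress(field, 3)
# Storage drops from 500*1000 values to 3*(500+1000): roughly a 100-fold saving
err = np.linalg.norm(field - recon) / np.linalg.norm(field)
print(err)   # small relative reconstruction error
```

For spatio-temporally coherent data, a few modes capture most of the variance, which is why both the compression and the "decompose the PCs instead of every grid point" speed-up work.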
Decomposition analysis of gas consumption in the residential sector in Ireland
International Nuclear Information System (INIS)
Rogan, Fionn; Cahill, Caiman J.; Ó Gallachóir, Brian P.
2012-01-01
To date, decomposition analysis has been widely used at the macro-economic level and for in-depth analyses of the industry and transport sectors; however, its application in the residential sector has been rare. This paper uses the Log-Mean Divisia Index I (LMDI-I) methodology to decompose gas consumption trends in the gas-connected residential sector in Ireland from 1990 to 2008, which, despite an increasing number of energy efficiency policies, experienced total final consumption growth of 470%. The analysis decomposes this change in gas consumption into a number of effects, examining the impact over time of market factors such as a growing customer base, varying mix of dwelling types, changing share of vacant dwellings, changing size of new dwellings, the impact of building regulations policy and other factors such as the weather. The analysis finds the most significant effects are changing customer numbers and changing intensity; the analysis also quantifies the impact of building regulations and compares it with other effects such as the changing size of new dwellings. By comparing the historical impact on gas consumption of policy factors and non-policy factors, this paper highlights the challenge for policy-makers in achieving overall energy consumption reduction. - Highlights: ► Contribution to a gap in the literature with a residential sector decomposition analysis of gas TFC. ► Activity effect had the largest impact and was cumulatively the best explainer of total TFC change. ► Intensity effect was the second biggest effect with a 19% share of total TFC change. ► In line with rising surface temperatures, the weather effect is declining over time. ► Building regulations are having a diminishing impact but are being negated by larger dwellings.
Pairwise Trajectory Management (PTM): Concept Overview
Jones, Kenneth M.; Graff, Thomas J.; Chartrand, Ryan C.; Carreno, Victor; Kibler, Jennifer L.
2017-01-01
Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the precision of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the PTM minimum spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This paper provides an overview of the proposed application, description of a few key scenarios, high level discussion of expected air and ground equipment and procedure changes, overview of a potential flight crew human-machine interface that would support PTM operations and some initial PTM benefits results.
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis.
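As a concrete (if simplistic) instance of a pyramid decomposition scheme, the 1-D Laplacian-pyramid sketch below stores a coarse approximation plus per-level detail signals and reconstructs the input exactly; the 3-tap smoothing filter is an illustrative choice, not the report's axioms:

```python
import numpy as np

def pyramid_decompose(signal, levels):
    """Minimal 1-D Laplacian-pyramid sketch: smooth and downsample to get the
    next level; keep the prediction residual as the detail at each level."""
    approx, details = signal.astype(float), []
    for _ in range(levels):
        smoothed = np.convolve(approx, [0.25, 0.5, 0.25], mode='same')
        coarse = smoothed[::2]
        upsampled = np.repeat(coarse, 2)[:len(approx)]
        details.append(approx - upsampled)
        approx = coarse
    return approx, details

def pyramid_reconstruct(approx, details):
    for d in reversed(details):
        approx = np.repeat(approx, 2)[:len(d)] + d
    return approx

x = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * np.random.default_rng(3).standard_normal(64)
a, ds = pyramid_decompose(x, 3)
print(np.allclose(pyramid_reconstruct(a, ds), x))   # True: perfect reconstruction
```

Storing the residual at each level is what guarantees perfect reconstruction regardless of the smoothing filter used, which is the structural property pyramid schemes axiomatize.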
The Decomposition Analysis of CO2 Emission and Economic Growth in Pakistan India and China
Directory of Open Access Journals (Sweden)
Muhammad Irfan Javaid Attari
2011-12-01
Full Text Available The conflict between economic growth and keeping greenhouse gases (GHG) at controllable levels is one of the ultimate challenges of this century. The aim of the Kyoto Protocol is to keep the level of carbon dioxide (CO2) below a certain threshold level. The purpose of this paper is to study the effect of CO2 emission on economic growth by conducting a regional analysis of the PIC nations, i.e. Pakistan, India and China. The study also provides detailed information regarding atmospheric emissions by applying decomposition analysis. It is suggested that environmental policies need more attention in the region, keeping the differences aside. Emissions trading is therefore considered as a new approach that should be introduced to tackle global warming in the region. It is now time to respond, because the low-carbon economy is a reality.
International Nuclear Information System (INIS)
Wang Wen-Bo; Zhang Xiao-Dong; Chang Yuchan; Wang Xiang-Li; Wang Zhao; Chen Xi; Zheng Lei
2016-01-01
In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to first decompose the chaotic signals and construct multidimensional input vectors based on EMD and its translation invariance. Secondly, independent component analysis is applied to the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, the denoised IMFs are recombined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal corrupted by different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results show that the proposed method is effective for denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. (paper)
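The ICA step of such a pipeline can be illustrated with a tiny symmetric FastICA (tanh nonlinearity). This is a generic ICA sketch on a toy two-source mixture, not the paper's EMD-based construction:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Tiny FastICA sketch (tanh nonlinearity, symmetric decorrelation),
    for illustration only. X is (components x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Z = E @ np.diag(d ** -0.5) @ E.T @ X          # whitened data
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # FastICA update: w <- E[z g(w.z)] - E[g'(w.z)] w, with g = tanh
        W = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)               # symmetric decorrelation
        W = u @ vt
    return W @ Z

t = np.linspace(0, 1, 2000)
s1, s2 = np.sin(40 * t), np.sign(np.sin(23 * t))
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ np.vstack([s1, s2])
recovered = fastica(mixed)
print(recovered.shape)   # (2, 2000)
```

The recovered components match the sources up to sign and permutation, which is the usual ICA ambiguity; in the paper's pipeline the inputs are EMD-derived vectors rather than raw mixtures.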
Thermal and X-ray diffraction analysis studies during the decomposition of ammonium uranyl nitrate
Kim, B. H.; Lee, Y. B.; Prelas, M. A.; Ghosh, T. K.
2012-01-01
Two types of ammonium uranyl nitrate, (NH4)2UO2(NO3)4·2H2O and NH4UO2(NO3)3, were thermally decomposed and reduced in a TG-DTA unit in nitrogen, air, and hydrogen atmospheres. The various intermediate phases produced by the thermal decomposition and reduction process were investigated by X-ray diffraction analysis and TG/DTA analysis. Both (NH4)2UO2(NO3)4·2H2O and NH4UO2(NO3)3 decomposed to amorphous UO3 regardless of the atmosphere used. The amorphous UO3 from (NH4)2UO2(NO3)4·2H2O was crysta...
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed through the utilization of marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre Decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests provide a significant reduction of computational costs, with no significant loss of tracking accuracy.
Understanding China’s past and future energy demand: An exergy efficiency and decomposition analysis
International Nuclear Information System (INIS)
Brockway, Paul E.; Steinberger, Julia K.; Barrett, John R.; Foxon, Timothy J.
2015-01-01
Highlights: • We complete the first time-series exergy and useful work study of China (1971–2010). • Novel exergy approach to understand China's past and future energy consumption. • China's exergy efficiency rose from 5% to 13%, and is now above the US (11%). • Decomposition finds this is due to structural change, not technical leapfrogging. • Results suggest that current models may underestimate China's future energy demand. - Abstract: There are very few useful work and exergy analysis studies for China, and fewer still that consider how the results inform drivers of past and future energy consumption. This is surprising: China is the world's largest energy consumer, whilst exergy analysis provides a robust thermodynamic framework for analysing the technical efficiency of energy use. In response, we develop three novel sub-analyses. First, we perform a long-term whole-economy time-series exergy analysis for China (1971–2010). We find a 10-fold growth in China's useful work since 1971, which is supplied by a 4-fold increase in primary energy coupled to a 2.5-fold gain in aggregate exergy conversion efficiency to useful work: from 5% to 12.5%. Second, using index decomposition we expose the key driver of efficiency growth as not 'technological leapfrogging' but structural change: i.e. increasing reliance on thermodynamically efficient (but very energy intensive) heavy industrial activities. Third, we extend our useful work analysis to estimate China's future primary energy demand, and find values for 2030 that are significantly above mainstream projections.
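The headline numbers are internally consistent, as a quick check shows: a 10-fold useful-work growth delivered by a 4-fold primary-energy increase implies a 2.5-fold efficiency gain, taking 5% to 12.5%:

```python
# Aggregate exergy conversion efficiency: useful work delivered divided by
# primary exergy input. The normalized figures below illustrate the abstract's
# headline numbers; they are not the underlying dataset.
def aggregate_efficiency(useful_work, primary_exergy):
    return useful_work / primary_exergy

eff_1971 = 0.05                                   # 5% in 1971
# 1971 baseline: useful work 1.0 -> primary exergy 1.0 / 0.05 = 20.0
# 2010: useful work x10, primary exergy x4
eff_2010 = aggregate_efficiency(10 * 1.0, 4 * (1.0 / eff_1971))
print(eff_2010)   # 0.125, i.e. the reported 12.5%
```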
Energy Technology Data Exchange (ETDEWEB)
Jovanovic, S M [Nikola Tesla Inst., Belgrade (YU)
1990-01-01
This paper presents a model and an appropriate numerical procedure for a four-level time decomposition quasi-static power flow and successive disturbance analysis of power systems. The analysis consists of the sequential computation of the zero, primary, secondary and tertiary quasi-static states and of the estimation of successive structural disturbances during the 1200 s dynamics following a structural disturbance. The model is developed by detailed inspection of the time decomposition characteristics of automatic protection and control devices. Adequate speed of the numerical procedure is attained by a specific application of the matrix inversion lemma and the decoupled model constant coefficient matrices. The four-level time decomposition quasi-static method is intended for security and emergency analysis. (author).
Pairwise Trajectory Management (PTM): Concept Description and Documentation
Jones, Kenneth M.; Graff, Thomas J.; Carreno, Victor; Chartrand, Ryan C.; Kibler, Jennifer L.
2018-01-01
Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the accuracy of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the minimum PTM spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This document provides an overview of the proposed application, a description of several key scenarios, a high level discussion of expected air and ground equipment and procedure changes, a description of a NASA human-machine interface (HMI) prototype for the flight crew that would support PTM operations, and initial benefits analysis results. Additionally, included as appendices, are the following documents: the PTM Operational Services and Environment Definition (OSED) document and a companion "Future Considerations for the Pairwise Trajectory Management (PTM) Concept: Potential Future Updates for the PTM OSED" paper, a detailed description of the PTM algorithm and PTM Limit Mach rules, initial PTM safety requirements and safety assessment documents, a detailed description of the design, development, and initial evaluations of the proposed flight crew HMI, an overview of the methodology and results of PTM pilot training
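The core airborne check in PTM is whether the "at or greater than" spacing holds between a pair of aircraft. The sketch below uses a great-circle distance and a placeholder 15 NM minimum; the actual PTM minimum spacing value and conformance logic are defined in the OSED and safety documents, not here:

```python
import math

def spacing_nm(lead, trail):
    """Great-circle distance in nautical miles between two aircraft
    positions given as (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*lead, *trail))
    central = math.acos(min(1.0, math.sin(lat1) * math.sin(lat2) +
                            math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)))
    return central * 180 * 60 / math.pi      # one degree of arc = 60 NM

def ptm_conformant(lead, trail, min_spacing_nm=15.0):
    """'At or greater than' pairwise spacing check (placeholder minimum)."""
    return spacing_nm(lead, trail) >= min_spacing_nm

print(ptm_conformant((30.0, -40.0), (30.0, -40.5)))   # ~26 NM apart -> True
```

In the concept, ADS-B positions feed this kind of check continuously, and the avionics derive speed guidance so the trailing aircraft never erodes the minimum.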
Revision of Begomovirus taxonomy based on pairwise sequence comparisons
Brown, Judith K.
2015-04-18
Viruses of the genus Begomovirus (family Geminiviridae) are emergent pathogens of crops throughout the tropical and subtropical regions of the world. By virtue of having a small DNA genome that is easily cloned, and due to the recent innovations in cloning and low-cost sequencing, there has been a dramatic increase in the number of available begomovirus genome sequences. Even so, most of the available sequences have been obtained from cultivated plants and are likely a small and phylogenetically unrepresentative sample of begomovirus diversity, a factor constraining taxonomic decisions such as the establishment of operationally useful species demarcation criteria. In addition, problems in assigning new viruses to established species have highlighted shortcomings in the previously recommended mechanism of species demarcation. Based on the analysis of 3,123 full-length begomovirus genome (or DNA-A component) sequences available in public databases as of December 2012, a set of revised guidelines for the classification and nomenclature of begomoviruses are proposed. The guidelines primarily consider a) genus-level biological characteristics and b) results obtained using a standardized classification tool, Sequence Demarcation Tool, which performs pairwise sequence alignments and identity calculations. These guidelines are consistent with the recently published recommendations for the genera Mastrevirus and Curtovirus of the family Geminiviridae. Genome-wide pairwise identities of 91 % and 94 % are proposed as the demarcation threshold for begomoviruses belonging to different species and strains, respectively. Procedures and guidelines are outlined for resolving conflicts that may arise when assigning species and strains to categories wherever the pairwise identity falls on or very near the demarcation threshold value.
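The proposed demarcation rule lends itself to a short sketch. The identity function below is a naive position-by-position comparison of pre-aligned sequences, and the reference genomes and names are invented stand-ins; the actual procedure uses the Sequence Demarcation Tool's pairwise alignments on full-length genomes:

```python
# Sketch: classify a query begomovirus genome against reference genomes using
# genome-wide pairwise identity, with the proposed 91% (species) and 94% (strain)
# demarcation thresholds. Toy data; real analyses use SDT pairwise alignments.

def pairwise_identity(seq_a, seq_b):
    """Fraction of identical positions between two pre-aligned sequences."""
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != '-')
    aligned = sum(1 for a, b in zip(seq_a, seq_b) if a != '-' or b != '-')
    return matches / aligned

def classify(query, references, species_cut=0.91, strain_cut=0.94):
    """Return (verdict, closest reference, identity) for a query genome."""
    best_name, best_id = max(
        ((name, pairwise_identity(query, ref)) for name, ref in references.items()),
        key=lambda t: t[1])
    if best_id < species_cut:
        return ("new species", best_name, best_id)
    elif best_id < strain_cut:
        return ("new strain of existing species", best_name, best_id)
    return ("known strain", best_name, best_id)

refs = {"BGMV": "ATGCGATCGATTACGCTAGC", "TYLCV": "ATGCGTTCGAGTACGGTAGC"}
print(classify("ATGCGATCGATTACGCTAGG", refs))  # one mismatch to BGMV -> 0.95 identity
```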
Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics Analysis
Kojima, S.; Hensley, S.
2012-12-01
There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested under the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then, a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added a helix component to Freeman's model and developed a 4-component scattering model for the non-reflection-symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that can estimate both the mean orientation angle and a degree of randomness of the canopy scattering for each pixel in a SAR image without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the volume scattering model. In addition, we evaluate this method using both simulated and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship between the backscattered echo and each component (surface, dihedral, volume and helix) via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component (HH, VV and VH) for the volume. As a result, the equation for the helix component in this method is the same as that in Yamaguchi's method. However, the equation for the volume
Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection
Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing
2016-04-01
Observation of human motions through a wall is an important issue in security applications and search and rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of its ultranarrow pulses. It can distinguish closely positioned targets and provide time-lapse information about them. Moreover, UWB radar shows good performance in wall penetration, as the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features, including respiration, the swing of arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements such as walking. The radar gains reflections from each part of the human body and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools for analyzing and extracting the micro-Doppler effects caused by periodic movements in the reflected radar signal; examples include the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. By bypassing the Hilbert transform, the LMD has no demodulation error arising from window effects and involves no negative frequencies without physical sense. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD) because LMD uses smoothed local
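The micro-Doppler analysis step can be illustrated with a minimal short-time Fourier transform, one of the time-frequency tools listed above. The simulated radar return below (carrier frequency, breathing rate, modulation depth) is entirely made up for illustration:

```python
import numpy as np

# Sketch: a phase-modulated return from a simulated breathing target and a
# minimal STFT. The dominant frequency per frame oscillates around the carrier,
# which is the micro-Doppler signature described above.

fs = 1000.0                                          # sample rate, Hz (assumed)
t = np.arange(0, 4.0, 1 / fs)
breath = 0.3 * np.sin(2 * np.pi * 0.3 * t)           # chest motion at ~0.3 Hz
signal = np.cos(2 * np.pi * 100 * t + 100 * breath)  # phase-modulated echo

def stft_mag(x, win=256, hop=128):
    """Magnitude STFT with a Hann window: rows = frames, cols = frequency bins."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

S = stft_mag(signal)
peak_bins = S.argmax(axis=1)        # dominant frequency bin per frame
freqs = peak_bins * fs / 256        # convert bin index to Hz
print(freqs.min(), freqs.max())     # spread around the 100 Hz carrier
```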
Thermal decomposition of pyrite
International Nuclear Information System (INIS)
Music, S.; Ristic, M.; Popovic, S.
1992-01-01
Thermal decomposition of natural pyrite (cubic FeS2) has been investigated using X-ray diffraction and 57Fe Moessbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the initial size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs
Nonparametric predictive pairwise comparison with competing risks
International Nuclear Information System (INIS)
Coolen-Maturi, Tahani
2014-01-01
In reliability, failure data often correspond to competing risks, where several failure modes can cause a unit to fail. This paper presents nonparametric predictive inference (NPI) for pairwise comparison with competing risks data, assuming that the failure modes are independent. These failure modes could be the same or different among the two groups, and these can be both observed and unobserved failure modes. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. The focus is on the lower and upper probabilities for the event that the lifetime of a future unit from one group, say Y, is greater than the lifetime of a future unit from the second group, say X. The paper also shows how the two groups can be compared based on particular failure mode(s), and the comparison of the two groups when some of the competing risks are combined is discussed
Locating one pairwise interaction: Three recursive constructions
Directory of Open Access Journals (Sweden)
Charles J. Colbourn
2016-09-01
Full Text Available In a complex component-based system, choices (levels) for components (factors) may interact to cause faults in the system behaviour. When faults may be caused by interactions among few factors at specific levels, covering arrays provide a combinatorial test suite for discovering the presence of faults. While well studied, covering arrays do not enable one to determine the specific levels of factors causing the faults; locating arrays ensure that the results from test suite execution suffice to determine the precise levels and factors causing faults, when the number of such causes is small. Constructions for locating arrays are at present limited to heuristic computational methods and quite specific direct constructions. In this paper three recursive constructions are developed for locating arrays to locate one pairwise interaction causing a fault.
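The defining property of a covering array — every pair of factors exercises every combination of levels — is easy to check directly. A minimal sketch, using a classical 4-row binary covering array as the (illustrative) example:

```python
from itertools import combinations, product

# Sketch: verify that a test suite (rows of factor levels) covers every
# pairwise interaction, the guarantee a covering array provides.

def covers_all_pairs(tests, levels):
    """True iff every pair of factors exercises every level combination."""
    k = len(levels)
    for f1, f2 in combinations(range(k), 2):
        seen = {(row[f1], row[f2]) for row in tests}
        if seen != set(product(levels[f1], levels[f2])):
            return False
    return True

# A classical covering array CA(4; 2, 3, 2): 4 tests suffice for 3 binary factors.
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(tests, [(0, 1)] * 3))  # every factor pair hits all 4 combos
```

Note that a covering array only detects that some pairwise fault exists; the locating arrays of the paper additionally pin down which interaction caused it.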
Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi
2008-07-01
We performed spectral analysis by using the maximum entropy method instead of the traditional Fourier transform technique to investigate the short-time behavior in molecular systems, such as the energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations for the decomposition of formic acid. More reactive trajectories of dehydration than of decarboxylation were obtained for Z-formic acid, which was consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.
LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS
Energy Technology Data Exchange (ETDEWEB)
ZHANG,W.; MARNERIS, I.; SANDBERG, J.
2007-06-25
A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of the upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
Analysis of ZDDP Content and Thermal Decomposition in Motor Oils Using NAA and NMR
Ferguson, S.; Johnson, J.; Gonzales, D.; Hobbs, C.; Allen, C.; Williams, S.
Zinc dialkyldithiophosphates (ZDDPs) are one of the most common anti-wear additives present in commercially-available motor oils. The ZDDP concentrations of motor oils are most commonly determined using inductively coupled plasma atomic emission spectroscopy (ICP-AES). As part of an undergraduate research project, we have determined the Zn concentrations of eight commercially-available motor oils and one oil additive using neutron activation analysis (NAA), which has potential for greater accuracy and less sensitivity to matrix effects as compared to ICP-AES. The 31P nuclear magnetic resonance (31P-NMR) spectra were also obtained for several oil additive samples which have been heated to various temperatures in order to study the thermal decomposition of ZDDPs.
Water Complexes of Cytochrome P450: Insights from Energy Decomposition Analysis
Directory of Open Access Journals (Sweden)
Hajime Hirao
2013-06-01
Full Text Available Water is a small molecule that nevertheless perturbs, sometimes significantly, the electronic properties of an enzyme’s active site. In this study, interactions of a water molecule with the ferric heme and the compound I (Cpd I intermediate of cytochrome P450 are studied. Energy decomposition analysis (EDA schemes are used to investigate the physical origins of these interactions. Localized molecular orbital EDA (LMOEDA implemented in the quantum chemistry software GAMESS and the EDA method implemented in the ADF quantum chemistry program are used. EDA reveals that the electrostatic and polarization effects act as the major driving force in both of these interactions. The hydrogen bonding in the Cpd I•••H2O complex is similar to that in the water dimer; however, the relative importance of the electrostatic effect is somewhat larger in the water dimer.
Empirical mode decomposition and Hilbert transforms for analysis of oil-film interferograms
International Nuclear Information System (INIS)
Chauhan, Kapil; Ng, Henry C H; Marusic, Ivan
2010-01-01
Oil-film interferometry is rapidly becoming the preferred method for direct measurement of wall shear stress in studies of wall-bounded turbulent flows. Although being widely accepted as the most accurate technique, it does have inherent measurement uncertainties, one of which is associated with determining the fringe spacing. This is the focus of this paper. Conventional analysis methods involve a certain level of user input and thus some subjectivity. In this paper, we consider empirical mode decomposition (EMD) and the Hilbert transform as an alternative tool for analyzing oil-film interferograms. In contrast to the commonly used Fourier-based techniques, this new method is less subjective and, as it is based on the Hilbert transform, is superior for treating amplitude and frequency modulated data. This makes it particularly robust to wide differences in the quality of interferograms
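The Hilbert-transform step can be sketched on a synthetic fringe profile: the fringe spacing follows from the slope of the instantaneous phase of the analytic signal. The profile and its 64-pixel spacing below are invented for illustration:

```python
import numpy as np

# Sketch: estimate fringe spacing of a synthetic interferogram line profile
# from the instantaneous phase of its analytic signal (frequency-domain
# Hilbert transform), the step the method above relies on.

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0          # keep the Nyquist bin for even lengths
    return np.fft.ifft(X * h)

pixels = np.arange(2048)
fringes = np.cos(2 * np.pi * pixels / 64.0)    # true fringe spacing: 64 pixels
phase = np.unwrap(np.angle(analytic_signal(fringes)))
spacing = 2 * np.pi / np.mean(np.diff(phase))  # pixels per fringe
print(round(spacing, 3))
```

Real interferograms carry amplitude modulation and noise, which is where EMD pre-processing (sifting out the fringe-carrying mode first) earns its keep.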
Index decomposition analysis of residential energy consumption in China: 2002–2010
International Nuclear Information System (INIS)
Nie, Hongguang; Kemp, René
2014-01-01
Highlights: • We examine residential energy use in China and predict household electricity use. • We decompose the dramatic increase of residential energy use in China. • Driving factors consist of population, floor space, energy mix and appliances. • Floor space per capita effect becomes increasingly important over time. • Electricity use from appliances will continue to rise despite gradual saturation. - Abstract: Residential energy consumption in China increased dramatically over the period of 2002–2010. In this paper, we undertake a decomposition analysis of changes in energy use by Chinese households for five energy-using activities: space heating, space cooling, cooking, lighting and electric appliances. We investigate to what extent changes in energy use are due to changes in appliances and to changes in floor space, population and energy mix. Our decomposition analysis is based on the logarithmic mean Divisia index technique using data from the China statistical yearbook and China energy statistical yearbook for the period 2002–2010. According to our results, the increase in energy-using appliances is the biggest contributor to the increase of residential energy consumption during 2002–2010, but the effect declines over time due to energy efficiency improvements in those appliances. The second most important contributor is floor space per capita, which increased by 28%. Of the four factors, population is the most stable factor and energy mix is the least important factor. We predicted electricity use with the help of regression-based predictions for ownership of appliances and the energy efficiency of appliances. We found that electricity use will continue to rise despite a gradual saturation of demand
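The logarithmic mean Divisia index technique used above admits a compact sketch. The multiplicative identity E = population × (floor space per capita) × (energy per floor space) and all two-year figures below are invented for illustration, not the paper's data:

```python
import math

# Sketch: additive LMDI-I decomposition. Each factor gets one effect term
# L(E0, E1) * ln(x1/x0), where L is the logarithmic mean of total energy in
# the two years; the effects sum exactly to the total change E1 - E0.

def lmdi_effects(factors0, factors1):
    """One additive effect per factor; effects sum to E1 - E0 exactly."""
    e0 = math.prod(factors0)
    e1 = math.prod(factors1)
    L = (e1 - e0) / (math.log(e1) - math.log(e0)) if e1 != e0 else e0
    return [L * math.log(x1 / x0) for x0, x1 in zip(factors0, factors1)]

pop0, area0, intensity0 = 1.28, 23.0, 0.012   # year 0 (illustrative units)
pop1, area1, intensity1 = 1.34, 29.4, 0.011   # year 1
effects = lmdi_effects([pop0, area0, intensity0], [pop1, area1, intensity1])
print(effects, sum(effects))  # the three effects sum to the total energy change
```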
Measuring pair-wise molecular interactions in a complex mixture
Chakraborty, Krishnendu; Varma, Manoj M.; Venkatapathi, Murugesan
2016-03-01
Complex biological samples such as serum contain thousands of proteins and other molecules spanning up to 13 orders of magnitude in concentration. Present measurement techniques do not permit the analysis of all pair-wise interactions between the components of such a complex mixture and a given target molecule. In this work we explore the use of nanoparticle tags which encode the identity of the molecule, to obtain the statistical distribution of pair-wise interactions from their Localized Surface Plasmon Resonance (LSPR) signals. The nanoparticle tags are chosen such that binding between two molecules conjugated to the respective nanoparticle tags can be recognized by the coupling of their LSPR signals. A numerical simulation using the discrete dipole approximation (DDA) was performed to investigate this approach with a reduced system consisting of three nanoparticles (a gold ellipsoid with aspect ratio 2.5 and short axis 16 nm, and two silver ellipsoids with aspect ratios 3 and 2 and short axes 8 nm and 10 nm, respectively) and the set of all possible dimers formed between them. The incident light was circularly polarized, and all possible particle and dimer orientations were considered. We observed that the minimum peak separation between two spectra is 5 nm, while the maximum is 184 nm.
Li, Dong-po; Wu, Zhi-jie; Liang, Cheng-hua; Chen, Li-jun; Zhang, Yu-lan; Nie, Yan-xi
2012-03-01
The degradability characteristics of the films of four kinds of methyl methacrylate-coated urea amended with inhibitors were analyzed by FTIR, with the aim of supplying a theoretical basis for applying FTIR analysis to the film decomposition of methyl methacrylate-coated urea fertilizers in farming. The results showed that adding different inhibitors to the urea did not change the chemical composition, molecular structure, or material form of the membrane. The main peaks tracing the film degradation process arose from the asymmetric and symmetric stretching vibrations of -C-H (in CH3 and CH2), -OH, C-O, C-C, C-O-C, C=O and C=C at 3479-3195, 2993-2873, 1741-1564, 1461-925 and 850-650 cm(-1). The peaks changed from smooth to sharp and from wide to narrow as the chemical structure of the film transformed. The infrared spectra of the four fertilizers did not differ remarkably before 60 days, and the films degraded slowly. Degradation of the film accelerated after 60 days, was fastest at 120 days, and the decomposition rate decreased by 310 days. No substantial change in the main molecular structure of the films of the four fertilizers occurred within 310 days. The film material degraded most slowly in brown soil. The speed of film degradation was not heavily affected by the different inhibitors. The characteristics of film degradation can be monitored entirely by infrared spectroscopy.
Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Carstensen, Jens Michael
We want to test the applicability of kernel-based eigen-decomposition methods compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel-based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'hauksbok', which contains Icelandic sagas.
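Kernel PCA with a Gaussian kernel, one of the three methods mentioned, reduces to an eigen-decomposition of the double-centered kernel matrix. A minimal sketch on random stand-in data (not the 'hauksbok' image; sigma and sizes are arbitrary):

```python
import numpy as np

# Sketch: kernel PCA via eigen-decomposition of the centered Gaussian (RBF)
# kernel matrix. Rows of X play the role of multispectral pixel samples.

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))                 # 60 samples, 5 "spectral bands"

def kernel_pca(X, sigma=2.0, n_components=2):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))       # Gaussian kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                           # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # Project onto the leading components (scores sqrt(lambda_i) * v_i).
    return Kc @ (vecs[:, idx] / np.sqrt(vals[idx]))

Z = kernel_pca(X)
print(Z.shape)   # one row of component scores per sample
```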
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
A new approach for crude oil price analysis based on empirical mode decomposition
International Nuclear Information System (INIS)
Zhang, Xun; Wang, Shou-Yang; Lai, K.K.
2008-01-01
The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most of them fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent and concretely implicational intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short term fluctuations caused by normal supply-demand disequilibrium or some other market activities, the effect of a shock of a significant event, and a long term trend. Finally, the EEMD is shown to be a vital technique for crude oil price analysis. (author)
International Nuclear Information System (INIS)
Platoni, K.; Lefkopoulos, D.; Grandjean, P.; Schlienger, M.
1999-01-01
A linac stereotactic irradiation space is characterized by different angular separations of beams because of the geometry of the stereotactic irradiation. The regions of the stereotactic space characterized by low angular separations are one of the causes of ill-conditioning of the stereotactic irradiation inverse problem. The singular value decomposition (SVD) is a powerful mathematical analysis that permits the measurement of the ill-conditioning of the stereotactic irradiation problem. This study examines the ill-conditioning of the stereotactic irradiation space, provoked by the different angular separations of beams, using SVD analysis. We subdivided the maximum irradiation space (MIS: (AA)_AP x (AA)_RL = 180° x 180°) into irradiation subspaces (ISSs), each characterized by its own angular separation. We studied the influence of the ISSs on the SVD analysis and the evolution of the reconstruction quality of well-defined three-dimensional dose matrices in each configuration. The lower the angular separation of an ISS, the larger the condition number and the reconstruction inaccuracy. Based on the above results we created two reduced irradiation spaces (RIS: (AA)_AP x (AA)_RL = 180° x 140° and (AA)_AP x (AA)_RL = 180° x 120°) and compared the reconstruction quality of the RISs with respect to the MIS. The freer an irradiation space is of low angular separations, the more useful singular components it contains. (orig.)
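The link between angular separation and ill-conditioning can be illustrated with the SVD directly: when two beams are nearly parallel, the corresponding columns of a dose-deposition matrix are nearly linearly dependent and the condition number blows up. The toy matrices below are synthetic, not dosimetric data:

```python
import numpy as np

# Sketch: condition number (ratio of extreme singular values) of a toy
# beam-to-voxel dose matrix. Making one beam almost a copy of another mimics
# a low angular separation and inflates the condition number.

rng = np.random.default_rng(0)
well_separated = rng.random((50, 4))          # 4 roughly independent beam columns
close_beams = well_separated.copy()
close_beams[:, 3] = close_beams[:, 2] + 1e-3 * rng.random(50)  # beam 4 ~ beam 3

def condition_number(A):
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

print(condition_number(well_separated), condition_number(close_beams))
```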
Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid
Energy Technology Data Exchange (ETDEWEB)
Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)
2017-01-01
Wide-area-measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede effective and efficient data analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.
International Nuclear Information System (INIS)
Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei
2015-01-01
The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions which are useful for the central government to adjust its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of the small-scale private industrial enterprises; (b) Except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) With regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) By 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; as well as (e) Three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted
Predicting community composition from pairwise interactions
Friedman, Jonathan; Higgins, Logan; Gore, Jeff
The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs. Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.
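A pairwise outcome under the gLV model can be sketched by direct integration. The growth rates and interaction coefficients below are invented, chosen so that species 1 competitively excludes species 2 (a12 < 1 < a21), one of the outcomes observed in the experiments:

```python
import numpy as np

# Sketch: generalized Lotka-Volterra pair competition,
#   dN_i/dt = r_i * N_i * (1 - sum_j a_ij * N_j),
# integrated with forward Euler. All parameters are illustrative.

r = np.array([1.0, 0.8])
a = np.array([[1.0, 0.6],     # a[i][j]: effect of species j on species i
              [1.4, 1.0]])    # species 2 is hit harder by species 1 than vice versa

def simulate(n0, steps=20000, dt=0.01):
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        n += dt * r * n * (1.0 - a @ n)
        n = np.maximum(n, 0.0)   # abundances cannot go negative
    return n

final = simulate([0.05, 0.05])
print(final)   # species 1 approaches its carrying capacity; species 2 is excluded
```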
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
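The per-step bookkeeping described above can be sketched as plain Markov-chain propagation of expected counts through a transport matrix. The components and probabilities below are hypothetical stand-ins, not values from the analysis.

```python
def propagate(counts, transport, steps):
    """Propagate expected VEM counts through the sampling chain.

    counts[i]       -- expected number of VEMs on component i
    transport[i][j] -- probability that a VEM on component i ends up on
                       component j after one discrete sampling step
    """
    n = len(counts)
    for _ in range(steps):
        counts = [sum(counts[i] * transport[i][j] for i in range(n))
                  for j in range(n)]
    return counts
```

For example, with a hypothetical two-component system [tool, sample] where the sample is absorbing and 1% of VEMs transfer per step, ten steps move 100·(1 − 0.99¹⁰) expected VEMs onto the sample while the total is conserved.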
Industrial CO2 emissions from energy use in Korea: A structural decomposition analysis
International Nuclear Information System (INIS)
Lim, Hea-Jin; Yoo, Seung-Hoon; Kwak, Seung-Jun
2009-01-01
This paper attempts to quantify energy consumption and CO2 emissions in the industrial sectors of Korea. The sources of the changes in CO2 emissions for the years 1990-2003 are investigated, in terms of a total of eight factors, through input-output structural decomposition analysis: changes in emission coefficient (caused by shifts in energy intensity and carbon intensity); changes in economic growth; and structural changes (in terms of shifts in domestic final demand, exports, imports of final and intermediate goods, and production technology). The results show that the rate of growth of industrial CO2 emissions has drastically decreased since the 1998 financial crisis in Korea. The effect on emission reductions due to changes in energy intensity and domestic final demand surged in the second period (1995-2000), while the impact of exports steeply rose in the third period (2000-2003). Of all the individual factors, economic growth accounted for the largest increase in CO2 emissions. The results of this analysis can be used to infer the potential for emission-reduction in Korea
Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine
2011-09-01
The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest, rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA; then the biggest part of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared the CCA_EMD output with the corresponding bipolar signals to demonstrate that the denoised monopolar signals are not degraded by the processing. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.
Cicone, A.; Zhou, H.; Piersanti, M.; Materassi, M.; Spogli, L.
2017-12-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis is of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this poster we present a new method, called Adaptive Local Iterative Filtering (ALIF). This technique, originally developed to study mono-dimensional signals, unlike any other algorithm proposed so far, can be easily generalized to study two- or higher-dimensional signals. Furthermore, unlike most of the similar methods, it does not require any a priori assumption on the signal itself, so that the technique can be applied as it is to any kind of signal. Applications of the ALIF algorithm to real-life signal analysis will be presented, for instance the behavior of the water level near the coastline in the presence of a tsunami, the length-of-day signal, pressure measured at ground level on a global grid, and radio power scintillation from GNSS signals.
A decomposition analysis of CO2 emissions from energy use: Turkish case
International Nuclear Information System (INIS)
Ipek Tunc, G.; Tueruet-Asik, Serap; Akbostanci, Elif
2009-01-01
Environmental problems, especially 'climate change' due to the significant increase in anthropogenic greenhouse gases, have been on the agenda since the 1980s. Among the greenhouse gases, carbon dioxide (CO2) is the most important one and is responsible for more than 60% of the greenhouse effect. The objective of this study is to identify the factors that contribute to changes in CO2 emissions for the Turkish economy by utilizing the Log Mean Divisia Index (LMDI) method developed by Ang (2005) [Ang, B.W., 2005. The LMDI approach to decomposition analysis: a practical guide. Energy Policy 33, 867-871]. The Turkish economy is divided into three aggregated sectors, namely agriculture, industry and services, and the energy sources used by these sectors are aggregated into four groups: solid fuels, petroleum, natural gas and electricity. This study covers the period 1970-2006, which enables us to investigate the effects of different macroeconomic policies on carbon dioxide emissions through changes in the shares of industries and the use of different energy sources. Our analysis shows that the main component that determines the changes in CO2 emissions of the Turkish economy is economic activity. Even though important changes in the structure of the economy are observed over the 1970-2006 period, the structure effect is not a significant factor in changes in CO2 emissions; the intensity effect, however, is.
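The LMDI machinery used above can be illustrated in miniature for a single activity/intensity factor pair: the change in emissions C = A·I is split exactly into an activity effect and an intensity effect weighted by the logarithmic mean. The numbers are made up; the actual study also decomposes structure and fuel-mix factors.

```python
import math

def logmean(a, b):
    """Logarithmic mean, the weight used by LMDI-I."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_two_factor(c0, c1, a0, a1):
    """Split the emission change C1 - C0 into an activity effect and an
    intensity effect, with intensity I = C / A."""
    w = logmean(c1, c0)
    activity_effect = w * math.log(a1 / a0)
    intensity_effect = w * math.log((c1 / a1) / (c0 / a0))
    return activity_effect, intensity_effect
```

A defining property of LMDI-I is perfect additivity: the two effects sum to the observed emission change with no residual.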
Directory of Open Access Journals (Sweden)
Laili Wang
2017-09-01
Full Text Available Energy is the essential input for operations along the industrial manufacturing chain of textiles. China’s textile industry is facing great pressure on energy consumption reduction. This paper presents an analysis of the energy footprint (EFP) of China’s textile industry from 1991 to 2015. The relationship between EFP and economic growth in the textile industry was investigated with a decoupling index approach. The logarithmic mean Divisia index approach was applied for decomposition analysis on how changes in key factors influenced the EFP of China’s textile industry. Results showed that the EFP of China’s textile industry increased from 41.1 Mt in 1991 to 99.6 Mt in 2015. EFP increased fastest in the period of 1996–2007, with an average annual increasing rate of 7.7 percent, especially from 2001 to 2007 (8.5 percent). The manufacture of textiles sector consumed the most energy (from 58 percent to 76 percent) among the three sub-sectors, as it has many energy-intensive procedures. EFP and economic growth were in a relative decoupling state for most years of the researched period. Their relationship showed a clear tendency toward decoupling. Industrial scale was the most important factor that led to the increase of EFP, while decreasing energy intensity contributed significantly to reducing the EFP. The promoting effect of the factors was larger than the inhibiting effect on EFP in most years from 1991 to 2015.
Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng
2016-01-01
In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is, firstly, to decompose the chaotic signal and construct multidimensional input vectors on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which means that a self-adapting denoising is carried out on the intrinsic mode functions (IMFs) of the chaotic signal. Finally, all IMFs compose the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sequence of sunspots. The results show that the method proposed in this paper is effective in the denoising of chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
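The bias discussed above can be illustrated in the scalar case, where DMD reduces to estimating a growth factor a from snapshot pairs x, y ≈ a·x. Ordinary least squares is attenuated when both snapshot sets are noisy, whereas a total-least-squares estimate (the 1D analogue of the augmented-snapshot projection) accounts for noise in both. This toy is our illustration of the underlying statistical point, not the paper's algorithm.

```python
import math

def ols_slope(x, y):
    """Least-squares estimate of a in y ≈ a·x (noise assumed only in y)."""
    return sum(u * v for u, v in zip(x, y)) / sum(v * v for v in x)

def tls_slope(x, y):
    """Total-least-squares estimate of a in y ≈ a·x, allowing equal
    noise in both x and y (smallest-singular-direction solution)."""
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(u * v for u, v in zip(x, y))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
```

On data with perturbations of comparable size in both snapshot sets, the OLS estimate is biased toward zero while the TLS estimate recovers the true factor.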
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip
Directory of Open Access Journals (Sweden)
Jane Louie Fresco Zamora
2015-01-01
Full Text Available Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
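The core of pairwise gossip averaging can be sketched as follows. The paper's heuristic weights exchanges toward fixed weather stations as ground truth; that weighting is omitted here, so this shows only the plain consensus step in which two nodes adopt their average.

```python
def gossip_round(values, i, j):
    """One pairwise exchange: nodes i and j adopt their average."""
    avg = (values[i] + values[j]) / 2.0
    values[i] = values[j] = avg
    return values

def gossip(values, pairs):
    """Run a sequence of pairwise exchanges; the total (and hence the
    mean) of all readings is preserved at every step."""
    for i, j in pairs:
        gossip_round(values, i, j)
    return values
```

Repeated exchanges over connected pairs drive all readings to the common mean, which is why a few well-calibrated station nodes can pull smartphone readings toward the ground truth.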
A scalable pairwise class interaction framework for multidimensional classification
DEFF Research Database (Denmark)
Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre
2016-01-01
We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field...
Key trends of climate change in the ASEAN countries. The IPAT decomposition analysis 1980-2005
Energy Technology Data Exchange (ETDEWEB)
Vehmas, J.; Luukkanen, J.; Kaivo-oja, J.; Panula-Ontto, J.; Allievi, F.
2012-07-01
Decomposition analyses of energy consumption and CO2 emissions have mainly focused on the effects of changes in economic activity, energy intensity and fuel mix, and on structural changes in energy consumption in different countries or different sectors of the economy. This e-Book introduces a different perspective by identifying five globally relevant factors affecting CO2 emissions. Changes in the carbon intensity of primary energy, the efficiency of the energy system, the energy intensity of the economy, the level of economic development, and the size of the population have been identified by extending the well-known IPAT identity. The empirical part focuses on CO2 emissions from fuel combustion in the ASEAN countries between the years 1980 and 2005. CO2 emissions are considerably low in many ASEAN countries but have increased in recent years due to rapid economic growth and increased reliance on fossil fuels. Emission and energy intensities have increased during the industrialization process, but with a shift towards a more service-oriented economy and the increase in GDP per capita, the intensities have started to decrease in some ASEAN countries. However, these changes have not been able to slow down the rapid increase in CO2 emissions due to the growth of both the economy and the population. With the rapid economic development of the member countries of the Association of South East Asian Nations (ASEAN) and nations such as China and India since the mid-1980s, the Asia-Pacific region has emerged as the growth centre of the global economy. However, many countries in the region have, alongside this success, faced serious social and environmental problems, particularly with regard to deforestation, land degradation and the loss of biological diversity. Climate change has been regarded as one of the major environmental threats to developing countries. The need to develop theoretical and empirical research in the field of climate and energy policy analysis
Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation
Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin
2018-05-01
Under multiple nodes of temperature and pressure, the microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in hydrate and 3027 randomly distributed bulk water molecules. Through analysis of the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function for oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules have been studied. A significant result shows that structure I methane hydrate decomposes from the hydrate-bulk water interface to the hydrate interior. As temperature rises and pressure drops, the stability of the hydrate weakens, the decomposition deepens, and the mean square displacement and diffusion coefficient of methane molecules increase. These studies provide important insight into the microscopic decomposition mechanisms of methane hydrate.
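The trajectory statistics named above can be sketched directly: the mean square displacement (MSD) at a given lag, and a diffusion coefficient from the 3D Einstein relation MSD(t) ≈ 6·D·t. The trajectory below is a toy input, not simulation output.

```python
def msd(traj, lag):
    """Mean square displacement at a given lag.
    traj: list of (x, y, z) positions at uniform time steps."""
    disp = [sum((a - b) ** 2 for a, b in zip(traj[k + lag], traj[k]))
            for k in range(len(traj) - lag)]
    return sum(disp) / len(disp)

def diffusion_coefficient(traj, lag, dt):
    """Einstein relation in 3D: MSD(t) ≈ 6·D·t."""
    return msd(traj, lag) / (6.0 * lag * dt)
```

In practice the diffusion coefficient is read off the linear regime of the MSD curve over many lags; a single lag is used here only to keep the sketch short.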
3D quantitative analysis of early decomposition changes of the human face.
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
2018-03-01
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach to tracking the early decomposition changes of a single cadaver in controlled environmental conditions, summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow direct comparison of antemortem and postmortem 3D scans.
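A minimal version of the RMS comparison is the root mean square of point-to-point distances between two scans. This sketch assumes the scans are already registered and in point-to-point correspondence, which the real pipeline must establish first with surface-matching software.

```python
import math

def rms_distance(scan_a, scan_b):
    """RMS of point-to-point distances between two registered scans.
    Each scan is an equal-length list of (x, y, z) points in correspondence."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(scan_a, scan_b)]
    return math.sqrt(sum(sq) / len(sq))
```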
Model-free method for isothermal and non-isothermal decomposition kinetics analysis of PET sample
International Nuclear Information System (INIS)
Saha, B.; Maiti, A.K.; Ghoshal, A.K.
2006-01-01
Pyrolysis, one possible alternative to recover valuable products from waste plastics, has recently been the subject of renewed interest. In the present study, an isoconversional method, i.e., the Vyazovkin model-free approach, is applied to study the non-isothermal decomposition kinetics of waste PET samples using various temperature integral approximations, such as the Coats and Redfern, Gorbachev, and Agrawal and Sivasubramanian approximations, and direct integration (recursive adaptive Simpson quadrature scheme) to analyze the decomposition kinetics. The results show that the activation energy (Eα) is a weak but increasing function of conversion (α) in the case of non-isothermal decomposition and a strong and decreasing function of conversion in the case of isothermal decomposition. This indicates the possible existence of nucleation, nuclei growth and gas diffusion mechanisms during non-isothermal pyrolysis, and of nucleation and gas diffusion mechanisms during isothermal pyrolysis. Optimum Eα dependencies on α obtained for the non-isothermal data showed a similar nature for all types of temperature integral approximations
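At a fixed conversion, an isoconversional estimate of Eα reduces to an Arrhenius fit: the slope of ln(rate) against 1/T equals −Eα/R. Below is a toy version of that step with synthetic data, not the PET measurements, and a plain least-squares line fit standing in for the Vyazovkin procedure.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(temps_K, rates):
    """E_alpha from the slope of ln(rate) vs. 1/T (Arrhenius plot),
    via an ordinary least-squares line fit."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(r) for r in rates]
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R  # J/mol
```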
Simões, P. N.; Pedroso, L. M.; Portugal, A. A.; Campos, J. L.
1998-01-01
Ammonium nitrate (AN) has been extensively used both in explosive and propellant formulations. Unlike AN, there is a lack of information about the thermal decomposition and related kinetic analysis of phase stabilized ammonium nitrate (PSAN). Simultaneous thermal analysis (DSC-TG) has been used in the thermal characterisation of a specific type of PSAN containing 1.0% of NiO (stabilizing agent) and 0.5% of Petro (anti-caking agent) as additives. Repeated runs covering the nominal heating rate...
International Nuclear Information System (INIS)
Song, Feng; Zheng, Xinye
2012-01-01
We employ decomposition analysis and econometric analysis to investigate the driving forces behind China's changing energy intensity using a provincial-level panel data set for the period from 1995 to 2009. The decomposition analysis indicates that: (a) all of the provinces except a few experienced efficiency improvement, while around three-fourths of the provinces' economies became more energy intensive or remained unchanged; (b) consequently the efficiency improvement accounts for more than 90% of China's energy intensity change, as opposed to the economic structural change. The econometric analysis shows that rising income plays a significant role in the reduction of energy intensity while the effect of energy price is relatively limited. The result may reflect the urgency of deregulating prices and establishing a market-oriented pricing system in China's energy sector. The implementation of the energy intensity reduction policies in the Eleventh Five-Year Plan (FYP) has helped reverse the increasing trend of energy intensity since 2002. Although the Chinese Government intended to change the industry-led economic growth pattern, it seems that most of the policy effects flow through efficiency improvement as opposed to economic structure adjustment. More fundamental changes to the economic structure are needed to achieve more sustainable progress in energy intensity reduction. - Highlights: ► We examine the determinants of China's energy intensity change at provincial level. ► Rising income plays a significant role in reducing China's energy intensity. ► Policy effects mainly flow through the efficiency improvement. ► Fundamental structure changes are needed to further reduce China's energy intensity.
Pairwise Comparison and Distance Measure of Hesitant Fuzzy Linguistic Term Sets
Directory of Open Access Journals (Sweden)
Han-Chen Huang
2014-01-01
Full Text Available A hesitant fuzzy linguistic term set (HFLTS), allowing experts to use several possible linguistic terms to assess a qualitative linguistic variable, is very useful for expressing people’s hesitancy in practical decision-making problems. Up to now, little research has been done on the comparison and distance measure of HFLTSs. In this paper, we present a comparison method for HFLTSs based on pairwise comparisons of each linguistic term in the two HFLTSs. Then, a distance measure method based on the pairwise comparison matrix of HFLTSs is proposed, and we prove that this distance is equal to the distance between the average values of the HFLTSs, which makes the distance measure much simpler. Finally, the pairwise comparison and distance measure methods are utilized to develop two multicriteria decision-making approaches under hesitant fuzzy linguistic environments. The analysis of the results shows that the methods in this paper are more reasonable.
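The simplification stated above — that the proposed distance equals the distance between the HFLTSs' average values — can be sketched with term indices standing in for linguistic terms. The integer-index encoding is our illustrative choice, not the paper's notation.

```python
def hflts_distance(h1, h2):
    """Distance between two HFLTSs given as lists of term indices,
    e.g. [3, 4] for {s3, s4}: the distance between their average indices."""
    return abs(sum(h1) / len(h1) - sum(h2) / len(h2))
```

For instance, the distance between {s3, s4} (average 3.5) and {s1} is 2.5, computed without building the full pairwise comparison matrix.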
Pattern decomposition and quantitative-phase analysis in pulsed neutron transmission
International Nuclear Information System (INIS)
Steuwer, A.; Santisteban, J.R.; Withers, P.J.; Edwards, L.
2004-01-01
Neutron diffraction methods provide accurate quantitative insight into material properties with applications ranging from fundamental physics to applied engineering research. Neutron radiography or tomography, on the other hand, are useful tools in the non-destructive spatial imaging of materials or engineering components, but are less accurate with respect to any quantitative analysis. It is possible to combine the advantages of diffraction and radiography using pulsed neutron transmission in a novel way. Using a pixellated detector at a time-of-flight source it is possible to collect 2D 'images' containing a great deal of interesting information in the thermal regime. This, together with the unprecedented intensities available at spallation sources and improvements in computing power, allows for a re-assessment of the transmission methods. It opens the possibility of simultaneous imaging of diverse material properties such as strain or temperature, as well as the variation in attenuation, and can assist in the determination of phase volume fraction. Spatial and time resolution (for dynamic experiments) are limited only by the detector technology and the intensity of the source. In this example, phase information contained in the cross-section is extracted from Bragg edges using an approach similar to pattern decomposition
Factor Decomposition Analysis of Energy-Related CO2 Emissions in Tianjin, China
Directory of Open Access Journals (Sweden)
Zhe Wang
2015-07-01
Full Text Available Tianjin is the largest coastal city in northern China with rapid economic development and urbanization. Energy-related CO2 emissions from Tianjin’s production and household sectors during 1995–2012 were calculated according to the default carbon-emission coefficients provided by the Intergovernmental Panel on Climate Change. We decomposed the changes in CO2 emissions resulting from 12 causal factors based on the method of the Logarithmic Mean Divisia Index. The examined factors were divided into four types of effects: energy intensity effect, structure effect, activity intensity effect, and scale effect; the various influencing factors imposed differential impacts on CO2 emissions. The decomposition outcomes indicate that per capita GDP and population scale are the dominant positive driving factors behind the growth in CO2 emissions for all sectors, while the energy intensity of the production sector is the main contributor to dampening the CO2 emissions increment, and the contributions from industry structure and energy structure need further enhancement. The analysis results reveal the reasons for CO2 emission changes in Tianjin and provide a solid basis upon which policy makers may propose emission reduction measures and approaches for the implementation of sustainable development strategies.
Decoupling and Decomposition Analysis of Carbon Emissions from Industry: A Case Study from China
Directory of Open Access Journals (Sweden)
Qiang Wang
2016-10-01
Full Text Available China has overtaken the United States as the world’s largest producer of carbon dioxide, with industrial carbon emissions (ICE) accounting for approximately 65% of the country’s total emissions. Understanding the ICE decoupling patterns and the factors influencing the decoupling status is a prerequisite for balancing economic growth and carbon emissions. This paper provides an overview of ICE based on decoupling elasticity and the Tapio decoupling model. Furthermore, the study identifies the factors contributing to ICE changes in China, using the Kaya identity and Log Mean Divisia Index (LMDI) techniques. Based on the effects and contributions of ICE, we close with a number of recommendations. The results revealed a significant upward trend of ICE during the study period 1994 to 2013, with a total amount of 11,147 million tons. Analyzing the decoupling relationship indicates that “weak decoupling” and “expansive decoupling” were the main states during the study period. The decomposition analysis showed that per capita wealth associated with industrial output is the main driving force of ICE, while the energy intensity of industrial output and the energy structure are the major determinants of ICE reduction. The largest contributing cumulative effect to ICE is per capita wealth, at 1.23 in 2013. This factor is followed by energy intensity, with a contributing cumulative effect of −0.32. The cumulative effects of energy structure and population are relatively small, at 0.01 and 0.08, respectively.
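The Tapio elasticity used above is the ratio of the emission growth rate to the economic growth rate. A minimal version follows, with the standard thresholds (0.8 and 1.2) for the common case where both emissions and GDP grow; the full Tapio scheme also branches on the signs of the two growth rates, which is omitted here.

```python
def decoupling_elasticity(c0, c1, g0, g1):
    """Ratio of the emission growth rate to the economic growth rate."""
    return ((c1 - c0) / c0) / ((g1 - g0) / g0)

def decoupling_state(e):
    """Tapio categories for a growing economy (simplified: the full
    scheme also distinguishes recessive cases by sign)."""
    if e < 0.0:
        return "strong decoupling"
    if e < 0.8:
        return "weak decoupling"
    if e <= 1.2:
        return "expansive coupling"
    return "expansive negative decoupling"
```

For example, emissions growing 5% against GDP growth of 10% gives an elasticity of 0.5, i.e. "weak decoupling", the dominant state reported in the abstract.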
Horn, Paul R; Head-Gordon, Martin
2016-02-28
In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.
The Analysis of Two-Way Functional Data Using Two-Way Regularized Singular Value Decompositions
Huang, Jianhua Z.
2009-12-01
Two-way functional data consist of a data matrix whose row and column domains are both structured, for example, temporally or spatially, as when the data are time series collected at different locations in space. We extend one-way functional principal component analysis (PCA) to two-way functional data by introducing regularization of both left and right singular vectors in the singular value decomposition (SVD) of the data matrix. We focus on a penalization approach and solve the nontrivial problem of constructing proper two-way penalties from one-way regression penalties. We introduce conditional cross-validated smoothing parameter selection whereby left-singular vectors are cross-validated conditional on right-singular vectors, and vice versa. The concept can be realized as part of an alternating optimization algorithm. In addition to the penalization approach, we briefly consider two-way regularization with basis expansion. The proposed methods are illustrated with one simulated and two real data examples. Supplemental materials available online show that several "natural" approaches to penalized SVDs are flawed and explain why. © 2009 American Statistical Association.
Determinants of energy demand in the French service sector: A decomposition analysis
International Nuclear Information System (INIS)
Mairet, Nicolas; Decellas, Fabrice
2009-01-01
This paper analyzes the changes in the energy consumption of the service sector in France over the period 1995-2006, using the logarithmic mean Divisia index I (LMDI I) decomposition method. The analysis is carried out at various disaggregation levels to highlight the specifics of each sub-sector and end-use according to their respective determinants. The results show that in this period the economic growth of the service sector was the main factor that led to the increase in total energy consumption. Structure, productivity, substitution and intensity effects restricted this growth, but with limited effect. By analyzing each end-use, this paper enables a more precise understanding of the impact of these factors. The activity effect was the main determinant of the increase in energy consumption for all end-uses except for air conditioning, for which the equipment rate effect was the main factor. Structural changes in the service sector primarily impacted energy consumption for space heating and cooking. Improvements in productivity limited the growth of energy consumption for all end-uses except for cooking. Finally, energy efficiency improvements mainly affected space-heating energy use.
Energy Technology Data Exchange (ETDEWEB)
Kurzweil, P.; Chwistek, M. [University of Applied Sciences, Kaiser-Wilhelm-Ring 23, D-92224 Amberg (Germany)
2008-02-01
The fundamental aging mechanisms in double-layer capacitors based on alkylammonium electrolytes in acetonitrile were clarified for the first time. After abusive testing at cell voltages above 4 V, ultracapacitors cast out a crystalline mass of residual electrolyte, organic acids, acetamide, aromatics, and polymer compounds. The mixture could be reproduced by electrolysis. The decomposition products of active carbon electrodes and electrolyte solution after a heat treatment at 70 °C were identified by infrared and ultraviolet spectroscopy, liquid and headspace GC-MS, thermogravimetric analysis, and X-ray diffraction. The alkylammonium cation is destroyed by the elimination of ethene. The fluoroborate anion acts as a source of fluoride, hydrogen fluoride, and boric acid derivatives. Acetonitrile forms acetamide, acetic and fluoroacetic acid, and derivatives thereof. Due to the catalytic activity of the electrode, heterocyclic compounds are generated in the liquid phase. The etched aluminium support under the active carbon layer is locally destroyed by fluorination. In exploring novel electrolytes, ionic liquids were characterized by impedance spectroscopy. (author)
Wright, Cameron M; Bulsara, Max K; Norman, Richard; Moorin, Rachael E
2017-07-01
Publicly funded computed tomography (CT) procedure descriptions in Australia often specify the body site, rather than indication for use. This study aimed to evaluate the relative contribution of demographic versus non-demographic factors in driving the increase in CT services in Australia. A decomposition analysis was conducted to assess the proportion of additional CT attributable to changing population structure, CT use on a per capita basis (CPC, a proxy for change in practice) and/or cost of CT. Aggregated Medicare usage and billing data were obtained for selected years between 1993/94 and 2012/13. The number of billed CT scans rose from 33 per annum per 1000 of population in 1993/94 (total 572,925) to 112 per 1000 by 2012/13 (total 2,540,546). The respective cost to Medicare rose from $145.7 million to $790.7 million. Change in CPC was the most important factor accounting for changes in CT services (88%) and cost (65%) over the study period. While this study cannot conclude if the increase is appropriate, it does represent a shift in how CT is used, relative to when many CT services were listed for public funding. This 'scope shift' poses questions as to the need for and frequency of retrospective/ongoing review of publicly funded services, as medical advances and other demand- or supply-side factors change the way health services are used. Copyright © 2017 Elsevier B.V. All rights reserved.
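Using only the per capita and total figures quoted above, a simple two-factor log-mean split illustrates the flavour of such a decomposition: total scans = population × CPC, so the growth splits additively on a log scale. This is a deliberately simplified sketch; the study's own decomposition also accounts for population structure and cost, so the share computed here need not match its 88% figure.

```python
import math

# Figures quoted in the abstract: scans per 1000 population and total scans.
cpc_1994, cpc_2013 = 33.0, 112.0               # billed scans per 1000 people
total_1994, total_2013 = 572925.0, 2540546.0   # total billed scans per annum

# Implied populations (in thousands): total scans / scans-per-1000.
pop_1994 = total_1994 / cpc_1994
pop_2013 = total_2013 / cpc_2013

# Total = population x CPC, so log growth decomposes additively into
# a population share and a CPC (practice-change) share.
g_total = math.log(total_2013 / total_1994)
share_cpc = math.log(cpc_2013 / cpc_1994) / g_total
share_pop = math.log(pop_2013 / pop_1994) / g_total
```

By construction the two shares sum to one, and the CPC share dominates, which is the qualitative finding of the study.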
Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2012-01-01
The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the properties of arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). This experiment simulated a sequence of continuous changes of blood pressure, rising from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of the EEMD for signal analysis.
Changes in energy intensities of Thai industry between 1981 and 2000: a decomposition analysis
International Nuclear Information System (INIS)
Bhattacharyya, S.C.; Ussanarassamee, Arjaree
2005-01-01
Industrial demand accounts for about 30% of total final energy demand in Thailand, which has experienced rapid increases in energy demand. This paper analyzes the changes in industrial energy intensities over a period of 20 years (1981-2000) and identifies the factors affecting energy consumption using the logarithmic mean Divisia decomposition technique. It is found that Thai industry has passed through four different phases of growth, and energy consumption has closely followed the industrial growth pattern. Energy intensity of Thai industry decreased from 17.6 toe/million baht (constant 1988 prices) in 1981 to 15.8 toe/million baht (1988 prices) in 2000. The non-metallic mineral industry is the most energy-intensive industry, followed by the basic metal, food and beverage, chemical and paper industries. The factor analysis indicates that both the structural effect and the intensity effect contributed to a decline of aggregate intensity by 8% during 1981-1986, but in the rest of the periods the two effects acted in opposite directions, thereby reducing the overall effect on aggregate intensity. The food and beverage, non-metallic mineral and chemical industries significantly influenced the changes in aggregate intensity at the sectoral level.
International Nuclear Information System (INIS)
Wang, Qunwei; Chiu, Yung-Ho; Chiu, Ching-Ren
2015-01-01
Research on the driving factors behind carbon dioxide emission changes in China can inform better carbon emission reduction policies and help develop a low-carbon economy. As one of the important methods, production-theoretical decomposition analysis (PDA) has been widely used to understand these driving factors. To avoid the infeasibility issue in solving the linear programming problems, this study proposed a modified PDA approach to decompose carbon dioxide emission changes into seven drivers. Using 2005–2010 data, the study found that economic development was the largest factor increasing carbon dioxide emissions. The second factor was energy structure (reflecting potential carbon), and the third factor was low energy efficiency. Technological advances, energy intensity reductions, and carbon dioxide emission efficiency improvements were the negative driving factors reducing carbon dioxide emission growth rates. Carbon dioxide emissions and driving factors varied significantly across east, central and west China. - Highlights: • A modified PDA is used to decompose carbon dioxide emission changes into seven drivers. • Two models were proposed to ameliorate the infeasible occasions. • Economic development was the largest factor increasing CO_2 emissions in China.
Energy Technology Data Exchange (ETDEWEB)
Donald Estep; Michael Holst; Simon Tavener
2010-02-08
This project was concerned with accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) accurate and efficient computation; (2) complex stability; and (3) linking different physics. The research in this project focused on Multiscale Operator Decomposition (MOD) methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2017-04-11
First-principles quantum mechanical calculations with methods such as density functional theory (DFT) allow the accurate calculation of interaction energies between molecules. These interaction energies can be dissected into chemically relevant components such as electrostatics, polarization, and charge transfer using energy decomposition analysis (EDA) approaches. Typically, EDA has been used to study interactions between small molecules; however, it has great potential to be applied to large biomolecular assemblies such as protein-protein and protein-ligand interactions. We present an application of EDA calculations to the study of ligands that bind to the thrombin protein, using the ONETEP program for linear-scaling DFT calculations. Our approach goes beyond simply providing the components of the interaction energy; we are also able to provide visual representations of the changes in density that happen as a result of polarization and charge transfer, thus pinpointing the functional groups between the ligand and protein that participate in each kind of interaction. We also demonstrate with this approach that we can focus on studying parts (fragments) of ligands. The method is relatively insensitive to the protocol used to prepare the structures, and the results obtained are therefore robust. This is an application of a whole new capability to a real protein drug target, where accurate DFT calculations can produce both energetic and visual descriptors of interactions. These descriptors can be used to provide insights for tailoring interactions, as needed for example in drug design.
International Nuclear Information System (INIS)
Oladosu, Gbadebo
2009-01-01
This paper employs the empirical mode decomposition (EMD) method to filter cyclical components of US quarterly gross domestic product (GDP) and quarterly average oil price (West Texas Intermediate - WTI). The method is adaptive and applicable to non-linear and non-stationary data. A correlation analysis of the resulting components is performed and examined for insights into the relationship between oil and the economy. Several components of this relationship are identified. However, the principal one is that the medium-run component of the oil price has a negative relationship with the main cyclical component of the GDP. In addition, weak correlations suggesting a lagging, demand-driven component and a long-run component of the relationship were also identified. Comparisons of these findings with significant oil supply disruption and recession dates were supportive. The study identifies a number of lessons applicable to recent oil market events, including the eventuality of persistent oil price and economic decline following a long oil price run-up. In addition, it was found that oil market related exogenous events are associated with short- to medium-run price implications regardless of whether they lead to actual supply losses. (author)
Singular value decomposition based feature extraction technique for physiological signal analysis.
Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C
2012-06-01
Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of a physiological signal. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is used to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show that the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF).
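As a minimal illustration of singular values as signal features, the function below computes the singular values of a 2x2 segment matrix in closed form, as the square roots of the eigenvalues of AᵀA. A real pipeline would apply an SVD routine to much larger signal matrices and feed the leading singular values to the SVM; this is our sketch, not the paper's implementation.

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 matrix A, largest first, computed as the
    square roots of the eigenvalues of the Gram matrix G = A^T A
    (closed-form quadratic for the 2x2 case)."""
    (a, b), (c, d) = A
    # Gram matrix G = [[p, q], [q, r]]
    p = a * a + c * c
    q = a * b + c * d
    r = b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return [math.sqrt((tr + disc) / 2.0),
            math.sqrt(max((tr - disc) / 2.0, 0.0))]
```

Sanity checks: for a diagonal matrix the singular values are the absolute diagonal entries, and the product of the singular values always equals |det A|.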
Acoustics flow analysis in circular duct using sound intensity and dynamic mode decomposition
International Nuclear Information System (INIS)
Weyna, S
2014-01-01
Sound intensity generation in a hard-walled duct with acoustic flow (no mean flow) is treated experimentally and shown graphically. In the paper, numerous methods of visualization illustrating the vortex flow (2D, 3D) graphically explain the diffraction and scattering phenomena occurring inside the duct and around the open end area. Sound intensity investigation in an annular duct gives a physical picture of sound waves in any duct mode. Modal energy analysis is discussed with particular reference to acoustic orthogonal decomposition (AOD). The images of sound intensity fields below and above the 'cut-off' frequency region are compared to identify the acoustic modes which might resonate in the duct. The experimental results also show the effects of axial and swirling flow. The acoustic field is extremely complicated, however, because pressures in non-propagating (cut-off) modes cooperate with the particle velocities in propagating modes, and vice versa. Measurement in a cylindrical duct also demonstrates the cut-off phenomenon and the effect of reflection from the open end. The aim of the experimental study was to obtain information on low Mach number flows in ducts in order to improve physical understanding and validate theoretical CFD and CAA models that may still be improved.
Properties of gamma-ray burst time profiles using pulse decomposition analysis
Energy Technology Data Exchange (ETDEWEB)
Lee, A.
2000-02-08
The time profiles of many gamma-ray bursts consist of distinct pulses, which offers the possibility of characterizing the temporal structure of these bursts using a relatively small set of pulse shape parameters. This pulse decomposition analysis has previously been performed on a small sample of bright long bursts using binned data from BATSE, which comes in several data types, and on a sample of short bursts using the BATSE Time-Tagged Event (TTE) data type. The authors have developed an interactive pulse-fitting program using the phenomenological pulse model of Norris et al. and a maximum-likelihood fitting routine. They have used this program to analyze the Time-to-Spill (TTS) data for all bursts observed by BATSE up through trigger number 2000, in all energy channels for which TTS data are available. They present statistical information on the attributes of pulses comprising these bursts, including relations between pulse characteristics through the course of a burst. They carry out simulations to determine the biases that their procedures may introduce. They find that pulses tend to have shorter rise times than decay times, and tend to be narrower and peak earlier at higher energies. They also find that pulse brightness, pulse width, and pulse hardness ratios do not evolve monotonically within bursts, but that the ratio of pulse rise time to decay time tends to decrease with time within bursts.
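The phenomenological pulse shape referred to above can be sketched as a two-sided stretched exponential with separate rise and decay widths. The exact parameterization the authors fitted may differ in detail, so treat this as an illustrative form rather than their code.

```python
import math

def pulse_shape(t, A, t_max, sigma_r, sigma_d, nu):
    """Two-sided stretched-exponential pulse:
    I(t) = A * exp(-(|t - t_max| / sigma)**nu),
    with width sigma = sigma_r on the rising side (t < t_max) and
    sigma = sigma_d on the decaying side; nu controls peakedness."""
    sigma = sigma_r if t < t_max else sigma_d
    return A * math.exp(-(abs(t - t_max) / sigma) ** nu)
```

With sigma_r < sigma_d the pulse rises faster than it decays, matching the asymmetry (shorter rise times than decay times) reported in the pulse statistics.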
Directory of Open Access Journals (Sweden)
Jinlu Sheng
2016-07-01
Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition, Shannon entropy, and an improved kernel principal component analysis model is proposed. First, the features are extracted by the time–frequency domain method of local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so the nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain the bearing running state classification model, and the bearing running state was thereby identified. Both test cases and actual cases were analyzed.
Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.
2018-05-01
The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, NH3 oxidation flow rate of 4 NL min-1, and fuel-oxygen equivalence ratio of 1.4. Under these flows, NH3 conversion of 99.8% and H2 equivalent fuel cell power output of 0.71 kWe is achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.
International Nuclear Information System (INIS)
Yuan, Dong-hai; Guo, Xu-jing; Wen, Li; He, Lian-sheng; Wang, Jing-gang; Li, Jun-qi
2015-01-01
Fluorescence excitation-emission matrix (EEM) spectra coupled with parallel factor analysis (PARAFAC) was used to characterize dissolved organic matter (DOM) derived from macrophyte decomposition, and to study its complexation with Cu (II) and Cd (II). Both the protein-like and the humic-like components showed a marked quenching effect by Cu (II). Negligible quenching effects were found for Cd (II) by components 1, 5 and 6. The stability constants and the fraction of the binding fluorophores for humic-like components and Cu (II) can be influenced by macrophyte decomposition of various weight gradients in aquatic plants. Macrophyte decomposition within the scope of the appropriate aquatic phytomass can maximize the stability constant of DOM-metal complexes. A large amount of organic matter was introduced into the aquatic environment by macrophyte decomposition, suggesting that the potential risk of DOM as a carrier of heavy metal contamination in macrophytic lakes should not be ignored. - Highlights: • Macrophyte decomposition increases fluorescent DOM components in the upper sediment. • Protein-like components are quenched or enhanced by adding Cu (II) and Cd (II). • Macrophyte decomposition DOM can impact the affinity of Cu (II) and Cd (II). • The log K_M and f values showed a marked change due to macrophyte decomposition. • Macrophyte decomposition can maximize the stability constant of DOM-Cu (II) complexes. - Macrophyte decomposition DOM can influence the binding affinity of metal ions in macrophytic lakes.
SVM-dependent pairwise HMM: an application to protein pairwise alignments.
Orlando, Gabriele; Raimondi, Daniele; Khan, Taushif; Lenaerts, Tom; Vranken, Wim F
2017-12-15
Methods able to provide reliable protein alignments are crucial for many bioinformatics applications. In recent years many different algorithms have been developed, and various kinds of information, from sequence conservation to secondary structure, have been used to improve alignment performance. This is especially relevant for proteins with highly divergent sequences. However, recent works suggest that different features may have different importance in diverse protein classes, and it would be an advantage to have more customizable approaches, capable of dealing with different alignment definitions. Here we present Rigapollo, a highly flexible pairwise alignment method based on a pairwise HMM-SVM that can use any type of information to build alignments. Rigapollo lets the user decide the optimal features to align their protein class of interest. It outperforms current state-of-the-art methods on two well-known benchmark datasets when aligning highly divergent sequences. A Python implementation of the algorithm is available at http://ibsquare.be/rigapollo. wim.vranken@vub.be. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
An ill-conditioning conformal radiotherapy analysis based on singular values decomposition
International Nuclear Information System (INIS)
Lefkopoulos, D.; Grandjean, P.; Bendada, S.; Dominique, C.; Platoni, K.; Schlienger, M.
1995-01-01
Clinical experience in stereotactic radiotherapy of irregular complex lesions has shown that optimization algorithms are necessary to improve the dose distribution. We have developed a general optimization procedure which can be applied to different conformal irradiation techniques. In this presentation this procedure is tested on the stereotactic radiotherapy modality for complex cerebral lesions treated with a multi-isocentric technique based on the 'associated targets methodology'. In this inverse procedure we use singular value decomposition (SVD) analysis, which proposes several optimal solutions for the narrow-beam weights of each isocentre. The SVD analysis quantifies the ill-conditioning of the dosimetric calculation of the stereotactic irradiation using the condition number, which is the ratio of the largest to the smallest singular value. Our dose distribution optimization approach consists in studying the influence of the irradiation parameters on the stereotactic radiotherapy inverse problem. The adjustment of the different irradiation parameters in the 'SVD optimizer' procedure is realized taking into account the ratio of reconstruction quality to calculation time. This will permit a more efficient use of the 'SVD optimizer' in clinical applications for real 3D lesions. The evaluation criteria for the choice of satisfactory solutions are based on dose-volume histograms and clinical considerations. We present the efficiency of the 'SVD optimizer' in analyzing and predicting the ill-conditioning in stereotactic radiotherapy and in recognizing the topography of the different beams in order to create an optimal reconstructed weighting vector. The planning of stereotactic treatments using the 'SVD optimizer' is examined for mono-isocentrically and complex dual-isocentrically treated lesions. The application of the SVD optimization technique provides conformal dose distribution for complex intracranial lesions. It is a general optimization procedure.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
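A bare-bones version of the PERMANOVA pseudo-F statistic and its permutation p-value can be written directly from a distance matrix. This sketch omits the distance-matrix simulation framework and the ω² effect-size machinery the paper describes, and is not the accompanying R package; it only illustrates the core partition of within-group and between-group distances.

```python
import itertools
import random

def permanova(D, labels, n_perm=999, seed=0):
    """Pseudo-F and permutation p-value for a symmetric distance matrix D
    (list of lists) and a list of group labels, following the standard
    PERMANOVA partition of squared pairwise distances."""
    N = len(labels)
    groups = sorted(set(labels))
    a = len(groups)

    def pseudo_F(lab):
        ss_total = sum(D[i][j] ** 2
                       for i, j in itertools.combinations(range(N), 2)) / N
        ss_within = 0.0
        for g in groups:
            idx = [i for i in range(N) if lab[i] == g]
            ss_within += sum(D[i][j] ** 2
                             for i, j in itertools.combinations(idx, 2)) / len(idx)
        ss_between = ss_total - ss_within
        return (ss_between / (a - 1)) / (ss_within / (N - a))

    f_obs = pseudo_F(labels)
    rng = random.Random(seed)
    # Permutation null: shuffle labels, count statistics >= observed.
    hits = sum(pseudo_F(rng.sample(labels, N)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)
```

For well-separated groups the between-group sum of squares dominates and the pseudo-F is large; the p-value is bounded below by 1/(n_perm + 1).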
Thermal and X-ray diffraction analysis studies during the decomposition of ammonium uranyl nitrate.
Kim, B H; Lee, Y B; Prelas, M A; Ghosh, T K
Two types of ammonium uranyl nitrate, (NH4)2UO2(NO3)4·2H2O and NH4UO2(NO3)3, were thermally decomposed and reduced in a TG-DTA unit in nitrogen, air, and hydrogen atmospheres. The various intermediate phases produced by the thermal decomposition and reduction process were investigated by X-ray diffraction analysis and TG/DTA analysis. Both (NH4)2UO2(NO3)4·2H2O and NH4UO2(NO3)3 decomposed to amorphous UO3 (A-UO3) regardless of the atmosphere used. The amorphous UO3 from (NH4)2UO2(NO3)4·2H2O crystallized to γ-UO3 regardless of the atmosphere used, without a change in weight. The amorphous UO3 obtained from decomposition of NH4UO2(NO3)3 crystallized to α-UO3 under nitrogen and air atmospheres, and to β-UO3 under a hydrogen atmosphere, without a change in weight. Under each atmosphere, the reaction paths of (NH4)2UO2(NO3)4·2H2O and NH4UO2(NO3)3 were as follows. Under a nitrogen atmosphere: (NH4)2UO2(NO3)4·2H2O → (NH4)2UO2(NO3)4·H2O → (NH4)2UO2(NO3)4 → NH4UO2(NO3)3 → A-UO3 → γ-UO3 → U3O8; NH4UO2(NO3)3 → A-UO3 → α-UO3 → U3O8. Under an air atmosphere: (NH4)2UO2(NO3)4·2H2O → (NH4)2UO2(NO3)4·H2O → (NH4)2UO2(NO3)4 → NH4UO2(NO3)3 → A-UO3 → γ-UO3 → U3O8; NH4UO2(NO3)3 → A-UO3 → α-UO3 → U3O8. Under a hydrogen atmosphere: (NH4)2UO2(NO3)4·2H2O → (NH4)2UO2(NO3)4·H2O → (NH4)2UO2(NO3)4 → NH4UO2(NO3)3 → A-UO3 → γ-UO3 → α-U3O8 → UO2; NH4UO2(NO3)3 → A-UO3 → β-UO3 → α-U3O8 → UO2.
Modal analysis of fluid flows using variants of proper orthogonal decomposition
Rowley, Clarence; Dawson, Scott
2017-11-01
This talk gives an overview of several methods for analyzing fluid flows, based on variants of proper orthogonal decomposition. These methods may be used to determine simplified, approximate models that capture the essential features of these flows, in order to better understand the dominant physical mechanisms, and potentially to develop appropriate strategies for model-based flow control. We discuss balanced proper orthogonal decomposition as an approximation of balanced truncation, and explain connections with system identification methods such as the eigensystem realization algorithm. We demonstrate the methods on several canonical examples, including a linearized channel flow and the flow past a circular cylinder. Supported by AFOSR, Grant FA9550-14-1-0289.
Retrospective and Prospective Decomposition Analysis of Chinese Manufacturing Energy Use, 1995-2020
Energy Technology Data Exchange (ETDEWEB)
Hasanbeigi, Ali [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division, Environmental Impacts Dept., China Energy Group; Price, Lynn [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division, Environmental Impacts Dept., China Energy Group; Fino-Chen, Cecilia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division, Environmental Impacts Dept., China Energy Group; Lu, Hongyou [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division, Environmental Impacts Dept., China Energy Group; Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division, Environmental Impacts Dept., China Energy Group
2013-01-15
In 2010, China was responsible for nearly 20 percent of global energy use and 25 percent of energy-related carbon dioxide (CO_{2}) emissions. Unlike most countries, China’s energy consumption pattern is unique because the industrial sector dominates the country’s total energy consumption, accounting for about 70 percent of energy use and 72 percent of CO_{2} emissions in 2010. For this reason, the development path of China’s industrial sector will greatly affect future energy demand and dynamics of not only China, but the entire world. A number of analyses of historical trends have been conducted, but careful projections of the key factors affecting China’s industry sector energy use over the next decade are scarce. This study analyzes industrial energy use and the economic structure of the Chinese manufacturing sector in detail. First, the study analyzes the energy use of and output from 18 industry sub-sectors. Then, retrospective (1995-2010) and prospective (2010-2020) decomposition analyses are conducted for these industrial sectors in order to show how different factors (production growth, structural change, and energy intensity change) influenced industrial energy use trends in China over the last 15 years and how they will do so over the next 10 years. The results of this study will allow policy makers to quantitatively compare the level of structural change in the past and in the years to come and adjust their policies if needed to move towards the target of less energy-intensive industries. The scenario analysis shows the structural change achieved through different paths and helps to understand the consequences of supporting or limiting the growth of certain manufacturing subsectors from the point of view of energy use and structural change. The results point out the industries that have the largest influence on such structural change.
Zhang, Cuihong; Zhao, Chunxia; Liu, Xiangyu; Wei, Qianwei; Luo, Shusheng; Guo, Sufang; Zhang, Jingxu; Wang, Xiaoli; Scherpbier, Robert W
2017-12-08
Previous studies of inequality in children's health have focused more on physical health than on neurodevelopment. In this study, we aimed to evaluate the inequality in early childhood neurodevelopment in poor rural China and explore the contributions of socioeconomic factors to the inequality. Information on 2120 children aged 0 to 35 months and their households in six poor rural counties of China was collected during July-September 2013. The Ages and Stages Questionnaire (Chinese version), the concentration index and decomposition analysis were used to assess the neurodevelopment of early childhood, measure its inequality and evaluate the contributions of socioeconomic factors to the inequality, respectively. The prevalence of suspected developmental delay in children under 35 months of age in six poor rural counties of China was nearly 40%, with a concentration index of -0.0877. Household economic status, caregivers' depressive symptoms, learning materials and family support for learning were significantly associated with children's suspected developmental delay, and explained 34.1%, 14.1%, 8.9% and 7.0% of the inequality in early childhood neurodevelopment, respectively. Early childhood neurodevelopment in the surveyed area is poor and unequal. Factors including household economic status, caregivers' depressive symptoms, learning materials and family support for learning are significantly associated with children's suspected developmental delay and early developmental inequality. The results highlight the urgent need for monitoring child neurodevelopment in poor rural areas. Interventions targeting caregivers' depressive symptoms, providing learning materials and developmentally appropriate stimulating activities may help improve early childhood neurodevelopment and reduce its inequality.
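The concentration index used here has a compact definition: twice the covariance between the health variable and the fractional socioeconomic rank, divided by the mean of the health variable. The sketch below implements that formula; the data in the test are made up for illustration, not the study's.

```python
def concentration_index(health, ses):
    """Concentration index of a health variable with respect to a
    socioeconomic ranking: CI = 2*cov(h, r)/mean(h), where r is the
    fractional rank after sorting by SES (poorest first). A negative
    CI means the variable is concentrated among the poor."""
    n = len(health)
    order = sorted(range(n), key=lambda i: ses[i])
    h = [health[i] for i in order]
    ranks = [(k + 0.5) / n for k in range(n)]  # fractional ranks in (0, 1)
    mu = sum(h) / n
    cov = sum((h[k] - mu) * (ranks[k] - 0.5) for k in range(n)) / n
    return 2.0 * cov / mu
```

If a disadvantage indicator (such as suspected developmental delay) is more common among poorer households, the index is negative, as with the -0.0877 reported above; a perfectly equal distribution gives zero.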
International Nuclear Information System (INIS)
Wang, Yafei; Zhao, Hongyan; Li, Liying; Liu, Zhu; Liang, Sai
2013-01-01
As the capital of China, Beijing is regarded as a major metropolis in the world. Study of the variation in temporal CO2 emissions generated by the driving forces in Beijing can provide guidance for policy decisions on CO2 emissions mitigation in global metropolises. Based on input–output structural decomposition analysis (IO-SDA), we analysed the driving forces for the increment in CO2 emissions in Beijing from both production and final demand perspectives during 1997–2010. According to our results, the CO2 emission growth in Beijing is driven mainly by production structure change and population growth, partly offset by CO2 emission intensity reduction as well as the decline in per capita final demand volume during the study period. Final demand structure change has a limited effect on the change in CO2 emissions in Beijing. From the final demand perspective, urban trades, urban residential consumption, government consumption and fixed capital formation are mainly responsible for the booming emissions. This study showed how the “top-down” IO-SDA methodology was implemented on a city scale. Policy implications from this study would be helpful for addressing CO2 emissions mitigation in global capital cities and metropolises. - Highlights: • Changes in production structure and population are drivers of the CO2 increment. • Changes in CO2 intensity and per capita GDP are forces offsetting the CO2 increment. • Final demand structure change has a limited effect on Beijing's CO2 emission change. • Beijing's key final demand categories and economic sectors are identified. • Policy implications of Beijing's results are analyzed.
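The production-perspective decomposition can be illustrated with a generic two-polar SDA of E = f(I - A)^-1 y. The two-sector numbers below are invented for illustration, and the paper's actual factor set is richer (population, per capita demand volume, etc.); the key property shown is that the effects sum exactly to the total change:

```python
import numpy as np

def io_sda(f0, A0, y0, f1, A1, y1):
    """Two-polar input-output SDA of emissions E = f (I - A)^-1 y.

    f: sectoral emission intensities, A: technical coefficients, y: final
    demand, for a base year (0) and an end year (1). Returns the intensity,
    production-structure and final-demand effects plus the total change;
    the three effects sum to the total exactly.
    """
    I = np.eye(len(y0))
    L0 = np.linalg.inv(I - A0)                # Leontief inverses
    L1 = np.linalg.inv(I - A1)
    df, dL, dy = f1 - f0, L1 - L0, y1 - y0
    # average of the two "polar" decompositions (Dietzenbacher-Los style)
    intensity = 0.5 * (df @ L1 @ y1 + df @ L0 @ y0)
    structure = 0.5 * (f0 @ dL @ y1 + f1 @ dL @ y0)
    demand = 0.5 * (f0 @ L0 @ dy + f1 @ L1 @ dy)
    total = f1 @ L1 @ y1 - f0 @ L0 @ y0
    return intensity, structure, demand, total
```

Each polar form telescopes to the total change, so their average does too, which is what makes the attribution of the increment to individual drivers consistent.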
International Nuclear Information System (INIS)
Futatani, S.; Benkadda, S.; Del-Castillo-Negrete, D.
2009-01-01
The spatiotemporal multiscale dynamics of the turbulent transport of impurities is studied in the context of collisional drift wave turbulence. Two turbulence regimes are considered: a quasihydrodynamic regime and a quasiadiabatic regime. The impurity is assumed to be a passive scalar advected by the corresponding E×B turbulent flow in the presence of diffusion. Two mixing scenarios are studied: a freely decaying case, and a forced case in which the scalar is forced by an externally imposed gradient. The results of the direct numerical simulations are analyzed using proper orthogonal decomposition (POD) techniques. The multiscale analysis is based on a space-time separable POD of the impurity field. The low rank spatial POD eigenfunctions capture the large scale coherent structures and the high rank eigenfunctions capture the small scale fluctuations. The temporal evolution at each scale is dictated by the corresponding temporal POD eigenfunctions. Contrary to the decaying case, in which the POD spectrum decays fast, the spectrum in the forced case is relatively flat. The most striking difference between these two mixing scenarios is in the temporal dynamics of the small scale structures. In the decaying case the POD reveals the presence of 'bursty' dynamics in which successively smaller (higher POD rank) scales are intermittently activated during the mixing process. On the other hand, in the forced simulations the temporal dynamics exhibits stationary fluctuations. Spatial intermittency or 'patchiness' in the mixing process characterizes the distribution of the passive tracer in the decaying quasihydrodynamic regime. In particular, in this case the probability distribution function of the low rank POD spatial reconstruction error is non-Gaussian. The spatiotemporal POD scales exhibit a diffusive-type scaling in the quasiadiabatic regime. However, this scaling seems to be absent in the quasihydrodynamic regime that shows no scaling (in the decaying case) or two
China's energy consumption under the global economic crisis: Decomposition and sectoral analysis
International Nuclear Information System (INIS)
Li, Fangyi; Song, Zhouying; Liu, Weidong
2014-01-01
It is now widely recognized that there is a strong relationship between energy consumption and economic growth. Most countries' energy demands declined during the worldwide economic crisis of 2008–2009. As an export-oriented economy, China suffered a serious decline in exports in the course of the crisis. However, its energy consumption continued to increase. Against this background, this paper aims to assess and explain the factors causing the growth of energy consumption in China. First, we explain the impact of domestic final use and international trade on energy consumption by using decomposition analysis. Second, embodied energy and its variation across sectors are quantified to identify the key sectors contributing to the growth. Lastly, the policy implications for long-term energy conservation are discussed. The results show that the decline in exports was one of the driving forces for energy consumption reduction during the crisis, but that the growth of domestic demand in manufacturing and construction, largely stimulated by economic stimulus plans, had the opposite effect on energy consumption. International trade contributed to decreasing China's energy consumption during and after the crisis because the structure of exports and imports changed in this period. - Highlights: • We analyze the reasons for China's energy consumption change under the global economic crisis during 2007–2010. • Domestic final use growth, especially in construction and the manufacturing of machinery and equipment, resulted in an energy consumption increase. • International trade is identified as a driver of energy consumption reduction during and after the crisis. • Increasing China's share of consumption, or reducing its share of investment, in GDP can reduce national energy intensity.
Temporal associations between weather and headache: analysis by empirical mode decomposition.
Directory of Open Access Journals (Sweden)
Albert C Yang
Full Text Available BACKGROUND: Patients frequently report that weather changes trigger headache or worsen existing headache symptoms. Recently, the method of empirical mode decomposition (EMD has been used to delineate temporal relationships in certain diseases, and we applied this technique to identify intrinsic weather components associated with headache incidence data derived from a large-scale epidemiological survey of headache in the Greater Taipei area. METHODOLOGY/PRINCIPAL FINDINGS: The study sample consisted of 52 randomly selected headache patients. The weather time-series parameters were detrended by the EMD method into a set of embedded oscillatory components, i.e. intrinsic mode functions (IMFs. Multiple linear regression models with forward stepwise methods were used to analyze the temporal associations between weather and headaches. We found no associations between the raw time series of weather variables and headache incidence. For decomposed intrinsic weather IMFs, temperature, sunshine duration, humidity, pressure, and maximal wind speed were associated with headache incidence during the cold period, whereas only maximal wind speed was associated during the warm period. In analyses examining all significant weather variables, IMFs derived from temperature and sunshine duration data accounted for up to 33.3% of the variance in headache incidence during the cold period. The association of headache incidence and weather IMFs in the cold period coincided with the cold fronts. CONCLUSIONS/SIGNIFICANCE: Using EMD analysis, we found a significant association between headache and intrinsic weather components, which was not detected by direct comparisons of raw weather data. Contributing weather parameters may vary in different geographic regions and different seasons.
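The sifting idea behind EMD can be sketched in a few lines. This is a toy implementation (cubic-spline envelopes, fixed iteration budget, no formal stopping criterion), not a substitute for a production EMD library; the two-tone test signal is invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=8):
    """Extract one intrinsic mode function (IMF) by envelope sifting."""
    h = x.copy()
    for _ in range(n_iter):
        maxi = argrelextrema(h, np.greater)[0]
        mini = argrelextrema(h, np.less)[0]
        if len(maxi) < 4 or len(mini) < 4:        # too few extrema to spline
            break
        upper = CubicSpline(t[maxi], h[maxi])(t)  # upper envelope
        lower = CubicSpline(t[mini], h[mini])(t)  # lower envelope
        h = h - 0.5 * (upper + lower)             # subtract local envelope mean
    return h

def emd(x, t, n_imfs=3):
    """Peel off n_imfs IMFs; whatever is left is the residual trend."""
    imfs, res = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(res, t)
        imfs.append(imf)
        res = res - imf
    return imfs, res
```

By construction the IMFs plus the residual reconstruct the signal exactly, and on a clean two-tone input the first IMF tracks the fast oscillation, which is the property the weather analysis relies on.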
International Nuclear Information System (INIS)
Zeng, Lin; Xu, Ming; Liang, Sai; Zeng, Siyu; Zhang, Tianzhu
2014-01-01
The decline of China's energy intensity slowed after 2000, and during 2002–2005 energy intensity actually increased, reversing the long-term trend. It is therefore important to identify the drivers of this fluctuation in energy intensity. We use input–output structural decomposition analysis to investigate the contributions of changes in energy mix, sectoral energy efficiency, production structure, final demand structure, and final demand category composition to China's energy intensity fluctuation during 1997–2007. We include household energy consumption in the study by closing the input–output model with respect to households. Results show that sectoral energy efficiency improvements contributed the most to the energy intensity decline during 1997–2007. The increase in China's energy intensity during 2002–2007 is instead explained by changes in final demand composition and production structure. Changes in final demand composition are mainly due to the increasing share of exports, while changes in production structure mainly arise from the shift of the Chinese economy towards more energy-intensive industries. Changes in energy mix and final demand structure contribute little to China's energy intensity fluctuation. From the consumption perspective, growing exports of energy-intensive products and increasing infrastructure demands explain the majority of the energy intensity increase during 2002–2007. - Highlights: • We analyzed energy intensity change from production and consumption perspectives. • We extended the research scope of energy intensity to cover household consumption. • Sectoral energy efficiency improvement contributed most to the energy intensity decline. • The impact of production structure change on energy intensity varied over time. • Growing export demand became the main driver of China's energy intensity increase.
Fechner, Hanna B; Schooler, Lael J; Pachur, Thorsten
2018-01-01
Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort (or cognitive costs) associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and forgoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded. Copyright © 2017 Elsevier B.V. All rights reserved.
Pseudo inputs for pairwise learning with Gaussian processes
DEFF Research Database (Denmark)
Nielsen, Jens Brehm; Jensen, Bjørn Sand; Larsen, Jan
2012-01-01
We consider learning and prediction of pairwise comparisons between instances. The problem is motivated from a perceptual viewpoint, where pairwise comparisons serve as an effective and extensively used paradigm. A state-of-the-art method for modeling pairwise data in high dimensional domains… is based on a classical pairwise probit likelihood imposed with a Gaussian process prior. While extremely flexible, this non-parametric method struggles with an inconvenient O(n^3) scaling in terms of the n input instances, which limits the method to smaller problems. To overcome this, we derive… to other similar approximations that have been applied in standard Gaussian process regression and classification problems, such as FI(T)C and PI(T)C…
Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.
2017-06-01
We analyzed the decomposition mechanisms of trimethylgallium (TMG), used as the gallium source in GaN fabrication, based on first-principles calculations and thermodynamic analysis. We considered two conditions: one under a total pressure of 1 atm, and the other under metal organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. In GaN MOVPE, TMG with H2 spontaneously decomposes into Ga(CH3), and Ga(CH3) decomposes into Ga atom gas when the temperature is higher than 440 K. From these calculations, we confirmed that TMG becomes Ga atom gas near the GaN substrate surface.
International Nuclear Information System (INIS)
Okushima, Shinichiro; Tamura, Makoto
2007-01-01
The purpose of this paper is to present a new approach to evaluating structural change of the economy in a multisector general equilibrium framework. The multiple calibration technique is applied to an ex post decomposition analysis of structural change between periods, enabling the distinction between price substitution and technological change to be made for each sector. This approach has the advantage of sounder microtheoretical underpinnings when compared with conventional decomposition methods. The proposed technique is empirically applied to changes in energy use and carbon dioxide (CO2) emissions in the Japanese economy from 1970 to 1995. The results show that technological change is of great importance for curtailing energy use and CO2 emissions in Japan. Total CO2 emissions increased during this period primarily because of economic growth, which is represented by final demand effects. On the other hand, effects such as technological change for labor or energy mitigated the increase in CO2 emissions.
International Nuclear Information System (INIS)
Yeh, J-R; Lin, T-Y; Shieh, J-S; Chen, Y; Huang, N E; Wu, Z; Peng, C-K
2008-01-01
In this investigation, surgical operations of blocked intestinal artery were conducted on pigs to simulate the condition of acute mesenteric arterial occlusion. The empirical mode decomposition method and an algorithm of linguistic analysis were applied to verify the blood pressure signals in the simulated situation. We assumed that there was some information hidden in the high-frequency part of the blood pressure signal when an intestinal artery is blocked. The empirical mode decomposition (EMD) method has been applied to decompose a complex time series into intrinsic mode functions (IMFs). However, end effects and the phenomenon of intermittence damage the consistency of each IMF. Thus, we proposed the complementary ensemble empirical mode decomposition (CEEMD) method to solve the problems of end effects and the phenomenon of intermittence. The main wave of the blood pressure signals can be reconstructed from the main components, identified by Monte Carlo verification, and removed from the original signal to derive a riding wave. Furthermore, the concept of linguistic analysis was applied to design a blocking index that verifies the pattern of the riding wave of blood pressure using measurements of dissimilarity. The blocking index works well to identify the situation in which the sampled time series of the blood pressure signal was recorded. Here, these two quite different algorithms are successfully integrated, and the existence of information hidden in the high-frequency part of the blood pressure signal has been proven.
International Nuclear Information System (INIS)
Li, DuoQi; Wang, DuanYi
2016-01-01
Freeways have become the main subject of energy conservation reviews in China because of their large energy consumption. Decomposition analysis has been widely applied in energy consumption studies. However, most studies have only analyzed the driving forces of energy consumption at the national level, and seldom at smaller levels, e.g. for departments such as toll stations and maintenance centers. Based on the characteristics of the freeway operation period, this paper serves as a preliminary attempt at applying the logarithmic mean Divisia index (LMDI) method I at the department level to analyze the energy consumption of freeways during the operation period. To elucidate the evolution of energy consumption during the operational period, an LMDI analysis was performed to disentangle the energy consumption based on a real case in Guangdong from 2005 to 2013. The analyses establish that traffic volume influences energy consumption. The results show that some departments can influence the total energy consumption. For example, the tunnels and toll stations are key factors that influence the total energy consumption of the whole road. The decomposition at the department level revealed that energy efficiency improvements caused by the use of alternative materials and energy-saving technologies mainly contributed to energy conservation. - Highlights: • A decomposition model was used to analyze the energy consumption of freeway management. • This attempt identifies the impact of traffic volume on management energy consumption. • Tunnels and toll stations mainly affect the management energy consumption. • Using alternative materials and energy-saving technology improved energy efficiency.
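A minimal sketch of the LMDI-I formula may help. For an identity such as E_i = activity_i × intensity_i over departments, each factor's effect uses the logarithmic mean as a weight, and the effects sum exactly to the total change in E. The department data below are invented, not the Guangdong figures:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    with np.errstate(divide="ignore", invalid="ignore"):
        lm = (a - b) / (np.log(a) - np.log(b))
    return np.where(np.isclose(a, b), a, lm)

def lmdi_effect(e0, e1, x0, x1):
    """LMDI-I contribution of factor x to the change in E = sum(e).

    e0, e1: departmental energy use in the base and end year;
    x0, x1: the factor's departmental values in the two years.
    """
    return float(np.sum(logmean(e1, e0) * np.log(x1 / x0)))
```

Because ln(a1/a0) + ln(i1/i0) = ln(e1/e0) department by department, the activity and intensity effects add up to the total change with no residual, which is the property that makes LMDI-I attractive for this kind of audit.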
Analysis of local ionospheric time varying characteristics with singular value decomposition
DEFF Research Database (Denmark)
Jakobsen, Jakob Anders; Knudsen, Per; Jensen, Anna B. O.
2010-01-01
In this paper, a time series from 1999 to 2007 of absolute total electron content (TEC) values has been computed and analyzed using singular value decomposition (SVD). The data set has been computed using a Kalman Filter and is based on dual frequency GPS data from three reference stations in Den...
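The way SVD separates a spatiotemporal data set into temporal and spatial modes can be sketched on synthetic data standing in for the TEC series; the seasonal signal and per-station gains below are invented, not the GPS-derived values:

```python
import numpy as np

# Synthetic stand-in for a TEC series: rows are epochs, columns are the
# three reference stations; one shared seasonal mode plus noise.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 8.0, 400)                  # eight "years" of epochs
seasonal = np.sin(2.0 * np.pi * t)              # common temporal mode
station_gain = np.array([1.0, 0.8, 0.6])        # per-station amplitude
tec = np.outer(seasonal, station_gain) + 0.05 * rng.standard_normal((400, 3))

# SVD of the mean-centered matrix: columns of U are temporal modes,
# rows of Vt are spatial patterns, s**2 gives the variance per mode.
U, s, Vt = np.linalg.svd(tec - tec.mean(axis=0), full_matrices=False)
explained = s**2 / np.sum(s**2)
```

On data with one dominant shared variation, the first mode soaks up nearly all the variance and its temporal eigenvector tracks the seasonal signal, which is the kind of structure an SVD of real TEC maps is meant to expose.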
[Analysis of the bacterial community developing in the course of Sphagnum moss decomposition].
Kulichevskaia, I S; Belova, S E; Kevbrin, V V; Dedysh, S N; Zavarzin, G A
2007-01-01
Slow degradation of organic matter in acidic Sphagnum peat bogs suggests a limited activity of organotrophic microorganisms. Monitoring of Sphagnum debris decomposition in a laboratory simulation experiment showed that this process was accompanied by a shift in the water color to brownish, due to accumulation of humic substances, and by the development of a specific bacterial community with a density of 2.4 × 10^7 cells ml^-1. About half of these organisms are metabolically active and detectable with rRNA-specific oligonucleotide probes. Molecular identification of the components of this microbial community showed the numerical dominance of bacteria affiliated with the phyla Alphaproteobacteria, Actinobacteria, and Planctomycetes. The population sizes of Firmicutes and Bacteroidetes, which are believed to be the main agents of bacterially mediated decomposition in eutrophic wetlands, were low. The numbers of planctomycetes increased at the final stage of Sphagnum decomposition. The representative isolates of Alphaproteobacteria were able to utilize galacturonic acid, the only low-molecular-weight organic compound detected in the water samples; the representatives of Planctomycetes were able to decompose some heteropolysaccharides, which points to the possible functional role of these groups of microorganisms in the community under study. Thus, the composition of the bacterial community responsible for Sphagnum decomposition under acidic, low-mineral oligotrophic conditions seems to be fundamentally different from that of the bacterial community which decomposes plant debris in eutrophic ecosystems at neutral pH.
Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries
Nieto, Sandra; Ramos, Raúl
2015-01-01
This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
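A twofold Oaxaca-Blinder decomposition splits the mean outcome gap into an "explained" part (differences in characteristics) and an "unexplained" part (differences in returns to those characteristics). The sketch below uses synthetic groups, not the PISA microdata; with an intercept column the two parts sum to the gap exactly:

```python
import numpy as np

def oaxaca_blinder(XA, yA, XB, yB):
    """Twofold Oaxaca-Blinder decomposition of the mean outcome gap.

    XA, XB: design matrices (first column should be an intercept);
    yA, yB: outcomes for the advantaged (A) and disadvantaged (B) group.
    Returns (gap, explained, unexplained) with gap = explained + unexplained.
    """
    bA = np.linalg.lstsq(XA, yA, rcond=None)[0]   # group-specific OLS fits
    bB = np.linalg.lstsq(XB, yB, rcond=None)[0]
    dx = XA.mean(axis=0) - XB.mean(axis=0)
    explained = dx @ bB                           # endowment (characteristics) part
    unexplained = XA.mean(axis=0) @ (bA - bB)     # coefficient (returns) part
    return yA.mean() - yB.mean(), explained, unexplained
```

The identity holds because OLS with an intercept makes each group's mean outcome equal its mean characteristics times its coefficients.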
International Nuclear Information System (INIS)
Das, Aparna; Paul, Saikat Kumar
2014-01-01
CO2 emission from anthropogenic activities is one of the major causes of global warming. As India is an agriculture-dependent country, global warming would mean monsoon instability and consequent food scarcity, natural disasters and economic concerns. However, with proper policy interventions, CO2 emissions can be controlled. Input–output analysis has been used to estimate direct and indirect CO2 emissions by households for 1993–94, 1998–99, 2003–04 and 2006–07. Complete decomposition analysis of the changes in CO2 emissions between 1993–94 and 2006–07 has been done to separate the causes into pollution, energy intensity, structure, activity and population effects according to broad household consumption categories. Results indicate that activity, structure and population effects are the main causes of the increase in CO2 emission from household fuel consumption. To identify the causes at the sectoral level, a second decomposition has been done for changes between 2003–04 and 2006–07. Finally, alternative energy policy options have been examined for each consumption category to reduce emissions. Combined strategies of technology upgradation, fuel switching and market management need to be adopted in order to reduce CO2 emissions for sectors like Batteries, Other non-electrical machinery, Construction and Electronic equipments (including Television), for which all the effects are positive. - Highlights: • Household CO2 emissions (direct and indirect) from 1993–94 to 2006–07 using IOTT. • Decomposition of changes between 1993–94 and 2006–07 for consumption categories. • Decomposition of changes in CO2 emission from 2003–04 to 2006–07 at the sectoral level. • Monetary and physical resource savings under different energy policy options. • Energy policy guidelines pertaining to the consumption categories at the sectoral level.
Torres, A. F.
2011-12-01
two excellent tools from the machine learning field known as Wavelet Decomposition Analysis (WDA) and the Multivariate Relevance Vector Machine (MVRVM) to forecast the results obtained from the SEBAL algorithm using Landsat imagery and soil moisture maps. The predictive capability of this novel hybrid WDA-MVRVM actual evapotranspiration forecasting technique is tested by comparing the crop water requirements and delivered crop water in the Lower Sevier River Basin, Utah, for the period 2007-2011. This location was selected because of its success in increasing the efficiency of water use and control along the entire irrigation system. Research is currently ongoing to assess the efficacy of the WDA-MVRVM technique throughout the irrigation season, which is required to enhance water use efficiency and minimize climate change impacts on the Sevier River Basin.
International Nuclear Information System (INIS)
Liu, Yingnan; Wang, Ke
2015-01-01
The process of energy conservation and emission reduction in China requires the specific and accurate evaluation of the energy efficiency of the industry sector because this sector accounts for 70 percent of China's total energy consumption. Previous studies have used a “black box” DEA (data envelopment analysis) model to obtain the energy efficiency without considering the inner structure of the industry sector. However, differences in the properties of energy utilization (final consumption or intermediate conversion) in different industry departments may lead to bias in energy efficiency measures under such “black box” evaluation structures. Using the network DEA model and efficiency decomposition technique, this study proposes an adjusted energy efficiency evaluation model that can characterize the inner structure and associated energy utilization properties of the industry sector so as to avoid evaluation bias. By separating the energy-producing department and energy-consuming department, this adjusted evaluation model was then applied to evaluate the energy efficiency of China's provincial industry sector. - Highlights: • An adjusted network DEA (data envelopment analysis) model for energy efficiency evaluation is proposed. • The inner structure of industry sector is taken into account for energy efficiency evaluation. • Energy final consumption and energy intermediate conversion processes are separately modeled. • China's provincial industry energy efficiency is measured through the adjusted model.
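For contrast with the paper's network model, the conventional "black box" evaluation it criticizes can be sketched as an input-oriented CCR DEA, solved as a small linear program per decision-making unit (DMU). This is a generic baseline sketch, not the adjusted network formulation, and the usage data are invented:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR ("black box") DEA efficiency of DMU o.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Returns theta in (0, 1]; theta = 1 means DMU o lies on the frontier.
    """
    n, m = X.shape
    r = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    # sum_j lambda_j x_j <= theta * x_o  (inputs contracted by theta)
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # sum_j lambda_j y_j >= y_o  ->  -sum_j lambda_j y_j <= -y_o
    A_out = np.hstack([np.zeros((r, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]
```

With two single-input, single-output DMUs where the second uses twice the input for the same output, the first scores 1.0 and the second 0.5; the network variant in the paper would instead open up this black box into energy-producing and energy-consuming stages.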
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
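The closed-form Kronecker shortcut at the heart of such methods can be sketched directly: eigendecompose the two factor kernels and solve the pairwise ridge system without ever forming the (n·m) × (n·m) matrix. A generic sketch of the standard trick, not the authors' code:

```python
import numpy as np

def kron_krr(Ku, Kv, Y, lam):
    """Closed-form Kronecker kernel ridge regression.

    Solves (Kv ⊗ Ku + lam*I) vec(A) = vec(Y) via the eigendecompositions
    of the two factor kernels, so the (n*m) x (n*m) system is never built.
    Ku: (n, n) kernel over row objects, Kv: (m, m) kernel over column
    objects, Y: (n, m) label matrix.
    """
    lu, U = np.linalg.eigh(Ku)
    lv, V = np.linalg.eigh(Kv)
    S = U.T @ Y @ V                          # labels in the joint eigenbasis
    A = U @ (S / (np.outer(lu, lv) + lam)) @ V.T
    return A                                 # dual coefficients; fit = Ku @ A @ Kv
```

The eigenbasis diagonalizes the Kronecker system, so the cost drops from O((nm)^3) for a naive solve to two small eigendecompositions plus matrix products; special cases such as independent-task and two-step kernel ridge regression correspond to particular choices of the factor kernels and regularization.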
Engelhardt, Felix; Maaß, Christian; Andrada, Diego M; Herbst-Irmer, Regine; Stalke, Dietmar
2018-03-28
Lithium amides are versatile C-H metallation reagents with vast industrial demand because of their high basicity combined with their weak nucleophilicity, and they are applied in kilotons worldwide annually. The nuclearity of lithium amides, however, modifies and steers reactivity, regio- and stereoselectivity, and product diversification in organic syntheses. In this regard, it is vital to understand Li-N bonding, as it causes the aggregation of lithium amides to form cubes or ladders from the polar Li-N covalent metal amide bond along the ring stacking and laddering principle. Deaggregation, however, is governed more by the Li←N donor bond to form amine adducts. The geometry of the solid-state structures already suggests that there is σ- and π-contribution to the covalent bond. To quantify the mutual influence, we investigated [{(Me2NCH2)2(C4H2N)}Li]2 (1) by means of experimental charge density calculations based on the quantum theory of atoms in molecules (QTAIM) and DFT calculations using energy decomposition analysis (EDA). This new approach allows for the grading of electrostatic Li+N-, covalent Li-N and donating Li←N bonding, and provides a way to modify traditional widely-used heuristic concepts such as the -I and +I inductive effects. The electron density ρ(r) and its second derivative, the Laplacian ∇^2ρ(r), mirror the various types of bonding. Most remarkably, from the topological descriptors, there is no clear separation of the lithium amide bonds from the lithium amine donor bonds. The computed natural partial charges for lithium are only +0.58, indicating an optimal density supply from the four nitrogen atoms, while the Wiberg bond orders of about 0.14 au suggest very weak bonding. The interaction energy between the two pincer molecules, (C4H2N)2^2-, with the Li2^2+ moiety is very strong (ca. -628 kcal mol^-1), followed by the bond dissociation energy (-420.9 kcal mol^-1). Partitioning the interaction energy
Main drivers of health expenditure growth in China: a decomposition analysis.
Zhai, Tiemin; Goss, John; Li, Jinjing
2017-03-09
In the past two decades, health expenditure in China grew at a rate of 11.6% per year, much faster than the growth of the country's economy (9.9% per year). As cost containment is a key aspect of China's new health system reform agenda, this study aims to identify the main drivers of past growth so that cost containment policies are focussed on the right areas. The analysis covered the period 1993-2012. To understand the drivers of growth during this period, Das Gupta's decomposition method was used to decompose the changes in health expenditure by disease into five main components: population growth, population ageing, disease prevalence rate, expenditure per case of disease, and excess health price inflation. Demographic data on population size and age composition were obtained from the Department of Economic and Social Affairs of the United Nations. Age- and disease-specific expenditure and prevalence rates were extracted from China's National Health Accounts studies and the Global Burden of Disease 2013 studies of the Institute for Health Metrics and Evaluation, respectively. Growth in health expenditure in China was mainly driven by a rapid increase in real expenditure per prevalent case, which contributed 8.4 percentage points of the 11.6% annual average growth. Excess health price inflation and population growth contributed 1.3% and 1.3%, respectively. The effect of population ageing was relatively small, contributing 0.8% per year. Reductions in disease prevalence rates, however, reduced the growth rate by 0.3 percentage points. Future policy for optimising growth in health expenditure in China should address growth in expenditure per prevalent case, especially for neoplasms and for circulatory and respiratory disease. A focus on effective interventions to reduce the prevalence of disease in the country will ensure that changing disease rates do not lead to higher growth in future health expenditure.
Empirical mode decomposition and long-range correlation analysis of sunspot time series
International Nuclear Information System (INIS)
Zhou, Yu; Leung, Yee
2010-01-01
Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However, many cycles of solar activity are reflected in the sunspot time series, of which the 11-year cycle is perhaps the most famous. These cycles pose problems for the investigation of the scaling behavior of sunspot time series, and different methods of handling the 11-year cycle generally produce very different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem, in the present paper empirical mode decomposition (EMD), a data-driven adaptive method, is applied to first extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. On removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28. The three cycle-removing methods—Fourier truncation, adaptive detrending and the
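A minimal monofractal sketch of the fluctuation-analysis step (ordinary DFA with linear detrending rather than the full MF-DFA, and with assumed scale choices) can be written as:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x via detrended fluctuation
    analysis: integrate the series, detrend it linearly in windows of
    each scale, and fit the log-log slope of fluctuation vs. scale."""
    y = np.cumsum(x - np.mean(x))          # profile (integrated series)
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Sanity check on white noise, whose Hurst exponent is 0.5 by construction.
rng = np.random.default_rng(0)
h_noise = dfa_hurst(rng.standard_normal(4096))
print(f"white noise H ~ {h_noise:.2f}")
```

Values of H above 0.5 indicate long-range correlation and values below 0.5 anti-correlation, which is the quantity the competing detrending schemes in the abstract disagree about.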
Decomposition analysis of cupric chloride hydrolysis in the Cu-Cl cycle of hydrogen production
International Nuclear Information System (INIS)
Daggupati, V.N.; Naterer, G.F.; Gabriel, K.S.; Gravelsins, R.; Wang, Z.
2009-01-01
This paper examines cupric chloride solid conversion during hydrolysis in a thermochemical copper-chlorine (Cu-Cl) cycle for hydrogen production. The hydrolysis reaction is a challenging step, in terms of the excess steam requirement and the decomposition of cupric chloride (CuCl2) into cuprous chloride (CuCl) and chlorine (Cl2). The hydrolysis and decomposition reactions are analyzed with respect to the chemical equilibrium constant. The effects of operating parameters are examined, including the temperature, pressure, excess steam and equilibrium conversion. Maximization of yield and selectivity is very important. Rate constants for the simultaneous reaction steps are determined using a uniform reaction model. A shrinking core model is used to determine the rate coefficients and predict the solid conversion time, with diffusional and reaction control. These new results are useful for scale-up of the engineering equipment in the thermochemical Cu-Cl cycle for hydrogen production. (author)
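The shrinking-core conversion-time relations referred to here can be sketched in their standard textbook form; the conversion values below are purely illustrative:

```python
def time_fraction_reaction(X):
    """Shrinking-core model, surface-reaction control:
    t / tau = 1 - (1 - X)^(1/3), where tau is the complete-conversion time."""
    return 1.0 - (1.0 - X) ** (1.0 / 3.0)

def time_fraction_diffusion(X):
    """Shrinking-core model, product-layer (ash) diffusion control:
    t / tau = 1 - 3(1 - X)^(2/3) + 2(1 - X)."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

# Time (as a fraction of the complete-conversion time tau) needed to reach
# a given solid conversion X under each controlling mechanism.
for X in (0.5, 0.9, 0.99):
    print(f"X = {X:4.2f}:  reaction {time_fraction_reaction(X):.3f} tau, "
          f"diffusion {time_fraction_diffusion(X):.3f} tau")
```

Fitting measured conversion-time data against both expressions is the usual way to decide whether diffusion through the product layer or the surface reaction controls the rate.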
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
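The relationship between these statistics can be sketched on a synthetic sensitivity (Jacobian) matrix; the data and scaling here are illustrative assumptions, not RZWQM output:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 50, 4
J = rng.standard_normal((n_obs, n_par))   # already-scaled sensitivity matrix
# Make parameters 2 and 3 nearly collinear (interdependent).
J[:, 3] = 0.9 * J[:, 2] + 0.05 * rng.standard_normal(n_obs)

# Composite scaled sensitivity (CSS): RMS of each parameter's column.
css = np.sqrt(np.mean(J ** 2, axis=0))

# Parameter correlation coefficients (PCC) from the (J^T J)^-1 covariance.
cov = np.linalg.inv(J.T @ J)
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)

# Singular values show how many independent parameter combinations
# (SVD parameters) the data actually constrain.
sv = np.linalg.svd(J, compute_uv=False)

print("CSS:", np.round(css, 2))
print("|PCC| of the collinear pair:", round(abs(pcc[2, 3]), 2))
print("singular values:", np.round(sv, 2))
```

Both correlated parameters show healthy individual CSS values, yet their PCC is near one in absolute value and one singular value is much smaller than the rest, which is exactly the complementary diagnostic picture the abstract describes.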
International Nuclear Information System (INIS)
Tavares Neto, J.I.H.; Brito, K.D.; Vasconcelos, L.G.S.; Alves, J.J.N.; Fossy, M.F.; Brito, R.P.
2007-01-01
This work presents the dynamic simulation of the thermal decomposition of nitrogen trichloride (NCl3) during electrolytic chlorine (Cl2) production, using an industrial plant as a case study. NCl3 is an extremely unstable and explosive compound, and the decomposition process has two main problems: variability of the reactor temperature and loss of solvent. The results of this work will be used to establish a more efficient and safe control strategy and to analyze the loss of solvent during the dynamic period. The implemented model will also be used to study the use of a new solvent, considering that the currently used solvent will be prohibited from commercial use in 2010. The process was simulated using the commercial simulator Aspen™, and the simulations were validated with plant data. From the results of the simulation it can be concluded that the rate of decomposition depends strongly on the temperature of the reactor, which in turn is strongly related to the liquid Cl2 (reflux) and gaseous Cl2 flow rates that feed the system. The results also showed that the loss of solvent changes strongly during the dynamic period.
Convergent cross-mapping and pairwise asymmetric inference.
McCracken, James M; Weigel, Robert S
2014-12-01
Convergent cross-mapping (CCM) is a technique for computing specific kinds of correlations between sets of time series. It was introduced by Sugihara et al. [Science 338, 496 (2012)] and is reported to be "a necessary condition for causation" capable of distinguishing causality from standard correlation. We show that the relationships between CCM correlations proposed by Sugihara et al. do not, in general, agree with intuitive concepts of "driving" and as such should not be considered indicative of causality. For simple linear and nonlinear systems, whether the CCM algorithm implies causality is shown to depend on system parameters. For example, in a circuit containing a single resistor and inductor, either voltage or current can be identified as the driver, depending on the frequency of the source voltage. We show, however, that the CCM algorithm can be modified to identify relationships between pairs of time series that are consistent with intuition for the considered example systems, for which CCM causality analysis provided nonintuitive driver identifications. This modification of the CCM algorithm is introduced as "pairwise asymmetric inference" (PAI), and examples of its use are presented.
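A minimal sketch of the cross-mapping idea (not the authors' full CCM or PAI implementations; the embedding dimension, coupling strength and map parameters are illustrative assumptions) shows the asymmetry for a unidirectionally coupled system:

```python
import numpy as np

def cross_map_skill(source, target, E=2, tau=1):
    """Estimate `target` from the shadow manifold of `source` (the core of
    CCM): delay-embed `source`, predict `target` with exponentially weighted
    nearest neighbors, and return the correlation with the truth."""
    n = len(source) - (E - 1) * tau
    M = np.column_stack([source[i * tau:i * tau + n] for i in range(E)])
    t_aligned = target[(E - 1) * tau:]
    preds = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                      # exclude the query point itself
        nn = np.argsort(d)[:E + 1]        # E+1 nearest neighbors
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        preds[i] = w @ t_aligned[nn]
    return np.corrcoef(preds, t_aligned)[0, 1]

# Unidirectionally coupled logistic maps: x drives y (parameters chosen
# in the spirit of Sugihara et al.'s examples).
n = 1000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

# Because x influences y, y's history encodes x, so mapping from y's
# shadow manifold back to x succeeds; the reverse mapping does not.
print(f"y cross-maps x: {cross_map_skill(y, x):.2f}")
print(f"x cross-maps y: {cross_map_skill(x, y):.2f}")
```

The paper's point is that interpreting this asymmetry as causality can fail for other parameter choices, which is what motivates PAI.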
Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking
Zhang, Chuxu
2017-08-02
Predicting fast-rising young researchers (the Academic Rising Stars) provides useful guidance to the research community, e.g., offering universities competitive candidates for young faculty hiring, as such researchers are expected to have successful academic careers. In this work, given a set of young researchers who have published their first first-author paper recently, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising star prediction, while venue features are less relevant.
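The core of a pairwise ranking approach like this can be sketched with a RankNet-style logistic loss on synthetic author features; all features, weights, and data here are hypothetical, and PCIR itself uses much richer temporal and venue factors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical author features (e.g. early citations, venue rank, number
# of coauthors) and a latent weighting that drives citation increments.
n_authors, n_feat = 200, 3
X = rng.standard_normal((n_authors, n_feat))
true_w = np.array([2.0, -1.0, 0.5])
increment = X @ true_w + 0.1 * rng.standard_normal(n_authors)

# Ordered training pairs (winner, loser) by observed citation increment.
pairs = [(i, j) if increment[i] > increment[j] else (j, i)
         for i in range(0, n_authors, 2) for j in range(1, n_authors, 2)]
D = np.array([X[i] - X[j] for i, j in pairs])  # winner-minus-loser features

# RankNet-style pairwise logistic loss, minimized by gradient descent:
#   L(w) = sum over pairs of log(1 + exp(-(score_win - score_lose)))
w = np.zeros(n_feat)
for _ in range(300):
    w += 0.1 * (D / (1.0 + np.exp(D @ w))[:, None]).mean(axis=0)

# Rank authors by learned score; check top-10% overlap with the truth.
k = n_authors // 10
top_pred = set(np.argsort(-(X @ w))[:k])
top_true = set(np.argsort(-increment)[:k])
print(f"top-10% overlap: {len(top_pred & top_true)}/{k}")
```

Training on ordered pairs rather than raw increments is what makes the method a ranking approach: only the relative ordering of authors matters, which matches the "top k%" prediction target.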
International Nuclear Information System (INIS)
Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T
2010-01-01
This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as the heart rate variability (HRV) and the QRS peak area, contain information suitable for a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet have been included in this analysis, subdivided into a training set and a testing set. We investigated the possibility of using the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with those obtained through the well-established wavelet analysis (WA). Using these decomposition techniques, several features were extracted from the ECG signal and complemented with a series of standard HRV time domain measures. The best performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or nonapneic with different best-subset sizes, obtaining an accuracy of up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from the standard HRV time domain measures, EMD and WA together in order to investigate whether the two decomposition techniques could provide complementary features. The obtained accuracy was 89%, similar to that achieved using only wavelet analysis as the feature extractor; however, some complementary features in EMD and WA are evident.
International Nuclear Information System (INIS)
Kang, Jidong; Zhao, Tao; Liu, Nan; Zhang, Xin; Xu, Xianshuo; Lin, Tao
2014-01-01
To better understand how city-level greenhouse gas (GHG) emissions have evolved, we performed a multi-sectoral decomposition analysis to disentangle the GHG emissions in Tianjin from 2001 to 2009. Five sectors were considered: the agricultural, industrial, transportation, commercial and other sectors. An industrial sub-sector decomposition analysis was further performed for the six high-emission industrial branches. The results show that, for all five sectors in Tianjin, economic growth was the most important factor driving the increase in emissions, while energy efficiency improvements were primarily responsible for the decrease in emissions. In comparison, the influences of the energy mix shift and emission coefficient changes were relatively marginal. The disaggregated decomposition of industry further revealed that energy efficiency improvement has been widely achieved in the industrial branches, especially in the Smelting and Pressing of Ferrous Metals and the Chemical Raw Materials and Chemical Products sub-sectors. However, energy efficiency declined in a few branches, e.g., Petroleum Processing and Coking Products. Moreover, the increased emissions related to industrial structure shift were primarily due to the expansion of Smelting and Pressing of Ferrous Metals; its share of the total industrial output increased from 5.62% to 16.1% during the examined period. - Highlights: • We perform the LMDI analysis on the emissions in five sectors of Tianjin. • Economic growth was the most important factor for the emissions increase. • Energy efficiency improvements mainly contributed to the emission decrease. • Negative energy intensity effect was observed in most of the industrial sub-sectors. • Industrial structure change largely resulted in emission increase.
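The additive LMDI identity underlying this kind of sectoral decomposition can be sketched for a single sector with hypothetical numbers (a three-factor Kaya-style form; the study's actual analysis runs over five sectors and more factors):

```python
import math

def lmean(a, b):
    """Logarithmic mean, the weight function used by LMDI."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Hypothetical single-sector data (illustrative, not Tianjin's figures):
#   emissions C = Q * (E/Q) * (C/E)
#   Q: economic output, E/Q: energy intensity, C/E: emission coefficient
Q0, I0, F0 = 100.0, 0.8, 2.5   # base year
Q1, I1, F1 = 180.0, 0.6, 2.4   # end year
C0, C1 = Q0 * I0 * F0, Q1 * I1 * F1

L = lmean(C1, C0)
effects = {
    "economic activity":    L * math.log(Q1 / Q0),
    "energy intensity":     L * math.log(I1 / I0),
    "emission coefficient": L * math.log(F1 / F0),
}
for name, e in effects.items():
    print(f"{name:>20}: {e:+.1f}")
print(f"{'total change':>20}: {C1 - C0:+.1f}")
```

A useful property of LMDI, and a reason it is widely chosen for this kind of study, is that the decomposition is perfectly additive: the factor effects sum exactly to the total emission change, with no residual.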
Pairwise Constraint-Guided Sparse Learning for Feature Selection.
Liu, Mingxia; Zhang, Daoqiang
2016-01-01
Feature selection aims to identify the most informative features for a compact and accurate data representation. As typical supervised feature selection methods, Lasso and its variants using L1-norm-based regularization terms have received much attention in recent studies, most of which use class labels as supervised information. Besides class labels, there are other types of supervised information, e.g., pairwise constraints that specify whether a pair of data samples belongs to the same class (must-link constraint) or to different classes (cannot-link constraint). However, most existing L1-norm-based sparse learning methods do not take advantage of pairwise constraints, which provide weaker but more general supervised information. To address this problem, we propose a pairwise constraint-guided sparse (CGS) learning method for feature selection, where the must-link and cannot-link constraints are used as discriminative regularization terms that directly concentrate on the local discriminative structure of data. Furthermore, we develop two variants of CGS: 1) semi-supervised CGS, which utilizes labeled data, pairwise constraints, and unlabeled data, and 2) ensemble CGS, which uses an ensemble of pairwise constraint sets. We conduct a series of experiments on a number of data sets from the University of California-Irvine machine learning repository, a gene expression data set, two real-world neuroimaging-based classification tasks, and two large-scale attribute classification tasks. Experimental results demonstrate the efficacy of our proposed methods compared with several established feature selection methods.
Multipole decomposition analysis of the 27Al, 90Zr, 208Pb(p, n) reactions at 295 MeV
International Nuclear Information System (INIS)
Wakasa, T.; Greenfield, M.B.; Koori, N.; Okihana, A.; Hatanaka, K.
1996-01-01
Differential cross sections at θ_lab between 0° and 15° and the polarization transfer D_NN at zero degrees for the 27Al, 90Zr, 208Pb(p,n) reactions are measured at a bombarding energy of 295 MeV. A multipole decomposition (MD) technique is applied to extract L=0, L=1, and L≥2 contributions to the cross sections. The summed Gamow-Teller strength B(GT) is compared with shell-model calculations for the 27Al(p,n) and 90Zr(p,n) reactions. The usefulness of the polarization transfer observable in the MD analysis is discussed. (orig.)
International Nuclear Information System (INIS)
Macasek, F.; Buriova, E.
2004-01-01
In this presentation the authors present the results of an analysis of the decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%.
Goli, Srinivas; Doshi, Riddhi; Perianayagam, Arokiasamy
2013-01-01
Children and women comprise vulnerable populations in terms of health and are gravely affected by the impact of economic inequalities through multi-dimensional channels. Urban areas are believed to have better socioeconomic and maternal and child health indicators than rural areas. This perception leads to the implementation of health policies that are ignorant of intra-urban health inequalities. Therefore, the objective of this study is to explain the pathways of economic inequalities in maternal and child health indicators among the urban population of India. Using data from the third wave of the National Family Health Survey (NFHS, 2005-06), this study calculated the relative contribution of socioeconomic factors to inequalities in key maternal and child health indicators such as antenatal check-ups (ANCs), institutional deliveries, proportion of children with complete immunization, proportion of underweight children, and Infant Mortality Rate (IMR). Along with regular CI estimates, this study applied the widely used regression-based Inequality Decomposition model proposed by Wagstaff and colleagues. The CI estimates show considerable economic inequalities in women with less than 3 ANCs (CI = -0.3501), institutional delivery (CI = -0.3214), children without full immunization (CI = -0.18340), underweight children (CI = -0.19420), and infant deaths (CI = -0.15596). Results of the decomposition model reveal that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical factors contributing to economic inequalities in maternal and child health indicators. The residuals in all the decomposition models are very small, implying that the above-mentioned factors explain most of the inequality in maternal and child health of the urban population in India. Findings suggest that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical pathways through which economic factors operate on inequalities in
Directory of Open Access Journals (Sweden)
Srinivas Goli
Full Text Available BACKGROUND/OBJECTIVE: Children and women comprise vulnerable populations in terms of health and are gravely affected by the impact of economic inequalities through multi-dimensional channels. Urban areas are believed to have better socioeconomic and maternal and child health indicators than rural areas. This perception leads to the implementation of health policies that are ignorant of intra-urban health inequalities. Therefore, the objective of this study is to explain the pathways of economic inequalities in maternal and child health indicators among the urban population of India. METHODS: Using data from the third wave of the National Family Health Survey (NFHS, 2005-06), this study calculated the relative contribution of socioeconomic factors to inequalities in key maternal and child health indicators such as antenatal check-ups (ANCs), institutional deliveries, proportion of children with complete immunization, proportion of underweight children, and Infant Mortality Rate (IMR). Along with regular CI estimates, this study applied the widely used regression-based Inequality Decomposition model proposed by Wagstaff and colleagues. RESULTS: The CI estimates show considerable economic inequalities in women with less than 3 ANCs (CI = -0.3501), institutional delivery (CI = -0.3214), children without full immunization (CI = -0.18340), underweight children (CI = -0.19420), and infant deaths (CI = -0.15596). Results of the decomposition model reveal that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical factors contributing to economic inequalities in maternal and child health indicators. The residuals in all the decomposition models are very small, implying that the above-mentioned factors explain most of the inequality in maternal and child health of the urban population in India. CONCLUSION: Findings suggest that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical
Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.
Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond
2018-04-01
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with two other calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition
Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato
2018-05-01
We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, frequency uncertainty appears and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate the physical properties of the normal modes.
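Plain dynamic mode decomposition, the backbone of the method described (without the sparsity-promoting penalty the paper adds), can be sketched on a synthetic two-oscillator signal; the delay embedding and all signal parameters below are illustrative assumptions:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: fit a linear operator A with snapshot pairs
    X[:, k+1] ~= A X[:, k], truncated to rank r; return its eigenvalues
    (mode frequency/damping) and spatial modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W
    return eigvals, modes

# Synthetic "coherent phonon" signal: two damped cosines.
dt = 0.02
t = np.arange(0, 8, dt)
sig = np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.5 * t) \
    + 0.5 * np.exp(-0.1 * t) * np.cos(2 * np.pi * 4.0 * t)

# Delay-embed (Hankel matrix) so a 1-D signal yields snapshot vectors.
m = 40
H = np.column_stack([sig[i:i + m] for i in range(len(sig) - m)])
eigvals, _ = dmd(H, r=4)

# Discrete eigenvalue mu = exp(lambda * dt); frequency = |Im(lambda)|/2pi.
freqs = sorted(set(round(abs(np.log(mu).imag) / (2 * np.pi * dt), 2)
                   for mu in eigvals))
print("recovered frequencies:", freqs)   # expect ~1.5 and ~4.0
```

Unlike a Fourier transform, DMD represents each damped oscillator with a single discrete eigenvalue, giving frequency, damping rate, and (via the modes) initial phase directly, which is the property the abstract exploits.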
Structural Analysis of Multi-component Amyloid Systems by Chemometric SAXS Data Decomposition
DEFF Research Database (Denmark)
Trillo, Isabel Fatima Herranz; Jensen, Minna Grønning; van Maarschalkerweerd, Andreas
2017-01-01
Formation of amyloids is the hallmark of several neurodegenerative pathologies. Structural investigation of these complex transformation processes poses significant experimental challenges due to the co-existence of multiple species. The additive nature of small-angle X-ray scattering (SAXS) data … least squares (MCR-ALS) chemometric method. The approach enables rigorous and robust decomposition of synchrotron SAXS data by simultaneously introducing these data in different representations that emphasize molecular changes at different time and structural resolution ranges. The approach has allowed
Thermal decomposition of lutetium propionate
DEFF Research Database (Denmark)
Grivel, Jean-Claude
2010-01-01
The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous … °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition … of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O …
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding … methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity … in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography …
A predictive model of music preference using pairwise comparisons
DEFF Research Database (Denmark)
Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan
2012-01-01
Music recommendation is an important aspect of many streaming services and multi-media systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can … be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance
Filippi, Sarah; Holmes, Chris C; Nieto-Barajas, Luis E
2016-11-16
In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not computationally feasible. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as in a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in TGA on solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
Tilsen, Sam; Arvaniti, Amalia
2013-07-01
This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
Goli, Srinivas; Singh, Lucky; Jain, Kshipra; Pou, Ladumai Maikho Apollo
2014-12-01
This study quantified and decomposed health inequalities among the older population in India and analyzed how health status varies between populations aged 60 to 69 years and 70 years and above. Data from the 60th round of the National Sample Survey (NSS) were used for the analyses. Socioeconomic inequalities in health status were measured using the Concentration Index (CI) and further decomposed to find critical determinants and their relative contributions to total health inequality. Overall, CI estimates were negative for the older population as a whole (CI = -0.1156), as well as for the two disaggregated groups, 60 to 69 years (CI = -0.0943) and 70 years and above (CI = -0.08198). This suggests that poor health status is more concentrated among the socioeconomically disadvantaged older population. Decomposition analyses revealed that poor economic status (54%) is the dominant contributor to total health inequality in the older population, followed by illiteracy (24%) and rural place of residence (20%). Other indicators, such as religion, gender and marital status, were positively associated with health inequality in the older population, while caste was negatively associated. Finally, a comparative assessment of the decomposition results suggests that the critical contributors to health inequality differ between the older populations aged 60 to 69 years and 70 years and above. These findings provide important insights on health inequalities among the older population in India. Implications are advanced.
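The concentration index used throughout such studies can be computed with the standard covariance formula; the simulated household data below are purely illustrative:

```python
import numpy as np

def concentration_index(health, ses):
    """Concentration index via the covariance formula:
    CI = 2 * cov(h, fractional SES rank) / mean(h).
    Negative CI: the health variable is concentrated among the poor."""
    order = np.argsort(ses)                      # poorest -> richest
    h = np.asarray(health, float)[order]
    n = len(h)
    frac_rank = (np.arange(1, n + 1) - 0.5) / n  # fractional rank in (0, 1)
    return 2.0 * np.cov(h, frac_rank, bias=True)[0, 1] / h.mean()

# Illustrative: ill health (1 = poor self-rated health) made more likely
# for low-wealth households yields a negative CI, as in the study.
rng = np.random.default_rng(7)
wealth = rng.uniform(size=2000)
p_ill = 0.5 - 0.3 * wealth                       # poorer -> more ill health
ill_health = rng.uniform(size=2000) < p_ill
ci = concentration_index(ill_health, wealth)
print(f"CI = {ci:.3f}")
```

Decomposition methods of the Wagstaff type then apportion such a CI across determinants (economic status, literacy, residence) by weighting each regressor's CI by its elasticity with respect to the health variable.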
Analysis of respiratory mechanomyographic signals by means of the empirical mode decomposition
International Nuclear Information System (INIS)
Torres, A; Jane, R; Fiz, J A; Laciar, E; Galdiz, J B; Gea, J; Morera, J
2007-01-01
The study of the mechanomyographic (MMG) signals of respiratory muscles is a promising technique for evaluating respiratory muscle effort. A critical point in MMG studies is the selection of the cut-off frequency that separates the low frequency (LF) component (basically due to gross movement of the muscle or of the body) from the high frequency (HF) component (related to the vibration of the muscle fibres during contraction). In this study, we propose to use the Empirical Mode Decomposition method to analyze the Intrinsic Mode Functions of MMG signals of the diaphragm muscle, acquired by means of a capacitive accelerometer applied on the costal wall. The method was tested on an animal model, with two incremental respiratory protocols performed by two non-anesthetized mongrel dogs. The proposed EMD-based method seems to be a useful tool for eliminating the low frequency component of MMG signals. The correlation coefficients obtained between respiratory and MMG parameters were higher than those obtained with a wavelet multiresolution decomposition method utilized in a previous work.
Gai, Chao; Zhang, Yuanhui; Chen, Wan-Ting; Zhang, Peng; Dong, Yuping
2013-12-01
The thermal decomposition behavior of two microalgae, Chlorella pyrenoidosa (CP) and Spirulina platensis (SP), was investigated on a thermogravimetric analyzer under non-isothermal conditions. The iso-conversional Vyazovkin approach was used to calculate the kinetic parameters, and the universal integral method was applied to evaluate the most probable mechanisms for thermal degradation of the two feedstocks. The differential equations deduced from the models were compared with experimental data. For the range of conversion fraction investigated (20-80%), the thermal decomposition process of CP could be described by the third-order reaction model (F3), with the integral form G(α) = [(1 - α)^(-2) - 1]/2, and the apparent activation energy was in the range of 58.85-114.5 kJ/mol. SP could be described by the second-order reaction model (F2), with the integral form G(α) = (1 - α)^(-1) - 1, and its apparent activation energy ranged from 74.35 to 140.1 kJ/mol.
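The quoted integral forms can be checked numerically against the reaction-order models they come from: for G(α) = ∫dα/f(α) with f(α) = (1 - α)^n, the derivative G'(α) must equal (1 - α)^(-n). A small numpy sanity check over the study's 20-80% conversion range (not from the paper):

```python
import numpy as np

alpha = np.linspace(0.2, 0.8, 601)        # conversion range used in the study

G_F3 = ((1 - alpha) ** -2 - 1) / 2        # F3 integral form (CP)
G_F2 = (1 - alpha) ** -1 - 1              # F2 integral form (SP)

dG_F3 = np.gradient(G_F3, alpha)          # numerical G'(alpha)
dG_F2 = np.gradient(G_F2, alpha)

# G'(alpha) should match (1 - alpha)^-3 for F3 and (1 - alpha)^-2 for F2
err_F3 = np.max(np.abs(dG_F3[1:-1] - (1 - alpha[1:-1]) ** -3))
err_F2 = np.max(np.abs(dG_F2[1:-1] - (1 - alpha[1:-1]) ** -2))
```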
Decomposition for the analysis of radionuclides in solidified cement radioactive waste
International Nuclear Information System (INIS)
Lee, Jeong Jin; Pyo, Hyung Yeal; Jee, Kwang Yung; Jeon, Jong Seon
2004-01-01
Spent ion exchange resins become solid radioactive waste when mixed with cement, a solidifying material that has been widely used to protect the human environment from radionuclides for at least hundreds of years. The cumulative increase of low- and medium-level radioactive wastes will result in a capacity problem in the temporary storage of some NPPs (Nuclear Power Plants) in Korea around 2008. Radioactive wastes are scheduled to be disposed of in a permanent disposal facility in accordance with the Korean Radioactive Wastes Management Program. It is mandatory to identify the kinds and concentrations of the immobilized radionuclides before transporting the wastes from temporary storage in NPPs to the disposal facility. Accordingly, effective sample decomposition prior to radiochemical separation is a prerequisite for obtaining analytical data on radionuclides in cement waste forms. Among several sample preparation methods, closed-vessel microwave digestion was chosen to decompose the cement waste forms. In this study, SRM 1880a (Portland cement), which has certified values, was used to optimize the decomposition conditions for cement waste forms containing nonradioactive ion exchange resins from an NPP. With such variables as reagents, time, and power, the variation of the transparency and color of the solution after closed-vessel microwave digestion was examined. SRM 1880a was decomposed by the suggested digestion procedure and the recoveries of its constituents were investigated by ICP-AES and AAS
International Nuclear Information System (INIS)
Supasa, Tharinya; Hsiau, Shu-San; Lin, Shih-Mo; Wongsapai, Wongkot; Wu, Jiunn-Chi
2016-01-01
Thailand has depended heavily on imported fossil fuels since the 1990s, which has hindered the nation's economic development by creating uncertainty in the fuel supply. An energy conservation policy was implemented in 1995 to require industries to reduce their energy intensity (EI) and consumption immediately. This study investigates the effectiveness of the policy between 1995 and 2010 using the hybrid input–output approach. Surprisingly, EI improvement was observed in only a few sectors, such as transportation, non-metallic minerals, paper, and textiles. An embodied energy decomposition analysis revealed that while households were the largest energy consumer in 1995, energy consumption embodied in exports exceeded that of households in 2000, 2005 and 2010. In addition, structural decomposition analysis revealed that the final demand effect was the strongest factor determining the efficacy of energy conservation, whereas the energy efficiency effect did not decrease energy consumption as expected. Policy barriers and conflicting economic plans were factors that affected the outcome of these energy policies. - Highlights: • The hybrid IO technique was employed to analyse the energy intensity of Thailand. • There was no clear evidence of EI improvement in most industries, so the policy target was not achieved. • The household and export sectors played a crucial role in the increase in energy consumption. • The IO SDA method found that energy efficiency did not offset the increase in consumption. • Policy barriers included conflicting economic plans, fuel subsidy policies and inefficient processes.
International Nuclear Information System (INIS)
Kesicki, Fabian; Anandarajah, Gabrial
2011-01-01
In order to reduce energy-related CO2 emissions, different options have been considered: energy efficiency improvements, structural changes to low-carbon or zero-carbon fuels/technologies, carbon sequestration, and reduction in energy-service demands (useful energy). While efficiency and technology options have been extensively studied within the context of climate change mitigation, this paper addresses the possible role of price-related energy-service demand reduction. For this analysis, the elastic demand version of the TIAM-UCL global energy system model is used in combination with decomposition analysis. The results of the CO2 emission decomposition indicate that a reduction in energy-service demand can play a limited role, contributing around 5% to global emission reduction in the 21st century. A look at the sectoral level reveals that the demand reduction can play a greater role in selected sectors like transport, contributing around 16% at a global level. The societal welfare loss is found to be high when the price elasticity of demand is low. - Highlights: → A reduction in global energy-service demand can contribute around 5% to global emission reduction in the 21st century. → The role of demand is a lot higher in transport than in the residential sector. → Contribution of demand reduction is higher in early periods of the 21st century. → Societal welfare loss is found to be high when the price elasticity of demand is low. → Regional shares in residual emissions vary under different elasticity scenarios.
International Nuclear Information System (INIS)
Schoderböck, Peter; Brechbühl, Jens
2015-01-01
The X-ray investigation of stress states in materials, based on the determination of elastic lattice strains which are converted to stresses by means of the theory of elasticity, is a necessity in the quality control of thin layers and coatings for optimizing manufacturing steps and process parameters. This work introduces the evaluation of residual stress from complex and overlapping diffraction patterns using a whole-powder-pattern decomposition procedure that defines a 2θ-offset caused by residual stresses. Furthermore, corrections for sample displacement and refraction are directly implemented in the calculation procedure. The correlation matrices of the least-squares fitting routines were analyzed for parameter interactions, and obvious interdependencies were decoupled by the introduction of an internal standard within the diffraction experiment. This decomposition-based evaluation was developed on tungsten as a model material system, and its efficiency was demonstrated by X-ray diffraction analysis of a solid oxide fuel cell multilayer system. The results are compared with those obtained by the classical sin²ψ method. - Highlights: • Analysis of complex multiphase diffraction patterns with respect to residual stress • Stress-gradient determination with in situ correction of displacement and refraction • Consideration of the elastic anisotropy within the refinement
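The classical sin²ψ method that the paper benchmarks against amounts to a linear fit of lattice strain versus sin²ψ. A minimal numpy sketch under an assumed equi-biaxial stress state, with invented elastic constants and a synthetic stress (not the paper's data):

```python
import numpy as np

E, nu = 411e9, 0.28                    # elastic constants assumed for tungsten
sigma_true = 500e6                     # Pa, synthetic equi-biaxial residual stress

psi = np.radians([0, 15, 25, 35, 45])  # tilt angles
s2p = np.sin(psi) ** 2
# Equi-biaxial case: strain(psi) = (1+nu)/E * sigma * sin^2(psi) - 2*nu/E * sigma
strain = (1 + nu) / E * sigma_true * s2p - 2 * nu / E * sigma_true

slope, intercept = np.polyfit(s2p, strain, 1)
sigma_fit = slope * E / (1 + nu)       # stress recovered from the slope
```

In practice the strains come from measured 2θ peak shifts; here they are generated from the model itself to show the fit recovering the stress.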
Wang, Wenkang; Pan, Chong; Wang, Jinjun
2018-01-01
The identification and separation of multi-scale coherent structures is a critical task for the study of scale interaction in wall-bounded turbulence. Here, we propose a quasi-bivariate variational mode decomposition (QB-VMD) method to extract structures with various scales from instantaneous two-dimensional (2D) velocity field which has only one primary dimension. This method is developed from the one-dimensional VMD algorithm proposed by Dragomiretskiy and Zosso (IEEE Trans Signal Process 62:531-544, 2014) to cope with a quasi-2D scenario. It poses the feature of length-scale bandwidth constraint along the decomposed dimension, together with the central frequency re-balancing along the non-decomposed dimension. The feasibility of this method is tested on both a synthetic flow field and a turbulent boundary layer at moderate Reynolds number (Re_τ = 3458) measured by 2D particle image velocimetry (PIV). Some other popular scale separation tools, including pseudo-bi-dimensional empirical mode decomposition (PB-EMD), bi-dimensional EMD (B-EMD) and proper orthogonal decomposition (POD), are also tested for comparison. Among all these methods, QB-VMD shows advantages in both scale characterization and energy recovery. More importantly, the mode mixing problem, which degrades the performance of EMD-based methods, is avoided or minimized in QB-VMD. Finally, QB-VMD analysis of the wall-parallel plane in the log layer (at y/δ = 0.12) of the studied turbulent boundary layer shows the coexistence of large- or very large-scale motions (LSMs or VLSMs) and inner-scaled structures, which can be fully decomposed in both physical and spectral domains.
Analysis of the Compounds from the BTEX Group, Emitted During Thermal Decomposition of Alkyd Resin
Directory of Open Access Journals (Sweden)
M. Kubecki
2012-09-01
Full Text Available The suitability of a given binding agent for moulding sand preparation depends, on the one hand, on the technological properties of the sand and of the mould made from it and on the quality of the castings obtained, and on the other hand on the influence of the sand on the natural and working environment. Among the moulding sands used in the foundry industry, sands with organic binders deserve special attention. These binders are based on synthetic resins, which ensure the proper technological properties and sound castings but negatively influence the environment. Although in their initial state these resins are not very dangerous for people or for the environment, under the influence of high temperatures they generate very harmful products of thermal decomposition. Depending on the kind of resin applied (phenol-formaldehyde, urea, furfuryl, urea-furfuryl, alkyd), compounds such as furfuryl alcohol, formaldehyde, phenol, the BTEX group (benzene, toluene, ethylbenzene, xylene) and also polycyclic aromatic hydrocarbons (PAH) can be formed and released under the influence of temperature. The aim of the study was the development of the method, the selection of analytical techniques and the determination of the optimal conditions of formation of compounds from the BTEX group. The emission of these components constitutes one of the basic criteria for assessing the harmfulness of binders applied in moulding and core sands. Investigations were carried out at the laboratory scale in a set-up specially designed for the thermal decomposition of organic substances in the temperature range 500°C – 1300°C. The object of testing was an alkyd resin applied as a binding material for moulding sands. Within the investigations, the minimal amount of adsorbent necessary for the adsorption of compounds released during the decomposition of a resin sample with a mass of approximately 15 mg was selected. Also the minimal amount of solvent needed for
Modeling Expressed Emotions in Music using Pairwise Comparisons
DEFF Research Database (Denmark)
Madsen, Jens; Nielsen, Jens Brehm; Jensen, Bjørn Sand
2012-01-01
We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music using the two wellknown arousal and valence (AV) dimensions. In order to produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we...
A transient analysis of decomposition and erosion of concrete exposed to a surface heat flux
International Nuclear Information System (INIS)
Kilic, A.N.
1994-01-01
A simple approximation for predicting the concrete erosion rate and depth is derived based on the heat balance integral method for conduction with time-dependent boundary conditions. The problem is treated as a four-region model that includes separate moving heat sinks at the boundaries due to endothermic decomposition reactions. Polynomial temperature profiles are assumed, and the results are compared with previous experimental data and other analytical solutions. Since the technique provides an approximate temperature distribution on the average, it does not give the real temperature evolution, but it provides a simple prediction of the erosion rate and the depth of eroded concrete in terms of the parameters that are important to the physical phenomena. Because of its simplicity and reliability, the model might be useful for the larger molten core/concrete interaction codes and aerosol generation models
GraphAlignment: Bayesian pairwise alignment of biological networks
Directory of Open Access Journals (Sweden)
Kolář Michal
2012-11-01
Full Text Available Abstract Background With increased experimental availability and accuracy of bio-molecular networks, tools for their comparative and evolutionary analysis are needed. A key component for such studies is the alignment of networks. Results We introduce the Bioconductor package GraphAlignment for pairwise alignment of bio-molecular networks. The alignment incorporates information both from network vertices and network edges and is based on an explicit evolutionary model, allowing inference of all scoring parameters directly from empirical data. We compare the performance of our algorithm to an alternative algorithm, Græmlin 2.0. On simulated data, GraphAlignment outperforms Græmlin 2.0 in several benchmarks except for computational complexity. When there is little or no noise in the data, GraphAlignment is slower than Græmlin 2.0. It is faster than Græmlin 2.0 when processing noisy data containing spurious vertex associations. Its typical case complexity grows approximately as O(N^2.6). On empirical bacterial protein-protein interaction networks (PIN) and gene co-expression networks, GraphAlignment outperforms Græmlin 2.0 with respect to coverage and specificity, albeit by a small margin. On large eukaryotic PIN, Græmlin 2.0 outperforms GraphAlignment. Conclusions The GraphAlignment algorithm is robust to spurious vertex associations, correctly resolves paralogs, and shows very good performance in identification of homologous vertices defined by high vertex and/or interaction similarity. The simplicity and generality of GraphAlignment edge scoring makes the algorithm an appropriate choice for global alignment of networks.
Performance analysis of humid air turbine cycle with solar energy for methanol decomposition
International Nuclear Information System (INIS)
Zhao, Hongbin; Yue, Pengxiu
2011-01-01
According to the principle of physical and chemical energy cascade utilization and the concept of synthesizing and integrating a variety of cycle systems, a new humid air turbine (HAT) cycle with solar energy for methanol decomposition is proposed in this paper. Solar energy is utilized as the heat source for methanol decomposition in the HAT cycle. Low-energy-level solar heat is upgraded to high-energy-level chemical energy through the endothermic decomposition of methanol, realizing the combination of clean energy and conventional chemical fuels, in contrast to the normal chemical recuperative cycle. As a result, the performance of a thermal cycle fired with conventional chemical fuel can be improved to some extent. Though the energy level of the syngas decomposed from methanol is decreased, the cascade utilization of methanol is upgraded. The energy levels and exergy losses in the system are graphically displayed with energy utilization diagrams (EUD). The results show that the cycle's exergy efficiency is higher than that of the conventional HAT cycle by at least 5 percentage points under the same operating conditions. In addition, the cycle's thermal efficiency, exergy efficiency and solar thermal efficiency all peak at an optimal methanol conversion. -- Highlights: → This paper proposes and studies a humid air turbine (HAT) cycle with methanol decomposition driven by solar energy. → The cycle's exergy efficiency is higher than that of the conventional HAT cycle by at least 5 percentage points. → The solar heat-to-work conversion efficiency is estimated at about 39%, higher than usual. → There is an optimal methanol conversion for the cycle's thermal efficiency and exergy efficiency at given π and TIT. → Using EUD, the exergy loss is decreased by 8 percentage points compared with the conventional HAT cycle.
International Nuclear Information System (INIS)
Ediger, Volkan S.; Huvaz, Ozkan
2006-01-01
This paper investigates sectoral energy use in the Turkish economy for the 1980-2000 period, when significant changes occurred in the economic and demographic structure of the country. These changes had several impacts on energy use in the primary sectors of agriculture, industry and services. Decomposition analysis is conducted on these sectors using the additive version of the LMDI method, owing to its advantages over other methods. Although a close relationship exists between primary energy consumption and GDP, the analyses show that significant variations occurred in sectoral energy use during the 1982, 1988-1989, 1994 and 1998-2000 periods. These variations are related to the economic policies of the governments. The Turkish economy has undergone a transformation from an agricultural to an industrial economy, enhanced by rapid urbanization, especially after 1982. However, industrialization has not yet been completed, and energy demand should increase faster than national income until the energy intensity of the country reaches a peak. This study is performed on three basic sectors; decomposition into secondary and tertiary sectors would provide more detailed information for further investigations
Zhong, X. Y.; Gao, J. X.; Ren, H.; Cai, W. G.
2018-04-01
The acceleration of urbanization has brought new opportunities for China's development. With rapid economic development and improving living standards, building energy consumption has also shown a rigid growth trend. As the level of industrialization continues to improve, the energy-saving potential of industry declines, and the construction industry, which bears the task of energy saving and emission reduction, will face more severe challenges. As municipalities of China, Beijing, Shanghai and Chongqing have significant radiating effects on the economy, urbanization level and construction industry development of their regions. It is therefore of great significance to study how building energy consumption in the three regions changes with the level of urbanization, and to identify the key driving factors. Based on data for Beijing, Shanghai and Chongqing from 2001 to 2015, this paper attempts to find out whether an environmental Kuznets curve (EKC) for building energy consumption exists. Based on the model results, the data for the three regions are then divided into three periods, and the Logarithmic Mean Divisia Index (LMDI) decomposition method is used to identify the factors with the greatest impact on building energy consumption at each stage. The paper also analyzes the policy background of each stage and puts forward policy suggestions on this basis.
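An EKC-style test of the kind described reduces to fitting consumption as a quadratic in the urbanization rate and checking for an inverted U with a turning point. A hedged numpy sketch on synthetic data (the coefficients and peak are invented, not the three cities' figures):

```python
import numpy as np

urb = np.linspace(0.40, 0.90, 11)              # urbanization rate, synthetic
energy = -200 * (urb - 0.75) ** 2 + 50         # inverted-U consumption, peak at 0.75

c2, c1, c0 = np.polyfit(urb, energy, 2)        # quadratic EKC fit
turning_point = -c1 / (2 * c2)                 # vertex of the parabola
inverted_u = c2 < 0                            # EKC shape requires c2 < 0
```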
Shiraogawa, Takafumi; Ehara, Masahiro; Jurinovich, Sandro; Cupellini, Lorenzo; Mennucci, Benedetta
2018-06-15
Recently, a method to calculate absorption and circular dichroism (CD) spectra based on exciton coupling has been developed. In this work, the method was used to decompose the CD and circularly polarized luminescence (CPL) spectra of a multichromophoric system into chromophore contributions for recently developed through-space conjugated oligomers. The method, which is implemented using the rotatory strength in the velocity form and is therefore gauge-invariant, enables us to evaluate the contribution of each chromophoric unit and locally excited state to the CD and CPL spectra of the total system. The excitonic calculations suitably reproduce the full calculations of the system, as well as the experimental results. We demonstrate that the interactions between the electric transition dipole moments of adjacent chromophoric units are crucial in the CD and CPL spectra of the multichromophoric systems, while the interactions between electric and magnetic transition dipole moments are not negligible. © 2018 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Xiao-Wei Ma
2016-09-01
Full Text Available This paper analyzes Chinese household CO2 emissions in 1994–2012 based on the Logarithmic Mean Divisia Index (LMDI) structural decomposition model, and discusses the relationship between household CO2 emissions and economic growth based on a decoupling indicator. The results show that household CO2 emissions grew in general over 1994–2012 and displayed an accelerated growth trend during the early 21st century. Economic growth leading to an increase in energy consumption is the main driving factor of CO2 emission growth (an increase of 1.078 Gt CO2, with a cumulative contribution rate of 55.92%), while the decline in energy intensity is the main inhibitor of CO2 emission growth (a reduction of 0.723 Gt CO2, with a cumulative contribution rate of 38.27%). Meanwhile, household CO2 emissions are in a state of weak decoupling in general. The changes in CO2 emissions caused by population and economic growth show weak decoupling and expansive decoupling states, respectively. The CO2 emission change caused by energy intensity is in a state of strong decoupling, and the change caused by energy consumption structure fluctuates between weak and strong decoupling states.
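A decoupling indicator of the kind used here is typically a Tapio-style elasticity of emission growth with respect to GDP growth. The sketch below is an assumption about the paper's indicator: the 0.8/1.2 thresholds are the conventional Tapio values and the emission/GDP figures are made up:

```python
def decoupling_state(c0, c1, g0, g1):
    """Classify decoupling between CO2 emissions (c) and GDP (g), Tapio-style."""
    dc, dg = (c1 - c0) / c0, (g1 - g0) / g0
    if dc < 0 and dg > 0:
        return "strong decoupling"          # emissions fall while GDP grows
    e = dc / dg                             # decoupling elasticity
    if dc > 0 and dg > 0 and e < 0.8:
        return "weak decoupling"            # emissions grow slower than GDP
    if dc > 0 and dg > 0 and e > 1.2:
        return "expansive negative decoupling"
    return "expansive coupling"

# 5% emission growth against 10% GDP growth -> elasticity 0.5
state = decoupling_state(c0=1.00, c1=1.05, g0=10.0, g1=11.0)
```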
Decomposition analysis of the change of energy intensity of manufacturing industries in Thailand
International Nuclear Information System (INIS)
Chontanawat, Jaruwan; Wiboonchutikula, Paitoon; Buddhivanich, Atinat
2014-01-01
The study computes and analyses the sources of the change of energy intensity of the manufacturing industries in Thailand during the period 1991–2011 using the decomposition method. The Logarithmic Mean Divisia Index is computed, and the results show that energy intensity in the period 1991–2000 increased greatly, driven by the increased energy intensity of individual industries. In the more recent period, 2000–2011, energy intensity declined a little; however, the decline was mainly due to the structural change effect, with a negligible contribution from decreased energy intensity of individual industries. The findings imply the need to balance industrial restructuring policies with efforts to reduce energy intensity for sustainable economic development. Besides, there is much room for individual industries to improve their energy efficiency. Policies on restructuring energy prices and other non-price measures should be devised to induce individual industries, particularly the highly energy-intensive ones, to reduce their energy intensity. - Highlights: • Decomposing change of energy intensity of Thai manufacturing industries, 1991–2011. • 1991–2000 energy intensity rose due to increased energy intensity of each industry. • 2000–2011 energy intensity declined due mainly to the structural change effect. • Need to balance industrial restructuring policies to reduce energy intensity
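The additive LMDI decomposition used in studies like this separates a change in energy use into activity, structure and intensity effects that sum exactly to the total change. A minimal two-sector numpy sketch with invented figures (not the Thai data):

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, the LMDI weight."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

# E_i = Q * s_i * I_i : total output, sector share, sector energy intensity
Q = [100.0, 130.0]                        # total output in years 0 and T
S = np.array([[0.6, 0.4], [0.5, 0.5]])    # sector shares, rows = years
I = np.array([[2.0, 1.0], [1.8, 1.1]])    # sector energy intensities

E = [Q[y] * S[y] * I[y] for y in (0, 1)]  # sector energy use in each year
dE_act = dE_str = dE_int = 0.0
for i in range(2):
    w = logmean(E[1][i], E[0][i])
    dE_act += w * np.log(Q[1] / Q[0])     # activity effect
    dE_str += w * np.log(S[1][i] / S[0][i])  # structural change effect
    dE_int += w * np.log(I[1][i] / I[0][i])  # intensity effect

total = sum(E[1]) - sum(E[0])             # the three effects sum to this exactly
```

The exact (residual-free) decomposition is the property that motivates LMDI over Laspeyres-type indices.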
Zokagoa, Jean-Marie; Soulaïmani, Azzeddine
2012-06-01
This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
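The POD step at the core of such a ROM can be sketched with a plain SVD of a snapshot matrix. Everything below is invented for illustration (a synthetic rank-3 "flow" standing in for SWE solver output, and an assumed 99.99% energy cutoff); it is not the authors' finite-volume model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 200, 40                   # spatial DOFs x time snapshots

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

# Synthetic snapshot matrix with exactly three dominant spatial modes
snapshots = (10 * np.outer(unit(n_dof), unit(n_snap))
             + 5 * np.outer(unit(n_dof), unit(n_snap))
             + 2 * np.outer(unit(n_dof), unit(n_snap)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% of the energy

pod_basis = U[:, :r]                           # reduced basis for Galerkin projection
recon = pod_basis @ (pod_basis.T @ snapshots)  # project snapshots onto the basis
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

In the full method, the SWEs are then Galerkin-projected onto `pod_basis`, which is where the speed-up for Monte-Carlo and sensitivity runs comes from.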
Muramatsu, K.; Furumi, S.; Hayashi, A.; Shiono, Y.; Ono, A.; Fujiwara, N.; Daigo, M.; Ochiai, F.
We have developed the "pattern decomposition method" based on linear spectral mixing of ground objects for n-dimensional satellite data. In this method, spectral response patterns for each pixel in an image are decomposed into three components using three standard spectral shape patterns determined from the image data. Applying this method to AMSS (Airborne Multi-Spectral Scanner) data, eighteen-dimensional data are successfully transformed into three-dimensional data. Using the three components, we have developed a new vegetation index in which all the multispectral data are reflected. We consider that the index should be linear in the amount of vegetation and vegetation vigor. To validate the index, its relation to vegetation types, vegetation cover ratio, and chlorophyll content of a leaf was studied using spectral reflectance data measured in the field with a spectrometer. The index was sensitive to vegetation types and vegetation vigor. This method and index are very useful for assessing vegetation vigor, classifying land cover types and monitoring vegetation changes
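The linear-mixing decomposition described above amounts to solving a least-squares problem per pixel: express each 18-band spectrum as a combination of the three standard patterns. A hedged numpy sketch with random synthetic patterns (the real method derives the standard patterns from the image itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 18                              # AMSS band count from the abstract
P = rng.random((n_bands, 3))              # three standard spectral shape patterns

true_coeffs = np.array([0.2, 0.3, 0.5])   # mixing proportions of a test pixel
pixel = P @ true_coeffs                   # synthetic 18-band pixel spectrum

# Per-pixel decomposition: 18-dimensional data -> 3 pattern coefficients
coeffs, *_ = np.linalg.lstsq(P, pixel, rcond=None)
```

A vegetation index can then be built as a linear combination of the three coefficients, which is how the abstract's index reflects all bands at once.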
International Nuclear Information System (INIS)
Fiske, David R
2006-01-01
Computing spherical harmonic decompositions is a ubiquitous technique that arises in a wide variety of disciplines and a large number of scientific codes. Because spherical harmonics are defined by integrals over spheres, however, one must perform some sort of interpolation in order to compute them when data are stored on a cubic lattice. Misner (2004 Class. Quantum Grav. 21 S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid, which has been found in real applications to be both efficient and robust to the presence of mesh refinement boundaries. At the same time, however, practical applications of the algorithm require knowledge of how the truncation errors of the algorithm depend on the various parameters in the algorithm. Based on analytic arguments and experience using the algorithm in real numerical simulations, I explore these dependences and provide a rule of thumb for choosing the parameters based on the truncation errors of the underlying data. I also demonstrate that symmetries in the spherical harmonics themselves allow for an even more efficient implementation of the algorithm than was suggested by Misner in his original paper
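As a generic illustration of what a spherical harmonic decomposition computes (direct quadrature on a sphere, not Misner's cubic-lattice algorithm), the coefficients of f(θ) = 1 + cos θ in the real Y₀₀ and Y₁₀ basis can be recovered numerically; the grid sizes below are arbitrary:

```python
import numpy as np

nth, nph = 200, 200
theta = np.linspace(0, np.pi, nth)
phi = np.linspace(0, 2 * np.pi, nph, endpoint=False)   # periodic, no duplicate point
TH = np.meshgrid(theta, phi, indexing="ij")[0]
dth, dph = theta[1] - theta[0], phi[1] - phi[0]

Y00 = np.full_like(TH, 1 / np.sqrt(4 * np.pi))         # l=0, m=0
Y10 = np.sqrt(3 / (4 * np.pi)) * np.cos(TH)            # l=1, m=0

f = 1 + np.cos(TH)
w = np.sin(TH)                                          # area element sin(theta)

def coeff(Y):
    """<f, Y> over the sphere by simple quadrature."""
    return np.sum(f * Y * w) * dth * dph

c00 = coeff(Y00)    # analytically sqrt(4*pi)
c10 = coeff(Y10)    # analytically sqrt(4*pi/3)
```

Misner's algorithm replaces this quadrature with an interpolation-free scheme suited to data on a cubic grid; the coefficients it targets are the same.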
Directory of Open Access Journals (Sweden)
Peng Ren
Full Text Available Preterm delivery increases the risk of infant mortality and morbidity, and therefore developing reliable methods for predicting its likelihood is of great importance. Previous work using uterine electromyography (EMG) recordings has shown that they may provide a promising and objective way of predicting the risk of preterm delivery. However, to date, attempts at utilizing computational approaches to achieve sufficient predictive confidence, in terms of area under the curve (AUC) values, have not achieved the high discrimination accuracy that a clinical application requires. In our study, we propose a new analytical approach for assessing the risk of preterm delivery using EMG recordings which first employs Empirical Mode Decomposition (EMD) to obtain their Intrinsic Mode Functions (IMFs). Next, the entropy values of both the instantaneous amplitude and the instantaneous frequency of the first ten IMF components are computed in order to derive ratios of these two distinct components as features. The discrimination accuracy of this approach compared with those proposed previously was then calculated using six representative classifiers. Finally, three different electrode positions were analyzed for their prediction accuracy of preterm delivery in order to establish which uterine EMG recording location provided the optimal signal. Overall, our results show a clear improvement in the prediction accuracy of preterm delivery risk compared with previous approaches, achieving an impressive maximum AUC value of 0.986 when using signals from an electrode positioned below the navel. In sum, this provides a promising new method for analyzing uterine EMG signals to permit accurate clinical assessment of preterm delivery risk.
Unger, K A; Watterson, J H
2016-10-01
The effects of decomposition microclimate on the distribution of dextromethorphan (DXM) and dextrorphan (DXT) in skeletonized remains of rats acutely exposed to DXM were examined. Animals (n = 10) received DXM (75 mg/kg, i.p.), were euthanized 30 min post-dose and immediately allowed to decompose at either Site A (shaded forest microenvironment on a grass-covered soil substrate) or Site B (rocky substrate exposed to direct sunlight, 600 m from Site A). Ambient temperature and relative humidity were automatically recorded 3 cm above rats at each site. Skeletal elements (vertebral columns, ribs, pelvic girdles, femora, tibiae, humeri and scapulae) were harvested, and analyzed using microwave assisted extraction, microplate solid phase extraction, and GC/MS. Drug levels, expressed as mass-normalized response ratios, and the ratios of DXT and DXM levels were compared across bones and between microclimate sites. No significant differences in DXT levels or metabolite/parent ratios were observed between sites or across bones. Only femoral DXM levels differed significantly between microclimate sites. For pooled data, microclimate was not observed to significantly affect analyte levels, nor the ratio of levels of DXT and DXM. These data suggest that microclimate conditions do not influence DXM and metabolite distribution in skeletal remains.
Confronting South Africa’s water challenge: A decomposition analysis of water intensity
Directory of Open Access Journals (Sweden)
Marcel Kohler
2016-12-01
Full Text Available Water is a vital natural resource, demanding careful management. It is essential for life and integral to virtually all economic activities, including energy and food production and the production of industrial outputs. The availability of clean water in sufficient quantities is not only a prerequisite for human health and well-being but the life-blood of freshwater ecosystems and the many services that these provide. Water resource intensity measures the intensity of water use in terms of volume of water per unit of value added. It is an internationally accepted environmental indicator of the pressure of economic activity on a country’s water resources and therefore a reliable indicator of sustainable economic development. The indicator is particularly useful in the allocation of water resources between sectors of the economy since, in water-stressed countries like South Africa, there is competition for water among various users, which makes it necessary to allocate water resources to economic activities that are less intensive in their use of water. This study focuses on economy-wide changes in South Africa’s water intensity using both decomposition and empirical estimation techniques in an effort to identify and understand the impact of economic activity on changes in the use of the economy’s water resources. It is hoped that this study will help inform South Africa’s water conservation and resource management policies.
Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang
2017-12-01
A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM(1,1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025, respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
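The grey prediction model used for comparison, GM(1,1) (written GPM (1, 1) above), is compact enough to sketch. This is the generic textbook formulation, not the study's implementation, and the demand series is invented:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey model: fit the series x0, forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])          # mean sequence of adjacent AGO values
    B = np.column_stack([-z1, np.ones(len(z1))])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)  # develop coefficient, grey input
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)                # inverse AGO
    return x0_hat[len(x0):]

# invented industrial water demand series (arbitrary units)
demand = [100, 108, 117, 127, 138]
forecast = gm11_forecast(demand, 3)
print(np.round(forecast, 1))
```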
International Nuclear Information System (INIS)
Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.
1997-01-01
We have earlier shown that the method of singular value decomposition (SVD) allows image reconstruction in single-photon tomography with higher precision than the classical method of filtered back-projection. Indeed, establishing an elementary response matrix that incorporates the photon attenuation phenomenon, scattering, the translation non-invariance principle and the detector response makes it possible to take into account the totality of the physical acquisition parameters. By a non-consecutive, optimized truncation of the singular values we have obtained a significant improvement in regularizing the bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition (the influence of the rotation center, the asymmetric disposition of the elementary-volume sources with respect to the detector, and the precision of the rotation angle), and secondly, those affecting the correspondence between the matrix and the space to be reconstructed (the partial-volume effect and noise propagation in the experimental model). For the parameters which introduce a spatial distortion, the alteration of the reconstruction was, as expected, comparable to that observed with classical reconstruction and proportional to the amplitude of the shift from normal. In contrast, for the effects of partial volume and noise, the study of the truncation signature revealed a variation in the optimal choice of the conserved singular values, but with no effect on the global precision of the reconstruction.
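Non-consecutive truncation of singular values can be illustrated in a few lines. A random matrix stands in for the elementary response matrix here (the actual matrix, incorporating attenuation, scattering and detector response, is not reproduced), and the retained index set is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40))   # stand-in for the ill-conditioned response matrix
U, s, Vt = np.linalg.svd(A)

def reconstruct(U, s, Vt, keep):
    """Rebuild the matrix from a chosen (possibly non-consecutive) set of singular values."""
    keep = np.asarray(keep)
    return (U[:, keep] * s[keep]) @ Vt[keep, :]

# classical truncation keeps the k largest singular values; a non-consecutive
# selection may drop noise-dominated components anywhere in the spectrum
consecutive = reconstruct(U, s, Vt, np.arange(10))
non_consecutive = reconstruct(U, s, Vt, [0, 1, 2, 4, 7, 8, 11, 15, 20, 33])
print(round(float(np.linalg.norm(A - consecutive)), 3))
```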
Directory of Open Access Journals (Sweden)
Zhixin Yang
2013-01-01
Full Text Available A reliable fault diagnostic system for a gas turbine generator system (GTGS), which is complicated and subject to many types of component faults, is essential to avoid the interruption of electricity supply. However, GTGS diagnosis faces challenges in terms of the need for simultaneous-fault diagnosis and the high cost of acquiring the exponentially increased number of simultaneous-fault vibration signals for constructing the diagnostic system. This research proposes a new diagnostic framework combining feature extraction, a pairwise-coupled probabilistic classifier, and decision threshold optimization. The feature extraction module adopts wavelet packet transform and time-domain statistical features to extract vibration signal features. Kernel principal component analysis is then applied to further reduce the redundant features. The features of single faults in a simultaneous-fault pattern are extracted and then detected using a probabilistic classifier, namely, the pairwise-coupled relevance vector machine, which is trained with single-fault patterns only. Therefore, a training dataset of simultaneous-fault patterns is unnecessary. To optimize the decision threshold, this research proposes to use the grid search method, which can ensure a global solution as compared with traditional computational intelligence techniques. Experimental results show that the proposed framework performs well for both single-fault and simultaneous-fault diagnosis and is superior to frameworks without feature extraction and pairwise coupling.
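The decision-threshold grid search can be sketched independently of the pairwise-coupled relevance vector machine. Assuming the classifier already outputs per-label fault probabilities, an exhaustive scan over a threshold grid guarantees the best value on that grid, here for label-wise F1; the probabilities and labels are invented:

```python
import numpy as np

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def grid_search_threshold(prob, truth, grid=np.linspace(0.05, 0.95, 19)):
    """Exhaustively scan a common decision threshold for label-wise F1."""
    best_t, best_f1 = None, -1.0
    for t in grid:
        pred = prob >= t
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        score = f1(tp, fp, fn)
        if score > best_f1:            # keep the first threshold attaining the best score
            best_t, best_f1 = t, score
    return best_t, best_f1

# toy probabilistic outputs for 4 samples x 3 fault labels
prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.6], [0.3, 0.1, 0.2], [0.7, 0.6, 0.1]])
truth = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 0], [1, 1, 0]], dtype=bool)
t, score = grid_search_threshold(prob, truth)
print(round(float(t), 2), round(score, 3))
```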
Investigating hydrogel dosimeter decomposition by chemical methods
International Nuclear Information System (INIS)
Jordan, Kevin
2015-01-01
The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products
Zhang, Zi-Long; Chen, Xing-Peng; Yang, Jing; Xue, Bing; Li, Yong-Jin
2010-02-01
Based on the ideology of macro environmental economics, a function of environmental pressure represented by pollutant emission was built, and the relative importance of the driving factors in the dynamic changes of the relationships between economic growth and environmental pressure in Gansu Province in 1990-2005 was analyzed using a structural decomposition analysis (SDA) model combined with the 'refined Laspeyres' method. In the study period, the environmental pressure in the Province was mainly caused by the emission of waste gases and solids in the process of economic growth, and showed a rapidly increasing trend at the late stage of the period. The population factor had less impact on the increase of this environmental pressure, while the economic growth factor had an obvious impact on it. Technological progress mitigated, but could not offset, the impact of the economic growth factor, and the impacts of the economic growth and technological factors on the environmental pressure differed with the kind of pollutant.
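The 'refined Laspeyres' idea, distributing each interaction term equally among the factors that create it, can be shown for a three-factor identity E = P·g·e (population × affluence × technology). The figures below are invented, but the decomposition is exact: the three effects sum to the total change.

```python
def refined_laspeyres_3(P0, g0, e0, P1, g1, e1):
    """Decompose the change in E = P*g*e into three effects; interaction terms are
    split equally among the factors that create them (refined Laspeyres rule)."""
    dP, dg, de = P1 - P0, g1 - g0, e1 - e0
    eff_P = dP * g0 * e0 + dP * (dg * e0 + g0 * de) / 2 + dP * dg * de / 3
    eff_g = dg * P0 * e0 + dg * (dP * e0 + P0 * de) / 2 + dP * dg * de / 3
    eff_e = de * P0 * g0 + de * (dP * g0 + P0 * dg) / 2 + dP * dg * de / 3
    return eff_P, eff_g, eff_e

# invented figures: population, per-capita output, emissions per unit of output
effects = refined_laspeyres_3(25.0, 4.0, 1.2, 26.0, 6.5, 0.9)
total = 26.0 * 6.5 * 0.9 - 25.0 * 4.0 * 1.2
print([round(x, 3) for x in effects], round(total, 3))  # effects sum to the total
```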
International Nuclear Information System (INIS)
Cao, Guangxi; Xu, Wei
2016-01-01
Based on daily price data of carbon emission rights in the futures markets of Certified Emission Reduction (CER) and European Union Allowances (EUA), we analyze the multiscale characteristics of the markets by using empirical mode decomposition (EMD) and multifractal detrended fluctuation analysis (MFDFA) based on EMD. The complexity of the daily returns of the CER and EUA futures markets changes with multiple time scales and multilayered features. The two markets also exhibit clear multifractal characteristics and long-range correlation. We employ shuffle and surrogate approaches to analyze the origins of multifractality. The long-range correlations and fat-tail distributions significantly contribute to multifractality. Furthermore, we analyze the influence of high returns on multifractality by using a threshold method. The multifractality of the two futures markets is related to the presence of high values of returns in the price series.
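The shuffle test for the origin of multifractality destroys temporal correlations while preserving the return distribution. A minimal proxy for the idea, using the lag autocorrelation of absolute returns on an invented series with volatility clustering (not MFDFA itself):

```python
import numpy as np

def acf_abs(x, lag):
    """Lag autocorrelation of |x|: a crude proxy for long-range volatility correlation."""
    a = np.abs(x) - np.abs(x).mean()
    return float(np.sum(a[:-lag] * a[lag:]) / np.sum(a * a))

rng = np.random.default_rng(42)
n = 4000
# invented returns with volatility clustering: iid noise times a slow envelope
envelope = 1 + 3 * np.sin(np.linspace(0, 12 * np.pi, n)) ** 2
returns = rng.normal(size=n) * envelope

original = acf_abs(returns, lag=10)
shuffled = acf_abs(rng.permutation(returns), lag=10)
print(round(original, 3), round(shuffled, 3))  # shuffling collapses the correlation
```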
Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang
2018-01-01
In the non-intrusive load monitoring mode, load decomposition can reflect the running state of each load, which helps users reduce unnecessary energy costs. In combination with the demand-side management measure of time-of-use (TOU) pricing, a residential load influence analysis method for TOU pricing based on non-intrusive load monitoring data is proposed in this paper. Relying on current-signal classification of residential loads, the types of equipment in use and their self-elasticity and cross-elasticity over different time periods can be obtained. Tests on actual household load data show that, under the impact of TOU pricing, the operation of some equipment is shifted in time, electricity use during peak-price periods is reduced, and electricity use during low-price periods increases, with a certain regularity.
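The self- and cross-elasticities mentioned above reduce to ratios of relative changes. A sketch with invented TOU figures (a 50% peak-price rise, peak usage falling, off-peak usage rising); this is the generic elasticity definition, not the paper's estimation procedure:

```python
def elasticity(q_before, q_after, p_before, p_after):
    """Elasticity of demand in one period w.r.t. a price change (possibly in another period)."""
    dq = (q_after - q_before) / q_before
    dp = (p_after - p_before) / p_before
    return dq / dp

# invented TOU figures: peak price rises 50%; peak usage falls, off-peak usage rises
peak_self = elasticity(q_before=10.0, q_after=8.0, p_before=1.0, p_after=1.5)
# cross-elasticity: off-peak usage responding to the *peak* price change
off_peak_cross = elasticity(q_before=6.0, q_after=7.2, p_before=1.0, p_after=1.5)
print(round(peak_self, 3), round(off_peak_cross, 3))
```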
International Nuclear Information System (INIS)
Wu, Yunfeng; Yang, Shanshan; Zheng, Fang; Cai, Suxian; Lu, Meng; Wu, Meihong
2014-01-01
High-resolution knee joint vibroarthrographic (VAG) signals can help physicians accurately evaluate the pathological condition of a degenerative knee joint, in order to prevent unnecessary exploratory surgery. Artifact cancellation is vital to preserve the quality of VAG signals prior to further computer-aided analysis. This paper describes a novel method that effectively utilizes ensemble empirical mode decomposition (EEMD) and detrended fluctuation analysis (DFA) algorithms for the removal of baseline wander and white noise in VAG signal processing. The EEMD method first successively decomposes the raw VAG signal into a set of intrinsic mode functions (IMFs) ranging from fast to slow oscillations, until the monotonic baseline wander remains in the last residue. Then, the DFA algorithm is applied to compute the fractal scaling index parameter for each IMF, in order to distinguish the anti-correlated from the long-range correlated IMFs, which assists in reconstructing the artifact-reduced VAG signals. Our experimental results showed that the combination of EEMD and DFA algorithms was able to provide averaged signal-to-noise ratio (SNR) values of 20.52 dB (standard deviation: 1.14 dB) and 20.87 dB (standard deviation: 1.89 dB) for 45 normal signals in healthy subjects and 20 pathological signals in symptomatic patients, respectively. The combination of EEMD and DFA algorithms can ameliorate the quality of VAG signals with great SNR improvements over the raw signal, and the results were also superior to those achieved by wavelet matching pursuit decomposition and the time-delay neural filter. (paper)
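The DFA scaling exponent used to classify IMFs can be sketched as follows (order-1 detrending; the scale list and series length are arbitrary choices, not the paper's settings). White noise is expected to give α ≈ 0.5 and integrated (brown) noise α ≈ 1.5, bracketing the anti-correlated vs long-range-correlated decision:

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segments = y[:m * n].reshape(m, n)
        t = np.arange(n)
        ms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)     # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))       # fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return float(slope)

rng = np.random.default_rng(7)
white = rng.normal(size=4096)                # expected alpha near 0.5
brown = np.cumsum(rng.normal(size=4096))     # expected alpha near 1.5
print(round(dfa_alpha(white), 2), round(dfa_alpha(brown), 2))
```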
Directory of Open Access Journals (Sweden)
M. Imran
2017-09-01
Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, whereas the human visual system model and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can carry more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction, whereas, for capacity, both singular values and vectors are involved in the watermark embedding process. As a result, four contradictory requirements - imperceptibility, robustness, security and capacity - are achieved, as suggested by the results. Both subjective and objective methods are employed to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis in terms of imperceptibility, peak signal-to-noise ratio, the structural similarity index, visual information fidelity and normalized color difference are used, whereas, for objective analysis in terms of robustness, normalized correlation, bit error rate, normalized Hamming distance and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance, as suggested by the results.
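A minimal sketch of singular-value embedding in the classic SVD-watermarking style, not the proposed scheme itself: a random matrix stands in for one decorrelated channel, and a fixed scaling factor replaces the adaptive HVS/fuzzy choice.

```python
import numpy as np

rng = np.random.default_rng(1)
host = rng.uniform(0, 255, size=(64, 64))    # stand-in for one decorrelated channel
watermark = rng.uniform(0, 1, size=(64, 64))
alpha = 10.0                                  # fixed scaling factor (adaptive in the paper)

U, S, Vt = np.linalg.svd(host)
# embed the watermark into the singular-value matrix, then rebuild the channel
Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark)
watermarked = U @ np.diag(Sw) @ Vt

# extraction: Uw, Vtw and the original S act as the secret key material
Dw = Uw @ np.diag(np.linalg.svd(watermarked)[1]) @ Vtw
recovered = (Dw - np.diag(S)) / alpha
print(float(np.max(np.abs(recovered - watermark))))
```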
Dynamics of pairwise entanglement between two Tavis-Cummings atoms
International Nuclear Information System (INIS)
Guo Jinliang; Song Heshan
2008-01-01
We investigate the time evolution of pairwise entanglement between two Tavis-Cummings atoms for various entangled initial states, including pure and mixed states. We find that the phenomenon of entanglement sudden death behaves distinctly in the evolution of entanglement for different initial states. Notably, the proportion of the excited state in the initial state is responsible for the sudden death of entanglement, and the degree of this effect also depends on the initial state.
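Pairwise entanglement in such studies is typically quantified by the Wootters concurrence, which is straightforward to compute for any two-qubit density matrix (this is the standard definition, shown on a Bell state rather than the Tavis-Cummings dynamics):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    K = np.kron(sy, sy)                       # spin-flip operator (up to conjugation)
    R = rho @ K @ rho.conj() @ K
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# maximally entangled Bell state (|01> + |10>)/sqrt(2): concurrence 1
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
bell = np.outer(psi, psi.conj())
print(round(concurrence(bell), 6))
```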
Directory of Open Access Journals (Sweden)
Amelia Maika
Full Text Available BACKGROUND: Measuring social inequalities in health is common; however, research examining inequalities in child cognitive function is more limited. We investigated household expenditure-related inequality in children's cognitive function in Indonesia in 2000 and 2007, the contributors to inequality in both time periods, and changes in the contributors to cognitive function inequalities between the periods. METHODS: Data from the 2000 and 2007 rounds of the Indonesian Family Life Survey (IFLS) were used. Study participants were children aged 7-14 years (n = 6179 and n = 6680 in 2000 and 2007, respectively). The relative concentration index (RCI) was used to measure the magnitude of inequality. The contribution of various contributors to inequality was estimated by decomposing the concentration index in 2000 and 2007. Oaxaca-type decomposition was used to estimate changes in contributors to inequality between 2000 and 2007. RESULTS: Expenditure inequality decreased by 45% from an RCI = 0.29 (95% CI 0.22 to 0.36) in 2000 to 0.16 (95% CI 0.13 to 0.20) in 2007, but the burden of poorer cognitive function was higher among the disadvantaged in both years. The largest contributors to inequality in child cognitive function were inequalities in per capita expenditure, use of improved sanitation and maternal high school attendance. Changes in maternal high school participation (27%), use of improved sanitation (25%) and per capita expenditures (18%) were largely responsible for the decreasing inequality in children's cognitive function between 2000 and 2007. CONCLUSIONS: Government policy to increase basic education coverage for women along with economic growth may have influenced gains in children's cognitive function and reductions in inequalities in Indonesia.
Maika, Amelia; Mittinty, Murthy N; Brinkman, Sally; Harper, Sam; Satriawan, Elan; Lynch, John W
2013-01-01
Measuring social inequalities in health is common; however, research examining inequalities in child cognitive function is more limited. We investigated household expenditure-related inequality in children's cognitive function in Indonesia in 2000 and 2007, the contributors to inequality in both time periods, and changes in the contributors to cognitive function inequalities between the periods. Data from the 2000 and 2007 round of the Indonesian Family Life Survey (IFLS) were used. Study participants were children aged 7-14 years (n = 6179 and n = 6680 in 2000 and 2007, respectively). The relative concentration index (RCI) was used to measure the magnitude of inequality. Contribution of various contributors to inequality was estimated by decomposing the concentration index in 2000 and 2007. Oaxaca-type decomposition was used to estimate changes in contributors to inequality between 2000 and 2007. Expenditure inequality decreased by 45% from an RCI = 0.29 (95% CI 0.22 to 0.36) in 2000 to 0.16 (95% CI 0.13 to 0.20) in 2007 but the burden of poorer cognitive function was higher among the disadvantaged in both years. The largest contributors to inequality in child cognitive function were inequalities in per capita expenditure, use of improved sanitation and maternal high school attendance. Changes in maternal high school participation (27%), use of improved sanitation (25%) and per capita expenditures (18%) were largely responsible for the decreasing inequality in children's cognitive function between 2000 and 2007. Government policy to increase basic education coverage for women along with economic growth may have influenced gains in children's cognitive function and reductions in inequalities in Indonesia.
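The concentration index at the heart of this analysis can be computed directly: rank individuals from poorest to richest, then take twice the covariance between health (here, cognitive score) and fractional rank, divided by mean health. The toy data below are invented, not IFLS values:

```python
import numpy as np

def concentration_index(health, ses):
    """Concentration index: 2*cov(health, fractional SES rank) / mean(health)."""
    order = np.argsort(ses)                  # poorest to richest
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n      # fractional rank
    return 2 * np.cov(h, r, bias=True)[0, 1] / h.mean()

# invented data: cognitive scores rising with household expenditure
ses = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
score = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
ci = concentration_index(score, ses)
print(round(ci, 4))   # positive: better scores concentrated among the better-off
```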
Directory of Open Access Journals (Sweden)
Stella Babalola
2018-01-01
Full Text Available Background: Northern Nigeria has some of the worst reproductive health indicators worldwide. Conspicuous North-South variations exist in contraceptive use; not much is known about the drivers of contraceptive use disparities in the North compared to the South. Objective: In this study, we examine the relative weights of the factors that contribute to this North-South gap in contraceptive prevalence. Methods: Using the women's 2013 Demographic Health Survey dataset, we applied a nonlinear decomposition technique to determine the contribution of sociodemographic and socioeconomic characteristics, conjugal relationship dynamics, intimate partner violence, ideational variables, and Islamic culture to the North-South disparities in contraceptive use. Results: There was a gap of 12.4 percentage points in contraceptive prevalence between the north and south of Nigeria (5.2% vs. 17.6%). The largest contributors to the gap were ideational characteristics (explaining 42.0% of the gap) and socioeconomic profiles (explaining 42.6%). Patterns of conjugal relationship dynamics (11.1%), sociodemographic characteristics (-11.0%), Islamic religious culture (7.6%), and exposure to family planning messaging (6.1%) were also significant contributors. Conclusions: Effective interventions to increase contraceptive use in northern Nigeria should aim at addressing socioeconomic disadvantage in the North, impacting ideational characteristics and specifically targeting poor women and those with low levels of education. Working with Islamic religious leaders is also critical to bridging the gap. Contribution: This paper broadens the knowledge on the determinants of contraceptive use in Nigeria by identifying contextual factors that operate differently in the North compared to the South.
Spatial Harmonic Decomposition as a tool for unsteady flow phenomena analysis
International Nuclear Information System (INIS)
Duparchy, A; Guillozet, J; De Colombel, T; Bornard, L
2014-01-01
Hydropower is already the largest single renewable electricity source today, but its further development will face new deployment constraints, such as large-scale projects in emerging economies and the growth of intermittent renewable energy technologies. The potential role of hydropower as a grid stabilizer leads to operating hydro power plants in 'off-design' zones. As a result, new methods of analyzing the associated unsteady phenomena are needed to improve the design of hydraulic turbines. The key idea of the development is to compute a spatial description of a phenomenon by combining several sensor signals. Spatial harmonic decomposition (SHD) extends the concept of so-called synchronous and asynchronous pulsations by projecting sensor signals onto a linearly independent set of a modal scheme. This mathematical approach is very generic, as it can be applied to any linear distribution of a scalar quantity defined on a closed curve. After a mathematical description of SHD, this paper discusses the impact of instrumentation and provides tools to understand SHD signals. Then, as an example of a practical application, SHD is applied to model test measurements in order to capture and describe dynamic pressure fields. In particular, the spatial description of the phenomena provides new tools to separate the part of the pressure fluctuations that contributes to output power instability or mechanical stresses. The study of machine stability in the partial load operating range in turbine mode and the comparison between the gap pressure field and radial thrust behavior during turbine brake operation are both relevant illustrations of SHD's contribution.
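The core of SHD, projecting simultaneous sensor readings on a closed curve onto spatial modes, can be illustrated with a spatial Fourier decomposition. This is a simplified stand-in for the paper's modal scheme, with eight evenly spaced sensors and an invented pressure snapshot; mode 0 recovers the synchronous (spatially uniform) part:

```python
import numpy as np

def spatial_harmonics(p, n_modes=3):
    """Project simultaneous samples from sensors evenly spaced on a closed
    curve onto spatial Fourier modes (mode 0 = synchronous component)."""
    N = p.shape[-1]                          # number of sensors
    coeffs = np.fft.fft(p, axis=-1) / N      # complex amplitude per spatial mode
    return coeffs[..., :n_modes]

# invented snapshot: synchronous pulsation plus a mode-2 spatial pattern
theta = 2 * np.pi * np.arange(8) / 8         # 8 sensors around the circumference
snapshot = 1.5 + 0.7 * np.cos(2 * theta + 0.4)
c = spatial_harmonics(snapshot)
print(np.round(np.abs(c), 3))   # mode 0: 1.5; mode 2: 0.35 (half the cosine amplitude)
```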
Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels.
Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Xiong, Jiechao; Gong, Shaogang; Wang, Yizhou; Yao, Yuan
2016-03-01
The problem of estimating subjective visual properties from image and video has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors. They thus require a large amount of pairwise labels to be collected. More importantly as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling both the outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
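Integrating local pairwise labels into a globally consistent ranking can be illustrated with a least-squares formulation in the spirit of HodgeRank; this is not the paper's full robust model. Each label i beats j becomes an equation s_i - s_j ≈ 1, and a label that contradicts the consensus stands out as the largest residual:

```python
import numpy as np

def rank_from_pairs(n_items, pairs):
    """Least-squares scores from pairwise labels: each (i, j) encodes s_i - s_j ≈ 1.
    A gauge row forces the scores to sum to zero."""
    A = np.zeros((len(pairs) + 1, n_items))
    b = np.ones(len(pairs) + 1)
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = 1.0, -1.0
    A[-1, :], b[-1] = 1.0, 0.0
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, A[:-1] @ s - b[:-1]            # scores and per-label residuals

# invented crowdsourced labels over 4 items; the last label contradicts the rest
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3), (3, 0)]
scores, res = rank_from_pairs(4, pairs)
print(np.argsort(-scores))          # global ranking, best first
print(int(np.argmax(np.abs(res))))  # the outlier label has the largest residual
```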
Energy Technology Data Exchange (ETDEWEB)
Hoekstra, R.
2003-10-01
Economic processes generate a variety of material flows, which cause resource problems through the depletion of natural resources and environmental issues due to the emission of pollutants. This thesis presents an analytical method to study the relationship between the monetary economy and the 'physical economy'. In particular, this method can assess the impact of structural change in the economy on physical throughput. The starting point for the approach is the development of an elaborate version of the physical input-output table (PIOT), which acts as an economic-environmental accounting framework for the physical economy. In the empirical application, hybrid-unit input-output (I/O) tables, which combine physical and monetary information, are constructed for iron and steel, and plastic products for the Netherlands for the years 1990 and 1997. The impact of structural change on material flows is analyzed using Structural Decomposition Analysis (SDA), which specifies effects such as sectoral shifts, technological change, and alterations in consumer spending and international trade patterns. The study thoroughly reviews the application of SDA to environmental issues, compares the method with other decomposition methods, and develops new mathematical specifications. An SDA is performed using the hybrid-unit input-output tables for the Netherlands. The results are subsequently used in novel forecasting and backcasting scenario analyses for the period 1997-2030. The results show that dematerialization of iron and steel, and plastics, has generally not occurred in the recent past (1990-1997), and will not occur, under a wide variety of scenario assumptions, in the future (1997-2030).
International Nuclear Information System (INIS)
Hoekstra, R.
2003-01-01
Economic processes generate a variety of material flows, which cause resource problems through the depletion of natural resources and environmental issues due to the emission of pollutants. This thesis presents an analytical method to study the relationship between the monetary economy and the 'physical economy'. In particular, this method can assess the impact of structural change in the economy on physical throughput. The starting point for the approach is the development of an elaborate version of the physical input-output table (PIOT), which acts as an economic-environmental accounting framework for the physical economy. In the empirical application, hybrid-unit input-output (I/O) tables, which combine physical and monetary information, are constructed for iron and steel, and plastic products for the Netherlands for the years 1990 and 1997. The impact of structural change on material flows is analyzed using Structural Decomposition Analysis (SDA), which specifies effects such as sectoral shifts, technological change, and alterations in consumer spending and international trade patterns. The study thoroughly reviews the application of SDA to environmental issues, compares the method with other decomposition methods, and develops new mathematical specifications. An SDA is performed using the hybrid-unit input-output tables for the Netherlands. The results are subsequently used in novel forecasting and backcasting scenario analyses for the period 1997-2030. The results show that dematerialization of iron and steel, and plastics, has generally not occurred in the recent past (1990-1997), and will not occur, under a wide variety of scenario assumptions, in the future (1997-2030).
The Contribution of Ageing to Hospitalisation Days in Hong Kong: A Decomposition Analysis.
Kwok, Chi Leung; Lee, Carmen Km; Lo, William Tl; Yip, Paul Sf
2016-08-17
Ageing has become a serious challenge in Hong Kong and globally. It has serious implications for health expenditure, which accounts for nearly 20% of overall government expenditure. Here we assess the contribution of ageing and related factors to hospitalisation days in Hong Kong. We used hospital discharge data from all publicly funded hospitals in Hong Kong between 2001 and 2012. A decomposition method was used to examine the factors that account for the change of total hospitalisation days during the two periods, 2001-2004 and 2004-2012. The five factors include two demographic factors - population size and age-gender composition - and three service components - hospital discharge rate, number of discharge episodes per patient, and average length of stay (LOS) - which are all measured at age-gender group level. In order to assess the health cost burden in the future, we also project the total hospitalisation days up to 2041, for a range of scenarios. During the decreasing period of hospitalisation days (2001-2004), the reduction of LOS contributed to about 60% of the reduction. For the period of increase (2004-2012), ageing is associated with an increase in total hospitalisation days of 1.03 million, followed by an increase in hospital discharge rates (0.67 million), an increase in the number of discharge episodes per patient (0.62 million), and population growth (0.43 million). The reduction of LOS has greatly offset these increases (-2.19 million days), and has become one of the most significant factors in containing the increasing number of hospitalisation days. Projected increases in total hospitalisation days under different scenarios have highlighted that the contribution of ageing will become even more prominent after 2022. Hong Kong is facing increasing healthcare burden caused by the rapid increase in demand for inpatient services due to ageing. Better management of inpatient services with the aim of increasing efficiency and reducing LOS, avoidable
International Nuclear Information System (INIS)
Stamatopoulos, V G; Karras, D A; Mertzios, B G
2009-01-01
An efficient modification of singular value decomposition (SVD) is proposed in this paper, aiming at denoising and, more importantly, at quantifying more accurately the statistically independent spectra of metabolite sources in magnetic resonance spectroscopy (MRS). Although SVD is known in MRS applications and several efficient algorithms exist for estimating the SVD summation terms in which the raw MRS data are analyzed, such an analysis would benefit from techniques able to estimate statistically independent spectra. SVD is known to separate signal and noise subspaces, but it assumes orthogonal properties for the components comprising the signal subspace, which is not always the case and might impose heavy constraints in the MRS case. A much more relaxed constraint is to assume statistically independent components. Therefore, a modification of the main methodology incorporating techniques for calculating the assumed statistically independent spectra is proposed, by applying SVD to the MRS spectrogram obtained through the short-time Fourier transform (STFT). This approach is based on combining SVD on the STFT spectrogram followed by an iterative application of independent component analysis (ICA). Moreover, it is shown that the proposed methodology combined with a regression analysis leads to improved quantification of the MRS signals. An experimental study based on synthetic MRS signals has been conducted to evaluate the proposed methodologies. The results obtained are discussed and shown to be quite promising.
International Nuclear Information System (INIS)
Chang, C C; Hsiao, T C; Kao, S C; Hsu, H Y
2014-01-01
Arterial blood pressure (ABP) is an important indicator of cardiovascular circulation and presents various intrinsic regulations. It has been found that the intrinsic characteristics of blood vessels can be assessed quantitatively by ABP analysis (called reflection wave analysis (RWA)), but conventional RWA is insufficient for assessment during non-stationary conditions, such as the Valsalva maneuver. Recently, a novel adaptive method called empirical mode decomposition (EMD) was proposed for non-stationary data analysis. This study proposed a RWA algorithm based on EMD (EMD-RWA). A total of 51 subjects participated in this study, including 39 healthy subjects and 12 patients with autonomic nervous system (ANS) dysfunction. The results showed that EMD-RWA provided a reliable estimation of reflection time in baseline and head-up tilt (HUT). Moreover, the estimated reflection time is able to assess the ANS function non-invasively, both in normal, healthy subjects and in the patients with ANS dysfunction. EMD-RWA provides a new approach for reflection time estimation in non-stationary conditions, and also helps with non-invasive ANS assessment. (paper)
Pujilestari, Cahya Utamie; Nyström, Lennarth; Norberg, Margareta; Weinehall, Lars; Hakimi, Mohammad; Ng, Nawi
2017-12-12
Obesity has become a global health challenge as its prevalence has increased globally in recent decades. Studies in high-income countries have shown that obesity is more prevalent among the poor. In contrast, obesity is more prevalent among the rich in low- and middle-income countries, hence requiring different focal points to design public health policies in the latter contexts. We examined socioeconomic inequalities in abdominal obesity in Purworejo District, Central Java, Indonesia and identified factors contributing to the inequalities. We utilised data from the WHO-INDEPTH Study on global AGEing and adult health (WHO-INDEPTH SAGE) conducted in the Purworejo Health and Demographic Surveillance System (HDSS) in Purworejo District, Indonesia in 2010. The study included 14,235 individuals aged 50 years and older. Inequalities in abdominal obesity across wealth groups were assessed separately for men and women using concentration indexes. Decomposition analysis was conducted to assess the determinants of socioeconomic inequalities in abdominal obesity. Abdominal obesity was five-fold more prevalent among women than in men (30% vs. 6.1%; p < 0.001). The concentration index (CI) analysis showed that socioeconomic inequalities in abdominal obesity were less prominent among women (CI = 0.26, SE = 0.02, p < 0.001) compared to men (CI = 0.49, SE = 0.04, p < 0.001). Decomposition analysis showed that physical labour was the major determinant of socioeconomic inequalities in abdominal obesity among men, explaining 47% of the inequalities, followed by poor socioeconomic status (31%), ≤ 6 years of education (15%) and current smoking (11%). The three major determinants of socioeconomic inequalities in abdominal obesity among women were poor socio-economic status (48%), physical labour (17%) and no formal education (16%). Abdominal obesity was more prevalent among older women in a rural Indonesian setting. Socioeconomic inequality in
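The concentration index reported above can be computed as twice the covariance between the health outcome and the fractional wealth rank, divided by the mean outcome (the Wagstaff convention); the data below are invented for illustration, not taken from the study.

```python
def concentration_index(outcome, wealth):
    """Concentration index: 2*cov(outcome, fractional wealth rank) / mean.
    Positive values mean the outcome is concentrated among the rich."""
    n = len(outcome)
    order = sorted(range(n), key=lambda i: wealth[i])
    rank = [0.0] * n
    for pos, i in enumerate(order):
        rank[i] = (pos + 0.5) / n          # fractional rank in [0, 1]
    mu = sum(outcome) / n
    rbar = sum(rank) / n                   # 0.5 by construction
    cov = sum((outcome[i] - mu) * (rank[i] - rbar) for i in range(n)) / n
    return 2 * cov / mu

# Toy data: an obesity indicator concentrated in the wealthiest third.
wealth = list(range(30))
equal = [1] * 30
pro_rich = [1 if w >= 20 else 0 for w in wealth]
print(concentration_index(equal, wealth))     # prints 0.0: no inequality
print(concentration_index(pro_rich, wealth))  # positive: pro-rich inequality
```

Decomposition analysis then attributes this index to covariates (education, occupation, smoking) via regression, which is beyond this sketch.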
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2013-01-01
We consider a class of convex feasibility problems where the constraints that describe the feasible set are loosely coupled. These problems arise in robust stability analysis of large, weakly interconnected uncertain systems. To facilitate distributed implementation of robust stability analysis o...
Directory of Open Access Journals (Sweden)
Hongtao Gao
2012-01-01
Full Text Available N, Cd-codoped TiO2 has been synthesized by a thermal decomposition method. The products were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-visible diffuse reflectance spectroscopy (DRS), X-ray photoelectron spectroscopy (XPS), and Brunauer-Emmett-Teller (BET) specific surface area analysis, respectively. The products showed good performance in the photocatalytic degradation of methyl orange. The effect of the incorporation of N and Cd on the electronic structure and optical properties of TiO2 was studied by first-principles calculations on the basis of density functional theory (DFT). The impurity states, introduced by N 2p or Cd 5d, lay between the valence band and the conduction band. Due to the dopants, the band gap of N, Cd-codoped TiO2 became narrower, and the electronic transition from the valence band to the conduction band became easier, which could account for the observed photocatalytic performance of N, Cd-codoped TiO2. The theoretical analysis may provide a useful reference for the synthesis of element-doped TiO2.
Directory of Open Access Journals (Sweden)
Didier G. Leibovici
2010-10-01
Full Text Available The purpose of this paper is to describe the R package PTAk and how the spatio-temporal context can be taken into account in the analyses. Essentially, PTAk is a multiway, multidimensional method to decompose a multi-entry data array, seen mathematically as a tensor of any order. The PTAk-modes method proposes a way of generalizing the SVD (singular value decomposition), as do some other well-known methods included in the R package, such as PARAFAC or CANDECOMP and the PCAn-modes or Tucker-n model. The example datasets cover different domains with various spatio-temporal characteristics and issues: (i) medical imaging in neuropsychology with a functional MRI (magnetic resonance imaging) study, (ii) pharmaceutical research with a pharmacodynamic study with EEG (electro-encephalographic) data for a central nervous system (CNS) drug, and (iii) geographical information systems (GIS) with a climatic dataset that characterizes arid and semi-arid variations. All the methods implemented in the R package PTAk also support non-identity metrics, as well as penalizations during the optimization process. As a result of these flexibilities, together with pre-processing facilities, PTAk constitutes a framework for devising extensions of multidimensional methods such as correspondence analysis, discriminant analysis, and multidimensional scaling, also enabling spatio-temporal constraints.
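The kind of SVD generalization PTAk implements can be illustrated with a truncated higher-order SVD (a Tucker-n style decomposition) written directly in numpy. This is a sketch of the idea only, not the PTAk algorithm itself, which optimizes principal rank-one tensors differently.

```python
import numpy as np

def mode_product(T, M, mode):
    """Mode-n product: multiply matrix M onto the given mode of tensor T."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one SVD per unfolding, then a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

# Space x time x variable tensor with exact multilinear rank (2, 2, 2).
rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2, 2))
for mode, dim in enumerate((5, 6, 7)):
    Q, _ = np.linalg.qr(rng.standard_normal((dim, 2)))
    T = mode_product(T, Q, mode)

core, factors = hosvd(T, (2, 2, 2))
recon = core
for mode, U in enumerate(factors):
    recon = mode_product(recon, U, mode)
# recon reproduces T: the small core captures the whole tensor.
```

For a matrix (order-2 tensor) this reduces exactly to the truncated SVD, which is the sense in which such methods "generalize SVD".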
International Nuclear Information System (INIS)
Jacob, Richard E; Carson, James P
2014-01-01
Assessing heterogeneity in lung images can be an important diagnosis tool. We present a novel and objective method for assessing lung damage in a rat model of emphysema. We combined a three-dimensional (3D) computer graphics method–octree decomposition–with a geostatistics-based approach for assessing spatial relationships–the variogram–to evaluate disease in 3D computed tomography (CT) image volumes. Male, Sprague-Dawley rats were dosed intratracheally with saline (control), or with elastase dissolved in saline to either the whole lung (for mild, global disease) or a single lobe (for severe, local disease). Gated 3D micro-CT images were acquired on the lungs of all rats at end expiration. Images were masked, and octree decomposition was performed on the images to reduce the lungs to homogeneous blocks of 2 × 2 × 2, 4 × 4 × 4, and 8 × 8 × 8 voxels. To focus on lung parenchyma, small blocks were ignored because they primarily defined boundaries and vascular features, and the spatial variance between all pairs of the 8 × 8 × 8 blocks was calculated as the square of the difference of signal intensity. Variograms–graphs of distance vs. variance–were constructed, and results of a least-squares-fit were compared. The robustness of the approach was tested on images prepared with various filtering protocols. Statistical assessment of the similarity of the three control rats was made with a Kruskal-Wallis rank sum test. A Mann-Whitney-Wilcoxon rank sum test was used to measure statistical distinction between individuals. For comparison with the variogram results, the coefficient of variation and the emphysema index were also calculated for all rats. Variogram analysis showed that the control rats were statistically indistinct (p = 0.12), but there were significant differences between control, mild global disease, and severe local disease groups (p < 0.0001). A heterogeneity index was calculated to describe the difference of an individual variogram from
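An empirical variogram of block values can be computed directly from pairwise distances. The sketch below uses a synthetic 3D grid of "block" values in place of the octree output, and bins half the squared differences by separation distance, which is the construction described above.

```python
import numpy as np

def variogram(coords, values, bin_edges):
    """Empirical variogram: half the mean squared difference of values,
    binned by pair separation distance."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = (d >= bin_edges[b]) & (d < bin_edges[b + 1])
        if mask.any():
            gamma[b] = sq[mask].mean()
    return gamma

# Block centres on a grid; values with short-range spatial correlation.
rng = np.random.default_rng(2)
grid = np.array([(x, y, z) for x in range(8) for y in range(8) for z in range(4)])
field = np.sin(grid[:, 0] / 2.0) + 0.1 * rng.standard_normal(len(grid))
edges = np.arange(0, 10, 1.0)
g = variogram(grid, field, edges)
# Variance grows with separation before levelling off (the "sill").
```

Fitting a model curve to such a variogram, as in the paper, then summarizes heterogeneity in a few parameters that can be compared across animals.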
Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer
2018-01-01
Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. Wiener-Granger causality on modes of the rotor domains confirms their
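A minimal form of the Wiener-Granger idea compares the residual variance of predicting one signal from its own past against predicting it from its own past plus the other signal's past. The sketch below uses a toy pair of time series (not SVD modes of optical maps) and a simple lag-2 least-squares model.

```python
import numpy as np

def granger_strength(x, y, p=2):
    """log(restricted/full residual variance) for predicting y:
    values well above zero mean x's past helps predict y."""
    rows_r, rows_f, target = [], [], []
    for t in range(p, len(y)):
        rows_r.append(np.r_[1.0, y[t - p:t]])
        rows_f.append(np.r_[1.0, y[t - p:t], x[t - p:t]])
        target.append(y[t])
    target = np.asarray(target)

    def resid_var(rows):
        A = np.asarray(rows)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.mean((target - A @ coef) ** 2)

    return np.log(resid_var(rows_r) / resid_var(rows_f))

rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal(n)        # "rotor mode": the driver
y = np.zeros(n)                   # "fibrillatory mode": driven by x
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.standard_normal()

print(granger_strength(x, y) > granger_strength(y, x))  # prints True
```

Applied to the SVD mode time courses, the asymmetry of this measure is what identifies the rotor domain as the dominant source of information flow.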
Jansen, M.H.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
We present a continuous wavelet analysis of count data with time-varying intensities. The objective is to extract intervals with significant intensities from background intervals. This includes the precise starting point of the significant interval, its exact duration and the (average) level of
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
also consistent. In the subsequent analysis, the normal direction at each node is computed using the cross product of the connected element edges...For this mode in particular, the revised RFP mode shape is much cleaner than the original SVD mode shape. The algorithm for computing the revised
Decomposition of multivariate phenotypic means in multigroup genetic covariance structure analysis
Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.
1992-01-01
Uses D. Sorbom's (1974) method to study differences in latent means in multivariate twin data. By restricting the analysis to a comparison between groups, the results pertain only to the additive contributions of common genetic and environmental factors to the deviation of the group means from what
International Nuclear Information System (INIS)
Kuo, V.
2016-01-01
Full text: The European Qualifications Framework categorizes learning objectives into three qualifiers, “knowledge”, “skills”, and “competences” (KSCs), to help improve comparability between different fields and disciplines. However, the management of KSCs remains a great challenge given their semantic fuzziness: similar texts may describe different concepts, and different texts may describe similar concepts, among different domains. This makes it difficult to index, search and match semantically similar KSCs within an information system so as to facilitate the transfer and mobility of KSCs. We present a working example using a semantic inference method known as Latent Semantic Analysis (LSA), employing a matrix operation called Singular Value Decomposition, which has been shown to infer semantic associations within unstructured textual data comparable to those of human interpretations. In our example, a few natural language text passages representing KSCs in the nuclear sector are used to demonstrate the capabilities of the system. It can be shown that LSA is able to infer latent semantic associations between texts, and to cluster and match separate text passages semantically based on these associations. We propose this methodology for modelling existing natural language KSCs in the nuclear domain so they can be semantically queried, retrieved and filtered upon request. (author)
Plank, Barbara; Eisenmenger, Nina; Schaffartzik, Anke; Wiedenhofer, Dominik
2018-04-03
Globalization led to an immense increase of international trade and the emergence of complex global value chains. At the same time, global resource use and pressures on the environment are increasing steadily. With these two processes in parallel, the question arises whether trade contributes positively to resource efficiency, or to the contrary is further driving resource use? In this article, the socioeconomic driving forces of increasing global raw material consumption (RMC) are investigated to assess the role of changing trade relations, extended supply chains and increasing consumption. We apply a structural decomposition analysis of changes in RMC from 1990 to 2010, utilizing the Eora multi-regional input-output (MRIO) model. We find that changes in international trade patterns significantly contributed to an increase of global RMC. Wealthy developed countries play a major role in driving global RMC growth through changes in their trade structures, as they shifted production processes increasingly to less material-efficient input suppliers. Even the dramatic increase in material consumption in the emerging economies has not diminished the role of industrialized countries as drivers of global RMC growth.
Directory of Open Access Journals (Sweden)
Jong-Hwan Ko
2014-03-01
Full Text Available This study aims to answer two questions using input-output decomposition analysis: (1) Have emerging Asian economies decoupled? (2) What are the sources of structural changes in gross outputs and value-added of emerging Asian economies related to the first question? The main findings of the study are as follows: First, since 1990, there has been a trend of increasing dependence on exports to extra-regions such as G3 and the ROW, indicating no sign of "decoupling", but rather an increasing integration of emerging Asian countries into global trade. Second, there is a contrasting feature in the sources of structural changes between non-China emerging Asia and China. Dependence of non-China emerging Asia on intra-regional trade has increased in line with strengthening economic integration in East Asia, whereas China has disintegrated from the region. Therefore, it can be said that China has contributed to the absence of decoupling of emerging Asia as a whole.
International Nuclear Information System (INIS)
Toraya, H.; Tusaka, S.
1995-01-01
A new procedure for quantitative phase analysis using the whole-powder-pattern decomposition method is proposed. The procedure consists of two steps. In the first, the whole powder patterns of single-component materials are decomposed separately. The refined parameters of integrated intensity, unit cell and profile shape for respective phases are stored in computer data files. In the second step, the whole powder pattern of a mixture sample is fitted, where the parameters refined in the previous step are used to calculate the profile intensity. The integrated intensity parameters are, however, not varied during the least-squares fitting, while the scale factors for the profile intensities of individual phases are adjusted instead. Weight fractions are obtained by solving simultaneous equations, coefficients of which include the scale factors and the mass-absorption coefficients calculated from chemical formulas of respective phases. The procedure can be applied to all mixture samples, including those containing an amorphous material, if single-component samples with known chemical compositions and their approximate unit-cell parameters are provided. The procedure has been tested by using two- to five-component samples, giving average deviations of 1 to 1.5%. Optimum refinement conditions are discussed in connection with the accuracy of the procedure. (orig.)
International Nuclear Information System (INIS)
Liu, Zengming; Zhao, Tao
2015-01-01
Highlights: • Analysis of energy prices and residential expenditure on energy in China. • Though the prices of energy declined, the price effect was negative. • The effect of price was the strongest restraining contribution. • Discussion of the proportion of energy expenditure in residential incomes. - Abstract: Since the establishment of the market economy in 1993, residential consumption of commodities, including energy, has been highly influenced by prices in China. However, the contribution of price-related factors to residential energy consumption is relatively unexplored. This paper extends the KAYA identity with price and expenditure factors and then applies the LMDI method to a decomposition of residential energy consumption in China from 1993 to 2011. Our results show the following: (1) Though the prices of a majority of residential energy sources in China declined, the effect of energy prices restrained residential energy consumption because the expenditure structure changed during the period. (2) During the research period, the urban energy expenditure proportion rose and then fell, and the rural proportion, which was stable before 2002, sharply increased. (3) The energy consumption intensity effect, which is the negative of the average energy price effect, contributed most of the decrease in energy consumption, whereas residential income played a key role in the growth of consumption. According to these conclusions, we suggest further marketization and deregulation of energy prices, the promotion of advanced energy types and guidance toward better energy consumption patterns.
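The additive LMDI decomposition used here has a closed form: for a quantity written as a product of factors, each factor's effect is the logarithmic mean of the start and end totals times the log ratio of that factor, and the effects sum exactly to the total change. A sketch with invented numbers (not the paper's data):

```python
import math

def lmdi_effects(x0, xT):
    """Additive LMDI-I: decompose V_T - V_0 for V = product of factors.
    effect_k = L(V_T, V_0) * ln(x_k^T / x_k^0), L = logarithmic mean."""
    V0, VT = math.prod(x0), math.prod(xT)
    L = (VT - V0) / (math.log(VT) - math.log(V0)) if VT != V0 else V0
    return [L * math.log(xt / xs) for xs, xt in zip(x0, xT)]

# Energy = population * (expenditure per person) * (energy per expenditure);
# the numbers below are purely illustrative.
base = [1.17e9, 800.0, 0.50]
final = [1.35e9, 3000.0, 0.30]
effects = lmdi_effects(base, final)
total = math.prod(final) - math.prod(base)
# The population, expenditure, and intensity effects sum exactly to total.
```

The exact additivity (no residual term) is the main reason LMDI is preferred over Laspeyres-style index decompositions.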
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), the ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing the ICA of EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by implementing the inverse transform of ICA and MEMD. The results of simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. By comparing with other existing techniques, the proposed method achieved much improvement in terms of the increase of signal-to-noise and the decrease of mean square error after removing EOAs.
Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K
2018-06-01
Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
International Nuclear Information System (INIS)
Ogino, Masao
2016-01-01
Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for the linear finite element equations suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. There has been some research addressing this issue, but it is not suitable for cases with complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-material problems, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
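The diagonal-scaling ingredient can be illustrated on its own with a Jacobi-preconditioned conjugate gradient solver. This is a stand-in sketch, not the Scaled-BDD method, which applies the scaling inside a balancing domain decomposition preconditioner.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradients with diagonal (Jacobi) scaling: precondition
    by the inverse of diag(A), which compensates large coefficient
    contrasts between "materials"."""
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# SPD test matrix whose diagonal spans six orders of magnitude,
# mimicking stiffness contrasts between very different materials.
rng = np.random.default_rng(4)
n = 200
d = 10.0 ** rng.uniform(-3, 3, n)
M = rng.standard_normal((n, n))
C = np.eye(n) + M @ M.T / n                        # well-conditioned SPD part
A = np.sqrt(d)[:, None] * C * np.sqrt(d)[None, :]  # badly scaled SPD matrix
b = rng.standard_normal(n)
x, iters = pcg_jacobi(A, b)
```

Without the diagonal scaling, plain CG on this matrix would need far more iterations, which is the convergence problem the abstract describes for multi-material models.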
Directory of Open Access Journals (Sweden)
Shichun Xu
2017-09-01
Full Text Available We decompose factors affecting China’s energy-related air pollutant (NOx, PM2.5, and SO2) emission changes into different effects using structural decomposition analysis (SDA). We find that, from 2005 to 2012, investment increased NOx, PM2.5, and SO2 emissions by 14.04, 7.82 and 15.59 Mt respectively, and consumption increased these emissions by 11.09, 7.98, and 12.09 Mt respectively. Export and import slightly increased the emissions on the whole, but the rate of the increase has slowed down, possibly reflecting the shift in China’s foreign trade structure. Energy intensity largely reduced NOx, PM2.5, and SO2 emissions by 12.49, 14.33 and 23.06 Mt respectively, followed by emission efficiency, which reduced these emissions by 4.57, 9.08, and 17.25 Mt respectively. Input-output efficiency slightly reduced the emissions. At sectoral and sub-sectoral levels, consumption is a great driving factor in agriculture and commerce, whereas investment is a great driving factor in transport, construction, and some industrial subsectors such as iron and steel, nonferrous metals, building materials, coking, and power and heating supply. Energy intensity increases emissions in transport, chemical products and manufacturing, but decreases emissions in all other sectors and subsectors. Some policies arising from our study results are discussed.
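For a two-period SDA, the emissions identity m = f (I − A)⁻¹ y can be split into intensity, technology (Leontief), and final-demand effects; averaging the two "polar" orderings makes the decomposition exact. A toy two-sector sketch with invented coefficients (not Eora data):

```python
import numpy as np

def leontief_inverse(A):
    return np.linalg.inv(np.eye(A.shape[0]) - A)

def sda_polar(f0, A0, y0, f1, A1, y1):
    """Decompose the change in m = f (I-A)^-1 y into intensity,
    technology, and final-demand effects (average of two polar forms)."""
    L0, L1 = leontief_inverse(A0), leontief_inverse(A1)
    df, dL, dy = f1 - f0, L1 - L0, y1 - y0
    eff_f = 0.5 * (df @ L0 @ y0 + df @ L1 @ y1)
    eff_L = 0.5 * (f1 @ dL @ y0 + f0 @ dL @ y1)
    eff_y = 0.5 * (f1 @ L1 @ dy + f0 @ L0 @ dy)
    return eff_f, eff_L, eff_y

# Toy two-sector economy; all coefficients are illustrative only.
f0 = np.array([2.0, 0.8]); f1 = np.array([1.5, 0.7])   # emissions per output
A0 = np.array([[0.20, 0.10], [0.30, 0.20]])
A1 = np.array([[0.15, 0.10], [0.25, 0.20]])
y0 = np.array([100.0, 80.0]); y1 = np.array([140.0, 120.0])

effects = sda_polar(f0, A0, y0, f1, A1, y1)
total = f1 @ leontief_inverse(A1) @ y1 - f0 @ leontief_inverse(A0) @ y0
# The three effects sum exactly to the total emissions change.
```

A full MRIO study like the one above does the same arithmetic with thousands of sector-country pairs and more factor splits, but the telescoping identity is unchanged.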
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
Vignola, Joseph F.; Bucaro, Joseph A.; Tressler, James F.; Ellingston, Damon; Kurdila, Andrew J.; Adams, George; Marchetti, Barbara; Agnani, Alexia; Esposito, Enrico; Tomasini, Enrico P.
2004-06-01
A large-scale survey (~700 m2) of frescos and wall paintings was undertaken in the U.S. Capitol Building in Washington, D.C. to identify regions that may need structural repair due to detachment, delamination, or other defects. The survey encompassed eight pre-selected spaces including: Brumidi's first work at the Capitol building in the House Appropriations Committee room; the Parliamentarian's office; the House Speaker's office; the Senate Reception room; the President's Room; and three areas of the Brumidi Corridors. Roughly 60% of the area surveyed was domed or vaulted ceilings, the rest being walls. Approximately 250 scans were done ranging in size from 1 to 4 m2. The typical mesh density was 400 scan points per square meter. A common approach for post-processing time series called Proper Orthogonal Decomposition, or POD, was adapted to frequency-domain data in order to extract the essential features of the structure. We present a POD analysis for one of these panels, pinpointing regions that have experienced severe substructural degradation.
Chen, Jianbiao; Wang, Yanhong; Lang, Xuemei; Ren, Xiu'e; Fan, Shuanshi
2017-11-01
Thermal oxidative decomposition characteristics, kinetics, and thermodynamics of rape straw (RS), rapeseed meal (RM), camellia seed shell (CS), and camellia seed meal (CM) were evaluated via thermogravimetric analysis (TGA). TG-DTG-DSC curves demonstrated that the combustion of oil-plant residues proceeded in three stages, including dehydration, release and combustion of organic volatiles, and char oxidation. As revealed by combustion characteristic parameters, the ignition, burnout, and comprehensive combustion performance of the residues were quite distinct from each other, and were improved by increasing the heating rate. The kinetic parameters were determined by the Coats-Redfern approach. The results showed that the most probable combustion mechanisms were order-based reaction models. The existence of a kinetic compensation effect was clearly observed. The thermodynamic parameters (ΔH, ΔG, ΔS) at peak temperatures were calculated through activated complex theory. As the combustion proceeded, the variation trends of ΔH, ΔG, and ΔS for RS (RM) were similar to those for CS (CM). Copyright © 2017 Elsevier Ltd. All rights reserved.
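The Coats-Redfern step reduces to a straight-line fit: for a first-order model, ln(−ln(1−α)/T²) plotted against 1/T has slope −E/R. A sketch on synthetic data (the activation energy and intercept constant are illustrative, not values from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_energy(T, alpha):
    """First-order Coats-Redfern linearization: fit ln(-ln(1-a)/T^2)
    against 1/T; the slope equals -E/R."""
    xs = [1.0 / Ti for Ti in T]
    ys = [math.log(-math.log(1.0 - a) / Ti ** 2) for Ti, a in zip(T, alpha)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # activation energy in J/mol

# Synthetic first-order conversion data with E = 120 kJ/mol; the constant
# 10.5 folds the pre-exponential factor and heating rate into the intercept
# and is chosen only so conversion stays within (0, 1) over this range.
E_true = 120e3
T = [500.0 + 5.0 * i for i in range(31)]  # 500-650 K
alpha = [1.0 - math.exp(-(Ti ** 2) * math.exp(10.5 - E_true / (R * Ti)))
         for Ti in T]
E_est = coats_redfern_energy(T, alpha)
```

Repeating the fit with different model functions g(α) (diffusion, contracting geometry, higher orders) and comparing correlation coefficients is how the "most probable mechanism" is selected in such studies.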
Extension of Pairwise Broadcast Clock Synchronization for Multicluster Sensor Networks
Directory of Open Access Journals (Sweden)
Bruce W. Suter
2008-01-01
Full Text Available Time synchronization is crucial for wireless sensor networks (WSNs) in performing a number of fundamental operations such as data coordination, power management, security, and localization. The Pairwise Broadcast Synchronization (PBS) protocol was recently proposed to minimize the number of timing messages required for global network synchronization, which enables the design of highly energy-efficient WSNs. However, PBS requires all nodes in the network to lie within the communication ranges of two leader nodes, a condition which might not be available in some applications. This paper proposes an extension of PBS to a more general class of sensor networks. Based on the hierarchical structure of the network, an energy-efficient pair selection algorithm is proposed to select the best pairwise synchronization sequence to reduce the overall energy consumption. It is shown that in a multicluster networking environment, PBS requires far fewer timing messages than other well-known synchronization protocols and incurs no loss in synchronization accuracy. Moreover, the proposed scheme presents significant energy savings for densely deployed WSNs.
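At the heart of pairwise synchronization schemes such as PBS is estimating a neighbour's clock offset and skew from overheard timestamped beacons, which reduces to a line fit. A toy sketch (ignoring propagation delay, with invented numbers):

```python
import random

random.seed(5)

# A node overhears beacons carrying the sender's timestamp; its local
# clock runs as local = skew * reference + offset, plus receive jitter.
true_skew, true_offset = 1.0005, 0.25  # illustrative drift and offset
send_times = [10.0 * k for k in range(20)]
local_times = [true_skew * t + true_offset + random.gauss(0, 1e-3)
               for t in send_times]

# Least-squares line fit recovers skew (slope) and offset (intercept).
n = len(send_times)
sx, sy = sum(send_times), sum(local_times)
sxx = sum(t * t for t in send_times)
sxy = sum(t * l for t, l in zip(send_times, local_times))
skew = (n * sxy - sx * sy) / (n * sxx - sx * sx)
offset = (sy - skew * sx) / n
```

The energy argument in the paper comes from the fact that overhearing nodes synchronize passively this way, so only one pair per cluster needs to exchange messages actively.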
One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis
Energy Technology Data Exchange (ETDEWEB)
Ronald Laurids Boring, PhD
2014-09-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down—defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.
2014-04-01
Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244 • A posteriori analysis of multi rate...2013, submitted • A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of...oped over nearly six decades of activity and the major developments form a highly interconnected web. We do not attempt to review the history of
Kinetic analysis of the thermal decomposition of Colombian vacuum residua by thermogravimetry
Directory of Open Access Journals (Sweden)
Fabian Andrey Diaz Mateus
2015-09-01
Full Text Available Five different Colombian vacuum residues were thermally decomposed in a thermogravimetric analyzer. Three heating rates were employed to heat the samples up to 650°C. The kinetic analysis was performed by the Coats-Redfern method to describe the non-isothermal pyrolysis of the residua. A reaction model in which the reaction order gradually increases from first to second order is proposed, and excellent agreement between the experimental and calculated data is presented. The results also indicate that the pyrolysis of a vacuum residue cannot be modeled by a single reaction mechanism.
Directory of Open Access Journals (Sweden)
Kyunam Kim
2017-10-01
Full Text Available Recently, several studies using various methods for analysis have tried to evaluate factors affecting knowledge creation activity, but few analyses quantitatively account for the impact that economic determinants have on them. This paper introduces a non-parametric method to structurally analyze changes in information and communication technology (ICT patenting trends as representative outcomes of knowledge creation activity with economic indicators. For this, the authors established a symmetric model that enables several economic contributors to be decomposed through the perspective of ICTs’ research and development (R&D performance, industrial change, and overall manufacturing growth. Additionally, an empirical analysis of some countries from 2001 to 2009 was conducted through this model. This paper found that all countries except the United States experienced an increase of 10.5–267.4% in ICT patent applications, despite fluctuations in the time series. It is interesting that the changes in ICT patenting of each country generally have a negative relationship with the intensity of each country’s patent protection system. Positive determinants include ICT R&D productivity and overall manufacturing growth, while ICT industrial change is a negative determinant in almost all countries. This paper emphasizes that each country needs to design strategic plans for effective ICT innovation. In particular, ICT innovation activities need to be promoted by increasing ICT R&D investment and developing the ICT industry, since ICT R&D intensity and ICT industrial change generally have a low contribution to ICT patenting.
Leakage detection in galvanized iron pipelines using ensemble empirical mode decomposition analysis
Amin, Makeen; Ghazali, M. Fairusham
2015-05-01
There are several possible approaches to detecting leaks. Some leaks are easily noticed when the liquid or water appears on the surface; however, many leaks never reach the surface, and their existence has to be checked by analysing the fluid flow in the pipeline. The first step is to determine the approximate position of the leak. This can be done by isolating sections of the mains in turn and noting which section causes a drop in the flow. Another approach uses sensors to locate leaks, involving strain-gauge pressure transducers and piezoelectric sensors; the occurrence of a leak and its exact location in the pipeline can then be determined by specific methods, namely acoustic leak detection and the transient method. The objective is to apply signal processing techniques to analyse leakage in the pipeline. To this end, an ensemble empirical mode decomposition (EEMD) method is applied to collect and analyse the data.
Pairwise contact energy statistical potentials can help to find probability of point mutations.
Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S
2017-01-01
To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to give better predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool for examining the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triosephosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials, combined with molecular dynamics simulations of functionally important folds, could help us to predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
A Decomposition of Hospital Profitability: An Application of DuPont Analysis to the US Market.
Turner, Jason; Broom, Kevin; Elliott, Michael; Lee, Jen-Fu
2015-01-01
This paper evaluates the drivers of profitability for a large sample of U.S. hospitals. Following a methodology frequently used by financial analysts, we use a DuPont analysis as a framework to evaluate the quality of earnings. By decomposing return on equity (ROE) into profit margin, total asset turnover, and capital structure, the DuPont analysis reveals what drives overall profitability. Profit margin, the efficiency with which services are rendered (total asset turnover), and capital structure are calculated for 3,255 U.S. hospitals between 2007 and 2012 using data from the Centers for Medicare & Medicaid Services' Healthcare Cost Report Information System (CMS Form 2552). The sample is then stratified by ownership, size, system affiliation, teaching status, critical access designation, and urban or non-urban location. Those hospital characteristics and interaction terms are then regressed (OLS) against ROE and the respective DuPont components. Sensitivity to regression methodology is also investigated using a seemingly unrelated regression. When the sample is stratified by hospital characteristics, the results indicate investor-owned hospitals have higher profit margins, higher efficiency, and are substantially more leveraged. Hospitals in systems are found to have higher ROE, margins, and efficiency but are associated with less leverage. In addition, a number of important and significant interactions between teaching status, ownership, location, critical access designation, and inclusion in a system are documented. Many of the significant relationships, most notably not-for-profit ownership, lose significance or are predominately associated with one interaction effect when interaction terms are introduced as explanatory variables. Results are not sensitive to the alternative methodology. The results of the DuPont analysis suggest that although there appears to be convergence in the behavior of NFP and IO hospitals, significant financial differences remain.
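The DuPont identity described above can be sketched in a few lines. The hospital figures below are hypothetical, purely to show the decomposition ROE = profit margin × asset turnover × equity multiplier.

```python
def dupont(net_income, revenue, total_assets, total_equity):
    """Decompose return on equity into profit margin, total asset turnover,
    and leverage (equity multiplier): ROE = margin * turnover * multiplier."""
    margin = net_income / revenue          # profitability of each revenue dollar
    turnover = revenue / total_assets      # efficiency of asset use
    multiplier = total_assets / total_equity  # capital structure / leverage
    return margin, turnover, multiplier, margin * turnover * multiplier

# Hypothetical hospital figures (in $ millions)
m, t, k, roe = dupont(net_income=12.0, revenue=240.0,
                      total_assets=300.0, total_equity=120.0)
print(f"margin={m:.3f} turnover={t:.2f} multiplier={k:.2f} ROE={roe:.2%}")
# → margin=0.050 turnover=0.80 multiplier=2.50 ROE=10.00%
```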
Directory of Open Access Journals (Sweden)
Suxian Cai
2013-01-01
detected with the fixed threshold in the time domain. To perform a better classification over the data set of 89 VAG signals, we applied a novel classifier fusion system based on the dynamic weighted fusion (DWF) method to ameliorate the classification performance. For comparison, a single least-squares support vector machine (LS-SVM) and the Bagging ensemble were used for the classification task as well. The results in terms of overall accuracy in percentage and area under the receiver operating characteristic curve obtained with the DWF-based classifier fusion method reached 88.76% and 0.9515, respectively, which demonstrated the effectiveness and superiority of the DWF method with two distinct features for VAG signal analysis.
Thermal decomposition and Moessbauer analysis of two iron hydroxy-carbonate complexes
International Nuclear Information System (INIS)
Greaves, T.L.; Cashio, J.D.; Turney, T.
2002-01-01
Full text: The two iron hydroxy-carbonate complexes (NH4)2Fe2(OH)4(CO3)2·H2O and (NH4)4Fe2(OH)4(CO3)3·3H2O were prepared by the method of Dvořák and Feitknecht. Moessbauer spectra of the first sample at room temperature and 81 K showed principally a ferric doublet with a small quadrupole splitting, while spectra of the second sample showed a broad ferric doublet with a large mean quadrupole splitting of 1 mm/s. Parameters for both spectra were characteristic of distorted octahedral coordination to oxygens. Thermogravimetric analysis of both samples up to 750 K showed several fractions corresponding to the loss of the more volatile components.
Dual-energy x-ray image decomposition by independent component analysis
Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang
2001-09-01
The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. We can apply this method because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorptions. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energies used for imaging, while the results of the separation agree well with those of the conventional method.
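A minimal sketch of the separation idea, assuming the standard linear ICA mixing model x = As with two independent non-Gaussian sources. This is a generic FastICA-style implementation on synthetic 1-D signals, not the authors' code; the signals and mixing matrix are stand-ins for the two absorption components.

```python
import numpy as np

def fastica_2(X, iters=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a 2 x N mixture.
    Returns estimated sources, up to sign and permutation."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate and scale the mixture to unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(2, 2))
    for _ in range(iters):
        WX = W @ Xw
        G = np.tanh(WX)
        Gp = 1.0 - G ** 2
        # Fixed-point update: w <- E[z g(w.z)] - E[g'(w.z)] w
        W = (G @ Xw.T) / Xw.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Xw

# Two independent non-Gaussian signals, linearly mixed (dual-energy analogue)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)),
               np.random.default_rng(1).laplace(size=t.size)])
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # unknown mixing matrix
S_hat = fastica_2(A @ S)                  # recovered up to sign/order
```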
International Nuclear Information System (INIS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-01-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
International Nuclear Information System (INIS)
Wang, Ke; Wei, Yi-Ming
2016-01-01
Given that different energy inputs play different roles in production and that energy policy decision making requires an evaluation of productivity change in individual energy inputs to provide insight into the scope for improvement of the utilization of specific energy inputs, this study develops, based on the Luenberger productivity indicator and data envelopment analysis models, an aggregated specific energy productivity indicator combining the individual energy input productivity indicators that account for the contributions of each specific energy input toward energy productivity change. In addition, these indicators can be further decomposed into four factors: pure efficiency change, scale efficiency change, pure technology change, and scale of technology change. These decompositions enable a determination of which specific energy input is the driving force of energy productivity change and which of the four factors is the primary contributor of energy productivity change. An empirical analysis of China's energy productivity change over the period 1997–2012 indicates that (i) China's energy productivity growth may be overestimated if energy consumption structure is omitted; (ii) in regard to the contribution of specific energy inputs toward energy productivity growth, oil and electricity show positive contributions, but coal and natural gas show negative contributions; (iii) energy-specific productivity changes are mainly caused by technical changes rather than efficiency changes; and (iv) the Porter Hypothesis is partially supported in China, in that carbon emission control regulations may lead to energy productivity growth. - Highlights: • An energy input specific Luenberger productivity indicator is proposed. • It enables to examine the contribution of specific energy input productivity change. • It can be decomposed for identifying pure and scale efficiency changes, as well as pure and scale technical changes. • China's energy productivity growth may be overestimated if energy consumption structure is omitted
Thermal decomposition and kinetics of coal and fermented cornstalk using thermogravimetric analysis.
He, Yuyuan; Chang, Chun; Li, Pan; Han, Xiuli; Li, Hongliang; Fang, Shuqi; Chen, Junying; Ma, Xiaojian
2018-07-01
The thermal behavior and kinetics of Yiluo coal (YC) and the residues of fermented cornstalk (FC) were investigated in this study. The Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO) methods were used for the kinetic analysis of the pyrolysis process. The results showed that the activation energy (Eα) increased with the thermal conversion rate (α), and the average Eα values of YC, FC and the blend (mYC/mFC = 6/4) were 304.26, 224.94 and 233.46 kJ/mol, respectively. The reaction-order model function for the blend was also developed by the master-plots method. By comparing the Eα and the enthalpy, it was found that the blend formed the activated complex more readily owing to its lower potential energy barrier. Meanwhile, the average value of the Gibbs free energy of the blend was 169.83 kJ/mol, and the changes in entropy indicated that the pyrolysis process evolved from an ordered to a disordered state. Copyright © 2018 Elsevier Ltd. All rights reserved.
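The KAS method above fits, at each conversion level, ln(β/T²) against 1/T across heating rates; the slope gives the activation energy. The sketch below recovers an assumed Eα from synthetic peak temperatures; the numbers are invented, not the study's data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kas_activation_energy(betas, T_alpha):
    """KAS isoconversional fit: ln(beta/T^2) = const - E/(R*T).
    betas: heating rates; T_alpha: temperature reaching a fixed conversion
    at each heating rate. Returns E in J/mol."""
    betas, T_alpha = np.asarray(betas), np.asarray(T_alpha)
    y = np.log(betas / T_alpha ** 2)
    slope, _ = np.polyfit(1.0 / T_alpha, y, 1)
    return -slope * R

# Synthetic check at one conversion level: choose temperatures, back out
# the heating rates the KAS relation implies, then recover E from the fit
E_true, C = 230e3, 18.0                      # assumed values
T = np.array([750.0, 765.0, 780.0])           # K, one per heating rate
betas = T**2 * np.exp(C - E_true / (R * T))
print(round(kas_activation_energy(betas, T) / 1e3, 1))  # ≈ 230.0
```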
LMDI Decomposition Analysis of Energy Consumption in the Korean Manufacturing Sector
Directory of Open Access Journals (Sweden)
Suyi Kim
2017-02-01
Full Text Available The energy consumption of Korea’s manufacturing sector has sharply increased over the past 20 years. This paper decomposes the factors influencing energy consumption in this sector using the logarithmic mean Divisia index (LMDI) method and analyzes the specific characteristics of energy consumption from 1991 to 2011. The analysis reveals that the activity effect played a major role in increasing energy consumption. While the structure and intensity effects contributed to the reduction in energy consumption, the structure effect was greater than the intensity effect. Over the periods, the effects moved in opposite directions; that is, the structure effect decreased when the intensity effect increased and vice versa. The energy consumption of each industry is decomposed into two factors, the activity and intensity effects. The increase of energy consumption due to the activity effect is largest in the petroleum and chemical industry, followed by the primary metal and non-ferrous industry, and the fabricated metal industry. The decrease of energy consumption due to the intensity effect is largest in the fabricated metal industry, followed by the primary metal and non-ferrous industry, and the non-metallic industry. The energy consumption due to the intensity effect in the petroleum and chemical industry has risen. To reduce energy consumption more efficiently and address climate change in this sector, industrial restructuring and industry-specific energy saving policies should be introduced.
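The additive LMDI decomposition used above, splitting the change in E = Σ_i Q·s_i·I_i into activity (Q), structure (s_i) and intensity (I_i) effects with logarithmic-mean weights, can be sketched as follows. The two-industry numbers are invented for illustration; the decomposition is exact by construction.

```python
import math

def logmean(a, b):
    """Logarithmic mean, L(a, b) = (a - b) / (ln a - ln b), L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi(Q0, s0, I0, QT, sT, IT):
    """Additive LMDI-I: split the change in E = sum_i Q*s_i*I_i between
    period 0 and period T into activity, structure and intensity effects."""
    act = strc = inten = 0.0
    for i in range(len(s0)):
        w = logmean(QT * sT[i] * IT[i], Q0 * s0[i] * I0[i])
        act += w * math.log(QT / Q0)
        strc += w * math.log(sT[i] / s0[i])
        inten += w * math.log(IT[i] / I0[i])
    return act, strc, inten

# Hypothetical two-industry example (output Q, shares s_i, intensities I_i)
act, strc, inten = lmdi(100.0, [0.6, 0.4], [2.0, 1.0],
                        130.0, [0.5, 0.5], [1.8, 1.0])
E0 = 100 * (0.6 * 2.0 + 0.4 * 1.0)   # 160.0
ET = 130 * (0.5 * 1.8 + 0.5 * 1.0)   # 182.0
print(round(act + strc + inten, 6), ET - E0)  # effects sum exactly to 22.0
```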
DFT analysis of the reaction paths of formaldehyde decomposition on silver.
Montoya, Alejandro; Haynes, Brian S
2009-07-16
Periodic density functional theory is used to study the dehydrogenation of formaldehyde (CH(2)O) on the Ag(111) surface and in the presence of adsorbed oxygen or hydroxyl species. Thermodynamic and kinetic parameters of elementary surface reactions have been determined. The dehydrogenation of CH(2)O on clean Ag(111) is thermodynamically and kinetically unfavorable. In particular, the activation energy for the first C-H bond scission of adsorbed CH(2)O (25.8 kcal mol(-1)) greatly exceeds the desorption energy for molecular CH(2)O (2.5 kcal mol(-1)). Surface oxygen promotes the destruction of CH(2)O through the formation of CH(2)O(2), which readily decomposes to CHO(2) and then in turn to CO(2) and adsorbed hydrogen. Analysis of site selectivity shows that CH(2)O(2), CHO(2), and CHO are strongly bound to the surface through the bridge sites, whereas CO and CO(2) are weakly adsorbed with no strong preference for a particular surface site. Dissociation of CO and CO(2) on the Ag(111) surface is highly activated and therefore unfavorable with respect to their molecular desorption.
Directory of Open Access Journals (Sweden)
Xingpeng Chen
2014-12-01
Full Text Available As the largest solid waste (SW) generator in the world, China is facing serious pollution issues induced by increasing quantities of SW. The sustainability assessment of SW management is very important for designing relevant policy for further improving the overall efficiency of solid waste management (SWM). By focusing on industrial solid waste (ISW) and municipal solid waste (MSW), the paper investigated the sustainability performance of SWM by applying decoupling analysis, and further identified the main drivers of SW change in China by adopting the Logarithmic Mean Divisia Index (LMDI) model. The results indicate that China has made great achievements in SWM, specifically the increase of the amount of ISW utilized and the harmless disposal ratio of MSW, the decrease of industrial solid waste discharged (ISWD), and the absolute decoupling of ISWD from economic growth. However, China has a long way to go to achieve the goal of sustainable management of SW. The weak decoupling, even expansive negative decoupling, of ISW generation and MSW disposal suggests that China needs timely technology innovation and rational institutional arrangements to reduce SW intensity at the source and promote classification and recycling. The factors of investment efficiency and technology are the main determinants of the decrease in SW; conversely, economic growth has increased SW discharge. The effect of investment intensity showed a volatile trend over time but eventually decreased SW discharged. Moreover, the factors of population and industrial structure slightly increased SW.
LMDI decomposition analysis of greenhouse gas emissions in the Korean manufacturing sector
International Nuclear Information System (INIS)
Jeong, Kyonghwa; Kim, Suyi
2013-01-01
In this article, we decomposed Korean industrial manufacturing greenhouse gas (GHG) emissions using the log mean Divisia index (LMDI) method, both multiplicatively and additively. Changes in industrial CO2 emissions from 1991 to 2009 may be studied by quantifying the contributions from changes in five different factors: overall industrial activity (activity effect), industrial activity mix (structure effect), sectoral energy intensity (intensity effect), sectoral energy mix (energy-mix effect) and CO2 emission factors (emission-factor effect). The results indicate that the structure effect and intensity effect played roles in reducing GHG emissions, and the structure effect played a bigger role than the intensity effect. The energy-mix effect increased GHG emissions, and the emission-factor effect decreased GHG emissions. The time series analysis indicates that the GHG emission pattern differed before and after the International Monetary Fund (IMF) regime in Korea. The structure effect and the intensity effect contributed more to emission reductions after, rather than before, the IMF regime. Both effects have been stimulated since the high oil price period after 2001. - Highlights: • We decomposed greenhouse gas emissions of Korea's manufacturing industry using LMDI. • The structure effect and intensity effect play a role in reducing GHG emissions. • The role of the structure effect was bigger than that of the intensity effect. • The energy-mix effect increased and the emission-factor effect decreased GHG emissions. • The GHG emission pattern differed before and after the IMF regime in Korea
International Nuclear Information System (INIS)
Xu Jinhua; Fleiter, Tobias; Eichhammer, Wolfgang; Fan Ying
2012-01-01
We analyze the change of energy consumption and CO2 emissions in China's cement industry and its driving factors over the period 1990–2009 by applying a log-mean Divisia index (LMDI) method. It is based on the typical production process for clinker manufacturing and differentiates among four determining factors: cement output, clinker share, process structure and specific energy consumption per kiln type. The results show that the growth of cement output is the most important factor driving energy consumption up, while the declining clinker share and structural shifts mainly drive energy consumption down (similarly for CO2 emissions). These efficiency improvements result from a number of policies which are transforming the entire cement industry towards international best practice, including shutting down many older plants and raising the efficiency standards of cement plants. Still, the efficiency gains cannot compensate for the huge increase in cement production resulting from economic growth, particularly in the infrastructure and construction sectors. Finally, scenario analysis shows that applying best available technology would result in an additional energy saving potential of 26% and a CO2 mitigation potential of 33% compared to 2009. - Highlights: ► We analyze the energy consumption and CO2 emissions in China's cement industry. ► The growth of cement output is the most important driving factor. ► The efficiency policies and industrial standards significantly narrowed the gap. ► Efficiency gains cannot compensate for the huge increase in cement production. ► The potentials of energy saving of 26% and CO2 mitigation of 33% exist based on BAT.
Relationship of host recurrence in fungi to rates of tropical leaf decomposition
Mirna E. Santana; Jean D. Lodge; Patricia Lebow
2004-01-01
Here we explore the significance of fungal diversity for ecosystem processes by testing whether microfungal "preferences" (i.e., host recurrence) for different tropical leaf species increase the rate of decomposition. We used pairwise combinations of γ-irradiated litter of five tree species with cultures of two dominant microfungi derived from each plant in a microcosm...
Pairwise comparisons and visual perceptions of equal area polygons.
Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R
2009-02-01
The number of studies related to visual perception has been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error estimated by respondents using the unit square was 25.75%. The error was substantially decreased to 5.51% when the shapes were compared to one another in pairs. This gain of 20.24% for this two-dimensional experiment was substantially better than the 11.78% gain reported in the previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.
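One common way to aggregate ratio-scale pairwise comparisons like those above into relative estimates is the normalized row geometric mean of the comparison matrix. The sketch below uses a perfectly consistent matrix built from assumed true areas; it illustrates the aggregation step only, not the study's estimation procedure.

```python
import numpy as np

def geometric_mean_weights(M):
    """Estimate relative values from a ratio pairwise-comparison matrix
    (M[i, j] = value_i / value_j) via normalized row geometric means."""
    w = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return w / w.sum()

# Shapes with assumed true relative areas 1 : 2 : 4, compared pairwise
true = np.array([1.0, 2.0, 4.0])
M = true[:, None] / true[None, :]   # consistent comparison matrix
print(np.round(geometric_mean_weights(M), 4))  # [0.1429 0.2857 0.5714]
```

With a consistent matrix the recovered weights equal the true normalized areas exactly; with noisy human judgments the geometric mean averages out individual comparison errors, which is one explanation for the accuracy gain reported above.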
Directory of Open Access Journals (Sweden)
Song Yang
Full Text Available Nickel laterites cannot be effectively processed by physical methods because of their poor crystallinity and fine grain size. Na2SO4 is the most efficient additive for grade enrichment and Ni recovery. However, how Na2SO4 affects the selective reduction of laterite ores has not been clearly investigated. This study investigated the decomposition of laterite with and without the addition of Na2SO4 in an argon atmosphere using thermogravimetry coupled with mass spectrometry (TG-MS). Approximately 25 mg of sample with 20 wt% Na2SO4 was pyrolyzed under a 100 ml/min Ar flow at a heating rate of 10°C/min from room temperature to 1300°C. The kinetic study was based on derivative thermogravimetric (DTG) curves. The evolution of the pyrolysis gas composition was detected by mass spectrometry, and the decomposition products were analyzed by X-ray diffraction (XRD). The decomposition behavior of laterite with the addition of Na2SO4 was similar to that of pure laterite below 800°C during the first three stages. However, in the fourth stage, the dolomite decomposed at 897°C, which is approximately 200°C lower than in pure laterite. In the last stage, the laterite decomposed and emitted SO2 in the presence of Na2SO4, with an activation energy of 91.37 kJ/mol. The decomposition of laterite with and without the addition of Na2SO4 can be described by one first-order reaction. Moreover, the use of Na2SO4 as the modification agent can reduce the activation energy of laterite decomposition; thus, the reaction rate can be accelerated, and the reaction temperature can be markedly reduced.
Wang, Changjian; Wang, Fei; Zhang, Xinlin; Deng, Haijun
2017-11-01
It is important to analyze the influence mechanisms of energy-related carbon emissions from a regional perspective to effectively achieve reductions in energy consumption and carbon emissions in China. Based on the "energy-economy-carbon emissions" hybrid input-output analysis framework, this study conducted structural decomposition analysis (SDA) of the factors influencing carbon emissions in Guangdong Province. A systems-based examination of the direct and indirect drivers of regional emissions is presented. (1) Analysis of the direct effects of influencing factors indicated that the main driving factors of increasing carbon emissions were economic and population growth, while carbon emission intensity was the main contributing factor restraining carbon emissions growth. (2) Analysis of the indirect effects of influencing factors showed that international and interprovincial trade significantly affected total carbon emissions. (3) Analysis of the effects of different final demands on the carbon emissions of industrial sectors indicated that the increase in carbon emissions arising from international and interprovincial trade is mainly concentrated in energy- and carbon-intensive industries. (4) Guangdong had to bear a certain amount of carbon emissions during the development of its export-oriented economy because of industry transfer arising from economic globalization, pointing to the existence of the "carbon leakage" problem. At the same time, interprovincial export and import resulted in Guangdong transferring a part of its carbon emissions to other provinces, leading to "carbon transfer."
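At the core of an input-output emission account like the one above is the Leontief inverse: total output x = (I − A)⁻¹ y links final demand y to the emissions it embodies. A minimal two-sector sketch with invented coefficients (not Guangdong data):

```python
import numpy as np

# Hypothetical 2-sector economy: direct requirements matrix A,
# final demand y, and direct emission intensities f (tCO2 per unit output)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
y = np.array([100.0, 50.0])
f = np.array([0.8, 0.5])

x = np.linalg.solve(np.eye(2) - A, y)   # total output: x = (I - A)^-1 y
emissions = f @ x                        # emissions embodied in final demand
print(np.round(x, 2), round(emissions, 2))  # ≈ [166.67 111.11] 188.89
```

A structural decomposition analysis then attributes the change in `emissions` between two years to changes in f, in the Leontief inverse, and in y, analogously to the index decompositions elsewhere in this collection.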
International Nuclear Information System (INIS)
Geng, Zhiqiang; Gao, Huachao; Wang, Yanqing; Han, Yongming; Zhu, Qunxiong
2017-01-01
Highlights: • An integrated framework that combines IDA with an energy-saving potential method is proposed. • An energy saving analysis and management framework for complex chemical processes is obtained. • The proposed method is efficient for energy optimization and carbon emission reduction in complex chemical processes. - Abstract: Energy saving and management of complex chemical processes play a crucial role in sustainable development. In order to analyze the effect that technology, management level, and production structure have on energy efficiency and energy saving potential, this paper proposes a novel integrated framework that combines index decomposition analysis (IDA) with an energy saving potential method. The IDA method can effectively obtain the energy activity, energy hierarchy and energy intensity levels in a data-driven way to reflect the impact of energy usage. The energy saving potential method can verify the correctness of the improvement direction proposed by the IDA method. Meanwhile, energy efficiency improvement, energy consumption reduction and energy savings can be visually discovered by the proposed framework. The demonstration analysis of ethylene production has verified the practicality of the proposed method. Moreover, we can obtain the corresponding improvements for ethylene production based on the demonstration analysis. The energy efficiency index and the energy saving potential of the worst months can be increased by 6.7% and 7.4%, respectively, and the carbon emissions can be reduced by 7.4–8.2%.
Boniface Ngah Epo; Francis Menjo Baye; Nadine Teme Angele Manga
2011-01-01
This study applies the regression-based inequality decomposition technique to explain poverty and inequality trends in Cameroon. We also identify gender-related factors which explain income disparities and discrimination, based on the 2001 and 2007 Cameroon household consumption surveys. The results show that education, health, employment in the formal sector, age cohorts, household size, gender, ownership of farmland and urban versus rural residence explain household economic wellbeing; dispa...
PairWise Neighbours database: overlaps and spacers among prokaryote genomes
Directory of Open Access Journals (Sweden)
Garcia-Vallvé Santiago
2009-06-01
Full Text Available Abstract Background Although prokaryotes live in a variety of habitats and possess different metabolic and genomic complexity, they have several genomic architectural features in common. Overlapping genes are a common feature of prokaryote genomes. The overlap lengths tend to be short because longer overlaps carry a greater risk of deleterious mutations. The spacers between genes also tend to be short, owing to the tendency to reduce non-coding DNA in prokaryotes. However, they must be long enough to maintain essential regulatory signals such as the Shine-Dalgarno (SD) sequence, which is responsible for efficient translation. Description PairWise Neighbours is an interactive and intuitive database for retrieving information about the spacers and overlapping genes in bacterial and archaeal genomes. It contains 1,956,294 gene pairs from 678 fully sequenced prokaryote genomes and is freely available at the URL http://genomes.urv.cat/pwneigh. This database provides information about the overlaps and their conservation across species. Furthermore, it allows wide analysis of the intergenic regions, providing useful information such as the location and strength of the SD sequence. Conclusion There are experiments and bioinformatic analyses that rely on correct annotation of the initiation site. Therefore, a database that studies the overlaps and spacers among prokaryotes appears to be desirable. The PairWise Neighbours database permits reliability analysis of the overlapping structures and study of the presence and location of the SD sequence in adjacent genes, which may help to check the annotation of initiation sites.
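Under 1-based inclusive gene coordinates (an assumption for this sketch; the database's own convention may differ), the spacer or overlap between two adjacent genes reduces to a one-line computation, with negative values denoting overlaps:

```python
def spacer_or_overlap(end_a, start_b):
    """Distance between two adjacent co-directional genes, 1-based inclusive
    coordinates: positive = spacer length (bp), negative = overlap length."""
    return start_b - end_a - 1

# Hypothetical adjacent gene pairs (end of gene A, start of gene B)
print(spacer_or_overlap(1500, 1497))  # -4: a 4-bp overlap
print(spacer_or_overlap(1500, 1512))  # 11: an 11-bp spacer
```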
International Nuclear Information System (INIS)
Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.
2011-01-01
We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.
Hou, Fujun
2016-01-01
This paper describes how market competitiveness evaluations concerning mechanical equipment can be made in multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives will be pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Directory of Open Access Journals (Sweden)
Richard R Stein
2015-07-01
Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
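For continuous variables, the maximum-entropy pairwise model described in the review is the multivariate Gaussian, whose direct couplings are, up to sign, the off-diagonal entries of the inverse covariance matrix. A minimal NumPy sketch on synthetic chain data (not from any dataset in the review) shows how this separates direct interactions from indirect correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Chain x0 -> x1 -> x2: x0 and x2 are correlated only *through* x1.
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

cov = np.cov(X, rowvar=False)
J = -np.linalg.inv(cov)   # off-diagonals = direct pairwise couplings

print(abs(cov[0, 2]) > 0.5)   # raw correlation suggests a spurious 0-2 link
print(abs(J[0, 2]) < 0.05)    # inferred direct 0-2 coupling is ~0
print(abs(J[0, 1]) > 0.1)     # true direct 0-1 coupling survives
```

The same logic (couplings read off a fitted pairwise model rather than raw correlations) carries over to the categorical case, e.g. Potts models in protein contact prediction, although the fitting step is then only approximate.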
The pairwise phase consistency in cortical network and its relationship with neuronal activation
Directory of Open Access Journals (Sweden)
Wang Daming
2017-01-01
Full Text Available Gamma-band neuronal oscillation and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and various species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies various functions in feature coding, memory processing and behavioural performance. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus is preferred by a certain orientation column, neurons within that column obtain a higher firing rate and stronger neuronal activation, which consequently engender higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in time-resolved analysis with a sliding window.
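As a sketch of the quantity being measured: the PPC is the average cosine of the phase difference over all distinct spike pairs, which has a closed form that avoids the quadratic pair loop. The toy data below are ours, not the paper's network simulation:

```python
import numpy as np

def pairwise_phase_consistency(phases):
    """PPC: mean cosine of the phase difference over all distinct pairs.
    Closed form, since |sum_k e^{i*theta_k}|^2
    = N + 2 * sum_{j<k} cos(theta_j - theta_k)."""
    phases = np.asarray(phases)
    n = len(phases)
    s = np.exp(1j * phases).sum()
    return (abs(s) ** 2 - n) / (n * (n - 1))

rng = np.random.default_rng(0)
sync = np.zeros(1000)                   # perfectly phase-locked spikes
rand = rng.uniform(0, 2 * np.pi, 1000)  # no phase locking at all

print(round(pairwise_phase_consistency(sync), 3))    # 1.0
print(abs(pairwise_phase_consistency(rand)) < 0.01)  # True
```

Unlike the squared phase-locking value, this estimator is unbiased: for unlocked phases it fluctuates around zero rather than around a positive offset that shrinks with sample size.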
The Role of Middlemen in Efficient and Strongly Pairwise Stable Networks
Gilles, R.P.; Chakrabarti, S.; Sarangi, S.; Badasyan, N.
2004-01-01
We examine the strong pairwise stability concept in network formation theory under collective network benefits. Strong pairwise stability considers a pair of players to add a link through mutual consent while permitting them to unilaterally delete any subset of links under their control. We examine
Energy Technology Data Exchange (ETDEWEB)
Das, Mitali; Shu, Chi-Min, E-mail: shucm@yuntech.edu.tw
2016-01-15
Highlights: • Thermally degraded DBPH products are identified. • An appropriate mathematical model was selected for decomposition study. • Differential isoconversional analysis was performed to obtain kinetic parameters. • Simulation on a thermal analysis model was conducted for the best storage conditions. - Abstract: This study investigated the thermal degradation products of 2,5-dimethyl-2,5-di-(tert-butylperoxy)hexane (DBPH) by TG/GC/MS to identify runaway reaction and thermal safety parameters. It also included the determination of the time to maximum rate under adiabatic conditions (TMRad) and the self-accelerating decomposition temperature, obtained through Advanced Kinetics and Technology Solutions. The apparent activation energy (Ea) was calculated by the differential isoconversional kinetic analysis method using differential scanning calorimetry experiments. The Ea value obtained by Friedman analysis is in the range of 118.0-149.0 kJ/mol. The TMRad was 24.0 h with an apparent onset temperature of 82.4 °C. This study has also established an efficient benchmark for a thermal hazard assessment of DBPH that can be applied to ensure safer storage conditions.
Screening synteny blocks in pairwise genome comparisons through integer programming.
Tang, Haibao; Lyons, Eric; Pedersen, Brent; Schnable, James C; Paterson, Andrew H; Freeling, Michael
2011-04-18
It is difficult to accurately interpret chromosomal correspondences such as true orthology and paralogy due to significant divergence of genomes from a common ancestor. Analyses are particularly problematic among lineages that have repeatedly experienced whole genome duplication (WGD) events. To compare multiple "subgenomes" derived from genome duplications, we need to relax the traditional requirement of "one-to-one" syntenic matching of genomic regions in order to reflect "one-to-many" or, more generally, "many-to-many" matchings. However, this relaxation may result in the identification of synteny blocks that derive from ancient shared WGDs and are not of interest. For many downstream analyses, we need to eliminate weak, low-scoring alignments from pairwise genome comparisons. Our goal is to objectively select a subset of synteny blocks whose total score is maximized while respecting the duplication history of the genomes in comparison. We call this "quota-based" screening of synteny blocks in order to appropriately fill a quota of syntenic relationships within one genome or between two genomes having WGD events. We have formulated the synteny block screening as an optimization problem known as "Binary Integer Programming" (BIP), which is solved using existing linear programming solvers. The computer program QUOTA-ALIGN performs this task by creating a clear objective function that maximizes the compatible set of synteny blocks under given constraints on overlaps and depths (corresponding to the duplication history in the respective genomes). Such a procedure is useful for any pairwise synteny alignments, but is most useful in lineages affected by multiple WGDs, like plant or fish lineages. For example, there should be a 1:2 ploidy relationship between genome A and B if genome B had an independent WGD subsequent to the divergence of the two genomes. We show through simulations and real examples using plant genomes in the rosid superorder that the quota
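The quota constraint can be illustrated on a toy instance: choose the subset of scored blocks that maximizes total score, subject to each region appearing in at most quota-many blocks. The brute-force enumeration below (hypothetical scores and region names; a real instance would use a BIP solver, as QUOTA-ALIGN does) shows the constraint structure for a 1:2 ploidy relationship:

```python
from collections import Counter
from itertools import combinations

# Hypothetical synteny blocks: (score, region_in_A, region_in_B).
blocks = [
    (50, "a1", "b1"), (40, "a1", "b2"),  # both claim region a1
    (30, "a2", "b1"), (20, "a2", "b3"),
]

def feasible(subset, quota_a=1, quota_b=2):
    """1:2 quota: each A region in at most 1 block,
    each B region in at most 2 blocks."""
    ca = Counter(b[1] for b in subset)
    cb = Counter(b[2] for b in subset)
    return all(v <= quota_a for v in ca.values()) and \
           all(v <= quota_b for v in cb.values())

# Exhaustive search over subsets (fine for 4 blocks; BIP scales this up).
best = max(
    (subset for r in range(len(blocks) + 1)
            for subset in combinations(blocks, r)
            if feasible(subset)),
    key=lambda s: sum(b[0] for b in s),
)
print(sum(b[0] for b in best))  # 80
```

Here the quota on `a1` forces a choice between the 50- and 40-point blocks, while `b1` may legally appear twice, so the optimum keeps the 50- and 30-point blocks.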
Sugimura, Natsuhiko; Igarashi, Yoko; Aoyama, Reiko; Shibue, Toshimichi
2017-09-01
The physical origins of the interactions in the acetophenone cation adducts [M+Na]+, [M+NH4]+, and [M+H]+ were explored by localized molecular orbital-energy decomposition analysis and density functional theory. The analyses highlighted the differences in the interactions in the three adduct ions. Electrostatic energy was important in [M+Na]+ and there was little change in the acetophenone orbital shape. Both electrostatic and polarization energy were important in [M+NH4]+, and a considerable change in the orbital shape occurred to maximize the strength of the hydrogen bond. Polarization energy was the major attractive force in [M+H]+.
Bouillot, Vincent R.; Alimi, Jean-Michel; Corasaniti, Pier-Stefano; Rasera, Yann
2015-06-01
Observations of colliding galaxy clusters with high relative velocity probe the tail of the halo pairwise velocity distribution, with the potential of providing a powerful test of cosmology. As an example, it has been argued that the discovery of the Bullet Cluster challenges standard Λ cold dark matter (ΛCDM) model predictions. Halo catalogues from N-body simulations have been used to estimate the probability of Bullet-like clusters. However, due to simulation volume effects, previous studies had to rely on a Gaussian extrapolation of the pairwise velocity distribution to high velocities. Here, we perform a detailed analysis using the halo catalogues from the Dark Energy Universe Simulation Full Universe Runs (DEUS-FUR), which enables us to resolve the high-velocity tail of the distribution and study its dependence on the halo mass definition, redshift and cosmology. Building upon these results, we estimate the probability of Bullet-like systems in the framework of Extreme Value Statistics. We show that the tail of extreme pairwise velocities significantly deviates from that of a Gaussian; moreover, it carries an imprint of the underlying cosmology. We find the Bullet Cluster probability to be two orders of magnitude larger than previous estimates, thus easing the tension with the ΛCDM model. Finally, the comparison of the inferred probabilities for the different DEUS-FUR cosmologies suggests that observations of extreme interacting clusters can provide constraints on dark energy models complementary to standard cosmological tests.
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
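The rank-1 decomposition used above factors a three-mode tensor into an outer product of one vector per mode. A minimal sketch via alternating least squares (equivalently, higher-order power iteration) on a synthetic tensor, not the ionospheric data:

```python
import numpy as np

def rank1_als(T, iters=50):
    """Best rank-1 approximation T ~ lam * u (x) v (x) w by alternating
    least squares: fix two mode vectors, solve for the third, repeat."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=T.shape[1]); v /= np.linalg.norm(v)
    w = rng.normal(size=T.shape[2]); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v)
        lam = np.linalg.norm(w); w /= lam
    return lam, u, v, w

# On an exactly rank-1 tensor the factors are recovered (up to sign).
u0, v0, w0 = np.array([1., 2.]), np.array([3., 0., 1.]), np.array([1., 1.])
T = np.einsum('i,j,k->ijk', u0, v0, w0)
lam, u, v, w = rank1_als(T)
approx = lam * np.einsum('i,j,k->ijk', u, v, w)
print(np.allclose(approx, T))  # True
```

In the ionospheric setting the three recovered vectors play the roles of U^(1) (time), U^(2) (longitude) and U^(3) (latitude); real data are not exactly rank-1, so the fit captures the dominant large-scale variation only.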
Directory of Open Access Journals (Sweden)
Xiaoxing Zhang
2016-11-01
Full Text Available Detection of the decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and sudden accidents can be avoided effectively by finding early latent faults. Recently, functionalized graphene, a kind of gas-sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas-sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for the decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the gas-sensing properties of the adsorption process for single gas molecules (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including adsorption energy, net charge transfer, electronic density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, and the adsorption strength was SO2F2 > SO2, while SOF2 adsorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene shows good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide for exploring Ag-graphene materials in experiments for monitoring the insulation status of SF6-insulated equipment based on detecting the decomposition products of SF6.
Energy Technology Data Exchange (ETDEWEB)
Achao, Carla da Costa Lopes
2009-09-15
Introduced at the end of the 1970s to study the impacts of structural changes on electricity consumption by industry, index decomposition analysis techniques have been extended to various other areas to help in the formulation of energy policies, notably in developed countries. However, few authors have applied these techniques to study the evolution of energy consumption in developing countries. In Brazil, the few available studies have focused only on the industrial sector. In this thesis, we apply the decomposition technique called the logarithmic mean Divisia index (LMDI) to electricity consumption of the Brazilian residential sector, to explain its evolution in terms of activity, structure and intensity effects over the period from 1980 to 2007. The results obtained in a preliminary analysis indicate, in a general way, that the observed variations in electricity consumption in the Brazilian residential sector are mainly the product of the increase in the number of consumers and the changes in the specific electricity consumption of households. In a further analysis, carried out with the structure given by consumers' participation in consumption categories (Low Income and Conventional) from an overall point of view, although the intensity and activity effects may be considered the main explanatory factors in consumption variation, the structure effect stands out in the period immediately after the 2001 rationing, as a reflection of the changes in the inclusion criteria for the Low Income category, specifically in the Northeast. (author)
Pando, Jesus; Fang, Li-Zhi
1995-01-01
A method for measuring the spectrum of a density field by a discrete wavelet space-scale decomposition (SSD) has been studied. We show how the power spectrum can effectively be described by the father function coefficients (FFC) of the wavelet SSD. We demonstrate that the features of the spectrum, such as the magnitude, the index of a power law, and the typical scales, can be determined with high precision by the FFC reconstructed spectrum. This method does not require the mean density, which...
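The idea of reading band power off scale-space coefficients can be illustrated with Haar wavelets (a simplification chosen here for brevity; the abstract does not specify the wavelet family): the father (scaling) coefficients at each level are pair averages, and the power discarded at each smoothing step is that scale's band power.

```python
import numpy as np

def scale_power(x):
    """Per-scale band power from a Haar scale-space decomposition.
    At each level, father (scaling) coefficients are pair averages;
    the discarded detail carries that scale's share of the power.
    Input length must be a power of two."""
    x = np.asarray(x, dtype=float)
    powers = []
    while len(x) > 1:
        pairs = x.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
        powers.append(float(np.mean(detail ** 2)))
        x = pairs.mean(axis=1)  # father coefficients for the next level
    return powers               # powers[0] = finest scale

# A signal alternating at the finest scale puts all its power at level 0.
p = scale_power(np.tile([1.0, -1.0], 8))
print(p)  # [1.0, 0.0, 0.0, 0.0]
```

Note the density-field application needs no mean density, consistent with the abstract's remark: pair differences cancel any constant offset.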
Improving pairwise comparison of protein sequences with domain co-occurrence
Gascuel, Olivier
2018-01-01
Comparing and aligning protein sequences is an essential task in bioinformatics. More specifically, local alignment tools like BLAST are widely used for identifying conserved protein sub-sequences, which likely correspond to protein domains or functional motifs. However, to limit the number of false positives, these tools are used with stringent sequence-similarity thresholds and hence can miss several hits, especially for species that are phylogenetically distant from reference organisms. A solution to this problem is then to integrate additional contextual information into the procedure. Here, we propose to use domain co-occurrence to increase the sensitivity of pairwise sequence comparisons. Domain co-occurrence is a strong feature of proteins, since most protein domains tend to appear with a limited number of other domains on the same protein. We propose a method to take this information into account in a typical BLAST analysis and to construct new domain families on the basis of these results. We used Plasmodium falciparum as a case study to evaluate our method. The experimental findings showed an increase of 14% in the number of significant BLAST hits and an increase of 25% in the proteome area that can be covered with a domain. Our method identified 2240 new domains for which, in most cases, no model of the Pfam database could be linked. Moreover, our study of the quality of the new domains in terms of alignment and physicochemical properties shows that they are close to those of standard Pfam domains. Source code of the proposed approach and supplementary data are available at: https://gite.lirmm.fr/menichelli/pairwise-comparison-with-cooccurrence PMID:29293498
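One way the co-occurrence filter could work is to keep a sub-significant hit when the domain it suggests frequently co-occurs with a domain already annotated on the same protein. The sketch below is our own illustration of that idea, not the paper's implementation; all identifiers (`PF_kinase`, `PF_SH2`, counts) are hypothetical:

```python
def certify_hits(weak_hits, protein_domains, cooccurrence, min_count=10):
    """Keep a weak BLAST hit (protein, candidate_domain) if the candidate
    domain co-occurs often enough with a domain already on that protein."""
    kept = []
    for protein, candidate in weak_hits:
        if any(cooccurrence.get(frozenset((candidate, d)), 0) >= min_count
               for d in protein_domains.get(protein, ())):
            kept.append((protein, candidate))
    return kept

# Hypothetical co-occurrence counts harvested from confident annotations.
cooc = {frozenset(("PF_kinase", "PF_SH2")): 42}
domains = {"P1": ["PF_SH2"], "P2": ["PF_other"]}
weak = [("P1", "PF_kinase"), ("P2", "PF_kinase")]
print(certify_hits(weak, domains, cooc))  # [('P1', 'PF_kinase')]
```

The same weak hit is accepted on `P1` (its SH2 domain frequently co-occurs with kinase domains) and rejected on `P2`, which is the sensitivity gain the abstract describes.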
Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2018-03-01
We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)%, respectively, in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Directory of Open Access Journals (Sweden)
Zhao Fa-Jun
2018-02-01
Full Text Available Foaming agents, despite holding potential in steam injection technology for heavy oil recovery, are still poorly investigated. In this work, we analyzed the performance of the foaming agent NPL-10 in terms of foam height and half-life under various conditions of temperature, pH, salinity, and oil content by orthogonal experiments. The best conditions of use for NPL-10 among those tested are T = 220 °C, pH 7, salinity 10,000 mg·L−1 and oil content 10 g·L−1. Thermal decomposition of NPL-10 was also studied by thermogravimetric and differential thermal analyses. NPL-10 decomposes above 220 °C, and decomposition is a two-step process. The kinetic triplet (activation energy, kinetic function and pre-exponential factor) and the corresponding rate law were calculated for each step. Steps 1 and 2 follow kinetics of different order (n = 2 and ½, respectively). These findings provide some criteria for the selection of foaming agents for oil recovery by steam injection.
Decomposition of Sodium Tetraphenylborate
International Nuclear Information System (INIS)
Barnes, M.J.
1998-01-01
The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTBP decomposition. This document describes work aimed at providing better understanding into the relationship of copper (II), solution temperature, and solution pH to NaTPB stability
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-08-14
This paper presents a project on the development of a cursor control emulating the typical operations of a computer-mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed to artifact reduction from head movements as well as a method to detect blinking signals for mouse control. Kalman filter is used as state estimator for mouse position control and jitter removal. The detection rate obtained in average was 94.9%. Experimental setup and some obtained results are presented.
International Nuclear Information System (INIS)
Yang Jia; Ge Liangquan; Xiong Shengqing
2010-01-01
From the spectral shape features of Chang'e-1 γ-ray spectrometer (CE1-GRS) data, it is difficult to determine elemental compositions on the lunar surface. To address this problem, this paper proposes using the noise-adjusted singular value decomposition (NASVD) method to extract orthogonal spectral components from CE1-GRS data. The peak signals in the spectra of the lower-order layers corresponding to the observed spectrum of each lunar region are then analyzed. Elemental compositions of each lunar region can be determined based upon whether the energy corresponding to each peak signal equals the energy of the characteristic gamma-ray line emissions of specific elements. The result shows that a number of elements such as U, Th, K, Fe, Ti, Si, O, Al, Mg, Ca and Na are qualitatively determined by this method. (authors)
Sun, Qi; Fu, Shujun
2017-09-20
Fringe orientation is an important feature of fringe patterns and has a wide range of applications, such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given for simulated and real fringe patterns, demonstrating that RSVD produces the best estimation results at a relatively small cost in computation time.
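The gradient-field idea underlying these methods can be sketched without the regularization that RSVD adds: fringes vary least along their own direction, so the dominant orientation is the smallest right singular vector of the stacked gradient field. The synthetic pattern and parameters below are our own illustration:

```python
import numpy as np

def fringe_orientation(img):
    """Dominant fringe orientation (radians, mod pi) from the gradient
    field: the direction of least variation, i.e. the right singular
    vector of the stacked gradients with the smallest singular value."""
    gy, gx = np.gradient(img.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])
    _, _, vt = np.linalg.svd(G, full_matrices=False)
    dx, dy = vt[-1]                    # least-variation direction
    return np.arctan2(dy, dx) % np.pi  # orientation is defined mod pi

# Synthetic fringes whose gradient points at 30 degrees.
a = np.deg2rad(30)
y, x = np.mgrid[0:128, 0:128]
img = np.cos(2 * np.pi * 0.05 * (x * np.cos(a) + y * np.sin(a)))
est = np.rad2deg(fringe_orientation(img))
print(round(est))  # fringe direction, perpendicular to the gradient: 120
```

On noisy or degenerate patterns this plain SVD estimate breaks down, which is exactly where the smoothing regularization discussed in the paper comes in.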
Directory of Open Access Journals (Sweden)
Julián Pérez-García
2017-03-01
Full Text Available Since 1990, Spain has had one of the highest elasticities of electricity demand in the European Union. We provide an in-depth analysis of the causes of this high elasticity, and we examine how these same causes influence electricity demand in other European countries. To this end, we present an index-decomposition analysis of growth in electricity demand which allows us to identify three key factors in the relationship between gross domestic product (GDP) and electricity demand: (i) structural change; (ii) GDP growth; and (iii) intensity of electricity use. Our findings show that the main differences in electricity demand elasticities across countries and time are accounted for by the fast convergence in residential per capita electricity consumption. This convergence has almost concluded, and we expect the Spanish energy demand elasticity to converge to European standards in the near future.
Azimuthal decomposition of optical modes
CSIR Research Space (South Africa)
Dudley, Angela L
2012-07-01
Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...
Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)
Energy Technology Data Exchange (ETDEWEB)
Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1962-06-15
The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyls) have been measured in the vapour and liquid phases at 460 °C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 °C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase: hydrogen, 73 ± 2 kcal/mol; benzene, 76 ± 2 kcal/mol; meta-triphenyl, 53 ± 2 kcal/mol; biphenyl decomposition, 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the assay of aromatic C-H bonds by infrared spectrophotometry. As a result, the existence in the tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)
LMDI decomposition approach: A guide for implementation
International Nuclear Information System (INIS)
Ang, B.W.
2015-01-01
Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000 which is some years back. After giving an update and presenting the key trends in the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments of index decomposition analysis are briefly reviewed.
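Of the LMDI variants the guide summarizes, the additive LMDI-I form is the most widely used. A minimal sketch on a hypothetical two-sector, three-factor (activity, structure, intensity) example, showing the perfect-decomposition property that the effects sum exactly to the total change:

```python
import math

def logmean(a, b):
    """Logarithmic mean, L(a, a) = a; requires a, b > 0."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_effects(v0, vT, factors0, factorsT):
    """Additive LMDI-I: split Delta V = V_T - V_0 into one effect per
    factor using logarithmic-mean weights. factors* map each sector i
    to the factor values whose product is that sector's term v[i]."""
    effects = [0.0] * len(next(iter(factors0.values())))
    for i in v0:
        w = logmean(vT[i], v0[i])
        for k, (f0, fT) in enumerate(zip(factors0[i], factorsT[i])):
            effects[k] += w * math.log(fT / f0)
    return effects

# Hypothetical energy use E_i = Q * S_i * I_i for two sectors.
f0 = {"ind": (100, 0.6, 2.0), "res": (100, 0.4, 1.0)}
fT = {"ind": (120, 0.5, 1.8), "res": (120, 0.5, 0.9)}
v0 = {i: math.prod(f0[i]) for i in f0}   # 120, 40
vT = {i: math.prod(fT[i]) for i in fT}   # 108, 54
act, struct, inten = lmdi_effects(v0, vT, f0, fT)
total = sum(vT.values()) - sum(v0.values())
print(abs((act + struct + inten) - total) < 1e-9)  # True: no residual
```

The zero residual follows from the identity L(a, b) * ln(a/b) = a - b, which is what makes LMDI "perfect in decomposition" compared with Laspeyres-type methods that leave an interaction term.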
International Nuclear Information System (INIS)
Ebohon, Obas John; Ikeme, Anthony Jekwu
2006-01-01
The need to decompose CO2 emission intensity is predicated upon the need for effective climate change mitigation and adaptation policies. Such analysis enables key variables that drive CO2 emission intensity to be identified, while at the same time providing opportunities to verify the mitigation and adaptation capacities of countries. However, most CO2 decomposition analysis has been conducted for the developed economies, and little attention has been paid to sub-Saharan Africa (SSA). The need for such an analysis for SSA is overwhelming for several reasons. Firstly, the region is among the most vulnerable to climate change. Secondly, there are disparities in the amount and composition of energy consumption and in the levels of economic growth and development in the region. Thus, a decomposition analysis of CO2 emission intensity for SSA affords the opportunity to identify key influencing variables and to see how they compare among countries in the region. Attempts have also been made to distinguish between oil- and non-oil-producing SSA countries. To this effect, a comparative static analysis of CO2 emission intensity for oil-producing and non-oil-producing SSA countries for the period 1971-1998 has been undertaken, using the refined Laspeyres decomposition model. Our analysis confirms the findings for other regions that CO2 emission intensity is attributable to energy consumption intensity, the CO2 emission coefficients of energy types, and economic structure. In particular, the CO2 emission coefficient of energy use was found to exercise the most influence on CO2 emission intensity for both oil- and non-oil-producing sub-Saharan African countries in the first sub-interval of our investigation, 1971-1981. In the second sub-interval, 1981-1991, energy intensity and the structural effect were the two major influencing factors on emission intensity for the two groups of countries. However, the energy intensity effect had the most pronounced impact on CO2 emission
Cellular decomposition in vikalloys
International Nuclear Information System (INIS)
Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.
1981-01-01
Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys of bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.
Daverman, Robert J
2007-01-01
Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier, as well as to demonstrate the interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to everyone.
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
Pitfalls in VAR based return decompositions: A clarification
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...
The finite element analysis of austenite decomposition during continuous cooling in 22MnB5 steel
International Nuclear Information System (INIS)
Chen, Xiangjun; Li, Guangyao; Sun, Guangyong; Xiao, Namin; Li, Dianzhong
2014-01-01
The hot stamping process has been increasingly used in newly designed vehicles to improve crashworthiness and fuel efficiency. In this study, a finite element model based on a subroutine of the commercial software ABAQUS is developed to predict the interactive influence of temperature field and phase transformation on high-strength boron steel. JMAK-type equations with the incubation time and additivity hypothesis are adopted to describe the austenite decomposition into ferrite, pearlite and bainite, while the Koistinen and Marburger (K–M) model is used to describe the displacive transformation of martensite. The simulation results show that the introduction of incubation time into a JMAK equation can provide a more reasonable prediction of the transformation kinetics than if the equation is unmodified. A comparison between the simulation and the standard Jominy end-quenching test demonstrates the capability of the present model for the prediction of transformation kinetics, microstructure distribution and mechanical properties. Furthermore, the adoption of experimentally measured microhardness values for the individual phase constituents can produce improved accuracy of the hardness predictions compared to the empirical hardness equations. (paper)
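The JMAK and Koistinen–Marburger relations named in the abstract have simple closed forms; a minimal sketch follows (the parameter values here are illustrative, not those fitted in the paper; `alpha = 0.011` 1/K is the constant commonly quoted for steels):

```python
import math

def jmak_fraction(k, n, t):
    """JMAK (Avrami) isothermal transformed fraction: X = 1 - exp(-(k*t)^n)."""
    return 1.0 - math.exp(-((k * t) ** n))

def km_fraction(Ms, T, alpha=0.011):
    """Koistinen-Marburger martensite fraction on cooling below Ms (athermal)."""
    return 0.0 if T >= Ms else 1.0 - math.exp(-alpha * (Ms - T))

# Illustrative values: at (k*t) = 1 the JMAK fraction is 1 - exp(-1).
X_iso = jmak_fraction(k=0.1, n=2.0, t=10.0)
X_mart = km_fraction(Ms=400.0, T=300.0)
```

Both expressions are monotone in their driving variables, which is what the additivity hypothesis in the abstract exploits when stepping through a continuous cooling path.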
International Nuclear Information System (INIS)
Wu, Xiaoyang; Liu, Tianyou
2010-01-01
Reflections from a hydrocarbon-saturated zone are generally expected to have a tendency to be low frequency. Previous work has shown the application of seismic spectral decomposition for low-frequency shadow detection. In this paper, we further analyse the characteristics of spectral amplitude in fractured sandstone reservoirs with different fluid saturations using the Wigner–Ville distribution (WVD)-based method. We give a description of the geometric structure of cross-terms due to the bilinear nature of the WVD and eliminate cross-terms using the smoothed pseudo-WVD (SPWVD) with time- and frequency-independent Gaussian kernels as smoothing windows. The SPWVD is finally applied to seismic data from the West Sichuan depression. We focus our study on the comparison of SPWVD spectral amplitudes resulting from different fluid contents. It shows that prolific gas reservoirs feature higher peak spectral amplitude at higher peak frequency, which attenuates faster than that of low-quality gas reservoirs and dry or wet reservoirs. This can be regarded as a spectral attenuation signature for future exploration in the study area.
Ampatzidis, Dimitrios; König, Rolf; Glaser, Susanne; Heinkelmann, Robert; Schuh, Harald; Flechtner, Frank; Nilsson, Tobias
2016-04-01
The aim of our study is to assess the classical Helmert similarity transformation using the Velocity Decomposition Analysis (VEDA). The VEDA is a new methodology, developed by GFZ for the assessment of the temporal variation of reference frames, and it is based on the separation of the velocities into two specified parts: the first is related to the reference system choice (the so-called datum effect), while the second refers to the real deformation of the terrestrial points. The advantage of the VEDA is its ability to detect the relative biases and reference system effects between two different frames or two different realizations of the same frame, respectively. We apply the VEDA to assess several modern tectonic plate models against the recent global terrestrial reference frames.
Directory of Open Access Journals (Sweden)
Yuanyuan Gong
2015-12-01
Full Text Available Carbon emissions calculation at the sub-provincial level has issues in limited data and non-unified measurements. This paper calculated the life cycle energy consumption and carbon emissions of the building industry in Wuhan, China. The findings showed that the proportion of carbon emissions in the construction operation phase was the largest, followed by the carbon emissions of the indirect energy consumption and the construction material preparation phase. With the purpose of analyzing the contributors to the construction carbon emissions, this paper conducted decomposition analysis using the Logarithmic Mean Divisia Index (LMDI). The results indicated that the increasing building area was the major driver of the increase in energy consumption and carbon emissions, followed by the behavior factor. Population growth and urbanization, to some extent, increased the carbon emissions as well. On the contrary, energy efficiency was the main inhibitory factor for reducing the carbon emissions. Policy implications in terms of low-carbon construction development were highlighted.
Thermal decomposition of beryllium perchlorate tetrahydrate
International Nuclear Information System (INIS)
Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.
1975-01-01
Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed via an exchange reaction of the perchloric acid liberated on decomposition with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat-conductivity detector. The exchange reaction yielding CO2 is quantitative; it is not the limiting step and it does not distort the kinetics of perchlorate decomposition. The solid decomposition products were studied by infrared and NMR spectroscopy, X-ray diffraction, thermography and chemical analysis. The suggested decomposition mechanism involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage.
Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films
International Nuclear Information System (INIS)
Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.
2013-01-01
We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the decomposition of powders; however, yttria and all intermediates but YF3 appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films
Directory of Open Access Journals (Sweden)
Orlando Soriano-Vargas
2016-12-01
Full Text Available Spinodal decomposition was studied during the aging of Fe-Cr alloys by means of the numerical solution of the linear and nonlinear Cahn-Hilliard partial differential equations using the explicit finite difference method. Results of the numerical simulation made it possible to describe appropriately the mechanism, morphology and kinetics of phase decomposition during the isothermal aging of these alloys. The growth kinetics of phase decomposition was observed to proceed very slowly during the early stages of aging and to increase considerably as aging progressed. The nonlinear equation was found to be more suitable for describing the early stages of spinodal decomposition than the linear one.
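The explicit finite-difference approach described above can be sketched in one dimension. This is a simplified illustration with a standard double-well free energy and unit mobility, not the authors' Fe-Cr parameterization:

```python
import numpy as np

def cahn_hilliard_1d(c, steps, dt=0.005, dx=1.0, M=1.0, kappa=1.0):
    """Explicit finite-difference integration of the 1D Cahn-Hilliard equation
    c_t = M * (f'(c) - kappa*c_xx)_xx with f(c) = (c^2 - 1)^2 / 4
    and periodic boundaries."""
    def lap(u):  # second-order central Laplacian
        return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    for _ in range(steps):
        mu = c**3 - c - kappa * lap(c)   # chemical potential f'(c) - kappa*c''
        c = c + dt * M * lap(mu)
    return c

rng = np.random.default_rng(0)
c0 = 0.02 * rng.standard_normal(128)   # small fluctuations around c = 0
c1 = cahn_hilliard_1d(c0.copy(), steps=20000)
```

Composition is conserved by construction (the Laplacian of any periodic field sums to zero), while the initially tiny fluctuations inside the spinodal region are amplified into decomposed domains, which is the slow-then-fast growth behaviour the abstract reports.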
International Nuclear Information System (INIS)
Wu, Ya; Zhang, Wanying
2016-01-01
With the rapid development of the economy, and especially the constant progress of industrialisation and urbanisation, China's energy consumption has increased annually. Coal consumption, which accounts for about 70% of total energy consumption, is of particular concern. Hence, it is crucial to study the driving factors behind coal demand in China. This work uses an input-output structural decomposition analysis (I-O SDA) model to decompose the increments of coal demand in China from 1997 to 2012 into the sum of the weighted averages for eight driving factors from three aspects: domestic demand, foreign trade and industrial upgrading. Results show that, during the research period, the demand for coal increased by 153.3%: the driving forces of domestic demand and foreign trade contributed increases of 185.4% and 76.4%, respectively, while industrial upgrading effectively restrained the growth in coal demand with a contribution rate of −108.6%. On this basis, we mainly studied the driving factors of coal demand in six high energy-consuming industries, namely the electrical power, energy processing, metals, mining, building materials and chemical industries. Finally, we propose targeted policy suggestions for the realisation of energy conservation and emissions reduction in China. - Highlights: •The driving factors behind coal demand in China from 1997 to 2012 are studied. •An input-output structural decomposition analysis is developed. •A fresh perspective of domestic demand, foreign trade, and industrial upgrading is employed. •The influences of these affecting factors on China's coal demand from six high energy-consuming industries are investigated. •Targeted policy suggestions for energy conservation and emissions reduction are suggested.
Photochemical decomposition of catecholamines
International Nuclear Information System (INIS)
Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.
1979-01-01
During photochemical decomposition (λ = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)
Maika, Amelia; Mittinty, Murthy N.; Brinkman, Sally; Harper, Sam; Satriawan, Elan; Lynch, John W.
2013-01-01
Background Measuring social inequalities in health is common; however, research examining inequalities in child cognitive function is more limited. We investigated household expenditure-related inequality in children’s cognitive function in Indonesia in 2000 and 2007, the contributors to inequality in both time periods, and changes in the contributors to cognitive function inequalities between the periods. Methods Data from the 2000 and 2007 round of the Indonesian Family Life Survey (IFLS) were used. Study participants were children aged 7–14 years (n = 6179 and n = 6680 in 2000 and 2007, respectively). The relative concentration index (RCI) was used to measure the magnitude of inequality. Contribution of various contributors to inequality was estimated by decomposing the concentration index in 2000 and 2007. Oaxaca-type decomposition was used to estimate changes in contributors to inequality between 2000 and 2007. Results Expenditure inequality decreased by 45% from an RCI = 0.29 (95% CI 0.22 to 0.36) in 2000 to 0.16 (95% CI 0.13 to 0.20) in 2007 but the burden of poorer cognitive function was higher among the disadvantaged in both years. The largest contributors to inequality in child cognitive function were inequalities in per capita expenditure, use of improved sanitation and maternal high school attendance. Changes in maternal high school participation (27%), use of improved sanitation (25%) and per capita expenditures (18%) were largely responsible for the decreasing inequality in children’s cognitive function between 2000 and 2007. Conclusions Government policy to increase basic education coverage for women along with economic growth may have influenced gains in children’s cognitive function and reductions in inequalities in Indonesia. PMID:24205322
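The relative concentration index used in the study has a well-known "convenient covariance" form, C = 2·cov(h, r)/μ, where r is the fractional rank of individuals when sorted by living standards. A small sketch with made-up numbers (not the IFLS computation):

```python
import numpy as np

def concentration_index(health, expenditure):
    """Relative concentration index via C = 2*cov(h, r)/mean(h), where r is
    the fractional rank of individuals ordered by expenditure."""
    order = np.argsort(expenditure)
    h = np.asarray(health, dtype=float)[order]
    n = h.size
    r = (np.arange(1, n + 1) - 0.5) / n     # fractional rank in (0, 1)
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

# Equal outcomes across the expenditure distribution give C = 0;
# outcomes rising with expenditure give C > 0 (pro-rich inequality).
spend = np.arange(10.0)
ci_flat = concentration_index(np.ones(10), spend)
ci_rich = concentration_index(np.arange(1.0, 11.0), spend)
```

Decomposition approaches of the kind the abstract describes then split C into contributions of covariates, each weighted by its elasticity with respect to mean health.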
International Nuclear Information System (INIS)
Branger, Frédéric; Quirion, Philippe
2015-01-01
We analyse variations of carbon emissions in the European cement industry from 1990 to 2012, at the European level (EU 27), and at the national level for six major producers (Germany, France, Spain, the United Kingdom, Italy and Poland). We apply a Log-Mean Divisia Index (LMDI) method, cross-referencing data from three databases: the Getting the Numbers Right (GNR) database developed by the Cement Sustainability Initiative, the European Union Transaction Log (EUTL), and the Eurostat International Trade database. Our decomposition method allows seven channels of emission change to be distinguished: activity, clinker trade, clinker share, alternative fuels, thermal and electrical energy efficiency, and electricity decarbonisation. We find that, apart from a slow trend of emission reductions coming from technological improvements (first from a decrease in the clinker share, then from an increase in alternative fuels), most of the emission change can be attributed to the activity effect. Using counterfactual scenarios, we estimate that the introduction of the EU ETS brought small but positive technological abatement (2.2% ± 1.3% between 2005 and 2012). Moreover, we find that the European cement industry has gained 3.5 billion Euros of “overallocation profits”, mostly due to the slowdown of production. - Highlights: • We analyse variations of carbon emissions in the European cement industry. • We apply a Log-Mean Divisia Index (LMDI) method. • Most of the emission changes can be attributed to the activity effect. • The EU ETS brought small but positive technological abatement. • The European cement industry has gained 3.5 billion Euros of “overallocation profits”
García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M
2016-10-26
The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman, XRD and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer derived ceramics (PDCs), and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempting any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical processes comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide, it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Fraser-Suzuki function. Such a function is more appropriate than the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. Then, the kinetic parameters of each constituent step were independently determined using both model-free and model-fitting procedures, and it was found that the processes obey mostly diffusion models, which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall processes yielded by different thermal schedules and by a mixed TTCS-CERASET precursor.
Hernández, Ana Belén; Okonta, Felix; Freeman, Ntuli
2017-07-01
Thermochemical valorisation processes that allow energy to be recovered from sewage sludge, such as pyrolysis and gasification, have demonstrated great potential as convenient alternatives to conventional sewage sludge disposal technologies. Moreover, these processes may benefit from CO2 recycling. Today, the scaling up of these technologies requires an advanced knowledge of the reactivity of sewage sludge and the characteristics of the products, specific to the thermochemical process. In this study the behaviour of sewage sludge during thermochemical conversion, under different atmospheres (N2, CO2 and air), was studied, using TGA-FTIR, in order to understand the effects of different atmospheric gases on the kinetics of degradation and on the gaseous products. The different steps observed during the solid degradation were related with the production of different gaseous compounds. A higher oxidative degree of the atmosphere surrounding the sample resulted in higher reaction rates and a shift of the degradation mechanisms to lower temperatures, especially for the mechanisms taking place at temperatures above 400 °C. Finally, a multiple first-order reaction model was proposed to compare the kinetic parameters obtained under different atmospheres. Overall, the highest activation energies were obtained for combustion. This work proves that CO2, an intermediate oxidative atmosphere between N2 and air, results in an intermediate behaviour (intermediate peaks in the derivative thermogravimetric curves and intermediate activation energies) during the thermochemical decomposition of sewage sludge. Overall, it can be concluded that the kinetics of these different processes require a different approach for their scaling up, and specific consideration of their characteristic reaction temperatures and rates should be evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
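A multiple first-order model of the kind proposed above treats each degradation step as dα/dT = (A/β)·exp(−Ea/RT)·(1−α) under linear heating at rate β. A minimal single-step sketch (the A, Ea and β values below are illustrative, not the fitted sludge parameters):

```python
import math

def first_order_tga(A, Ea, beta, T0=300.0, T1=1100.0, dT=0.01):
    """Explicit Euler integration of dalpha/dT = (A/beta)*exp(-Ea/(R*T))*(1-alpha)
    for one first-order step under linear heating.
    A: pre-exponential (1/s), Ea: activation energy (J/mol), beta: heating rate (K/s)."""
    R = 8.314  # gas constant, J/(mol K)
    alpha, T = 0.0, T0
    curve = []
    while T < T1:
        k = (A / beta) * math.exp(-Ea / (R * T))
        alpha = min(1.0, alpha + dT * k * (1.0 - alpha))  # clamp for robustness
        T += dT
        curve.append((T, alpha))
    return curve

# 10 K/min heating; this step runs to completion well below 1100 K.
curve = first_order_tga(A=1e10, Ea=150e3, beta=10.0 / 60.0)
```

The derivative of α with respect to T reproduces the DTG peak shape; a multi-step model sums several such terms with different (A, Ea) per mechanism, as the abstract does for each atmosphere.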
Bosomprah, Samuel; Aryeetey, Genevieve Cecelia; Nonvignon, Justice; Adanu, Richard M
2014-12-24
The single most critical intervention to improve maternal and neonatal survival is to ensure that a competent health worker with midwifery skills is present at every birth, and that transport is available to a referral facility for obstetric care in case of an emergency. This study aims to describe changes in the percentage of skilled birth attendants in Ghana and to identify causes of the observed changes, as well as the contribution of different categories of mothers' characteristics to these changes. This study uses two successive nationally representative household surveys: the 2003 and 2008 Ghana Demographic and Health Surveys (GDHS). The two datasets have comparable information on household characteristics and skilled attendance at birth at the time of the survey. The 2003 GDHS database includes information on 6,251 households and 3,639 live births in the five years preceding the survey, whereas the 2008 GDHS database has information on 11,778 households and 2,909 live births in the five years preceding the survey. A decomposition approach was used to explain the observed change in the percentage of skilled birth attendants. Random-effects generalized least squares regression was used to explore the effect of changes in population structure, in respect of mothers' characteristics, on the percentage of skilled birth attendants over the period. Overall, the data showed an absolute gain in the proportion of births attended by a health professional from 47.1% in 2003 to 58.7% in 2008, which represents 21.9% of the gap closed to reach universal coverage. The increase in skilled birth attendance was found to be caused by changes in general health behaviour, regardless of the mother's characteristics. The structural change in the proportion of births in respect of birth order and mother's education had little effect on the change in the percentage of skilled birth attendants. Improvement in general health behaviour can potentially contribute to an accelerated increase in proportion
Directory of Open Access Journals (Sweden)
Ajmera M
2014-04-01
Full Text Available Mayank Ajmera,1 Amit D Raval,1 Chan Shen,2 Usha Sambamoorthi1 1Department of Pharmaceutical Systems and Policy, School of Pharmacy, School of Medicine, West Virginia University, Morgantown, WV, USA; 2Department of Biostatistics and Health Services Research, University of Texas MD Anderson Cancer Center, Houston, TX, USA Objective: To estimate excess health care expenditures associated with gastroesophageal reflux disease (GERD) among elderly individuals with chronic obstructive pulmonary disease (COPD) and examine the contribution of predisposing characteristics, enabling resources, need variables, personal health care practices, and external environment factors to the excess expenditures, using the Blinder-Oaxaca linear decomposition technique. Methods: This study utilized a cross-sectional, retrospective study design, using data from multiple years (2006-2009) of the Medicare Current Beneficiary Survey linked with fee-for-service Medicare claims. Presence of COPD and GERD was identified using diagnosis codes. Health care expenditures consisted of inpatient, outpatient, prescription drugs, dental, medical provider, and other services. For the analysis, t-tests were used to examine unadjusted subgroup differences in average health care expenditures by the presence of GERD. Ordinary least squares regressions on log-transformed health care expenditures were conducted to estimate the excess health care expenditures associated with GERD. The Blinder-Oaxaca linear decomposition technique was used to determine the contribution of predisposing characteristics, enabling resources, need variables, personal health care practices, and external environment factors to excess health care expenditures associated with GERD. Results: Among elderly Medicare beneficiaries with COPD, 29.3% had co-occurring GERD. Elderly Medicare beneficiaries with COPD/GERD had 1.5 times higher ($36,793 vs $24,722 [P<0.001]) expenditures than did those with COPD/no GERD. Ordinary
Visualization of pairwise and multilocus linkage disequilibrium structure using latent forests.
Directory of Open Access Journals (Sweden)
Raphaël Mourad
Full Text Available The study of linkage disequilibrium represents a major issue in statistical genetics, as it plays a fundamental role in gene mapping and helps us to learn more about human history. The complex structure of linkage disequilibrium makes its exploratory data analysis essential yet challenging. Visualization methods, such as the triangular heat map implemented in Haploview, provide simple and useful tools to help understand complex genetic patterns, but remain insufficient to fully describe them. Probabilistic graphical models have been widely recognized as a powerful formalism allowing a concise and accurate modeling of dependences between variables. In this paper, we propose a method for short-range, long-range and chromosome-wide linkage disequilibrium visualization using forests of hierarchical latent class models. Thanks to its hierarchical nature, our method is shown to provide a compact view of both pairwise and multilocus linkage disequilibrium spatial structures for the geneticist. Besides, a multilocus linkage disequilibrium measure has been designed to evaluate linkage disequilibrium in hierarchy clusters. To learn the proposed model, a new scalable algorithm is presented. It constrains the dependence scope, relying on physical positions, and is able to deal with more than one hundred thousand single nucleotide polymorphisms. The proposed algorithm is fast and does not require phased genotype data.
Prediction of microsleeps using pairwise joint entropy and mutual information between EEG channels.
Baseer, Abdul; Weddell, Stephen J; Jones, Richard D
2017-07-01
Microsleeps are involuntary and brief instances of complete loss of responsiveness, typically of 0.5-15 s duration. They adversely affect performance in extended attention-driven jobs and can be fatal. Our aim was to predict microsleeps from 16-channel EEG signals. Two information-theoretic concepts - pairwise joint entropy and mutual information - were independently used to continuously extract features from the EEG signals. k-nearest neighbour (kNN) estimation with k = 3 was used to calculate both joint entropy and mutual information. Highly correlated features were discarded and the rest were ranked using the Fisher score followed by the average 3-fold cross-validated area under the curve of the receiver operating characteristic (AUC-ROC). The leave-one-out method (LOOM) was used to test the performance of the microsleep prediction system on independent data. The best prediction, 0.25 s ahead, achieved AUC-ROC, sensitivity, precision, geometric mean (GM), and φ of 0.93, 0.68, 0.33, 0.75, and 0.38, respectively, using joint entropy with a single linear discriminant analysis (LDA) classifier.
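Pairwise mutual information between channels can be illustrated with a simple histogram estimator; this is a stand-in for the kNN estimator used in the paper, and the signals below are synthetic, not EEG:

```python
import numpy as np

def pairwise_mi(x, y, bins=16):
    """Mutual information (in nats) between two signals via a 2D histogram.
    A simpler stand-in for the k-nearest-neighbour estimator in the paper."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
a = rng.standard_normal(5000)                # "channel 1"
b = a + 0.1 * rng.standard_normal(5000)      # strongly coupled "channel 2"
c = rng.standard_normal(5000)                # independent "channel 3"
```

Coupled channels yield a much larger MI than independent ones, which is what makes such pairwise features informative for a downstream classifier.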
Directory of Open Access Journals (Sweden)
Beidou Xi
2012-06-01
Full Text Available China's industry accounts for 46.8% of the national Gross Domestic Product (GDP) and plays an important strategic role in its economic growth. On the other hand, industrial wastewater is also the major source of water pollution. In order to examine the relationship between the underlying driving forces and various environmental indicators, values of two critical industrial wastewater pollutant discharge parameters (Chemical Oxygen Demand (COD) and ammonia nitrogen (NH4-N)) between 2001 and 2009 were decomposed into three factors: production effects (caused by change in the scale of economic activity), structure effects (caused by change in economic structure) and intensity effects (caused by change in the technological level of each sector), using the additive version of the Logarithmic Mean Divisia Index (LMDI I) decomposition method. Results showed that: (1) the average annual effect of COD discharges in China was −2.99%, whereas the production effect, the structure effect, and the intensity effect were 14.64%, −1.39%, and −16.24%, respectively; similarly, the average effect of NH4-N discharges was −4.03%, while the production effect, the structure effect, and the intensity effect were 16.18%, −2.88%, and −17.33%, respectively; (2) the production effect was the major factor responsible for the increase in COD and NH4-N discharges, accounting for 45% and 44% of the total contribution, respectively; (3) the intensity effect, which accounted for 50% and 48% of the total contribution, respectively, exerted a dominant decremental effect on COD and NH4-N discharges; the intensity effect was further decomposed into a cleaner production effect and a pollution abatement effect, with the cleaner production effect accounting for 60% and 55% of the reduction of COD and NH4-N, respectively; (4) the major contributors to incremental COD and NH4-N discharges were divided among industrial sub
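The additive LMDI-I decomposition used above has a compact form: writing discharges as E = Σ_i Q·S_i·I_i (scale × structure × intensity over sectors i), each effect is a log-mean-weighted sum of log-ratios, and the three effects sum exactly to ΔE. A small sketch with illustrative numbers:

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(Q0, S0, I0, QT, ST, IT):
    """Additive LMDI-I for E = sum_i Q * S_i * I_i: production (scale),
    structure, and intensity effects on the change in E between periods 0 and T."""
    eff = {"production": 0.0, "structure": 0.0, "intensity": 0.0}
    for s0, i0, sT, iT in zip(S0, I0, ST, IT):
        w = logmean(QT * sT * iT, Q0 * s0 * i0)  # log-mean weight per sector
        eff["production"] += w * math.log(QT / Q0)
        eff["structure"] += w * math.log(sT / s0)
        eff["intensity"] += w * math.log(iT / i0)
    return eff

# Two-sector example: the three effects sum exactly to the total change in E.
eff = lmdi_additive(100.0, [0.6, 0.4], [2.0, 1.0], 150.0, [0.5, 0.5], [1.5, 0.9])
```

The exact (residual-free) additivity is the reason LMDI is preferred over Laspeyres-type decompositions in studies like this one.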
Directory of Open Access Journals (Sweden)
Prashant SINGH
2011-03-01
Full Text Available This paper presents a theoretical analysis of a new approach to the development of a surface acoustic wave (SAW) sensor-array-based odor recognition system. The sensor array employs a single polymer interface for selective sorption of odorant chemicals in the vapor phase; the individual sensors, however, are coated with different thicknesses. The idea behind varying the coating thickness is to terminate the solvation and diffusion kinetics of vapors into the polymer at different stages of equilibration on different sensors. This is expected to generate diversity in the information content of the sensor transients. The analysis is based on wavelet decomposition of the transient signals. Single-sensor transients have been used earlier for generating odor identity signatures based on wavelet approximation coefficients. In the present work, however, we exploit the variability in diffusion kinetics due to polymer thickness for making odor signatures. This is done by fusing the wavelet coefficients from the different sensors in the array and then applying principal component analysis. We find that the present approach substantially enhances vapor class separability in feature space. Validation is done by generating synthetic sensor array data based on well-established SAW sensor theory.
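The feature pipeline described above - wavelet approximation coefficients per sensor, fused across the array, then PCA - can be sketched with a hand-rolled Haar transform. The data below are random placeholders, not SAW transients, and the array sizes are arbitrary:

```python
import numpy as np

def haar_approx(signal, levels=3):
    """Haar wavelet approximation coefficients after `levels` averaging steps.
    Signal length must be divisible by 2**levels."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

def pca_project(X, k=2):
    """Project rows of the fused feature matrix X onto the first k principal
    components, computed via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Fuse approximation coefficients from several "sensors" into one feature row.
rng = np.random.default_rng(2)
transients = rng.standard_normal((6, 4, 64))   # 6 samples x 4 sensors x 64 points
features = np.array([np.concatenate([haar_approx(s) for s in sample])
                     for sample in transients])
scores = pca_project(features, k=2)
```

Concatenating coefficients from sensors with different coating thicknesses is exactly the fusion step that injects the diffusion-kinetics diversity into one feature vector before PCA.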
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances
Directory of Open Access Journals (Sweden)
Manuel Gil
2014-09-01
Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989), which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
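The link between covariance and shared path length is easy to see in a toy setting: if branchwise mutation counts are independent Poisson variables, two distance estimates covary exactly through the count on their shared branch. A small simulation on an illustrative three-leaf tree (a sketch of the idea, not the paper's estimator):

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy tree ((A,B),C): d(A,C) = a + c and d(B,C) = b + c share branch c.
# With Poisson mutation counts per branch, cov(d_AC, d_BC) = Var(N_c) = c,
# i.e. the covariance equals the length of the shared path.
a, b, c, n = 2.0, 3.0, 1.5, 200_000
Na = rng.poisson(a, n)
Nb = rng.poisson(b, n)
Nc = rng.poisson(c, n)          # shared mutation events
d_ac = Na + Nc                  # mutation-count estimate of path A-C
d_bc = Nb + Nc                  # mutation-count estimate of path B-C
cov = np.cov(d_ac, d_bc)[0, 1]
print(round(cov, 1))            # close to 1.5, the shared path length
```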
Energy Technology Data Exchange (ETDEWEB)
Bond, Stephen D.
2014-01-01
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
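The core idea behind multilevel summation is to split the 1/r kernel into a short-range part that vanishes beyond a cutoff (summed directly) and a smooth remainder suitable for interpolation on coarse grids. The sketch below verifies the splitting identity against a direct O(N²) sum; the softened kernel is an assumed simple C0 form, not the report's:

```python
import itertools, math, random
random.seed(0)

def split(r, a=0.5):
    # 1/r = short(r) + smooth(r): short vanishes for r >= a; smooth varies
    # slowly and is what multilevel summation interpolates on grids.
    smooth = (2 - r / a) / a if r < a else 1.0 / r   # assumed C0 softening
    return 1.0 / r - smooth, smooth

N = 50
pos = [tuple(random.random() for _ in range(3)) for _ in range(N)]
q = [random.choice((-1.0, 1.0)) for _ in range(N)]

E_full = E_short = E_smooth = 0.0
for i, j in itertools.combinations(range(N), 2):
    r = math.dist(pos[i], pos[j])
    s, g = split(r)
    E_full += q[i] * q[j] / r      # the O(N^2) baseline the method improves on
    E_short += q[i] * q[j] * s
    E_smooth += q[i] * q[j] * g

print(abs(E_full - (E_short + E_smooth)) < 1e-9)  # True: exact splitting
```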
Heo, Muyoung; Kim, Suhkmann; Moon, Eun-Joung; Cheon, Mookyung; Chung, Kwanghoon; Chang, Iksoo
2005-07-01
Although a coarse-grained description of proteins is a simple and convenient way to attack the protein folding problem, the construction of a global pairwise energy function which can simultaneously recognize the native folds of many proteins has resulted in partial success. We have sought the possibility of a systematic improvement of this pairwise-contact energy function as we extended the parameter space of amino acids, incorporating local environments of amino acids, beyond a 20×20 matrix. We have studied the pairwise contact energy functions of 20×20, 60×60, and 180×180 matrices depending on the extent of parameter space, and compared their effect on the learnability of energy parameters in the context of a gapless threading, bearing in mind that a 20×20 pairwise contact matrix has been shown to be too simple to recognize the native folds of many proteins. In this paper, we show that the construction of a global pairwise energy function was achieved using 1006 training proteins of a homology of less than 30%, which include all representatives of different protein classes. After parametrizing the local environments of the amino acids into nine categories depending on three secondary structures and three kinds of hydrophobicity (desolvation), the 16,290 pairwise contact energies (scores) of the amino acids could be determined by perceptron learning and protein threading. These could simultaneously recognize all the native folds of the 1006 training proteins. When these energy parameters were tested on the 382 test proteins of a homology of less than 90%, 370 (96.9%) proteins could recognize their native folds. We set up a simple thermodynamic framework in the conformational space of decoys to calculate the unfolded fraction and the specific heat of real proteins. The different thermodynamic stabilities of E. coli ribonuclease H (RNase H) and its mutants were well described in our calculation, agreeing with the experiment.
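The perceptron-learning step can be illustrated on synthetic data: the energy parameters are updated whenever a native contact vector fails to score strictly below its decoy, until every training example is recognised. Everything below (dimensions, data, learning rate) is a toy stand-in for the paper's 16,290-parameter problem:

```python
import random
random.seed(3)

P = 10   # toy parameter count; the paper learns 16,290 contact energies

def contact_vector():
    return [random.randint(0, 5) for _ in range(P)]

# Hypothetical, linearly separable training set: a hidden "true" energy
# decides which of two random contact vectors plays the native role.
e_true = [random.uniform(-1, 1) for _ in range(P)]
data = []
while len(data) < 30:
    x, y = contact_vector(), contact_vector()
    sx = sum(e * v for e, v in zip(e_true, x))
    sy = sum(e * v for e, v in zip(e_true, y))
    if abs(sx - sy) > 0.5:                       # keep a clear margin
        data.append((x, y) if sx < sy else (y, x))   # (native, decoy)

# Perceptron learning: push parameters until every native structure scores
# strictly lower than its decoy, as in the gapless-threading scheme.
e = [0.0] * P
for _ in range(5000):
    violated = False
    for native, decoy in data:
        if sum(ei * (ni - di) for ei, ni, di in zip(e, native, decoy)) >= 0:
            violated = True
            e = [ei - 0.1 * (ni - di) for ei, ni, di in zip(e, native, decoy)]
    if not violated:
        break

print(violated)  # False: all 30 native examples recognised
```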
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Directory of Open Access Journals (Sweden)
Brejnev Muhizi Muhire
Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
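The gap-handling choices the abstract discusses directly change the computed identity value. A minimal illustration with hypothetical aligned sequences (a sketch of the issue, not SDT's actual scoring code):

```python
def pairwise_identity(a, b, count_gaps=True):
    """Fraction of identical positions in a pairwise alignment.

    Gap handling is one of the methodological choices discussed above:
    counting gapped columns in the denominator deflates identity relative
    to ignoring them.
    """
    assert len(a) == len(b)
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    if count_gaps:
        sites = sum(not (x == "-" and y == "-") for x, y in zip(a, b))
    else:
        sites = sum(x != "-" and y != "-" for x, y in zip(a, b))
    return matches / sites

# Toy aligned pair (hypothetical sequences)
a = "ACGT-ACGT"
b = "ACGTTAC-T"
print(round(pairwise_identity(a, b, count_gaps=True), 3))   # 0.778
print(pairwise_identity(a, b, count_gaps=False))            # 1.0
```

The same alignment yields 77.8% or 100% identity depending only on how gapped columns are counted, which is exactly the inconsistency SDT aims to standardise away.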
Decomposing Nekrasov decomposition
Energy Technology Data Exchange (ETDEWEB)
Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)
2016-02-16
AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.
Decomposing Nekrasov decomposition
International Nuclear Information System (INIS)
Morozov, A.; Zenkevich, Y.
2016-01-01
AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
Pairwise additivity in the nuclear magnetic resonance interactions of atomic xenon.
Hanni, Matti; Lantto, Perttu; Vaara, Juha
2009-04-14
Nuclear magnetic resonance (NMR) of atomic (129/131)Xe is used as a versatile probe of the structure and dynamics of various host materials, due to the sensitivity of the Xe NMR parameters to intermolecular interactions. The principles governing this sensitivity can be investigated using the prototypic system of interacting Xe atoms. In the pairwise additive approximation (PAA), the binary NMR chemical shift, nuclear quadrupole coupling (NQC), and spin-rotation (SR) curves for the xenon dimer are utilized for fast and efficient evaluation of the corresponding NMR tensors in small xenon clusters Xe(n) (n = 2-12). If accurate, the preparametrized PAA enables the analysis of the NMR properties of xenon clusters, condensed xenon phases, and xenon gas without having to resort to electronic structure calculations of instantaneous configurations for n > 2. The binary parameters for Xe(2) at different internuclear distances were obtained at the nonrelativistic Hartree-Fock level of theory. Quantum-chemical (QC) calculations at the corresponding level were used to obtain the NMR parameters of the Xe(n) (n = 2-12) clusters at the equilibrium geometries. Comparison of PAA and QC data indicates that the direct use of the binary property curves of Xe(2) can be expected to be well-suited for the analysis of Xe NMR in the gaseous phase dominated by binary collisions. For use in condensed phases where many-body effects should be considered, effective binary property functions were fitted using the principal components of QC tensors from Xe(n) clusters. Particularly, the chemical shift in Xe(n) is strikingly well-described by the effective PAA. The coordination number Z of the Xe site is found to be the most important factor determining the chemical shift, with the largest shifts being found for high-symmetry sites with the largest Z. This is rationalized in terms of the density of virtual electronic states available for response to magnetic perturbations.
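The pairwise additive approximation itself is just a sum of a binary property curve over all neighbours of an atom. The sketch below uses an invented exponential stand-in for the Xe2 binary chemical-shift curve to show the qualitative coordination-number effect the abstract reports (higher coordination, larger shift); neither the curve nor the geometry comes from the paper:

```python
import itertools, math

def binary_shift(r):
    # Hypothetical stand-in for the Xe2 binary chemical-shift curve delta(r):
    # a short-range overlap contribution decaying with distance (ppm).
    return 50.0 * math.exp(-(r - 4.0))

def paa_shift(positions, site):
    # Pairwise additive approximation: sum binary contributions from all
    # other atoms in the cluster, evaluated at the pair distances.
    return sum(binary_shift(math.dist(positions[site], p))
               for i, p in enumerate(positions) if i != site)

# Toy linear Xe3 cluster; the 4.36 spacing is an assumed dimer-like distance
xe3 = [(0.0, 0.0, 0.0), (4.36, 0.0, 0.0), (8.72, 0.0, 0.0)]
center, end = paa_shift(xe3, 1), paa_shift(xe3, 0)
print(center > end)  # True: the 2-coordinate site shifts more than the end site
```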
A handbook of decomposition methods in analytical chemistry
International Nuclear Information System (INIS)
Bok, R.
1984-01-01
Decomposition methods of metals, alloys, fluxes, slags, calcine, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, phyto- and biological materials from the viewpoint of sample preparation for analysis have been described. The methods are systemitized according to decomposition principle: thermal with the use of electricity, irradiation, dissolution with participation of chemical reactions and without it. Special equipment for different decomposition methods is described. Bibliography contains 3420 references
Directory of Open Access Journals (Sweden)
Paulo Antonio Delgado-Arredondo
2015-01-01
Full Text Available Induction motors are critical components in most industries, and condition monitoring has become necessary to detect faults. There are several techniques for fault diagnosis of induction motors, and analyzing the startup transient vibration signals is not as widely used as other techniques such as motor current signature analysis. Vibration analysis gives a fault diagnosis focused on the location of the spectral components associated with faults. This paper therefore presents a comparative study of different time-frequency analysis methodologies that can be used for detecting faults in induction motors by analyzing vibration signals during the startup transient. The studied methodologies are the time-frequency distribution of Gabor (TFDG), the time-frequency Morlet scalogram (TFMS), multiple signal classification (MUSIC), and the fast Fourier transform (FFT). The analyzed vibration signals correspond to one broken rotor bar, two broken bars, unbalance, and bearing defects. The obtained results have shown the feasibility of detecting faults in induction motors using time-frequency spectral analysis applied to vibration signals. The proposed methodology is applicable when current signals are unavailable and only vibration signals are at hand; it also applies to motors that are not fed directly from the supply line, in which case the analysis of current signals is not recommended due to poor current signal quality.
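Of the methods compared, the FFT baseline is the simplest to sketch: a synthetic vibration signal containing a supply-related component plus a fault-related component, with both peaks recovered from the magnitude spectrum. The frequencies are illustrative choices, not the paper's fault signatures:

```python
import numpy as np

# Synthetic vibration signal: 50 Hz supply component + 0.3-amplitude
# 35 Hz fault-related component (both frequencies are assumptions)
fs, T = 1000, 2.0
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 35 * t)

# Magnitude spectrum; both tones sit exactly on FFT bins (0.5 Hz resolution)
spec = np.abs(np.fft.rfft(x)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[np.argsort(spec)[-2:]]     # two largest spectral peaks
print(sorted(peaks.tolist()))            # [35.0, 50.0]
```

Time-frequency methods like TFDG and TFMS extend this picture by resolving how such components drift during the startup transient, which a single FFT cannot show.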
A Bayesian method for detecting pairwise associations in compositional data.
Directory of Open Access Journals (Sweden)
Emma Schwager
2017-11-01
Full Text Available Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance) to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats.
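The normalization problem that motivates BAnOCC is easy to demonstrate: independent unobserved counts acquire negative correlations once each sample is divided by its total. A small numpy illustration of the compositional artifact (not the BAnOCC model itself):

```python
import numpy as np
rng = np.random.default_rng(7)

# Three independent (uncorrelated) unobserved bases per sample
counts = rng.lognormal(3.0, 1.0, (5000, 3))
# Normalise each sample to proportions summing to 1 (the composition)
comp = counts / counts.sum(axis=1, keepdims=True)

r_basis = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
r_comp = np.corrcoef(comp[:, 0], comp[:, 1])[0, 1]
# The bases are uncorrelated, but the sum constraint forces the proportions
# into a clearly negative correlation
print(abs(r_basis) < 0.05, r_comp < -0.2)
```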
DEFF Research Database (Denmark)
Bøtker, Johan P; Karmwar, Pranav; Strachan, Clare J
2011-01-01
to analyse the cryo-milled samples. The high similarity between the ¿-indomethacin cryogenic ball milled samples and the crude ¿-indomethacin indicated that milled samples retained residual order of the ¿-form. The PDF analysis encompassed the capability of achieving a correlation with the physical......The aim of this study was to investigate the usefulness of the atomic pair-wise distribution function (PDF) to detect the extension of disorder/amorphousness induced into a crystalline drug using a cryo-milling technique, and to determine the optimal milling times to achieve amorphisation. The PDF...... properties determined from DSC, ss-NMR and stability experiments. Multivariate data analysis (MVDA) was used to visualize the differences in the PDF and XRPD data. The MVDA approach revealed that PDF is more efficient in assessing the introduced degree of disorder in ¿-indomethacin after cryo-milling than...
Document Level Assessment of Document Retrieval Systems in a Pairwise System Evaluation
Rajagopal, Prabha; Ravana, Sri Devi
2017-01-01
Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…
Image ranking in video sequences using pairwise image comparisons and temporal smoothing
CSIR Research Space (South Africa)
Burke, Michael
2016-12-01
Full Text Available The ability to predict the importance of an image is highly desirable in computer vision. This work introduces an image ranking scheme suitable for use in video or image sequences. Pairwise image comparisons are used to determine image ‘interest...
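One standard way to turn pairwise "more interesting" judgements into a ranking is a Bradley-Terry model. The abstract does not specify the model it fits, so the gradient-ascent sketch below is an assumed stand-in with hypothetical comparison outcomes:

```python
import math

# Pairwise comparison outcomes as (winner, loser) image indices (hypothetical)
comparisons = [(0, 1), (0, 2), (1, 2), (0, 3), (2, 3), (1, 3)]

# Bradley-Terry: P(i beats j) = sigmoid(score_i - score_j); fit scores by
# gradient ascent on the log-likelihood
scores = [0.0] * 4
for _ in range(500):
    grad = [0.0] * 4
    for w, l in comparisons:
        p = 1 / (1 + math.exp(scores[l] - scores[w]))   # predicted P(w beats l)
        grad[w] += 1 - p
        grad[l] -= 1 - p
    scores = [s + 0.1 * g for s, g in zip(scores, grad)]

ranking = sorted(range(4), key=lambda i: -scores[i])
print(ranking[0], ranking[-1])  # 0 3: image 0 won every comparison, image 3 none
```

The temporal smoothing in the title would then regularise these scores across neighbouring frames, which the sketch omits.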
Eisinga, R.N.; Heskes, T.M.; Pelzer, B.J.; Grotenhuis, H.F. te
2017-01-01
Background: The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to
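The rank sums in question are obtained by ranking treatments within each block and summing across blocks; the Friedman statistic is then a simple function of those sums, and post-hoc pairwise comparisons test differences between them. A toy computation with hypothetical data:

```python
# Rows = blocks (e.g. datasets), columns = treatments (e.g. methods)
data = [
    [8.2, 9.1, 7.0],
    [7.5, 8.8, 7.1],
    [6.9, 9.0, 7.3],
    [8.0, 8.5, 7.2],
]

def ranks(row):
    # Within-block ranks, 1 = smallest (no ties in this toy example)
    order = sorted(range(len(row)), key=lambda j: row[j])
    r = [0] * len(row)
    for rank, j in enumerate(order, start=1):
        r[j] = rank
    return r

k, n = 3, len(data)
R = [0] * k                       # rank sum per treatment
for row in data:
    for j, r in enumerate(ranks(row)):
        R[j] += r

# Friedman chi-square statistic on k-1 degrees of freedom
chi2 = 12 / (n * k * (k + 1)) * sum(r * r for r in R) - 3 * n * (k + 1)
print(R, chi2)  # [7, 12, 5] 6.5
```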
Revisiting the classification of curtoviruses based on genome-wide pairwise identity
Varsani, Arvind
2014-01-25
Members of the genus Curtovirus (family Geminiviridae) are important pathogens of many wild and cultivated plant species. Until recently, relatively few full curtovirus genomes have been characterised. However, with the 19 full genome sequences now available in public databases, we revisit the proposed curtovirus species and strain classification criteria. Using pairwise identities coupled with phylogenetic evidence, revised species and strain demarcation guidelines have been instituted. Specifically, we have established 77% genome-wide pairwise identity as a species demarcation threshold and 94% genome-wide pairwise identity as a strain demarcation threshold. Hence, whereas curtovirus sequences with >77% genome-wide pairwise identity would be classified as belonging to the same species, those sharing >94% identity would be classified as belonging to the same strain. We provide step-by-step guidelines to facilitate the classification of newly discovered curtovirus full genome sequences and a set of defined criteria for naming new species and strains. The revision yields three curtovirus species: Beet curly top virus (BCTV), Spinach severe curly top virus (SpSCTV) and Horseradish curly top virus (HrCTV). © 2014 Springer-Verlag Wien.
Revisiting the classification of curtoviruses based on genome-wide pairwise identity
Varsani, Arvind; Martin, Darren Patrick; Navas-Castillo, Jesús; Moriones, Enrique; Hernández-Zepeda, Cecilia; Idris, Ali; Murilo Zerbini, F.; Brown, Judith K.
2014-01-01
Members of the genus Curtovirus (family Geminiviridae) are important pathogens of many wild and cultivated plant species. Until recently, relatively few full curtovirus genomes have been characterised. However, with the 19 full genome sequences now available in public databases, we revisit the proposed curtovirus species and strain classification criteria. Using pairwise identities coupled with phylogenetic evidence, revised species and strain demarcation guidelines have been instituted. Specifically, we have established 77% genome-wide pairwise identity as a species demarcation threshold and 94% genome-wide pairwise identity as a strain demarcation threshold. Hence, whereas curtovirus sequences with >77% genome-wide pairwise identity would be classified as belonging to the same species, those sharing >94% identity would be classified as belonging to the same strain. We provide step-by-step guidelines to facilitate the classification of newly discovered curtovirus full genome sequences and a set of defined criteria for naming new species and strains. The revision yields three curtovirus species: Beet curly top virus (BCTV), Spinach severe curly top virus (SpSCTV) and Horseradish curly top virus (HrCTV). © 2014 Springer-Verlag Wien.
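Applied as a classification rule, the demarcation thresholds above reduce to two comparisons per sequence pair:

```python
def classify(identity, species_thr=0.77, strain_thr=0.94):
    """Apply the genome-wide pairwise-identity demarcation of the revision:
    >94% same strain, >77% same species, otherwise different species."""
    if identity > strain_thr:
        return "same strain"
    if identity > species_thr:
        return "same species, different strains"
    return "different species"

print(classify(0.96))  # same strain
print(classify(0.85))  # same species, different strains
print(classify(0.70))  # different species
```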
A gradient approximation for calculating Debye temperatures from pairwise interatomic potentials
International Nuclear Information System (INIS)
Jackson, D.P.
1975-09-01
A simple gradient approximation is given for calculating the effective Debye temperature of a cubic crystal from central pairwise interatomic potentials. For examples of the Morse potential applied to cubic metals, the results are generally in good agreement with experiment. (author)
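As a rough illustration of the idea, the curvature of a pairwise Morse potential at equilibrium sets a force constant and hence a vibrational temperature scale. The Morse parameters below are often-quoted fitted values for Cu and are assumptions here; the paper's gradient approximation additionally sums over lattice neighbours, so this single-bond Einstein-style estimate only shows the order of magnitude:

```python
import math

# Morse pair potential V(r) = D * ((1 - exp(-a*(r - r0)))**2 - 1),
# with curvature at equilibrium V''(r0) = 2*D*a**2
hbar, kB, u = 1.0545718e-34, 1.380649e-23, 1.66053907e-27
D = 0.3429 * 1.602176634e-19   # well depth for Cu, J (assumed fit values)
a = 1.3588e10                  # Morse width parameter, 1/m
m = 63.546 * u                 # Cu atomic mass, kg

k_spring = 2 * D * a**2        # bond force constant, N/m
omega = math.sqrt(k_spring / m)
theta = hbar * omega / kB      # characteristic vibrational temperature, K
print(50 < theta < 500)        # True: the right order of magnitude for a metal
```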
On the calculation of x-ray scattering signals from pairwise radial distribution functions
DEFF Research Database (Denmark)
Dohn, Asmus Ougaard; Biasin, Elisa; Haldrup, Kristoffer
2015-01-01
We derive a formulation for evaluating (time-resolved) x-ray scattering signals of solvated chemical systems, based on pairwise radial distribution functions, with the aim of this formulation to accompany molecular dynamics simulations. The derivation is described in detail to eliminate any possi...
Žvokelj, Matej; Zupan, Samo; Prebil, Ivan
2011-10-01
The article presents a novel non-linear multivariate and multiscale statistical process monitoring and signal denoising method which combines the strengths of the Kernel Principal Component Analysis (KPCA) non-linear multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD) to handle multiscale system dynamics. The proposed method, which enables us to cope with complex, even severely non-linear, systems with a wide dynamic range, was named the EEMD-based multiscale KPCA (EEMD-MSKPCA). The method is quite general in nature and could be used in different areas for various tasks even without any really deep understanding of the nature of the system under consideration. Its efficiency was first demonstrated by an illustrative example, after which the applicability for the task of bearing fault detection, diagnosis and signal denoising was tested on simulated as well as actual vibration and acoustic emission (AE) signals measured on a purpose-built large-size low-speed bearing test stand. The positive results obtained indicate that the proposed EEMD-MSKPCA method provides a promising tool for tackling non-linear multiscale data which present a convolved picture of many events occupying different regions in the time-frequency plane.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; F-tests, lack-of-fit tests and residual normal probability plots implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
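First-order Sobol' indices of the kind reported above can be estimated with a pick-freeze scheme: resample one factor at a time and correlate the resulting model outputs. The response surface below is a hypothetical stand-in for the fitted regression models, with known analytic indices to check against:

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical response surface on [-1, 1]^2: y = 4*x1 + 2*x2 + x1*x2.
# Analytically, S1 = 48/61 and S2 = 12/61.
def model(X):
    return 4 * X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1]

n, d = 100_000, 2
A = rng.uniform(-1, 1, (n, d))
B = rng.uniform(-1, 1, (n, d))
yA, yB = model(A), model(B)

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # pick-freeze: only factor i comes from B
    yABi = model(ABi)
    # First-order index estimator: E[yB*(yABi - yA)] / Var(y)
    S.append(np.mean(yB * (yABi - yA)) / np.var(yA))

print([round(s, 2) for s in S])    # close to [0.79, 0.20]
```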
Liu, Tingting; Zhang, Ling; Wang, Shutao; Cui, Yaoyao; Wang, Yutian; Liu, Lingfei; Yang, Zhe
2018-03-01
Qualitative and quantitative analysis of polycyclic aromatic hydrocarbons (PAHs) was carried out by three-dimensional fluorescence spectroscopy combining with Alternating Weighted Residue Constraint Quadrilinear Decomposition (AWRCQLD). The experimental subjects were acenaphthene (ANA) and naphthalene (NAP). Firstly, in order to solve the redundant information of the three-dimensional fluorescence spectral data, the wavelet transform was used to compress data in preprocessing. Then, the four-dimensional data was constructed by using the excitation-emission fluorescence spectra of different concentration PAHs. The sample data was obtained from three solvents that are methanol, ethanol and Ultra-pure water. The four-dimensional spectral data was analyzed by AWRCQLD, then the recovery rate of PAHs was obtained from the three solvents and compared respectively. On one hand, the results showed that PAHs can be measured more accurately by the high-order data, and the recovery rate was higher. On the other hand, the results presented that AWRCQLD can better reflect the superiority of four-dimensional algorithm than the second-order calibration and other third-order calibration algorithms. The recovery rate of ANA was 96.5% 103.3% and the root mean square error of prediction was 0.04 μgL- 1. The recovery rate of NAP was 96.7% 115.7% and the root mean square error of prediction was 0.06 μgL- 1.
Further investigations of the W-test for pairwise epistasis testing [version 1; referees: 2 approved
Directory of Open Access Journals (Sweden)
Richard Howey
2017-07-01
Full Text Available Background: In a recent paper, a novel W-test for pairwise epistasis testing was proposed that appeared, in computer simulations, to have higher power than competing alternatives. Application to genome-wide bipolar data detected significant epistasis between SNPs in genes of relevant biological function. Network analysis indicated that the implicated genes formed two separate interaction networks, each containing genes highly related to autism and neurodegenerative disorders. Methods: Here we investigate further the properties and performance of the W-test via theoretical evaluation, computer simulations and application to real data. Results: We demonstrate that, for common variants, the W-test is closely related to several existing tests of association allowing for interaction, including logistic regression on 8 degrees of freedom, although logistic regression can show inflated type I error for low minor allele frequencies, whereas the W-test shows good/conservative type I error control. Although in some situations the W-test can show higher power, logistic regression is not limited to tests on 8 degrees of freedom but can instead be tailored to impose greater structure on the assumed alternative hypothesis, offering a power advantage when the imposed structure matches the true structure. Conclusions: The W-test is a potentially useful method for testing for association - without necessarily implying interaction - between genetic variants and disease, particularly when one or more of the genetic variants are rare. For common variants, the advantages of the W-test are less clear, and, indeed, there are situations where existing methods perform better. In our investigations, we further uncover a number of problems with the practical implementation and application of the W-test (to bipolar disorder previously described, apparently due to inadequate use of standard data quality-control procedures. This observation leads us to urge caution in
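The 8-degree-of-freedom comparison mentioned above reflects the fact that a SNP pair defines nine joint genotype classes; a closely related simple test is the chi-square test of disease status against those nine classes. A toy version with hypothetical counts (an illustration of the degrees of freedom, not the W-test itself):

```python
import itertools

# Hypothetical 3x3 joint-genotype counts for a SNP pair
cases    = [[10, 20, 5], [15, 40, 10], [5, 12, 3]]
controls = [[12, 18, 6], [14, 35, 12], [6, 15, 2]]

n_case = sum(map(sum, cases))
n_ctrl = sum(map(sum, controls))
n = n_case + n_ctrl

# Chi-square test of independence on the 2 x 9 table
chi2 = 0.0
for i, j in itertools.product(range(3), range(3)):
    col = cases[i][j] + controls[i][j]
    for obs, tot in ((cases[i][j], n_case), (controls[i][j], n_ctrl)):
        exp = tot * col / n
        if exp > 0:
            chi2 += (obs - exp) ** 2 / exp

df = (2 - 1) * (9 - 1)   # 8 degrees of freedom, as in the abstract
print(df, chi2 > 0)      # 8 True
```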
Thermal decomposition process of silver behenate
International Nuclear Information System (INIS)
Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang
2006-01-01
The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 deg. C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 deg. C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 deg. C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Generalized decompositions of dynamic systems and vector Lyapunov functions
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
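The aggregation step behind vector Lyapunov functions can be made concrete with a toy check. Assuming each subsystem admits a scalar Lyapunov estimate whose derivative is bounded by a linear combination of the component functions, stability of the interconnection reduces to the comparison matrix being Hurwitz; the numbers below are illustrative, not taken from the paper:

```python
# Toy aggregation step of the vector Lyapunov method: each subsystem i has an
# estimate dv_i/dt <= W[i][i]*v_i + sum_{j!=i} W[i][j]*v_j, and stability of
# the interconnection follows if the comparison matrix W is Hurwitz
# (all eigenvalues in the open left half-plane).

def is_hurwitz_2x2(W):
    """For a 2x2 matrix, Re(eigenvalues) < 0 iff trace < 0 and det > 0."""
    (a, b), (c, d) = W
    return (a + d) < 0 and (a * d - b * c) > 0

# self-decay rates -2, -3 with weak cross-coupling: stable
W_stable = [[-2.0, 0.5], [1.0, -3.0]]
# coupling too strong relative to decay: determinant negative, not Hurwitz
W_unstable = [[-2.0, 4.0], [2.0, -1.0]]
```

The same test generalizes to larger interconnections by checking that all eigenvalues of W have negative real part (equivalently, that -W is an M-matrix).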
Thermal decomposition of zirconium compounds with some aromatic hydroxycarboxylic acids
Energy Technology Data Exchange (ETDEWEB)
Koshel, A V; Malinko, L A; Karlysheva, K F; Sheka, I A; Shchepak, N I [AN Ukrainskoj SSR, Kiev. Inst. Obshchej i Neorganicheskoj Khimii
1980-02-01
The thermal decomposition of different zirconium compounds with mandelic, p-bromomandelic, salicylic and sulphosalicylic acids was investigated by thermogravimetry. To identify the decomposition products, specimens were held at the temperatures of the thermal effects until constant weight was reached. IR spectra and X-ray diffraction patterns were recorded, and elemental analysis of the decomposition products was carried out. Thermal decomposition of the investigated compounds proceeds in stages; the final product of thermolysis is ZrO2. Non-hydrolyzed compounds are stable on heating in air up to 200-265 deg; hydroxy compounds begin to decompose at lower temperatures (80-100 deg).
International Nuclear Information System (INIS)
Salta, Myrsine; Polatidis, Heracles; Haralambopoulos, Dias
2009-01-01
A bottom-up methodological framework was developed and applied for the period 1985-2002 to selected manufacturing sub-sectors in Greece, namely food, beverages and tobacco; iron and steel; non-ferrous metals; non-metallic minerals; and paper. Disaggregate physical data were aggregated according to their specific energy consumption (SEC) values, and physical energy efficiency indicators were estimated. The Logarithmic Mean Divisia index method was also used, and the effects of production, structure and energy efficiency on changes in sub-sectoral manufacturing energy use were further assessed. Primary physical energy efficiency improved by 28% for the iron and steel industry and by 9% for the non-metallic minerals industry, compared to the base year 1990. For the food, beverages and tobacco and the paper sub-sectors, primary efficiency deteriorated by 20% and 15%, respectively; finally, electricity efficiency deteriorated by 7% for the non-ferrous metals. Sub-sectoral energy use is mainly driven by production output and energy efficiency changes. Sensitivity analysis showed that alternative SEC values do not influence the results, whereas the selected base year is more critical for this analysis. Significant efficiency improvements refer to 'heavy' industry; 'light' industry needs further attention by energy policy to modernize its production plants and improve its efficiency.
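The Logarithmic Mean Divisia index step used above can be sketched as follows. This is a generic additive LMDI-I decomposition with invented two-subsector numbers, not the paper's Greek data; it illustrates the exact-decomposition property (activity, structure and intensity effects sum to the total change in energy use):

```python
from math import log

def logmean(a, b):
    """Logarithmic mean, the weighting kernel of the LMDI method."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_additive(Q0, QT, S0, ST, I0, IT):
    """Additive LMDI-I: split the change in total energy use
    E = sum_i Q * S_i * I_i into activity, structure and intensity effects."""
    act = stru = inten = 0.0
    for s0, sT, i0, iT in zip(S0, ST, I0, IT):
        e0, eT = Q0 * s0 * i0, QT * sT * iT      # subsector energy use
        w = logmean(eT, e0)                       # LMDI weight
        act   += w * log(QT / Q0)                 # production (activity) effect
        stru  += w * log(sT / s0)                 # structure effect
        inten += w * log(iT / i0)                 # energy-intensity effect
    return act, stru, inten

# hypothetical two-subsector economy (units arbitrary)
Q0, QT = 100.0, 130.0             # total production
S0, ST = [0.6, 0.4], [0.5, 0.5]   # subsector shares
I0, IT = [2.0, 1.0], [1.8, 1.1]   # energy intensities
a, s, i = lmdi_additive(Q0, QT, S0, ST, I0, IT)
E0 = sum(Q0 * x * y for x, y in zip(S0, I0))
ET = sum(QT * x * y for x, y in zip(ST, IT))
```

Because the logarithmic-mean weights satisfy L(eT, e0) * ln(eT/e0) = eT - e0, the three effects reproduce the total change with no residual.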
Linear and Nonlinear Multiset Canonical Correlation Analysis (invited talk)
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Nielsen, Allan Aasbjerg; Larsen, Rasmus
2002-01-01
This paper deals with the decomposition of multiset data. Friedman's alternating conditional expectations (ACE) algorithm is extended to handle multiple sets of variables of different mixtures. The new algorithm finds estimates of the optimal transformations of the involved variables that maximize...... the sum of the pair-wise correlations over all sets. The new algorithm is termed multi-set ACE (MACE) and can find multiple orthogonal eigensolutions. MACE is a generalization of linear multiset canonical correlation analysis (MCCA). It handles multivariate multisets of arbitrary mixtures of both continuous...
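The objective MACE optimises, the sum of pairwise correlations over all sets, can be illustrated in the simplest linear setting. The sketch below solves a MAXVAR-style multiset problem for a single hypothetical correlation matrix with one canonical variate per set, via power iteration; MACE itself additionally estimates nonlinear transformations, which is not attempted here:

```python
def power_iter(R, iters=500):
    """Dominant eigenpair of a symmetric positive matrix by power iteration.
    For a correlation matrix of multiset canonical variates, a dominant
    eigenvalue above 1 signals shared (multiset) correlation structure."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient at the converged direction
    lam = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# hypothetical correlation matrix of three canonical variates, one per set
R = [[1.0, 0.8, 0.6],
     [0.8, 1.0, 0.7],
     [0.6, 0.7, 1.0]]
lam, v = power_iter(R)
```

If the three sets were uncorrelated, R would be the identity and every eigenvalue would equal 1; the excess of the dominant eigenvalue over 1 measures the shared correlation the canonical variates capture.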
Dolomite decomposition under CO2
International Nuclear Information System (INIS)
Guerfa, F.; Bensouici, F.; Barama, S.E.; Harabi, A.; Achour, S.
2004-01-01
Full text: Dolomite (MgCa(CO3)2) is one of the most abundant mineral species on the surface of the planet; it occurs in sedimentary rocks. MgO, CaO and doloma (a phase mixture of MgO and CaO obtained from the mineral dolomite) based materials are attractive steel-making refractories because of their potential cost effectiveness and worldwide abundance; more recently, MgO has also been used as a protective layer in plasma screen manufacture. The crystal structure of dolomite is that of a rhombohedral carbonate, with alternating layers of Mg2+ and Ca2+ ions. It dissociates, depending on the temperature, according to the following reactions: MgCa(CO3)2 → MgO + CaO + 2CO2 ..... MgCa(CO3)2 → MgO + CaCO3 + CO2 ..... The latter reaction may be considered as a first step for MgO production. Differential thermal analysis (DTA) was used to monitor dolomite decomposition, and X-ray diffraction (XRD) was used to elucidate the thermal decomposition pathway; samples were heated to specific temperatures for given holding times. The average particle size of the dolomite powders used was 0.3 mm, the heating temperature was 700 deg. C, and various holding times (90 and 120 minutes) were applied. Under CO2, dolomite decomposed directly to CaCO3 accompanied by the formation of MgO; no evidence was found for the formation of either CaO or MgCO3. Under air, simultaneous formation of CaCO3 and CaO accompanied dolomite decomposition.
Energy Technology Data Exchange (ETDEWEB)
de la Rue du Can, Stephane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hasanbeigi, Ali [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2010-12-01
This report on the California Energy Balance version 2 (CALEB v2) database documents the latest update and improvements to CALEB version 1 (CALEB v1) and provides a complete picture of how energy is supplied and consumed in the State of California. The CALEB research team at Lawrence Berkeley National Laboratory (LBNL) performed the research and analysis described in this report. CALEB manages highly disaggregated data on energy supply, transformation, and end-use consumption for about 40 different energy commodities, from 1990 to 2008. This report describes in detail California's energy use from supply through end-use consumption as well as the data sources used. The report also analyzes trends in energy demand for the "Manufacturing" and "Building" sectors. Decomposition analysis of energy consumption combined with measures of the activity driving that consumption quantifies the effects of factors that shape energy consumption trends. The study finds that a decrease in energy intensity has had a very significant impact on reducing energy demand over the past 20 years. The largest impact can be observed in the industry sector, where energy demand would have increased by 358 trillion British thermal units (TBtu) if subsectoral energy intensities had remained at 1997 levels. Instead, energy demand actually decreased by 70 TBtu. In the "Building" sector, combined results from the "Service" and "Residential" subsectors suggest that energy demand would have increased by 264 TBtu (121 TBtu in the "Services" sector and 143 TBtu in the "Residential" sector) during the same period, 1997 to 2008. However, energy demand increased at a lesser rate, by only 162 TBtu (92 TBtu in the "Services" sector and 70 TBtu in the "Residential" sector). These energy intensity reductions can be indicative of energy-efficiency improvements during the past 10 years. The research presented in this report provides a basis for developing an energy-efficiency performance index to measure
DEFF Research Database (Denmark)
Abildtrup, Jens; Audsley, E.; Fekete-Farkas, M.
2006-01-01
Assessment of the vulnerability of agriculture to climate change is strongly dependent on concurrent changes in socio-economic development pathways. This paper presents an integrated approach to the construction of socio-economic scenarios required for the analysis of climate change impacts...... on European agricultural land use. The scenarios are interpreted from the storylines described in the intergovernmental panel on climate change (IPCC) special report on emission scenarios (SRES), which ensures internal consistency between the evolution of socio-economics and climate change. A stepwise...... downscaling procedure based on expert-judgement and pairwise comparison is presented to obtain quantitative socio-economic parameters, e.g. prices and productivity estimates that are input to the ACCELERATES integrated land use model. In the first step, the global driving forces are identified and quantified...
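The expert-judgement pairwise-comparison step in the downscaling procedure can be sketched with a standard priority-weight computation. The sketch uses the row geometric-mean method on a hypothetical reciprocal judgement matrix; the criteria and numbers are invented, not SRES parameters:

```python
def ahp_weights(P):
    """Priority weights from a reciprocal pairwise-comparison matrix via the
    row geometric-mean method, a common stand-in for the principal
    eigenvector when judgements are nearly consistent."""
    n = len(P)
    gms = []
    for row in P:
        g = 1.0
        for x in row:
            g *= x
        gms.append(g ** (1.0 / n))   # geometric mean of each row
    total = sum(gms)
    return [g / total for g in gms]  # normalise to sum to 1

# hypothetical judgements for three driving forces (invented):
# force A is 3x as important as B and 5x as important as C; B is 2x C
P = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = ahp_weights(P)
```

The resulting weights preserve the expert ordering of the driving forces and can feed quantitative parameters into a downstream land-use model.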
Loveday, J.; Christodoulou, L.; Norberg, P.; Peacock, J. A.; Baldry, I. K.; Bland-Hawthorn, J.; Brown, M. J. I.; Colless, M.; Driver, S. P.; Holwerda, B. W.; Hopkins, A. M.; Kafle, P. R.; Liske, J.; Lopez-Sanchez, A. R.; Taylor, E. N.
2018-03-01
The galaxy pairwise velocity dispersion (PVD) can provide important tests of non-standard gravity and galaxy formation models. We describe measurements of the PVD of galaxies in the Galaxy and Mass Assembly (GAMA) survey as a function of projected separation and galaxy luminosity. Due to the faint magnitude limit (r ...), we are able to measure the PVD to smaller scales (r⊥ = 0.01 h⁻¹ Mpc) than previous work. The measured PVD at projected separations r⊥ ≲ 1 h⁻¹ Mpc increases near-monotonically with increasing luminosity, from σ12 ≈ 200 km s⁻¹ at Mr = -17 mag to σ12 ≈ 600 km s⁻¹ at Mr ≈ -22 mag. Analysis of the Gonzalez-Perez et al. (2014) GALFORM semi-analytic model yields no such trend of PVD with luminosity: the model overpredicts the PVD for faint galaxies. This is most likely a result of the model placing too many low-luminosity galaxies in massive haloes.
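A toy version of a PVD estimator may help fix ideas: the dispersion of line-of-sight velocity differences over pairs that are close in projected separation. The positions and velocities below are synthetic; real measurements model redshift-space distortions, pair weighting and an exponential velocity distribution, all of which this sketch omits:

```python
import math
import random

def pvd(positions, velocities, rmax=1.0):
    """Toy sigma_12 estimate: rms of line-of-sight pairwise velocity
    differences for pairs closer than rmax in projected separation."""
    diffs = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) < rmax:          # projected separation cut
                diffs.append(velocities[i] - velocities[j])
    m = sum(diffs) / len(diffs)
    return (sum((d - m) ** 2 for d in diffs) / len(diffs)) ** 0.5

rng = random.Random(0)
pos = [(rng.random() * 5.0, rng.random() * 5.0) for _ in range(200)]
vel = [rng.gauss(0.0, 300.0) for _ in range(200)]   # sigma_v = 300 km/s
sigma12 = pvd(pos, vel)
```

For independent Gaussian velocities of dispersion sigma_v, the pairwise difference dispersion is sqrt(2) * sigma_v, which the toy estimator should approximately recover.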
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
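The defining behaviour of the inverse scale space flow, components entering the reconstruction in order of decreasing scale, can be seen already for the l1 functional with the identity forward operator. The discretisation below is a minimal sketch under that assumption, not the paper's general setting:

```python
def iss_l1(f, dt=0.01, steps=2000):
    """Toy inverse scale space flow for J(u) = ||u||_1 with A = identity:
    dp/dt = f - u, and u_i jumps to f_i once the dual variable p_i reaches
    the boundary of the subdifferential, |p_i| = 1.  Large components of f
    (coarse scales) enter the reconstruction first, small ones later."""
    n = len(f)
    p = [0.0] * n   # dual variable (subgradient)
    u = [0.0] * n   # reconstruction
    history = []
    for _ in range(steps):
        for i in range(n):
            if abs(p[i]) < 1.0:
                p[i] += dt * (f[i] - u[i])   # dual variable still growing
            else:
                u[i] = f[i]                  # component has entered: fit exactly
        history.append(list(u))
    return u, history

f = [3.0, -2.0, 0.5]
u, hist = iss_l1(f)
```

Component i enters at time roughly 1/|f_i|, so the largest entry of f appears first and the smallest last, which is the inverse-scale ordering the flow is named for.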
Cacciatori, Sergio L; Marrani, Alessio
2013-01-01
By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.
International Nuclear Information System (INIS)
Fernández González, P.; Landajo, M.; Presno, M.J.
2014-01-01
This paper aims at analysing the factors behind the change in aggregate energy consumption in the EU-27, also identifying differences between member states. The logarithmic-mean Divisia index method (LMDI) is applied to multiplicatively decompose, at the country level, the variation in aggregate energy consumption in the EU-27 member states for the 2001–2008 period. We also analyse the sensitivity of the results when several aggregation levels are considered, with energy intensity used as the criterion to aggregate countries. This allows us to check robustness of results, also enabling an improved understanding of both inter- and intra-unit effects. Results indicate that improvements in energy efficiency in the EU-27 were not enough to overcome the pressure of European economic activity on aggregate energy consumption. Mediterranean countries, and especially former communist states, increased their energy consumption, most of them favoured by structural change. The analysis also reveals that the impact of intra-group movements on aggregate energy consumption is partially offset when moving from higher to lower aggregation levels. - Highlights: • Increase in EU-27 aggregate energy consumption is decomposed through LMDI at 3 levels. • We present the subgroup activity effect and demonstrate its null consequences. • Structural and intensity group effects lose influence when moving to a higher level. • R&D, quality energies and efficient technologies are the main tools to lower energy consumption. • Structural effect: "green" attitudes and changes in consumer choices are also necessary
Wang, Cheng; Dong, Da; Wang, Haoshu; Müller, Karin; Qin, Yong; Wang, Hailong; Wu, Weixiang
2016-01-01
Compost habitats sustain a vast ensemble of microbes specializing in the degradation of lignocellulosic plant materials and are thus important both for their roles in the global carbon cycle and as potential sources of biochemical catalysts for advanced biofuels production. Studies have revealed substantial diversity in compost microbiomes, yet how this diversity relates to functions and even to the genes encoding lignocellulolytic enzymes remains obscure. Here, we used a metagenomic analysis of the rice straw-adapted (RSA) microbial consortia enriched from compost ecosystems to decipher the systematic and functional contexts within such a distinctive microbiome. Analyses of the 16S pyrotag library and 5 Gbp of metagenomic sequence showed that the phylum Actinobacteria was the predominant group among the Bacteria in the RSA consortia, followed by Proteobacteria, Firmicutes, Chloroflexi, and Bacteroidetes. The CAZymes profile revealed that CAZyme genes in the RSA consortia were also widely distributed within these bacterial phyla. Strikingly, about 46.1 % of CAZyme genes were from actinomycetal communities, which harbored a substantially expanded catalog of the cellobiohydrolase, β-glucosidase, acetyl xylan esterase, arabinofuranosidase, pectin lyase, and ligninase genes. Among these communities, a variety of previously unrecognized species was found, which reveals a greater ecological functional diversity of thermophilic Actinobacteria than previously assumed. These data underline the pivotal role of thermophilic Actinobacteria in lignocellulose biodegradation processes in the compost habitat. Besides revealing a new benchmark for microbial enzymatic deconstruction of lignocelluloses, the results suggest that actinomycetes found in compost ecosystems are potential candidates for mining efficient lignocellulosic enzymes in the biofuel industry.
Classification between normal and tumor tissues based on the pair-wise gene expression ratio
International Nuclear Information System (INIS)
Yap, YeeLeng; Zhang, XueWu; Ling, MT; Wang, XiangHong; Wong, YC; Danchin, Antoine
2004-01-01
Precise classification of cancer types is critically important for early cancer diagnosis and treatment. Numerous efforts have been made to use gene expression profiles to improve precision of tumor classification. However, reliable cancer-related signals are generally lacking. Using recent datasets on colon and prostate cancer, a data transformation procedure from single gene expression to pair-wise gene expression ratio is proposed. Making use of the internal consistency of each expression profiling dataset, this transformation improves the signal-to-noise ratio of the dataset and uncovers new relevant cancer-related signals (features). The efficiency of using the transformed dataset to perform normal/tumor classification was investigated using feature partitioning with informative features (gene annotation) as discriminating axes (single gene expression or pair-wise gene expression ratio). Classification results were compared to the original datasets for up to 10-feature model classifiers. 82 and 262 genes that have high correlation to tissue phenotype were selected from the colon and prostate datasets, respectively. Remarkably, data transformation of the highly noisy expression data successfully lowered the coefficient of variation (CV) for the within-class samples and improved the correlation with tissue phenotypes. The transformed dataset exhibited lower CV when compared to that of single gene expression. In the colon cancer set, the minimum CV decreased from 45.3% to 16.5%. In prostate cancer, comparable CV was achieved with and without transformation. This improvement in CV, coupled with the improved correlation between the pair-wise gene expression ratio and tissue phenotypes, yielded higher classification efficiency, especially with the colon dataset (from 87.1% to 93.5%). Over 90% of the top ten discriminating axes in both datasets showed significant improvement after data transformation. The high classification efficiency achieved suggested
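The benefit of the ratio transformation can be shown with invented numbers: a common multiplicative, sample-wise factor (e.g. loading or normalisation differences) inflates the CV of single-gene expression but cancels exactly in a pairwise ratio:

```python
def cv(values):
    """Coefficient of variation (%) of a list of positive values."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return 100.0 * var ** 0.5 / m

# hypothetical 4-sample profiles of two genes sharing a common, noisy
# per-sample scaling factor (purely illustrative numbers)
scale = [0.5, 1.0, 2.0, 4.0]
gene_a = [5.0 * s for s in scale]
gene_b = [2.0 * s for s in scale]
# the pairwise ratio cancels the shared per-sample factor
ratio = [a / b for a, b in zip(gene_a, gene_b)]
```

Within a class, single-gene values vary with the sample-wise factor while the ratio stays constant, which is the internal-consistency effect the abstract describes.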
Two Notes on Discrimination and Decomposition
DEFF Research Database (Denmark)
Nielsen, Helena Skyt
1998-01-01
1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables: the contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
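A minimal numerical Oaxaca-Blinder decomposition (one covariate, invented data) illustrates the identity the first note builds on: the mean gap splits exactly into an endowment part and a coefficients part, and the coefficients part is what the reference-group problem for indicator variables affects:

```python
def ols(x, y):
    """Simple OLS of y on [1, x]; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# hypothetical log-wage data for two groups, covariate = years of education
xm, ym = [10, 12, 14, 16], [2.0, 2.4, 2.8, 3.2]   # group M
xf, yf = [9, 11, 13, 15],  [1.7, 2.0, 2.3, 2.6]   # group F

am, bm = ols(xm, ym)
af, bf = ols(xf, yf)
mean = lambda v: sum(v) / len(v)
gap = mean(ym) - mean(yf)
# endowment (characteristics) part, priced at group M coefficients
explained = bm * (mean(xm) - mean(xf))
# coefficients ("discrimination") part, evaluated at group F characteristics
unexplained = (am - af) + (bm - bf) * mean(xf)
```

The identity gap = explained + unexplained holds exactly because each group's regression line passes through its sample means; splitting the unexplained part further across individual indicator variables is where the reference-group dependence arises.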
Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter
2018-01-01
Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variables interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930
Thermal plasma decomposition of fluorinated greenhouse gases
Energy Technology Data Exchange (ETDEWEB)
Choi, Soo Seok; Watanabe, Takayuki [Tokyo Institute of Technology, Yokohama (Japan); Park, Dong Wha [Inha University, Incheon (Korea, Republic of)
2012-02-15
Fluorinated compounds mainly used in the semiconductor industry are potent greenhouse gases. Recently, thermal plasma gas scrubbers have been gradually replacing conventional burn-wet type gas scrubbers, which are based on the combustion of fossil fuels, because high conversion efficiency and control of byproduct generation are achievable in chemically reactive high-temperature thermal plasma. Chemical equilibrium composition at high temperature and numerical analysis of the complex thermal flow in the thermal plasma decomposition system are used to predict the process of thermal decomposition of fluorinated gas. In order to increase the economic feasibility of the thermal plasma decomposition process, increasing the thermal efficiency of the plasma torch and enhancing gas mixing between the thermal plasma jet and the waste gas are discussed. In addition, novel thermal plasma systems to be applied in thermal plasma gas treatment are introduced in the present paper.
Thermal decomposition of barium valerate in argon
DEFF Research Database (Denmark)
Torres, P.; Norby, Poul; Grivel, Jean-Claude
2015-01-01
The thermal decomposition of barium valerate (Ba(C4H9CO2)2, Ba pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 degrees C and 280...
2015-12-01
The material flow account of Tangshan City was established by the material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among the indexes of gross domestic product (GDP), direct material input (DMI) and domestic processed output (DPO) were investigated after unit root hypothesis testing, Johansen cointegration testing, a vector error correction model, impulse response functions and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell in volatility. A long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction relationship showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by short-term dynamic adjustment of indicators inside and outside of the system. Ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on the economy was gradually increasing. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's own contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, mostly depending on material input, which caused high energy consumption and serious environmental pollution.
Directory of Open Access Journals (Sweden)
Junhong Zhou
Full Text Available Human aging into senescence diminishes the capacity of the postural control system to adapt to the stressors of everyday life. Diminished adaptive capacity may be reflected by a loss of the fractal-like, multiscale complexity within the dynamics of standing postural sway (i.e., center-of-pressure, COP). We therefore studied the relationship between COP complexity and adaptive capacity in 22 older and 22 younger healthy adults. COP magnitude dynamics were assessed from raw data during quiet standing with eyes open and closed, and complexity was quantified with a new technique termed empirical mode decomposition embedded detrended fluctuation analysis (EMD-DFA). Adaptive capacity of the postural control system was assessed with the sharpened Romberg test. As compared to traditional DFA, EMD-DFA more accurately identified trends in COP data with intrinsic scales and produced short- and long-term scaling exponents (i.e., α_Short, α_Long) with greater reliability. The fractal-like properties of COP fluctuations were time-scale dependent and highly complex (i.e., α_Short values were close to one) over relatively short time scales. As compared to younger adults, older adults demonstrated lower short-term COP complexity (i.e., greater α_Short values) in both visual conditions (p < 0.001). Closing the eyes decreased short-term COP complexity, yet this decrease was greater in older compared to younger adults (p < 0.001). In older adults, those with higher short-term COP complexity exhibited better adaptive capacity as quantified by Romberg test performance (r² = 0.38, p < 0.001). These results indicate that an age-related loss of COP magnitude-series complexity may reflect a clinically important reduction in postural control system functionality and may serve as a new biomarker.
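A plain (first-order) DFA sketch shows the scaling exponents the abstract refers to; the EMD step of EMD-DFA, which selects trends with intrinsic scales, is omitted here. White noise should give alpha near 0.5 and its running sum (a random walk) alpha near 1.5:

```python
import random

def _linfit(x, y):
    """Least-squares line y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def dfa_alpha(series, scales=(4, 8, 16, 32, 64)):
    """First-order DFA: integrate the series, linearly detrend it in windows
    of each scale n, and fit the slope of log F(n) against log n."""
    from math import log
    m = sum(series) / len(series)
    profile, s = [], 0.0
    for v in series:               # integrated (profile) series
        s += v - m
        profile.append(s)
    logn, logF = [], []
    for n in scales:
        sq = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            a, b = _linfit(t, seg)   # local linear trend
            sq.append(sum((seg[i] - (a + b * i)) ** 2 for i in range(n)) / n)
        logn.append(log(n))
        logF.append(log((sum(sq) / len(sq)) ** 0.5))
    return _linfit(logn, logF)[1]    # scaling exponent alpha

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(2048)]
walk, acc = [], 0.0
for w in white:
    acc += w
    walk.append(acc)
alpha_white = dfa_alpha(white)
alpha_walk = dfa_alpha(walk)
```

Values of alpha near 1 indicate the fractal-like, "pink-noise" dynamics associated in the abstract with a healthy, adaptable postural control system.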
Pairwise correlations via quantum discord and its geometric measure in a four-qubit spin chain
Directory of Open Access Journals (Sweden)
Abdel-Baset A. Mohamed
2013-04-01
Full Text Available The dynamics of pairwise correlations, including quantum entanglement (QE) and quantum discord (QD) with the geometric measure of quantum discord (GMQD), are studied in the four-qubit Heisenberg XX spin chain. The results show that the effect of the entanglement degree of the initial state on the pairwise correlations is stronger for alternate qubits than for nearest-neighbor qubits. This parameter results in sudden death for QE, but it cannot do so for QD and GMQD. With different values of this entanglement parameter of the initial state, QD and GMQD differ and are sensitive to any change in the parameter. It is found that GMQD is more robust than both QD and QE in describing correlations with nonzero values, which offers a valuable resource for quantum computation.
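Computing the full quantum discord and GMQD requires density-matrix optimisations beyond a short sketch, but the entanglement side of the comparison can be illustrated for pure two-qubit states, where the concurrence has a simple closed form:

```python
from math import sqrt

def concurrence_pure(a, b, c, d):
    """Concurrence of a pure two-qubit state a|00> + b|01> + c|10> + d|11>
    (normalised amplitudes): C = 2|ad - bc|, Wootters' measure specialised
    to pure states.  C = 0 for product states, C = 1 for Bell states."""
    return 2.0 * abs(a * d - b * c)

bell = concurrence_pure(1 / sqrt(2), 0, 0, 1 / sqrt(2))   # maximally entangled
product = concurrence_pure(1, 0, 0, 0)                     # separable |00>
partial = concurrence_pure(sqrt(0.9), 0, 0, sqrt(0.1))     # weakly entangled
```

For the mixed states arising in the dephasing dynamics of the abstract, concurrence requires the eigenvalues of rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y), which is where the sudden-death behaviour of QE, absent for QD and GMQD, shows up.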
Directory of Open Access Journals (Sweden)
Kae Y. Foo
2010-01-01
Full Text Available The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pair-wise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
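A gradient-descent sketch of the weighted-stress objective shows how a zero weight simply drops a missing pairwise distance from the problem; the paper's weighted-majorization (SMACOF) update minimises the same objective monotonically and faster. The node coordinates and the missing pair below are invented:

```python
import math
import random

def wstress(X, D, W):
    """Weighted raw stress: sum over pairs of W_ij (||x_i - x_j|| - D_ij)^2."""
    s, n = 0.0, len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if W[i][j]:
                s += W[i][j] * (math.dist(X[i], X[j]) - D[i][j]) ** 2
    return s

def mds_fit(D, W, iters=4000, lr=0.02, seed=3):
    """2-D embedding from (possibly incomplete) pairwise distances by plain
    gradient descent on the weighted stress; W[i][j] = 0 marks a missing or
    unreliable measurement."""
    rng = random.Random(seed)
    n = len(D)
    X = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    for _ in range(iters):
        G = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if not W[i][j]:
                    continue                       # missing pair: no force
                d = math.dist(X[i], X[j]) or 1e-9
                c = 2.0 * W[i][j] * (d - D[i][j]) / d
                for k in (0, 1):
                    g = c * (X[i][k] - X[j][k])
                    G[i][k] += g
                    G[j][k] -= g
        for i in range(n):
            for k in (0, 1):
                X[i][k] -= lr * G[i][k]
    return X

# four nodes on a unit square; the 0<->2 diagonal measurement is "missing"
P = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
D = [[math.dist(P[i], P[j]) for j in range(4)] for i in range(4)]
W = [[1.0] * 4 for _ in range(4)]
W[0][2] = W[2][0] = 0.0
X = mds_fit(D, W)
```

The recovered configuration reproduces all observed distances; note that with only one diagonal observed the missing distance is not always uniquely determined, which is why the paper weights rather than simply ignores uncertain measurements.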
A pragmatic pairwise group-decision method for selection of sites for nuclear power plants
International Nuclear Information System (INIS)
Kutbi, I.I.
1987-01-01
A pragmatic pairwise group-decision approach is applied to compare two regions in order to select the more suitable one for construction of nuclear power plants in the Kingdom of Saudi Arabia. The selection methodology is based on pairwise comparison by forced choice. The method facilitates rating of the regions or sites using simple calculations. Two regions, one close to Dhahran on the Arabian Gulf and another close to Jeddah on the Red Sea, are evaluated. No specific site in either region is considered at this stage. The comparison is based on a set of selection criteria which include (i) topography, (ii) geology, (iii) seismology, (iv) meteorology, (v) oceanography, (vi) hydrology and (vii) proximity to oil and gas fields. The comparison shows that the Jeddah region is more suitable than the Dhahran region. (orig.)
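The forced-choice rating described above amounts to simple tallying. The sketch below scores two regions over the seven listed criteria; the individual judgements are hypothetical, not the paper's actual evaluations:

```python
def forced_choice_scores(criteria, prefer):
    """Score two candidate regions by forced pairwise choice: for each
    criterion exactly one region must be preferred, and a region's score is
    the fraction of criteria it wins.  `prefer` maps criterion -> 'A' or 'B'."""
    wins = {"A": 0, "B": 0}
    for c in criteria:
        wins[prefer[c]] += 1
    n = len(criteria)
    return wins["A"] / n, wins["B"] / n

criteria = ["topography", "geology", "seismology", "meteorology",
            "oceanography", "hydrology", "proximity to oil and gas fields"]
# hypothetical judgements (region A = Dhahran, B = Jeddah), NOT the paper's
prefer = {"topography": "B", "geology": "B", "seismology": "A",
          "meteorology": "B", "oceanography": "A", "hydrology": "B",
          "proximity to oil and gas fields": "B"}
sA, sB = forced_choice_scores(criteria, prefer)
```

Because every criterion forces a choice, the two scores always sum to one; weighting criteria unequally would be a natural refinement of this simple tally.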
International Nuclear Information System (INIS)
Daoud, M.; Ahl Laamara, R.
2012-01-01
We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of the geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.
Energy Technology Data Exchange (ETDEWEB)
Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)
2012-07-16
We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with the concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of the geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. Tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and conquer' strategy: the problem is divided into sub-domains to be dealt with separately, and rules are established to ensure that the combined results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced.
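The data-partition half of tally decomposition can be illustrated with a minimal sketch: each process owns only a contiguous slice of the global tally array. The function name and the even-split policy are assumptions for illustration, not RMC's actual scheme:

```python
import numpy as np

def partition_tallies(n_tallies, n_ranks):
    """Split tally indices as evenly as possible across ranks.
    Each rank then allocates only its own slice of the global tally
    array, cutting per-process memory roughly by a factor of n_ranks."""
    base, extra = divmod(n_tallies, n_ranks)
    counts = [base + (1 if r < extra else 0) for r in range(n_ranks)]
    offsets = np.cumsum([0] + counts[:-1])
    return [(int(o), int(o + c)) for o, c in zip(offsets, counts)]

slices = partition_tallies(10, 4)  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```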
Multiscale principal component analysis
International Nuclear Information System (INIS)
Akinduko, A A; Gorban, A N
2014-01-01
Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours structures with large variances. This is sensitive to outliers and can obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) in the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificially generated and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
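The scale restriction can be sketched directly from the definition: accumulate the scatter of only those point pairs whose distance falls in [l, u], then take the leading eigenvectors. This is a minimal illustration of the idea, not the authors' implementation:

```python
import numpy as np

def multiscale_pca(X, l, u, k=1):
    """Scale-restricted PCA sketch: build the pairwise scatter matrix from
    point pairs with distance in [l, u], return its top-k eigenvectors."""
    n = len(X)
    S = np.zeros((X.shape[1], X.shape[1]))
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            if l <= np.linalg.norm(d) <= u:
                S += np.outer(d, d)
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, -k:][:, ::-1]  # leading eigenvectors, largest first
```

For two tight clusters far apart, a small-scale interval recovers the within-cluster direction while a large-scale interval recovers the between-cluster direction.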
Parihar, Abhinav; Shukla, Nikhil; Datta, Suman; Raychowdhury, Arijit
2015-02-01
Computing with networks of synchronous oscillators has attracted widespread attention as novel materials and device topologies have enabled the realization of compact, scalable and low-power coupled oscillatory systems. Of particular interest are compact and low-power relaxation oscillators that have recently been demonstrated using MIT (metal-insulator-transition) devices exploiting properties of correlated oxides. Further, the computational capability of pairwise coupled relaxation oscillators has also been shown to outperform traditional Boolean digital logic circuits. This paper presents an analysis of the dynamics and synchronization of a system of two such identical coupled relaxation oscillators implemented with MIT devices. We focus on two implementations of the oscillator: (a) a D-D configuration where complementary MIT devices (D) are connected in series to provide oscillations and (b) a D-R configuration composed of a resistor (R) in series with a voltage-triggered state-changing MIT device (D). The MIT device acts like a hysteresis resistor with different resistances in the two different states. The synchronization dynamics of such a system has been analyzed with purely charge-based coupling using a resistive (RC) and a capacitive (CC) element in parallel. It is shown that in a D-D configuration a symmetric, identical and capacitively coupled relaxation oscillator system synchronizes to an anti-phase locking state, whereas when coupled resistively the system locks in phase. Further, we demonstrate that for a certain range of values of RC and CC, a bistable system is possible, which can have potential applications in associative computing. In the D-R configuration, we demonstrate the existence of rich dynamics including non-monotonic flows and complex phase relationships governed by the ratios of the coupling impedance. Finally, the developed theoretical formulations have been shown to explain experimentally measured waveforms of such pairwise coupled
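A toy numerical sketch conveys the mechanics of two coupled hysteretic relaxation oscillators. All thresholds, rates, initial phases and the coupling constant below are invented for illustration and are not the MIT device parameters from the paper:

```python
def simulate(n_steps=20000, step=0.01, cc=0.5):
    """Two identical hysteretic ('MIT-like') relaxation oscillators with a
    crude capacitive-style coupling that injects a fraction of the
    partner's dV/dt. state 1: capacitor charges toward 1; state 0: it
    discharges toward 0; switching occurs at hysteresis thresholds."""
    v = [0.0, 0.45]           # capacitor voltages, different initial phases
    state = [1, 1]
    trace = []
    for _ in range(n_steps):
        dv = [(1.0 - v[i]) if state[i] else -2.0 * v[i] for i in (0, 1)]
        v[0] += step * (dv[0] + cc * dv[1])
        v[1] += step * (dv[1] + cc * dv[0])
        for i in (0, 1):
            if state[i] and v[i] > 0.8:        # upper hysteresis threshold
                state[i] = 0
            elif not state[i] and v[i] < 0.2:  # lower hysteresis threshold
                state[i] = 1
        trace.append((v[0], v[1]))
    return trace

trace = simulate()
```

Both oscillators repeatedly traverse the hysteresis band; inspecting the relative phase of the two traces is the kind of analysis the paper carries out analytically.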
Carreno, Victor A.
2015-01-01
Pair-wise Trajectory Management (PTM) is a cockpit-based delegated-responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm will receive state information from the designated aircraft and from the own ship to produce speed guidance for the flight crew to maintain spacing and separation.
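A minimal sketch of what speed guidance from own-ship and designated-aircraft state could look like. The gain, units and speed envelope are invented for illustration; the abstract does not describe the actual PTM algorithm:

```python
def ptm_speed_guidance(own_speed, lead_speed, spacing, target_spacing,
                       gain=0.05, v_min=200.0, v_max=280.0):
    """Hypothetical along-track guidance sketch (knots / nautical miles):
    command a speed correction proportional to the spacing error,
    clipped to the aircraft's speed envelope."""
    error = spacing - target_spacing          # positive: too far behind
    cmd = lead_speed + gain * error * 60.0    # NM error -> knots of trim
    return max(v_min, min(v_max, cmd))

# 2 NM too far behind the designated aircraft -> command a 6 kt overtake.
cmd = ptm_speed_guidance(own_speed=240.0, lead_speed=240.0,
                         spacing=12.0, target_spacing=10.0)
```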
Criteria for the singularity of a pairwise l1-distance matrix and their generalizations
International Nuclear Information System (INIS)
D'yakonov, Alexander G
2012-01-01
We study the singularity problem for the pairwise distance matrix of a system of points, as well as generalizations of this problem that are connected with applications to interpolation theory and with an algebraic approach to recognition problems. We obtain necessary and sufficient conditions on a system under which the dimension of the range space of polynomials of bounded degree over the columns of the distance matrix is less than the number of points in the system.
Criteria for the singularity of a pairwise l{sub 1}-distance matrix and their generalizations
Energy Technology Data Exchange (ETDEWEB)
D' yakonov, Alexander G [M. V. Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics, Moscow (Russian Federation)
2012-06-30
We study the singularity problem for the pairwise distance matrix of a system of points, as well as generalizations of this problem that are connected with applications to interpolation theory and with an algebraic approach to recognition problems. We obtain necessary and sufficient conditions on a system under which the dimension of the range space of polynomials of bounded degree over the columns of the distance matrix is less than the number of points in the system.
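The objects in question are easy to experiment with numerically. The sketch below builds a pairwise l1-distance matrix and probes its rank; this is a numerical illustration only, not the paper's algebraic criteria, and the example points are invented:

```python
import numpy as np

def l1_distance_matrix(points):
    """Pairwise l1-distance matrix: D[i, j] = ||p_i - p_j||_1."""
    P = np.asarray(points, dtype=float)
    return np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)

# Distinct collinear points give a nonsingular matrix in this example,
# while a repeated point forces two identical rows, hence singularity.
D_distinct = l1_distance_matrix([[0.0], [1.0], [2.0], [3.0]])
D_repeated = l1_distance_matrix([[0.0], [1.0], [1.0], [3.0]])
```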
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. Then, we evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
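The kernels named above have compact definitions, sketched here for non-negative vector inputs. This is a minimal illustration (the paper applies them to PPI, domain, phylogenetic-profile and localization features; MLPK is omitted since it requires a learned metric):

```python
import numpy as np

def min_kernel(x, y):
    """Min kernel on non-negative feature vectors: sum_i min(x_i, y_i)."""
    return float(np.minimum(x, y).sum())

def normalized_min(x, y):
    """Cosine-style normalization of the Min kernel."""
    return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel: compares the pair (a, b) with the
    pair (c, d), symmetrized over the two ways of matching the partners."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)
```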
SFESA: a web server for pairwise alignment refinement by secondary structure shifts.
Tong, Jing; Pei, Jimin; Grishin, Nick V
2015-09-03
Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
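SFESA's core loop, evaluating local shifts of an alignment block and keeping the best-scoring variant, can be sketched abstractly. The scoring callable stands in for the combined profile-profile and contact-energy score; the shift range and toy landscape are invented:

```python
def best_block_shift(score, shifts=range(-2, 3)):
    """For one alignment block, evaluate the scoring function at each
    candidate local shift and return the best-scoring shift.
    `score` maps a shift (int) to a combined sequence+structure score."""
    return max(shifts, key=score)

# Illustrative toy scoring landscape peaked at a shift of +1:
shift = best_block_shift(lambda s: -(s - 1) ** 2)
```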
Braakhekke, W.G.; Bruijn, de A.M.G.
2007-01-01
We explored an alternative method to analyse data of Coûteaux et al. [2002, Soil Biology and Biochemistry 34, 69-78] on the decomposition of a standard organic material in six soils along an altitudinal gradient in the Venezuelan Andes (65-3968 m a.s.l.). Coûteaux et al. fitted separate
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be carried out using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
Directory of Open Access Journals (Sweden)
Kaur Parminder
2012-08-01
Full Text Available Abstract Background An approach to molecular classification based on the comparative expression of protein pairs is presented. The method overcomes some of the present limitations in using peptide intensity data for class prediction for problems such as the detection of a disease, disease prognosis, or predicting treatment response. Data analysis is particularly challenging in these situations due to the sample size (typically tens) being much smaller than the large number of peptides (typically thousands). Methods based upon high-dimensional statistical models, machine learning or other complex classifiers generate decisions which may be very accurate but can be complex and difficult to interpret in simple or biologically meaningful terms. A classification scheme, called ProtPair, is presented that generates simple decision rules leading to accurate classification, is based on measurement of very few proteins, and requires only relative expression values, providing specific targeted hypotheses suitable for straightforward validation. Results ProtPair has been tested against clinical data from 21 patients following a bone marrow transplant, 13 of whom progressed to idiopathic pneumonia syndrome (IPS). The approach combines multiple peptide pairs originating from the same set of proteins, with each unique peptide pair providing an independent measure of discriminatory power. The prediction rate of ProtPair for the IPS study as measured by leave-one-out CV is 69.1%, which can be very beneficial for clinical diagnosis as it may flag patients in need of closer monitoring. The “top ranked” proteins provided by ProtPair are known to be associated with the biological processes and pathways intimately associated with known IPS biology based on mouse models. Conclusions An approach to biomarker discovery, called ProtPair, is presented. ProtPair is based on the differential expression of pairs of peptides and the associated proteins. Using mass
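The flavour of a ProtPair-style decision rule can be seen in a toy exhaustive search over ordered protein pairs. The expression values and labels below are fabricated for illustration; the real method aggregates many peptide pairs and weighs their discriminatory power:

```python
def best_pair_rule(samples, labels):
    """Toy pair rule: find the ordered protein pair (i, j) whose relative
    expression (x[i] > x[j]) best predicts the class label."""
    n = len(samples[0])
    best = (-1.0, None)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            acc = sum((x[i] > x[j]) == y for x, y in zip(samples, labels))
            best = max(best, (acc / len(samples), (i, j)))
    return best  # (training accuracy, (i, j))

# Fabricated expression profiles for 3 proteins across 4 samples:
samples = [(5, 1, 3), (4, 2, 3), (1, 5, 3), (2, 4, 3)]
labels = [True, True, False, False]
acc, pair = best_pair_rule(samples, labels)
```

Note that only relative expression (which protein of the pair is higher) enters the rule, which is what makes such classifiers simple to validate.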
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion range 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) was approximately constant (E_a,int = 95.2 kJ mol⁻¹ and E_a,diff = 96.6 kJ mol⁻¹, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1 − α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(E_a)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition of NaHCO3.
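The Weibull description of an isothermal conversion curve can be checked on synthetic data: α(t) = 1 − exp(−(t/η)^β) linearizes to ln(−ln(1 − α)) = β ln t − β ln η, so both parameters follow from a straight-line fit. The time grid and the η value below are invented; β matches the paper's 1.07:

```python
import numpy as np

# Synthetic isothermal conversion curve with known Weibull parameters.
beta_true, eta_true = 1.07, 50.0
t = np.linspace(5, 200, 40)
alpha = 1 - np.exp(-(t / eta_true) ** beta_true)

# Linearized fit: ln(-ln(1 - alpha)) = beta*ln(t) - beta*ln(eta).
y = np.log(-np.log(1 - alpha))
slope, intercept = np.polyfit(np.log(t), y, 1)
beta_est = slope
eta_est = np.exp(-intercept / slope)
```

On noise-free data the fit recovers (β, η) essentially exactly, which is what makes the linearization a quick diagnostic for real conversion curves.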
Danburite decomposition by sulfuric acid
International Nuclear Information System (INIS)
Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.
2011-01-01
The present article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.
International Nuclear Information System (INIS)
Farajzadeh, Leila; Hornshøj, Henrik; Momeni, Jamal; Thomsen, Bo; Larsen, Knud; Hedegaard, Jakob; Bendixen, Christian; Madsen, Lone Bruhn
2013-01-01
Highlights: •Transcriptome sequencing yielded 223 million porcine RNA-seq reads and 59,000 transcribed locations. •Establishment of unique transcription profiles for ten porcine tissues, including four brain tissues. •Comparison of transcription profiles at the gene, isoform, promoter and transcription start site level. •Highlights a high level of regulation of neuro-related genes at the gene, isoform, and TSS level. •Our results emphasize the pig as a valuable animal model with respect to human biological issues. -- Abstract: The transcriptome is the absolute set of transcripts in a tissue or cell at the time of sampling. In this study RNA-Seq is employed to enable the differential analysis of the transcriptome profile for ten porcine tissues in order to evaluate differences between the tissues at the gene and isoform expression level, together with an analysis of variation in transcription start sites, promoter usage, and splicing. In total, 223 million RNA fragments were sequenced, leading to the identification of 59,930 transcribed gene locations and 290,936 transcript variants using Cufflinks, with similarity to approximately 13,899 annotated human genes. Pairwise analysis of tissues for differential expression at the gene level showed that the smallest differences were between tissues originating from the porcine brain. Interestingly, the relative level of differential expression at the isoform level did not generally vary between tissue contrasts. Furthermore, analysis of differential promoter usage between tissues revealed a proportionally higher variation between cerebellum (CBE) versus frontal cortex and cerebellum versus hypothalamus (HYP) than in the remaining comparisons. In addition, the comparison of differential transcription start sites showed that the number of these sites is generally increased in comparisons including hypothalamus in contrast to other pairwise assessments. A comprehensive analysis of one of the tissue contrasts, i
International Nuclear Information System (INIS)
Ko, Jong-Hwan.
1993-01-01
Firstly, this study investigates the causes of sectoral growth and structural change in the Korean economy. Secondly, it develops a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done with both an Input-Output decomposition analysis and a Computable General Equilibrium model (CGE model). The CGE model eliminates the disadvantages of the IO model and allows the investigation of the interdependence of the various energy sectors with the rest of the economy. The Social Accounting Matrix serves as the data basis of the CGE model. Simulation experiments have been carried out with the help of the CGE model, indicating the likely impact of an oil price shock on the economy, both sectorally and in aggregate. (orig.) [de]
Pairwise structure alignment specifically tuned for surface pockets and interaction interfaces
Cui, Xuefeng
2015-09-09
To detect and evaluate the similarities between the three-dimensional (3D) structures of two molecules, various kinds of methods have been proposed for the pairwise structure alignment problem [6, 9, 7, 11]. The problem plays an important role when studying the function and the evolution of biological molecules. Recently, pairwise structure alignment methods have been extended and applied to surface pocket structures [10, 3, 5] and interaction interface structures [8, 4]. The results show that, even when there are no global similarities discovered between the global sequences and the global structures, biological molecules or complexes could share similar functions because of well conserved pockets and interfaces. Thus, pairwise pocket and interface structure alignments are promising to unveil such shared functions that cannot be discovered by the well-studied global sequence and global structure alignments. State-of-the-art methods for pairwise pocket and interface structure alignments [4, 5] are direct extensions of the classic pairwise protein structure alignment methods, and thus such methods share a few limitations. First, the goal of the classic protein structure alignment methods is to align single-chain protein structures (i.e., a single fragment of residues connected by peptide bonds). However, we observed that pockets and interfaces tend to consist of tens of extremely short backbone fragments (i.e., three or fewer residues connected by peptide bonds). Thus, existing pocket and interface alignment methods based on the protein structure alignment methods still rely on the existence of long-enough backbone fragments, and the fragmentation issue of pockets and interfaces raises the risk of missing the optimal alignments. Moreover, existing interface structure alignment methods focus on protein-protein interfaces, and require a "blackbox preprocessing" before aligning protein-DNA and protein-RNA interfaces. Therefore, we introduce the PROtein STucture Alignment
Decomposition Technology Development of Organic Component in a Decontamination Waste Solution
International Nuclear Information System (INIS)
Jung, Chong Hun; Oh, W. Z.; Won, H. J.; Choi, W. K.; Kim, G. N.; Moon, J. K.
2007-11-01
Through the project 'Decomposition Technology Development of Organic Component in a Decontamination Waste Solution', the following topics were studied: 1. Investigation of the decontamination characteristics of the chemical decontamination process 2. Analysis of COD, ferrous ion concentration and hydrogen peroxide concentration 3. Decomposition tests of hard-to-decompose organic compounds 4. Improvement of the organic acid decomposition process by ultrasonic waves and UV light 5. Optimization of the decomposition process using a surrogate decontamination waste solution
Decomposition Technology Development of Organic Component in a Decontamination Waste Solution
Energy Technology Data Exchange (ETDEWEB)
Jung, Chong Hun; Oh, W. Z.; Won, H. J.; Choi, W. K.; Kim, G. N.; Moon, J. K
2007-11-15
Through the project 'Decomposition Technology Development of Organic Component in a Decontamination Waste Solution', the following topics were studied: 1. Investigation of the decontamination characteristics of the chemical decontamination process 2. Analysis of COD, ferrous ion concentration and hydrogen peroxide concentration 3. Decomposition tests of hard-to-decompose organic compounds 4. Improvement of the organic acid decomposition process by ultrasonic waves and UV light 5. Optimization of the decomposition process using a surrogate decontamination waste solution.
Directory of Open Access Journals (Sweden)
Mu-Tzu Shih
2015-02-01
Full Text Available Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which anaesthesia is administered during surgery. We can monitor the DoA by observing the patient's electroencephalography (EEG) signals during the surgical procedure. Typically, high-frequency EEG signals indicate the patient is conscious, while low-frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient's EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs). The Hilbert spectral analysis method is then used to obtain instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed by using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective compared to the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
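The instantaneous-frequency half of the pipeline can be sketched without an EMD library: build the analytic signal from the FFT and differentiate its unwrapped phase. In the full HHT this is applied to each IMF rather than the raw EEG; the sampling rate and test tone below are invented:

```python
import numpy as np

def instantaneous_freq(x, fs):
    """Instantaneous frequency (Hz) via the analytic signal, built by
    zeroing negative FFT frequencies (a discrete Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 500.0
t = np.arange(0, 2, 1 / fs)
f = instantaneous_freq(np.sin(2 * np.pi * 10 * t), fs)
```

For a pure 10 Hz tone the estimate sits near 10 Hz away from the window edges, which is the quantity the visualization application would display per IMF.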
Jia, Junsong; Gong, Zhihai; Gu, Zhongyu; Chen, Chundi; Xie, Dongming
2018-04-01
This study is the first attempt to investigate the drivers of Chinese industrial SO2 and NOx emissions from both periodic and structural perspectives through a decomposition analysis using the logarithmic mean Divisia index (LMDI). The two pollutants' emissions were decomposed into output effects, structural effects, clean production effects, and pollution abatement effects. The results showed that China's industrial SO2 discharge increased by 1.14 Mt during 2003-2014, and the contributions from the four effects were 23.17, -1.88, -3.80, and -16.36 Mt, respectively. Likewise, NOx discharge changed by -3.44 Mt over 2011-2014, and the corresponding contributions from the four effects were 2.97, -0.62, -1.84, and -3.95 Mt. Thus, the output effect was mainly responsible for the growth of the two discharges. The average annual contribution rates of SO2 and NOx from output were 14.33 and 5.97%, respectively, but pollution abatement technology presented the most obvious mitigating effects (-10.11 and -7.92%), followed by the mitigating effects of clean production technology (-2.35 and -3.7%), and the mitigation from the structural effect was the weakest (-1.16 and -1.25%, respectively), which means pollutant reduction policies related to industrial structure adjustment should be a long-term measure for the two discharges. In addition, the sub-sectors of I20 (manufacture of raw chemical materials and chemical products), I24 (manufacture of non-metallic mineral products), and I26 (smelting and pressing of non-ferrous metals) were the major contributors to both discharges. Thus, these sub-sectors should be given priority consideration when designing mitigation-related measures. Finally, some particular policy implications were recommended for reducing the two discharges, including that the government should seek a technological discharge reduction route.
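The additive LMDI identity used in such decompositions is short enough to state in code: each factor's effect is the logarithmic mean of the endpoint aggregate values times the log change of that factor, and the effects sum exactly to the total change. The numbers below are illustrative, not the paper's data:

```python
import math

def lmdi_effects(factors_0, factors_t):
    """Additive LMDI-I sketch for a single series V = x1*x2*...*xn:
    effect_k = L(V_0, V_t) * ln(x_k_t / x_k_0),
    where L(a, b) = (b - a) / ln(b / a) is the logarithmic mean."""
    v0 = math.prod(factors_0)
    vt = math.prod(factors_t)
    L = (vt - v0) / math.log(vt / v0) if vt != v0 else v0
    return [L * math.log(xt / x0) for x0, xt in zip(factors_0, factors_t)]

# Two factors change in opposite directions; effects sum to V_t - V_0 = 0.
eff = lmdi_effects([2.0, 3.0], [4.0, 1.5])
```

The perfect-decomposition property (no residual term) is exactly why LMDI is preferred over Laspeyres-style index decomposition in studies like this one.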
Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico
2014-01-01
Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally-efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, we demonstrated distinctive advantages of our method over existing methods: it was able to recover accurately both common and condition-specific network-modules without entailing the ad-hoc input parameters required by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interactions processes in ventricular zones during neocortex expansion and, further, we uncovered pathways related to development of later cognitive functions in the cortical plate of the developing brain which were previously unappreciated. Analyses of seven rat tissues identified a multi-tissue subnetwork of co
Directory of Open Access Journals (Sweden)
Xiaolin Xiao
2014-01-01
Full Text Available Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally-efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, we demonstrated distinctive advantages of our method over existing methods: it was able to recover accurately both common and condition-specific network-modules without entailing the ad-hoc input parameters required by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interactions processes in ventricular zones during neocortex expansion and, further, we uncovered pathways related to development of later cognitive functions in the cortical plate of the developing brain which were previously unappreciated. Analyses of seven rat tissues identified a multi
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane
2010-01-01
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation
Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.
2010-01-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally s...
Zhang, Jinzhi; Chen, Tianju; Wu, Jingli; Wu, Jinhu
2015-09-01
Thermal decomposition of six representative components of municipal solid waste (MSW, including lignin, printing paper, cotton, rubber, polyvinyl chloride (PVC) and cabbage) was investigated by thermogravimetric-mass spectroscopy (TG-MS) under a steam atmosphere. Compared with TG and derivative thermogravimetric (DTG) curves under an N2 atmosphere, thermal decomposition of MSW components under a steam atmosphere was divided into pyrolysis and gasification stages. In the pyrolysis stage, the shapes of the TG and DTG curves under steam were almost the same as those under N2. In the gasification stage, the presence of steam led to a greater mass loss because of steam partial oxidation of the char residue. The evolution profiles of H2, CH4, CO and CO2 were consistent with the DTG curves in terms of the appearance of peaks and the relevant stages over the whole temperature range, and steam partial oxidation of the char residue promoted the generation of more gaseous products in the high-temperature range. The multi-Gaussian distributed activation energy model (DAEM) proved suitable for describing the thermal decomposition behaviours of MSW components under a steam atmosphere. Copyright © 2015 Elsevier Ltd. All rights reserved.
Erbium hydride decomposition kinetics.
Energy Technology Data Exchange (ETDEWEB)
Ferrizz, Robert Matthew
2006-11-01
Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
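Redhead's peak-maximum method can be sketched in a few lines. The formula below is the standard first-order Redhead approximation; the assumed attempt frequency of 10^13 s^-1 and the illustrative peak temperature and heating rate are generic textbook values, not figures from this report:

```python
import math

def redhead_activation_energy(t_peak, beta, nu=1e13):
    """Redhead's approximation for first-order desorption:
        E = R * Tp * (ln(nu * Tp / beta) - 3.64), in kcal/mol.
    t_peak: desorption peak temperature (K)
    beta:   linear heating rate (K/s)
    nu:     attempt frequency (1/s); 1e13 is a conventional assumption
    """
    gas_constant = 1.987e-3  # kcal/(mol K)
    return gas_constant * t_peak * (math.log(nu * t_peak / beta) - 3.64)

# Illustrative inputs only: a TDS peak near 1000 K at 1 K/s.
energy = redhead_activation_energy(t_peak=1000.0, beta=1.0)
```

The strong dependence of the result on the assumed attempt frequency is one reason seemingly similar samples can yield activation energies differing by several kcal/mol.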
Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.
Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz
2017-10-01
Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities that would correspond, experimentally, to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem, a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
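A minimal sketch of the Glauber dynamics associated with a homogeneous pairwise (Ising-type) model; the field h, uniform coupling j, population size and step count are illustrative choices, not the paper's fitted parameters. With strong excitatory coupling the population activity produced by such dynamics can become bistable, which is the pathology discussed above:

```python
import math
import random

random.seed(0)

def glauber_activity(h, j, n, steps):
    """Run Glauber dynamics for n binary units with uniform field h and
    uniform coupling j/n; return the population-averaged activity after
    each single-unit update. Illustrative sketch only."""
    state = [0] * n
    total = 0
    trace = []
    for _ in range(steps):
        i = random.randrange(n)
        local_field = h + (j / n) * (total - state[i])  # input from the others
        p_on = 1.0 / (1.0 + math.exp(-local_field))
        new = 1 if random.random() < p_on else 0
        total += new - state[i]
        state[i] = new
        trace.append(total / n)
    return trace

# Weak-coupling sanity check: with j = 0 the mean activity approaches
# the logistic function of h.
trace = glauber_activity(h=-2.0, j=0.0, n=50, steps=20000)
```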
Effect of pairwise additivity on finite-temperature behavior of classical ideal gas
Shekaari, Ashkan; Jafari, Mahmoud
2018-05-01
Finite-temperature molecular dynamics simulations have been applied to investigate the effect of pairwise additivity on the behavior of a classical ideal gas within the temperature range T = 250-4000 K, by applying a variety of pair potentials and then examining the temperature dependence of a number of thermodynamic properties. Examining the compressibility factor reveals the largest deviation from ideal-gas behavior for the Lennard-Jones system, mainly due to the presence of both attractive and repulsive terms. The systems with either purely attractive or purely repulsive intermolecular potentials are found to bear no resemblance to real gases, but increasingly resemble the ideal gas as temperature rises.
Structural profiles of human miRNA families from pairwise clustering
DEFF Research Database (Denmark)
Kaczkowski, Bogumil; Þórarinsson, Elfar; Reiche, Kristin
2009-01-01
secondary structure already predicted, little is known about the patterns of structural conservation among pre-miRNAs. We address this issue by clustering the human pre-miRNA sequences based on pairwise, sequence and secondary structure alignment using FOLDALIGN, followed by global multiple alignment...... of obtained clusters by WAR. As a result, the common secondary structure was successfully determined for four FOLDALIGN clusters: the RF00027 structural family of the Rfam database and three clusters with previously undescribed consensus structures. Availability: http://genome.ku.dk/resources/mirclust...
The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search
DEFF Research Database (Denmark)
Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan
2005-01-01
FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA...... motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. The web server can be accessed at http://foldalign.kvl.dk...
Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time
DEFF Research Database (Denmark)
Tsirogiannis, Constantinos; Sandel, Brody Steven
2014-01-01
The phylogenetic Mean Pairwise Distance (MPD) is one of the most popular measures for computing the phylogenetic distance between a given group of species. More specifically, for a phylogenetic tree T and for a set of species R represented by a subset of the leaf nodes of T, the MPD of R is equal...... to the average cost of all possible simple paths in T that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...
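The quantity itself is simple to state in code. The sketch below computes the MPD naively in O(|R|^2) from a precomputed leaf-to-leaf distance matrix; the matrix and species set are made-up toy values, and the paper's contribution is computing this measure (and its skewness) in time linear in the tree size instead:

```python
def mean_pairwise_distance(dist, species):
    """MPD of a species set: the average path cost over all unordered
    pairs of the given leaves, using a precomputed distance matrix."""
    pairs = [(i, j) for k, i in enumerate(species) for j in species[k + 1:]]
    return sum(dist[i][j] for i, j in pairs) / len(pairs)

# Toy symmetric distance matrix over four leaves (illustrative values).
distances = [[0, 2, 4, 6],
             [2, 0, 4, 6],
             [4, 4, 0, 6],
             [6, 6, 6, 0]]
mpd = mean_pairwise_distance(distances, [0, 1, 3])  # pairs (0,1), (0,3), (1,3)
```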
Benefits of Using Pairwise Trajectory Management in the Central East Pacific
Chartrand, Ryan; Ballard, Kathryn
2017-01-01
Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in procedural airspace. This concept makes use of updated ground automation, Automatic Dependent Surveillance-Broadcast (ADS-B) and onboard avionics generating real-time guidance. An experiment was conducted to examine the potential benefits of implementing PTM in the Central East Pacific oceanic region. An explanation of the experiment and some of the results are included in this paper. The PTM concept allowed for an increase in the average time an aircraft is able to spend at its desired flight level and a reduction in fuel burn.
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is shown to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino
2014-02-11
Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
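For three particles, the central force decomposition the abstract refers to can be written down directly: each force is expressed as a sum of pairwise terms acting along the lines connecting the particles. The sketch below recovers the pairwise magnitudes by least squares; it is a toy illustration of the decomposition itself, not the paper's stress-calculation implementation, and all names are my own:

```python
import numpy as np

def central_force_decomposition(positions, forces):
    """Express the forces on three particles as central pairwise forces
    f_ij acting along the lines connecting particles i and j (the
    decomposition that yields a symmetric stress tensor by
    construction). Returns {(i, j): magnitude}."""
    positions = np.asarray(positions, dtype=float)
    pairs = [(0, 1), (0, 2), (1, 2)]
    system = np.zeros((9, 3))  # 3 particles x 3 force components
    for k, (i, j) in enumerate(pairs):
        u = positions[i] - positions[j]
        u = u / np.linalg.norm(u)
        system[3 * i:3 * i + 3, k] = u
        system[3 * j:3 * j + 3, k] = -u
    f, *_ = np.linalg.lstsq(system, np.asarray(forces, dtype=float).ravel(),
                            rcond=None)
    return dict(zip(pairs, f))
```

For potentials involving more particles, the attribution of forces to pairs is not unique in general, which is exactly the ambiguity the paper addresses.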
Ohmichi, Yuya
2017-07-01
In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
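The preconditioning idea can be sketched compactly: project the snapshots onto a rank-r POD (SVD) basis, then compute DMD in the reduced space. The batch SVD below stands in for the paper's incremental POD, which is what actually keeps memory consumption low for large datasets; function and variable names are my own:

```python
import numpy as np

def pod_dmd(snapshots_x, snapshots_y, rank):
    """Projected DMD: snapshots_y is the one-step time advance of
    snapshots_x (columns are state vectors). A rank-`rank` POD basis of
    snapshots_x preconditions the problem; DMD eigenvalues and modes
    are then computed from the small reduced operator."""
    u, s, vt = np.linalg.svd(snapshots_x, full_matrices=False)
    u_r, s_r, v_r = u[:, :rank], s[:rank], vt[:rank].T
    reduced_op = u_r.T @ snapshots_y @ v_r / s_r  # approximates U^T A U
    eigvals, eigvecs = np.linalg.eig(reduced_op)
    modes = u_r @ eigvecs
    return eigvals, modes
```

On data generated by a known linear map, the eigenvalues of that map are recovered, which is a convenient sanity check for any DMD implementation.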
Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.
Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin
2017-11-15
Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
International Nuclear Information System (INIS)
Wolf, D.; Keblinski, P.; Phillpot, S.R.; Eggebrecht, J.
1999-01-01
Based on a recent result showing that the net Coulomb potential in condensed ionic systems is rather short ranged, an exact and physically transparent method permitting the evaluation of the Coulomb potential by direct summation over the r⁻¹ Coulomb pair potential is presented. The key observation is that the problems encountered in determining the Coulomb energy by pairwise, spherically truncated r⁻¹ summation are a direct consequence of the fact that the system summed over is practically never neutral. A simple method is developed that achieves charge neutralization wherever the r⁻¹ pair potential is truncated. This enables the extraction of the Coulomb energy, forces, and stresses from a spherically truncated, usually charged environment in a manner that is independent of the grouping of the pair terms. The close connection of our approach with the Ewald method is demonstrated and exploited, providing an efficient method for the simulation of even highly disordered ionic systems by direct, pairwise r⁻¹ summation with spherical truncation at rather short range, i.e., a method which fully exploits the short-ranged nature of the interactions in ionic systems. The method is validated by simulations of crystals, liquids, and interfacial systems, such as free surfaces and grain boundaries. copyright 1999 American Institute of Physics
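The charge-neutralization idea can be illustrated in its simplest, undamped (shifted) form: each 1/r pair term inside the cutoff is shifted by -1/Rc, and a self term removes the neutralizing charge placed on the cutoff sphere. This is a sketch of the idea in Gaussian units, not the damped production variant of the method:

```python
import math

def neutralized_coulomb_energy(positions, charges, r_cut):
    """Spherically truncated, charge-neutralized Coulomb energy
    (undamped shifted form): pair terms q_i * q_j * (1/r - 1/Rc) for
    r < Rc, minus a self term sum(q_i^2) / (2 * Rc)."""
    energy = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r = math.dist(positions[i], positions[j])
            if r < r_cut:
                energy += charges[i] * charges[j] * (1.0 / r - 1.0 / r_cut)
    energy -= sum(q * q for q in charges) / (2.0 * r_cut)
    return energy

# For an isolated neutral pair the result is exact for any cutoff,
# because the shift and the self term cancel.
e = neutralized_coulomb_energy([(0, 0, 0), (1, 0, 0)], [1.0, -1.0], 10.0)
```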
GapMis: a tool for pairwise sequence alignment with a single gap.
Flouri, Tomás; Frousios, Kimon; Iliopoulos, Costas S; Park, Kunsoo; Pissis, Solon P; Tischler, German
2013-08-01
Pairwise sequence alignment has received a new motivation due to the advent of next-generation sequencing technologies, particularly for the application of re-sequencing: the assembly of a genome directed by a reference sequence. After the fast alignment between a factor of the reference sequence and a high-quality fragment of a short read by a short-read alignment programme, an important problem is to find the alignment between a relatively short succeeding factor of the reference sequence and the remaining low-quality part of the read, allowing a number of mismatches and the insertion of a single gap in the alignment. We present GapMis, a tool for pairwise sequence alignment with a single gap. It is based on a simple algorithm, which computes a different version of the traditional dynamic programming matrix. The presented experimental results demonstrate that GapMis is more suitable and efficient than most popular tools for this task.
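The problem itself (not GapMis's restricted-matrix algorithm) can be stated as a short brute-force sketch: for an end-to-end alignment, the single gap in the shorter sequence must have length equal to the length difference, so only its position and the resulting mismatch count remain to be chosen. All names are illustrative:

```python
def best_single_gap_alignment(seq_a, seq_b, max_gap):
    """End-to-end alignment of two sequences allowing substitutions plus
    at most one contiguous gap in the shorter sequence. Returns
    (mismatches, gap_position) minimizing mismatches, or None if the
    required gap exceeds max_gap. Naive O(n^2) illustration."""
    if len(seq_a) < len(seq_b):
        seq_a, seq_b = seq_b, seq_a           # gap goes into the shorter one
    gap_len = len(seq_a) - len(seq_b)
    if gap_len > max_gap:
        return None
    best = None
    for pos in range(len(seq_b) + 1):         # gap inserted before seq_b[pos]
        padded = seq_b[:pos] + "-" * gap_len + seq_b[pos:]
        mismatches = sum(1 for x, y in zip(seq_a, padded)
                         if y != "-" and x != y)
        if best is None or mismatches < best[0]:
            best = (mismatches, pos)
    return best
```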
Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game
Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng
2017-12-01
It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to state-of-the-art methods in terms of both registration error and computational time. An experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of the proposed pairwise registration framework.
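The rigid-transform invariance of covariance-based descriptors is easy to see in code: rotating a neighborhood conjugates its covariance matrix, leaving the eigenvalues unchanged. The sketch below uses plain 3D coordinates as the per-point features; the paper's ACOV descriptor additionally adapts the neighborhood size and is not reproduced here:

```python
import numpy as np

def covariance_descriptor(neighborhood):
    """3x3 covariance matrix of a local point neighborhood (rows are
    3D points). Eigenvalue-based comparisons of such descriptors are
    invariant to rigid transformations of the cloud."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    return centered.T @ centered / len(pts)
```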
International Nuclear Information System (INIS)
Hardy, David J.; Schulten, Klaus; Wolff, Matthew A.; Skeel, Robert D.; Xia, Jianlin
2016-01-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
A water market simulator considering pair-wise trades between agents
Huskova, I.; Erfani, T.; Harou, J. J.
2012-04-01
In many basins in England no further water abstraction licences are available. Trading water between water rights holders has been recognized as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help evaluate this solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trade in a catchment and represents its interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence holders' willingness to engage in short-term trade transactions. In their simplest form, agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted from differences in the marginal value of water over space and time and from estimates of the transaction costs on pair-wise trades. We discuss the further possibility of embedding the rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach are contrasted with our simulator, in which agents are driven by economic optimization. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.
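The working hypothesis above lends itself to a compact sketch: agents trade one unit at a time whenever the gap in marginal value exceeds the transaction cost. The benefit functions, agent names and cost below are invented placeholders, not calibrated Dove River Basin inputs (note the loop terminates here because each trade shrinks the marginal-value gap):

```python
def simulate_pairwise_trades(allocations, marginal_value, transaction_cost):
    """Greedy pair-wise trading: repeatedly move one unit of water from
    the agent with the lowest marginal value to the agent with the
    highest, while the gap exceeds the transaction cost. Returns the
    final allocations and the list of trades (seller, buyer, volume)."""
    alloc = dict(allocations)
    trades = []
    while True:
        values = {a: marginal_value(a, alloc[a]) for a in alloc}
        seller = min(values, key=values.get)
        buyer = max(values, key=values.get)
        gap = values[buyer] - values[seller]
        if gap <= transaction_cost or alloc[seller] == 0:
            return alloc, trades
        alloc[seller] -= 1
        alloc[buyer] += 1
        trades.append((seller, buyer, 1))

# Two agents with linearly diminishing marginal values (placeholder data).
base = {"farm": 10.0, "mill": 4.0}
value_fn = lambda agent, units: base[agent] - units
final_alloc, trade_log = simulate_pairwise_trades(
    {"farm": 0, "mill": 6}, value_fn, transaction_cost=1.0)
```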
Directory of Open Access Journals (Sweden)
Woosang Lim
Full Text Available Hierarchical organization of information processing in brain networks is known to exist and has been widely studied. To find proper hierarchical structures in the macaque brain, traditional methods need the entire set of pairwise hierarchical relationships between cortical areas. In this paper, we present a new method that discovers hierarchical structures of macaque brain networks by using partial information of pairwise hierarchical relationsh