WorldWideScience

Sample records for level decomposition approach

  1. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a decomposition procedure similar to that of single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy studies. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate

  2. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000. After giving an update and presenting the key trends of the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented, and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing the LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments in index decomposition analysis are briefly reviewed.
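    To make the LMDI mechanics concrete, below is a minimal numerical sketch of the additive LMDI-I formula for a two-factor identity V = Σ_i Q_i I_i. This is an illustration under our own naming, not code from the paper.

```python
import numpy as np

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        lm = (a - b) / (np.log(a) - np.log(b))
    return np.where(np.isclose(a, b), a, lm)

def lmdi_additive(q0, i0, qT, iT):
    """Additive LMDI-I: split dV = V_T - V_0 into activity and intensity effects."""
    v0, vT = q0 * i0, qT * iT           # sectoral aggregates at times 0 and T
    w = log_mean(vT, v0)                # logarithmic-mean weights
    d_activity = np.sum(w * np.log(qT / q0))
    d_intensity = np.sum(w * np.log(iT / i0))
    return d_activity, d_intensity      # sums exactly to vT.sum() - v0.sum()

# Two-sector example: activity levels q and energy intensities i
q0, i0 = np.array([100.0, 50.0]), np.array([2.0, 4.0])
qT, iT = np.array([120.0, 60.0]), np.array([1.8, 3.5])
da, di = lmdi_additive(q0, i0, qT, iT)
print(da + di, (qT * iT).sum() - (q0 * i0).sum())  # identical: no residual term
```

    The absence of a residual term is the "perfect decomposition" property underlying the dominance of LMDI noted in the abstract.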

  3. Decomposition approaches to integration without a measure

    Czech Academy of Sciences Publication Activity Database

    Greco, S.; Mesiar, Radko; Rindone, F.; Sipeky, L.

    2016-01-01

    Roč. 287, č. 1 (2016), s. 37-47 ISSN 0165-0114 Institutional support: RVO:67985556 Keywords : Choquet integral * Decision making * Decomposition integral Subject RIV: BA - General Mathematics Impact factor: 2.718, year: 2016 http://library.utia.cas.cz/separaty/2016/E/mesiar-0457408.pdf

  4. Energy index decomposition methodology at the plant level

    Science.gov (United States)

    Kumphai, Wisit

    Scope and method of study. The dissertation explores the use of a high-level energy intensity index as a facility-level energy performance monitoring indicator, with the goal of developing a methodology for an economically based energy performance monitoring system that incorporates production information. The performance measure closely monitors energy usage, production quantity, and product mix and determines the production efficiency as part of an ongoing process that would enable facility managers to keep track of, and in the future predict, when to perform a recommissioning process. The study focuses on the use of the index decomposition methodology and explores several high-level (industry, sector, and country level) energy utilization indexes, namely Additive Log Mean Divisia, Multiplicative Log Mean Divisia, and Additive Refined Laspeyres. One level of index decomposition is performed, and the indexes are decomposed into intensity and product-mix effects. These indexes are tested on a flow shop brick manufacturing plant model in three different climates in the United States. The resulting indexes are analyzed by fitting an ARIMA model and testing for dependency between the two decomposed indexes. Findings and conclusions. The results indicate that the Additive Refined Laspeyres index decomposition methodology is suitable for use in a flow shop, non-air-conditioned production environment as an energy performance monitoring indicator. This research can likely be further expanded into predicting when to perform a recommissioning process.
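    For reference, the additive refined Laspeyres decomposition named above splits the jointly created interaction term equally between the two effects. For a two-factor identity E = Σ_i Q_i I_i it reads (our notation, a standard textbook form rather than the dissertation's own):

```latex
\Delta E = E^{T} - E^{0}
         = \underbrace{\sum_i I_i^{0}\,\Delta Q_i + \tfrac{1}{2}\sum_i \Delta Q_i\,\Delta I_i}_{\text{product-mix effect}}
         + \underbrace{\sum_i Q_i^{0}\,\Delta I_i + \tfrac{1}{2}\sum_i \Delta Q_i\,\Delta I_i}_{\text{intensity effect}},
\qquad \Delta Q_i = Q_i^{T} - Q_i^{0},\quad \Delta I_i = I_i^{T} - I_i^{0}.
```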

  5. Introducing the Improved Heaviside Approach to Partial Fraction Decomposition to Undergraduate Students: Results and Implications from a Pilot Study

    Science.gov (United States)

    Man, Yiu-Kwong

    2012-01-01

    Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…
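    As a reminder of the classical technique that the improved approach builds on, the Heaviside cover-up rule evaluates each residue by deleting its linear factor and substituting its root. A standard worked example (ours, not taken from the article):

```latex
\frac{3x+5}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2},
\qquad
A = \left.\frac{3x+5}{x+2}\right|_{x=1} = \frac{8}{3},
\qquad
B = \left.\frac{3x+5}{x-1}\right|_{x=-2} = \frac{1}{3}.
```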

  6. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
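    For orientation, here is a minimal numpy sketch of the baseline ALS scheme the authors compare against (the gradient-based method itself requires the derivative machinery described in the abstract); all names are ours:

```python
import numpy as np
from scipy.linalg import khatri_rao

def cp_als(X, R, n_iter=200):
    """Rank-R CANDECOMP/PARAFAC of a 3-way tensor X via alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    # Mode-n unfoldings, with remaining-mode indices ordered to match khatri_rao
    X1 = X.transpose(0, 2, 1).reshape(I, K * J)
    X2 = X.transpose(1, 2, 0).reshape(J, K * I)
    X3 = X.transpose(2, 1, 0).reshape(K, J * I)
    for _ in range(n_iter):
        A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Sanity check: recover a random rank-2 tensor (fit error should be near zero)
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((4, 2)), rng.random((5, 2)), rng.random((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))
```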

  7. What drives credit rating changes? : a return decomposition approach

    OpenAIRE

    Cho, Hyungjin; Choi, Sun Hwa

    2015-01-01

    This paper examines the relative importance of a shock to expected cash flows (i.e., cash-flow news) and a shock to expected discount rates (i.e., discount-rate news) in credit rating changes. Specifically, we use a Vector Autoregressive model to implement the return decomposition of Campbell and Shiller (Review of Financial Studies, 1, 1988, 195) and Vuolteenaho (Journal of Finance, 57, 2002, 233) to extract cash-flow news and discount-rate news from stock returns at the firm-level. We find ...
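    For context, the return decomposition implemented there splits the unexpected log return into cash-flow news and discount-rate news. In standard notation, with z_{t+1} = A z_t + u_{t+1} the firm-level VAR (the log return ordered first, selected by e_1) and ρ a log-linearization constant close to one:

```latex
r_{t+1} - \mathrm{E}_t\, r_{t+1} = N_{CF,t+1} - N_{DR,t+1},
\qquad
N_{DR,t+1} = e_1' \rho A (I - \rho A)^{-1} u_{t+1},
\qquad
N_{CF,t+1} = \bigl(e_1' + e_1' \rho A (I - \rho A)^{-1}\bigr) u_{t+1}.
```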

  8. Linear decomposition approach for a class of nonconvex programming problems.

    Science.gov (United States)

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it offers an interesting approach to solving the problem with reduced running time.

  9. Foreign exchange predictability and the carry trade: a decomposition approach

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Gospodinov, N.; Jamali, I.; Liu, X.

    2017-01-01

    Roč. 42, June (2017), s. 199-211 ISSN 0927-5398 Institutional support: RVO:67985998 Keywords : exchange rate forecasting * carry trade * return decomposition Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 0.979, year: 2016

  11. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear sky index K_c data, namely Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model). The choice of forecasting method is adapted to the characteristics of each component. Hence, we proposed a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE, the error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.

  12. Simplified approaches to some nonoverlapping domain decomposition methods

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former constructs a subspace preconditioner based on a preconditioner on the whole space, whereas the latter constructs a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
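    As a concrete illustration of the "parallel subspace correction" (additive Schwarz) framework, here is a toy numpy preconditioner assembled from overlapping index blocks. The construction is ours and is only a sketch of the idea:

```python
import numpy as np

def additive_schwarz(A, subdomains):
    """Return the action v -> sum_i R_i^T A_i^{-1} R_i v for index-set subdomains."""
    local_inverses = [np.linalg.inv(A[np.ix_(s, s)]) for s in subdomains]
    def apply(v):
        out = np.zeros_like(v)
        for s, Ai in zip(subdomains, local_inverses):
            out[s] += Ai @ v[s]        # restrict, solve locally, prolongate
        return out
    return apply

# 1-D Laplacian and two overlapping subdomains
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = additive_schwarz(A, [np.arange(0, 6), np.arange(4, 10)])
print(B(np.ones(n)))   # one preconditioner application to a residual vector
```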

  13. Multi-country comparisons of energy performance: The index decomposition analysis approach

    International Nuclear Information System (INIS)

    Ang, B.W.; Xu, X.Y.; Su, Bin

    2015-01-01

    Index decomposition analysis (IDA) is a popular tool for studying changes in energy consumption over time in a country or region. This specific application of IDA, which may be called temporal decomposition analysis, has been extended by researchers and analysts to study variations in energy consumption or energy efficiency between countries or regions, i.e. spatial decomposition analysis. In spatial decomposition analysis, the main objective is often to understand the relative contributions of overall activity level, activity structure, and energy intensity in explaining differences in total energy consumption between two countries or regions. We review the literature of spatial decomposition analysis, investigate the methodological issues, and propose a spatial decomposition analysis framework for multi-region comparisons. A key feature of the proposed framework is that it passes the circularity test and provides consistent results for multi-region comparisons. A case study in which 30 regions in China are compared and ranked based on their performance in energy consumption is presented. - Highlights: • We conducted cross-regional comparisons of energy consumption using IDA. • We proposed two criteria for IDA method selection in spatial decomposition analysis. • We proposed a new model for regional comparison that passes the circularity test. • Features of the new model are illustrated using the data of 30 regions in China

  14. Digital Level Layers for Digital Curve Decomposition and Vectorization

    Directory of Open Access Journals (Sweden)

    Laurent Provot

    2014-07-01

    The purpose of this paper is to present Digital Level Layers and show the motivations for working with such analytical primitives in the framework of Digital Geometry. We first compare their properties to morphological and topological counterparts, and then we explain how to recognize them and use them to decompose or vectorize digital curves and contours.

  15. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time-consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  16. An Improved Dynamic Programming Decomposition Approach for Network Revenue Management

    OpenAIRE

    Dan Zhang

    2011-01-01

    We consider a nonlinear nonseparable functional approximation to the value function of a dynamic programming formulation for the network revenue management (RM) problem with customer choice. We propose a simultaneous dynamic programming approach to solve the resulting problem, which is a nonlinear optimization problem with nonlinear constraints. We show that our approximation leads to a tighter upper bound on optimal expected revenue than some known bounds in the literature. Our approach can ...

  17. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    Science.gov (United States)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to be more efficient than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.

  18. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.

  19. The Hierarchical Database Decomposition Approach to Database Concurrency Control.

    Science.gov (United States)

    1984-12-01

    approach, we postulate a model of transaction behavior under two-phase locking as shown in Figure 39(a) and a model of that under multiversion ... transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request

  20. A new approach for the beryl mineral decomposition: elemental characterisation using ICP-AES and FAAS

    International Nuclear Information System (INIS)

    Nathan, Usha; Premadas, A.

    2013-01-01

    A new approach for beryl mineral sample decomposition and solution preparation, suitable for elemental analysis using ICP-AES and FAAS, is described. For complete sample decomposition four different procedures are employed: (i) ammonium bifluoride alone; (ii) a mixture of ammonium bifluoride and ammonium sulphate; (iii) a powdered mixture of NaF and KHF2 in a 1:3 ratio; and (iv) acid digestion using a hydrofluoric acid and nitric acid mixture, with the residue fused with a powdered mixture of NaF and KHF2. Elements like Be, Al, Fe, Mn, Ti, Cr, Ca, Mg, and Nb are determined by ICP-AES, and Na, K, Rb and Cs are determined by FAAS. Fusion with 2 g of ammonium bifluoride flux alone is sufficient for the complete decomposition of a 0.400 g sample. The values obtained by this decomposition procedure agree well with those of the reported method. The accuracy of the proposed method was checked by analyzing synthetic samples prepared in the laboratory by mixing high-purity oxides with a chemical composition similar to natural beryl mineral. The results indicate that the accuracy of the method is very good, and the reproducibility is characterized by an RSD of 1 to 4% for the elements studied. (author)

  1. Simultaneously Exploiting Two Formulations: an Exact Benders Decomposition Approach

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Gamst, Mette; Spoorendonk, Simon

    When modelling a given problem using linear programming techniques, several possibilities often exist, and each results in a different mathematical formulation of the problem. Usually, advantages and disadvantages can be identified in any single formulation. In this paper we consider mixed integer...... to the standard branch-and-price approach from the literature, the method shows promising performance and appears to be an attractive alternative....

  2. An Approach to Operational Analysis: Doctrinal Task Decomposition

    Science.gov (United States)

    2016-08-04

    Once the unit is selected, CATS will output all of the doctrinal collective tasks associated with the unit. Currently, CATS outputs this information... Army unit are controlled data items, but for explanation purposes consider this simple example using a restaurant as the unit of interest. Table 1... shows an example Task Model for a restaurant using language and format similar to what CATS provides. Only 3 levels are shown in the example, but

  3. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
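    To illustrate the core idea with a simplified construction of our own (not the authors' code): build 1-D correlation matrices along each coordinate direction, truncate each with an eigen (EOF) decomposition, and assemble a separable 3-D correlation through Kronecker products, avoiding any direct decomposition of the full matrix:

```python
import numpy as np

def corr_1d(n, length_scale):
    """Gaussian correlation matrix along one coordinate direction."""
    x = np.arange(n, dtype=float)[:, None]
    return np.exp(-0.5 * ((x - x.T) / length_scale) ** 2)

def truncated(C, k):
    """Keep the k leading eigenpairs (an EOF-style truncation) of symmetric C."""
    w, V = np.linalg.eigh(C)
    return (V[:, -k:] * w[-k:]) @ V[:, -k:].T

# Low-rank 1-D correlations in the three coordinate directions
Cx, Cy, Cz = (truncated(corr_1d(n, 3.0), 4) for n in (16, 16, 8))
# Separable 3-D correlation assembled as a Kronecker product
C3d = np.kron(np.kron(Cx, Cy), Cz)
print(C3d.shape)   # (2048, 2048), built without decomposing a 3-D matrix directly
```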

  4. Application of Wavelet Decomposition to Removing Barometric and Tidal Response in Borehole Water Level

    Institute of Scientific and Technical Information of China (English)

    Yan Rui; Huang Fuqiong; Chen Yong

    2007-01-01

    Wavelet decomposition is used to analyze barometric fluctuations and the earth tidal response in borehole water level changes. We apply the wavelet analysis method to decompose the barometric fluctuation and earth tidal response into several temporal series in different frequency ranges. Barometric and tidal coefficients in the different frequency ranges are computed with the least squares method to remove the barometric and tidal responses. Comparing this method with the general linear regression method, we find that the wavelet analysis method can efficiently remove the barometric and earth tidal responses in borehole water level. The wavelet analysis method is based on wave and vibration theory. It considers not only the frequency characteristics of the observed data but also their temporal characteristics, and it yields barometric and tidal coefficients in different frequency ranges. This method has definite physical meaning.
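    A minimal sketch of the band-wise regression idea, assuming the PyWavelets package (pywt); the variable names and the choice of wavelet are ours, not from the paper:

```python
import numpy as np
import pywt

def remove_response(water_level, pressure, tide, wavelet="db4", nlevel=4):
    """Regress out barometric and tidal responses band by band, then reconstruct."""
    cw = pywt.wavedec(water_level, wavelet, level=nlevel)
    cp = pywt.wavedec(pressure, wavelet, level=nlevel)
    ct = pywt.wavedec(tide, wavelet, level=nlevel)
    corrected = []
    for w, p, t in zip(cw, cp, ct):
        X = np.column_stack([p, t])              # regressors in this frequency band
        beta, *_ = np.linalg.lstsq(X, w, rcond=None)
        corrected.append(w - X @ beta)           # band-wise corrected coefficients
    return pywt.waverec(corrected, wavelet)

rng = np.random.default_rng(0)
n = 1024
pressure = rng.standard_normal(n).cumsum()
tide = np.sin(2 * np.pi * np.arange(n) / 24.0)
water = 0.3 * pressure + 0.5 * tide + 0.01 * rng.standard_normal(n)
print(np.std(remove_response(water, pressure, tide)))  # far below np.std(water)
```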

  5. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys of bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  6. Design of tailor-made chemical blend using a decomposition-based computer-aided approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.

    2011-01-01

    Computer-aided techniques form an efficient approach to solving chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product...... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to types of chemicals...... and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition...

  7. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    Energy Technology Data Exchange (ETDEWEB)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.

  8. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed up the simulations. The speed-up comes from inexpensive, yet sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.

  10. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular, the slowly-decaying eigenmodes corresponding to largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.

  11. A Benders decomposition approach for a combined heat and power economic dispatch

    International Nuclear Information System (INIS)

    Abdolmohammadi, Hamid Reza; Kazemi, Ahad

    2013-01-01

    Highlights: • Benders decomposition algorithm to solve combined heat and power economic dispatch. • Decomposing the CHPED problem into a master problem and a subproblem. • Considering the non-convex heat-power feasible region efficiently. • Solving 4-unit and 5-unit systems with 2 and 3 co-generation units, respectively. • Obtaining better or equally good results in terms of objective values. - Abstract: Recently, cogeneration units have played an increasingly important role in the utility industry. The optimal utilization of multiple combined heat and power (CHP) systems is therefore an important optimization task in power system operation. Unlike power economic dispatch, which has a single equality constraint, two equality constraints must be met in the combined heat and power economic dispatch (CHPED) problem. Moreover, in cogeneration units, the power capacity limits are functions of the unit heat productions and the heat capacity limits are functions of the unit power generations. Thus, CHPED is a complicated optimization problem. In this paper, an algorithm based on Benders decomposition (BD) is proposed to solve the economic dispatch (ED) problem for cogeneration systems. In the proposed method, the combined heat and power economic dispatch problem is decomposed into a master problem and a subproblem. The subproblem generates the Benders cuts, and the master problem uses them as new inequality constraints added to the previous constraints. The iterative process continues until the upper and lower bounds of the optimal objective function value are close enough and a converged optimal solution is found. The Benders decomposition based approach provides a good framework for considering the non-convex feasible operating regions of cogeneration units efficiently. In this paper, a four-unit system with two cogeneration units and a five-unit system with three cogeneration units are analyzed to exhibit the effectiveness of the proposed approach. In all cases, the
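    The iterative scheme described in the abstract can be summarized in a short skeleton. This is purely illustrative: solve_master and solve_subproblem are hypothetical placeholders standing in for the CHPED-specific formulations, not functions from the paper.

```python
def benders(solve_master, solve_subproblem, tol=1e-6, max_iter=100):
    """Generic Benders loop: the master returns a lower bound and a trial solution;
    the subproblem evaluates the trial, giving an upper bound and a new cut."""
    cuts = []
    upper = float("inf")
    for _ in range(max_iter):
        lower, trial = solve_master(cuts)        # relaxed master with current cuts
        value, cut = solve_subproblem(trial)     # fix trial variables, derive a cut
        upper = min(upper, value)
        if upper - lower <= tol:                 # bounds close enough: converged
            break
        cuts.append(cut)                         # new inequality for the master
    return trial, lower, upper
```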

  12. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone conjugate gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  13. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  14. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Application results for the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
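    The essence of the approach: expand the ill-conditioned response matrix via the SVD and retain only the components that a reliability criterion admits. A minimal numpy sketch with our own names and toy data (the paper's criterion function is not reproduced here):

```python
import numpy as np

def svd_estimate(R, y, k):
    """Truncated-SVD solution of R s = y, keeping the k largest singular components."""
    U, sigma, Vt = np.linalg.svd(R, full_matrices=False)
    # Each retained component adds (u_j^T y / sigma_j) v_j to the spectral estimate
    coeffs = (U[:, :k].T @ y) / sigma[:k]
    return Vt[:k].T @ coeffs

# Ill-conditioned toy system: smooth, attenuation-like response matrix
n = 50
E = np.linspace(0.1, 1.0, n)                  # energies (arbitrary units)
mu = np.linspace(0.5, 5.0, n)[:, None]        # filtration thickness x attenuation
R = np.exp(-mu * E)                           # attenuation response at each energy
s_true = np.exp(-((E - 0.5) ** 2) / 0.02)     # hypothetical spectral distribution
y = R @ s_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
s_hat = svd_estimate(R, y, k=8)               # small k regularizes the inversion
print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```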

  15. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    Science.gov (United States)

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque into intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change muscle tone.

  16. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustration, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.

  17. Comparison of sugar molecule decomposition through glucose and fructose: a high-level quantum chemical study.

    Energy Technology Data Exchange (ETDEWEB)

    Assary, R. S.; Curtiss, L. A. (Center for Nanoscale Materials); ( MSD); (Northwestern Univ.)

    2012-02-01

    Efficient chemical conversion of biomass is essential to produce sustainable energy and industrial chemicals. Industrial level conversion of glucose to useful chemicals, such as furfural, hydroxymethylfurfural, and levulinic acid, is a major step in the biomass conversion but is difficult because of the formation of undesired products and side reactions. To understand the molecular level reaction mechanisms involved in the decomposition of glucose and fructose, we have carried out high-level quantum chemical calculations [Gaussian-4 (G4) theory]. Selective 1,2-dehydration, keto-enol tautomerization, isomerization, retro-aldol condensation, and hydride shifts of glucose and fructose molecules were investigated. Detailed kinetic and thermodynamic analyses indicate that, for acyclic glucose and fructose molecules, the dehydration and isomerization require larger activation barriers compared to the retro-aldol reaction at 298 K in neutral medium. The retro-aldol reaction results in the formation of C2 and C4 species from glucose and C3 species from fructose. The formation of the most stable C3 species, dihydroxyacetone from fructose, is thermodynamically downhill. The 1,3-hydride shift leads to the cleavage of the C-C bond in the acyclic species; however, the enthalpy of activation is significantly higher (50-55 kcal/mol) than that of the retro-aldol reaction (38 kcal/mol) mainly because of the sterically hindered distorted four-membered transition state compared to the hexa-membered transition state in the retro-aldol reaction. Both tautomerization and dehydration are catalyzed by a water molecule in aqueous medium; however, water has little effect on the retro-aldol reaction. Isomerization of glucose to fructose and glyceraldehyde to dihydroxyacetone proceeds through hydride shifts that require an activation enthalpy of about 40 kcal/mol at 298 K in water medium. This investigation maps out accurate energetics of the decomposition of glucose and fructose molecules

  18. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • A crisscross optimization algorithm is applied to train the extreme learning machine. • The proposed SHD-CSO-ELM outperforms previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility, so the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) the performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition; (b) the CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize the extreme learning machine; (c) the proposed approach has a great advantage over other previous hybrid models in terms of prediction accuracy.
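    For readers unfamiliar with the base learner, here is a minimal sketch of an extreme learning machine (random hidden layer, analytic least-squares output weights). The crisscross optimization of the hidden parameters used in the paper is omitted, and all names are ours:

```python
import numpy as np

class ELM:
    """Single-hidden-layer network: random input weights, analytic output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # random nonlinear feature map
        self.beta = np.linalg.pinv(H) @ y      # least-squares output layer
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# One-step-ahead forecasting of a toy series from 4 lagged values
rng = np.random.default_rng(1)
s = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
lags = 4
X = np.column_stack([s[i:len(s) - lags + i] for i in range(lags)])
y = s[lags:]
model = ELM().fit(X[:1500], y[:1500])
print(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))  # test MSE
```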

  19. A domain decomposition approach for full-field measurements based identification of local elastic parameters

    KAUST Repository

    Lubineau, Gilles

    2015-03-01

    We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.

  20. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    A measure of the Earth's gravity contains contributions from the solid Earth as well as climate-related phenomena, which cannot be easily distinguished in either time or space. After more than 7 years, the GRACE gravity data available now support more elaborate analyses of the time series. We propose an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in the GRACE literature shows a very good agreement for what concerns trends and periodic signals on one side...... The decomposition is then used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant and so time variable gravity data in Africa can be directly used to calibrate hydrological models.

  1. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE), and the results are compared and discussed. Then, on the basis of these analytic results, a method for choosing the decomposition level (DL) in wavelet threshold de-noising (WTD) is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing, which depends on "autocorrelations", the proposed method uses energy distributions to distinguish real signals from noise in noisy series; therefore the chosen DL is reliable, and the WTD results for time series can be improved.
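    A minimal sketch of the wavelet energy entropy computation, assuming the PyWavelets package (pywt); the threshold de-noising and DL-selection logic of the paper is not reproduced:

```python
import numpy as np
import pywt

def wavelet_energy_entropy(x, wavelet="db4", level=6):
    """WEE: Shannon entropy of the relative energies of the detail sub-bands."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail levels only
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)                          # energy spread over bands
signal = np.sin(2 * np.pi * 300 * np.arange(4096) / 4096)  # energy in one band
print(wavelet_energy_entropy(noise), wavelet_energy_entropy(signal))
# White noise gives high entropy; a narrow-band signal gives much lower entropy.
```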

  2. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  3. Dominant pole placement with fractional order PID controllers: D-decomposition approach.

    Science.gov (United States)

    Mandić, Petar D; Šekara, Tomislav B; Lazarević, Mihailo P; Bošković, Marko

    2017-03-01

    Dominant pole placement is a useful technique designed to deal with the problem of controlling a high-order or time-delay system with a low-order controller such as the PID controller. This paper solves this problem using the D-decomposition method. A straightforward analytic procedure makes this method extremely powerful and easy to apply. The technique is applicable to a wide range of transfer functions: with or without time-delay, rational and non-rational ones, and those describing distributed parameter systems. In order to control as many different processes as possible, a fractional order PID controller is introduced as a generalization of the classical PID controller. As a consequence, it provides additional parameters for better adjusting system performance. The design method presented in this paper tunes the parameters of the PID and fractional PID controller in order to obtain a good load disturbance response with a constraint on the maximum sensitivity and sensitivity to measurement noise. A good set point response is also one of the design goals of this technique. Numerous examples taken from the process industry are given, and the D-decomposition approach is compared with other PID optimization methods to show its effectiveness. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
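    For reference, the fractional order PID controller generalizes the classical PID transfer function with two extra tuning exponents (the standard PI^λD^μ form):

```latex
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad \lambda,\ \mu > 0,
```

    with λ = μ = 1 recovering the classical PID controller; λ and μ are the additional parameters mentioned in the abstract for adjusting system performance.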

  4. A demodulating approach based on local mean decomposition and its applications in mechanical fault diagnosis

    International Nuclear Information System (INIS)

    Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang

    2011-01-01

    Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical sense has become a key issue. Local mean decomposition (LMD) is a new kind of time–frequency analysis approach which can adaptively decompose a signal into a set of product function (PF) components. In this paper, a modulation feature extraction method based on LMD is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time–frequency representation (TFR). Modulation features can be extracted from the spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and the extrema-based IF processing method are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise has been added. As a result, the recommended critical SNRs for PF decomposition and IF extraction are given for practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capacity for modulation signal processing and is very suitable for failure detection in rotating machinery

  5. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determining the knock characteristics in SI engines. By adding uniformly distributed, finite white Gaussian noise, EEMD preserves signal continuity across scales and therefore alleviates the mode-mixing problem of the classic empirical mode decomposition (EMD). The feasibility of applying EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the results with those obtained by the short-time Fourier transform (STFT), the Wigner–Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)
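    A minimal EEMD sketch on a toy knock-like signal is given below, assuming the third-party PyEMD package is installed; the constructor arguments shown are that library's ensemble size and noise amplitude, not values from the paper.

```python
import numpy as np
from PyEMD import EEMD   # third-party PyEMD package (assumed installed)

fs = 10000.0
t = np.arange(0, 0.1, 1.0 / fs)
# Toy "knock-like" signal: a low-frequency pressure trace plus a short
# high-frequency burst and Gaussian noise.
burst = np.exp(-((t - 0.05) ** 2) / 1e-6) * np.sin(2 * np.pi * 6000 * t)
x = np.sin(2 * np.pi * 50 * t) + burst + 0.1 * np.random.randn(t.size)

eemd = EEMD(trials=100, noise_width=0.05)  # ensemble size, noise std
imfs = eemd.eemd(x, t)
# The knock signature is expected to concentrate in the first
# (highest-frequency) IMFs, while the slow pressure trace appears in
# the later ones.
```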

  6. A data-driven decomposition approach to model aerodynamic forces on flapping airfoils

    Science.gov (United States)

    Raiola, Marco; Discetti, Stefano; Ianiro, Andrea

    2017-11-01

    In this work, we exploit a data-driven decomposition of experimental data from a flapping airfoil experiment with the aim of isolating the main contributions to the aerodynamic force and obtaining a phenomenological model. Experiments are carried out on a NACA 0012 airfoil in forward flight with combined heaving and pitching motion. Velocity measurements of the near field are carried out with planar PIV, while force measurements are performed with a load cell. The phase-averaged velocity fields are transformed into the wing-fixed reference frame, allowing for a description of the field in a domain with fixed boundaries. The decomposition of the flow field is performed by means of POD applied to the velocity fluctuations and then extended to the phase-averaged force data by means of the extended POD approach. This choice is justified by the simple consideration that aerodynamic forces determine the largest contributions to the energetic balance in the flow field. Only the first 6 modes have a relevant contribution to the force. A clear relationship can be drawn between the force and the flow field modes. Moreover, the force modes are closely related (though slightly different) to the contributions of the classic potential models in the literature, allowing for their correction. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P.
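    A compact sketch of snapshot POD via the SVD, with an extended-POD-style projection of the force onto the temporal coefficients of the flow modes, is shown below; the random placeholder data and array shapes are assumptions, not the experiment's.

```python
import numpy as np

# Snapshot POD via SVD, with an Extended POD projection of the force.
# U: velocity fluctuation snapshots, shape (n_points, n_phases)
# f: phase-averaged force signal, shape (n_phases,)
rng = np.random.default_rng(0)
U = rng.standard_normal((500, 64))      # placeholder snapshot matrix
f = rng.standard_normal(64)             # placeholder force signal

Phi, sigma, Vt = np.linalg.svd(U, full_matrices=False)
a = np.diag(sigma) @ Vt                 # temporal coefficients, (r, n_phases)

# Extended POD: project the force onto the normalized temporal
# coefficients to get each flow mode's contribution to the force.
a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
force_modes = a_norm @ f                # per-mode force contribution
```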

  7. Tracking European Union CO2 emissions through LMDI (logarithmic-mean Divisia index) decomposition. The activity revaluation approach

    International Nuclear Information System (INIS)

    Fernández González, P.; Landajo, M.; Presno, M.J.

    2014-01-01

    Aggregate CO2 emitted to the atmosphere from a given region could be determined by monitoring several distinctive components. In this paper we propose five decomposition factors: population, production per capita, fuel mix, carbonization and energy intensity. The latter is commonly used as a proxy for energy efficiency. The problem arises when defining this concept, as there is little consensus among authors on how to measure energy intensity (using either physical or monetary activity indicators). In this paper we analyse several measurement possibilities, presenting and developing a number of approaches based on the LMDI (logarithmic-mean Divisia index) methodology to decompose changes in aggregate CO2 emissions. The resulting methodologies are the so-called MB (monetary based), IR (intensity refactorization) and AR (activity revaluation) approaches. We then apply these methodologies to analyse changes in carbon dioxide emissions in the EU (European Union) power sector, both as a whole and at the country level. Our findings show the strong impact of changes in the energy mix factor on aggregate CO2 emission levels, although a number of differences among countries are detected which lead to specific environmental recommendations. - Highlights: • New Divisia-based decomposition analysis removing price influence is presented. • We apply refined methodologies to decompose changes in CO2 emissions in the EU (European Union). • Changes in fuel mix appear as the main driving force in CO2 emissions reduction. • GDPpc growth becomes a direct contributor to emissions drop, especially in Western EU. • Innovation and technical change: less helpful tools when eliminating the price effect
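    For readers unfamiliar with the mechanics, a minimal additive LMDI sketch for a two-factor identity C = A · I (activity times intensity) is given below with illustrative numbers only; the logarithmic-mean weights make the factor effects sum exactly to the total change.

```python
import numpy as np

def lmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Additive LMDI-I for the identity C = A * I (activity * intensity).
A0, I0 = 100.0, 0.50      # base-year activity and emission intensity
A1, I1 = 130.0, 0.42      # target-year values
C0, C1 = A0 * I0, A1 * I1

w = lmean(C1, C0)                         # logarithmic-mean weight
act_effect = w * np.log(A1 / A0)          # activity effect
int_effect = w * np.log(I1 / I0)          # intensity effect
# The effects decompose the total change exactly:
assert np.isclose(act_effect + int_effect, C1 - C0)
```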

  8. Four-level time decomposition quasi-static power flow and successive disturbances analysis. [Power system disturbances]

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, S M [Nikola Tesla Inst., Belgrade (YU)

    1990-01-01

    This paper presents a model and an appropriate numerical procedure for a four-level time decomposition quasi-static power flow and successive disturbances analysis of power systems. The analysis consists of the sequential computation of the zero, primary, secondary and tertiary quasi-static states and of the estimation of successive structural disturbances during the 1200 s of dynamics following a structural disturbance. The model is developed by detailed inspection of the time decomposition characteristics of automatic protection and control devices. Adequate speed of the numerical procedure is attained by a specific application of the matrix inversion lemma and the constant coefficient matrices of the decoupled model. The four-level time decomposition quasi-static method is intended for security and emergency analysis. (author).

  9. Periodic oscillatory solution in delayed competitive-cooperative neural networks: A decomposition approach

    International Nuclear Information System (INIS)

    Yuan Kun; Cao Jinde

    2006-01-01

    In this paper, the problems of exponential convergence and exponential stability of the periodic solution for a general class of non-autonomous competitive-cooperative neural networks are analyzed via a decomposition approach. The idea is to divide the connection weights into inhibitory or excitatory types and thereby to embed a competitive-cooperative delayed neural network into an augmented cooperative delay system through a symmetric transformation. Some simple necessary and sufficient conditions are derived to ensure the componentwise exponential convergence and the exponential stability of the periodic solution of the considered neural networks. These results generalize and improve upon previous works, and they are easy to check and apply in practice.

  10. WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.

    Science.gov (United States)

    Debnath, Avijit; Bhattacharjee, Nairita

    2018-05-01

    Summary: Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempted to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The present study was based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months. Data were from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identified unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.
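    A minimal sketch of the concentration index that underlies such decompositions is shown below with toy data; the study itself uses an approximation of Erreygers' correction, which rescales this index for bounded health variables.

```python
import numpy as np

def concentration_index(health, wealth):
    """Wagstaff-style concentration index:
    CI = 2 * cov(h, r) / mean(h), where r is the fractional rank of
    individuals ordered from poorest to richest. CI < 0 means the
    outcome is concentrated among the poor."""
    order = np.argsort(wealth)
    h = np.asarray(health, float)[order]
    n = h.size
    rank = (np.arange(1, n + 1) - 0.5) / n     # fractional wealth rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

# Toy example: a binary outcome that is more common at low wealth.
rng = np.random.default_rng(1)
wealth = rng.random(1000)
outcome = (rng.random(1000) < 0.10 * (1.2 - wealth)).astype(float)
print(concentration_index(outcome, wealth))    # expected to be negative
```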

  11. Comparison of hybrid spectral-decomposition artificial neural network models for understanding climatic forcing of groundwater levels

    Science.gov (United States)

    Abrokwah, K.; O'Reilly, A. M.

    2017-12-01

    Groundwater is an important resource that is extracted every day for domestic, industrial and agricultural purposes. The need to sustain groundwater resources is clearly indicated by declining water levels and has motivated efforts to model and forecast groundwater levels accurately. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into approximation and detail wavelet coefficients at various levels, whilst the MWA acts as a low-pass filter for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models, and higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
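    A small sketch of the DWT preprocessing step is given below, assuming the PyWavelets package and a hypothetical daily rainfall series; the decomposition level is the tuning knob discussed above, and the resulting coefficients (or reconstructed subseries) would serve as ANN inputs.

```python
import numpy as np
import pywt  # PyWavelets (assumed installed)

# Hypothetical daily rainfall series; in the study, five NOAA station
# records would each be decomposed and fed to the ANN as inputs.
rng = np.random.default_rng(0)
rain = np.maximum(rng.normal(2.0, 3.0, 2048), 0.0)

# Multilevel DWT: one approximation (low-frequency trend) plus detail
# coefficients at each level.
level = 4
coeffs = pywt.wavedec(rain, wavelet="db4", level=level)
approx, details = coeffs[0], coeffs[1:]

# Reconstruct the smooth approximation at the original length to use
# as one ANN input feature (details zeroed out).
smooth = pywt.waverec([approx] + [np.zeros_like(d) for d in details], "db4")
```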

  12. Quantifying the effect of plant growth on litter decomposition using a novel, triple-isotope label approach

    Science.gov (United States)

    Ernakovich, J. G.; Baldock, J.; Carter, T.; Davis, R. A.; Kalbitz, K.; Sanderman, J.; Farrell, M.

    2017-12-01

    Microbial degradation of plant detritus is now accepted as a major stabilizing process of organic matter in soils. Most of our understanding of the dynamics of decomposition comes from laboratory litter decay studies in the absence of plants, despite the fact that litter decays in the presence of plants in many native and managed systems. There is growing evidence that living plants significantly impact the degradation and stabilization of litter carbon (C) due to changes in the chemical and physical nature of soils in the rhizosphere. For example, mechanistic studies have observed stimulatory effects of root exudates on litter decomposition, and greenhouse studies have shown that living plants accelerate detrital decay. Despite this, we lack a quantitative understanding of the contribution of living plants to litter decomposition and of how interactions of these two sources of C build soil organic matter (SOM). We used a novel triple-isotope approach to determine the effect of living plants on litter decomposition and C cycling. In the first stage of the experiment, we grew a temperate grass commonly used for forage, Poa labillardieri, in a continuously-labelled atmosphere of 14CO2 fertilized with K15NO3, such that the grass biomass was uniformly labelled with 14C and 15N. In the second stage, we constructed litter decomposition mesocosms with and without a living plant to test for the effect of a growing plant on litter decomposition. The 14C/15N litter was decomposed in a sandy clay loam while a temperate forage grass, Lolium perenne, grew in an atmosphere of enriched 13CO2. The fate of the litter-14C/15N and plant-13C was traced into soil mineral fractions and dissolved organic matter (DOM) over the course of nine weeks using four destructive harvests of the mesocosms. Our preliminary results suggest that living plants play a major role in the degradation of plant litter, as litter decomposition was greater, both in rate and absolute amount, for soil mesocosms

  13. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
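    A minimal truncated-SVD deconvolution sketch for a simulated tissue curve is shown below (illustrative AIF and residue function; the paper's contribution is the regression-based choice of the regularization level, which is replaced here by a fixed truncation threshold).

```python
import numpy as np

# Truncated-SVD deconvolution of a DCE tissue curve c = A @ r, where A
# is the lower-triangular convolution matrix built from the arterial
# input function (AIF). All values below are illustrative only.
dt = 1.0                                   # s, sampling interval
t = np.arange(0, 60, dt)
aif = (t / 8.0) * np.exp(-t / 8.0)         # gamma-variate-like AIF
r_true = np.exp(-t / 12.0)                 # true impulse residue function
A = dt * np.array([[aif[i - j] if i >= j else 0.0
                    for j in range(t.size)] for i in range(t.size)])
rng = np.random.default_rng(0)
c = A @ r_true + 0.01 * rng.standard_normal(t.size)   # noisy tissue curve

# TSVD: zero out singular values below a fraction of the largest one;
# this truncation threshold plays the role of the regularization
# parameter selected by regression in the paper.
U, s, Vt = np.linalg.svd(A)
thresh = 0.10 * s[0]
s_inv = np.where(s > thresh, 1.0 / s, 0.0)
r_est = Vt.T @ (s_inv * (U.T @ c))         # regularized residue estimate
```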

  14. Synthesis and Characterization of Sb2S3 Nanorods via Complex Decomposition Approach

    Directory of Open Access Journals (Sweden)

    Abdolali Alemi

    2011-01-01

    Based on the complex decomposition approach, a simple hydrothermal method has been developed for synthesizing Sb2S3 nanorods in high yield in 24 h at 150 °C. The powder X-ray diffraction pattern shows that the Sb2S3 crystals belong to the orthorhombic phase with calculated lattice parameters a = 1.120 nm, b = 1.128 nm, and c = 0.383 nm. Quantification of the energy dispersive X-ray spectrometric analysis peaks gives an atomic ratio of 2:3 for Sb:S. TEM and SEM studies reveal that the as-prepared Sb2S3 is rod-like, composed of nanorods with a typical width of 30–160 nm and lengths of up to 6 μm. High-resolution transmission electron microscopy (HRTEM) studies reveal that the Sb2S3 is oriented in the [10-1] growth direction. The band gap calculated from the absorption spectra is found to be 3.29 eV, indicating a considerable blue shift relative to the bulk. A formation mechanism of the Sb2S3 nanostructures is proposed.

  15. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current work in FC has focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions, which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs, which provides more accurate change-point detection and state summarization.
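    The subspace-distance idea can be sketched with plain NumPy: unfold each FCN tensor along one mode, take the leading left singular subspace, and compare subspaces via principal angles. The toy tensors and shapes below are assumptions; the paper works with a four-mode tensor and its low-rank approximation.

```python
import numpy as np

def mode_unfold(T, mode):
    """Unfold a tensor along one mode into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def subspace_distance(T1, T2, mode=0, rank=3):
    """Projection-metric distance between the rank-r left singular
    subspaces of two tensors unfolded along `mode`."""
    U1 = np.linalg.svd(mode_unfold(T1, mode), full_matrices=False)[0][:, :rank]
    U2 = np.linalg.svd(mode_unfold(T2, mode), full_matrices=False)[0][:, :rank]
    # Principal angles via singular values of U1^T U2.
    cosines = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), 0.0, 1.0)
    return np.sqrt(rank - np.sum(cosines ** 2))

# Toy FCN tensors (channels x channels x subjects) at two time points;
# a large distance would flag a candidate change point.
rng = np.random.default_rng(0)
d = subspace_distance(rng.standard_normal((32, 32, 10)),
                      rng.standard_normal((32, 32, 10)))
```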

  16. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Science.gov (United States)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of the water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPFs) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk to potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  17. A Distributed Approach to System-Level Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  18. Partial information decomposition as a unified approach to the specification of neural goal functions.

    Science.gov (United States)

    Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A

    2017-03-01

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and it also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that

  19. An additive matrix preconditioning method with application for domain decomposition and two-level matrix partitionings

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2010-01-01

    Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com

  20. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    Science.gov (United States)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and can be controlled through a constraint on the so-called coherences, and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than the performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. The decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms compared to others in the literature.
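    A minimal CP decomposition sketch on a synthetic rank-3 tensor is shown below, assuming the third-party TensorLy package; the normalized factors plus a separate weights vector mirror the isolated-scaling formulation described above, though the library's ALS-based solver differs from the paper's gradient algorithms.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac  # TensorLy assumed installed

# Build a rank-3 synthetic third-order tensor (e.g., received DS-CDMA
# data indexed by antenna x symbol x chip) and recover its CP factors.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((d, 3)) for d in (8, 50, 16))
T = tl.cp_to_tensor((np.ones(3), [A, B, C]))

weights, factors = parafac(tl.tensor(T), rank=3, normalize_factors=True)
# `factors` estimates [A, B, C] up to the usual permutation and scaling
# ambiguities; `weights` collects the scaling separated from the
# unit-norm factor columns.
```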

  1. A novel thermal decomposition approach for the synthesis of silica-iron oxide core–shell nanoparticles

    International Nuclear Information System (INIS)

    Kishore, P.N.R.; Jeevanandam, P.

    2012-01-01

    Highlights: ► Silica-iron oxide core–shell nanoparticles have been synthesized by a novel thermal decomposition approach. ► The silica-iron oxide core–shell nanoparticles are superparamagnetic at room temperature. ► The silica-iron oxide core–shell nanoparticles serve as a good photocatalyst for the degradation of Rhodamine B. - Abstract: A simple thermal decomposition approach for the synthesis of magnetic nanoparticles consisting of silica as the core and iron oxide nanoparticles as the shell is reported. The iron oxide nanoparticles were deposited on the silica spheres (mean diameter = 244 ± 13 nm) by the thermal decomposition of iron(III) acetylacetonate in diphenyl ether in the presence of SiO2. The core–shell nanoparticles were characterized by X-ray diffraction, infrared spectroscopy, field emission scanning electron microscopy coupled with energy dispersive X-ray analysis, transmission electron microscopy, diffuse reflectance spectroscopy, and magnetic measurements. The results confirm the presence of iron oxide nanoparticles on the silica core. The core–shell nanoparticles are superparamagnetic at room temperature, indicating the presence of iron oxide nanoparticles on silica. The core–shell nanoparticles have been demonstrated to be a good photocatalyst for the degradation of Rhodamine B.

  2. Decomposition of environmentally persistent perfluorooctanoic acid in water by photochemical approaches.

    Science.gov (United States)

    Hori, Hisao; Hayakawa, Etsuko; Einaga, Hisahiro; Kutsuna, Shuzo; Koike, Kazuhide; Ibusuki, Takashi; Kiatagawa, Hiroshi; Arakawa, Ryuichi

    2004-11-15

    The decomposition of persistent and bioaccumulative perfluorooctanoic acid (PFOA) in water by UV-visible light irradiation, by H2O2 with UV-visible light irradiation, and by a tungstic heteropolyacid photocatalyst was examined to develop a technique to counteract stationary sources of PFOA. Direct photolysis proceeded slowly to produce CO2, F-, and short-chain perfluorocarboxylic acids. Compared to direct photolysis, H2O2 was less effective in PFOA decomposition. On the other hand, the heteropolyacid photocatalyst led to efficient PFOA decomposition and the production of F- ions and CO2. The photocatalyst also suppressed the accumulation of short-chain perfluorocarboxylic acids in the reaction solution. PFOA in concentrations of 0.34-3.35 mM, typical of those in wastewaters after an emulsifying process in fluoropolymer manufacture, was completely decomposed by the catalyst within 24 h of irradiation from a 200-W xenon-mercury lamp, with no accompanying catalyst degradation, permitting the catalyst to be reused in consecutive runs. Gas chromatography/mass spectrometry (GC/MS) measurements showed no trace of environmentally undesirable species such as CF4, which has a very high global-warming potential. When the (initial PFOA)/(initial catalyst) molar ratio was 10:1, the turnover number for PFOA decomposition reached 4.33 over 24 h of irradiation.

  3. Numerical difficulties associated with using equality constraints to achieve multi-level decomposition in structural optimization

    Science.gov (United States)

    Thareja, R.; Haftka, R. T.

    1986-01-01

    There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.

  4. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images.Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed.Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  5. A 3D domain decomposition approach for the identification of spatially varying elastic material parameters

    KAUST Repository

    Moussawi, Ali

    2015-02-24

    Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.

  6. Decomposition of Changes in Earnings Inequality in China: A Distributional Approach

    OpenAIRE

    Chi, Wei; Li, Bo; Yu, Qiumei

    2007-01-01

    Using the nationwide household data, this study examines the changes in the Chinese urban income distributions from 1987 to 1996 and from 1996 to 2004, and investigates the causes of these changes. The Oaxaca-Blinder decomposition method is applied to decomposing the mean earnings increases, and the Firpo-Fortin-Lemieux method based upon a recentered influence function is used to decompose the changes in the income distribution and the inequality measures such as the variance and the 10-90 r...
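    A minimal two-fold Oaxaca-Blinder sketch on synthetic earnings data is shown below (the education covariate and all parameters are hypothetical); with intercepts in both regressions, the explained and unexplained parts sum exactly to the mean gap.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Two-fold Oaxaca-Blinder decomposition of a mean earnings gap:
#   gap = (xbar_A - xbar_B) @ beta_B   (explained / endowments)
#       +  xbar_A @ (beta_A - beta_B)  (unexplained / coefficients)
rng = np.random.default_rng(0)
n = 2000

def make_group(edu_mean, ret_edu):
    edu = rng.normal(edu_mean, 2.0, n)
    X = np.column_stack([np.ones(n), edu])          # intercept + education
    y = X @ np.array([1.0, ret_edu]) + rng.normal(0, 0.5, n)
    return X, y

XA, yA = make_group(12.0, 0.10)   # period/group A
XB, yB = make_group(10.0, 0.07)   # period/group B
bA, bB = ols(XA, yA), ols(XB, yB)
gap = yA.mean() - yB.mean()
explained = (XA.mean(0) - XB.mean(0)) @ bB
unexplained = XA.mean(0) @ (bA - bB)
assert np.isclose(gap, explained + unexplained)
```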

  7. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems involve large and complicated design systems. Since multidisciplinary problems have hundreds of analyses and thousands of variables, the grouping of the analyses and the order of the analyses within each group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization approaches such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method.

  8. A multi-sectoral decomposition analysis of city-level greenhouse gas emissions: Case study of Tianjin, China

    International Nuclear Information System (INIS)

    Kang, Jidong; Zhao, Tao; Liu, Nan; Zhang, Xin; Xu, Xianshuo; Lin, Tao

    2014-01-01

    To better understand how city-level greenhouse gas (GHG) emissions have evolved, we performed a multi-sectoral decomposition analysis to disentangle the GHG emissions in Tianjin from 2001 to 2009. Five sectors were considered, including the agricultural, industrial, transportation, commercial and other sectors. An industrial sub-sector decomposition analysis was further performed in the six high-emission industrial branches. The results show that, for all five sectors in Tianjin, economic growth was the most important factor driving the increase in emissions, while energy efficiency improvements were primarily responsible for the decrease in emissions. In comparison, the influences from energy mix shift and emission coefficient changes were relatively marginal. The disaggregated decomposition in the industry further revealed that energy efficiency improvement has been widely achieved in the industrial branches, which was especially true for the Smelting and Pressing of Ferrous Metals and Chemical Raw Materials and Chemical Products sub-sectors. However, the energy efficiency declined in a few branches, e.g., Petroleum Processing and Coking Products. Moreover, the increased emissions related to industrial structure shift were primarily due to the expansion of Smelting and Pressing of Ferrous Metals; its share in the total industry output increased from 5.62% to 16.1% during the examined period. - Highlights: • We perform the LMDI analysis on the emissions in five sectors of Tianjin. • Economic growth was the most important factor for the emissions increase. • Energy efficiency improvements mainly contributed to the emission decrease. • Negative energy intensity effect was observed in most of the industrial sub-sectors. • Industrial structure change largely resulted in emission increase

  9. Mechanistic approach for the kinetics of the decomposition of nitrous oxide over calcined hydrotalcites

    Energy Technology Data Exchange (ETDEWEB)

    Dandl, H.; Emig, G. [Lehrstuhl fuer Technische Chemie I, Erlangen (Germany)

    1998-03-27

    A highly active catalyst for the decomposition of N2O was prepared by the thermal treatment of CoLaAl-hydrotalcite. For this catalyst the reaction rate was determined at various partial pressures of N2O, O2 and H2O in a temperature range from 573 K to 823 K. The kinetic simulation resulted in a mechanistic model. The energies of activation and rate coefficients are estimated for the main steps of the reaction.

  10. Single step thermal decomposition approach to prepare supported γ-Fe2O3 nanoparticles

    International Nuclear Information System (INIS)

    Sharma, Geetu; Jeevanandam, P.

    2012-01-01

    γ-Fe2O3 nanoparticles supported on MgO (macro-crystalline and nanocrystalline) were prepared by an easy single-step thermal decomposition method. Thermal decomposition of iron acetylacetonate in diphenyl ether in the presence of the supports, followed by calcination, leads to iron oxide nanoparticles supported on MgO. The X-ray diffraction results indicate the stability of the γ-Fe2O3 phase on MgO (macro-crystalline and nanocrystalline) up to 1150 °C. The scanning electron microscopy images show that the supported iron oxide nanoparticles are agglomerated, while the energy dispersive X-ray analysis indicates the presence of iron, magnesium and oxygen in the samples. Transmission electron microscopy images indicate the presence of smaller γ-Fe2O3 nanoparticles on nanocrystalline MgO. The magnetic properties of the supported magnetic nanoparticles at various calcination temperatures (350-1150 °C) were studied using a superconducting quantum interference device, which indicates superparamagnetic behavior.

  11. Approaches to understanding the semi-stable phase of litter decomposition

    Science.gov (United States)

    Preston, C. M.; Trofymow, J. A.

    2012-12-01

    The slowing or even apparent cessation of litter decomposition with time has been widely observed, but the causes remain poorly understood. We examine the question in part through data from CIDET (the Canadian Intersite Decomposition Experiment) for 10 foliar litters at one site with a mean annual temperature of 6.7 °C. The initial rapid C loss in the first year for some litters is followed by a second phase (1-7 y) with decay rates of 0.21-0.79/y, influenced by initial litter chemistry, especially the ratio AUR/N (acid-unhydrolyzable residue; negative effect). By contrast, 10-23% of the initial litter C mass entered the semi-stable decay phase (>7 y), with modeled decay rates of 0.0021-0.0035/y. The slowing and convergence of k values was similar to trends in chemical composition. From 7-12 y, concentrations of Ca, Mg, K, P, Mn and Zn generally declined and became more similar among litters, and total N converged around 20 mg/g. Non-polar and water-soluble extractables and acid solubles continued to decrease slowly, and AUR to increase. Solid-state 13C NMR showed continuing slight declines in O- and di-O-alkyl C and increases in alkyl, methoxyl, aryl and carboxyl C. CIDET and other studies now clearly show that lignin is not selectively preserved, and that AUR is not a measure of foliar lignin as it includes components from condensed tannins and long-chain alkyl C. Interaction with soil minerals strongly enhances soil C stabilization, but what slows decomposition so much in organic horizons? The role of inherent "chemical recalcitrance" or possible formation of new covalent bonds is hotly debated in soil science, but increasingly complex or random molecular structures no doubt present greater challenges to enzymes. A relevant observation from soils and geochemistry is that decomposition results in a decline in individual compounds that can be identified from chemical analysis and a corresponding increase in the "molecularly uncharacterizable component" (MUC). Long-term declines in Ca, Mg, K

  12. An architectural approach to level design

    CERN Document Server

    Totten, Christopher W

    2014-01-01

    Explore Level Design through the Lens of Architectural and Spatial Experience Theory. Written by a game developer and professor trained in architecture, An Architectural Approach to Level Design is one of the first books to integrate architectural and spatial design theory with the field of level design. It explores the principles of level design through the context and history of architecture, providing information useful to both academics and game development professionals. Understand Spatial Design Principles for Game Levels in 2D, 3D, and Multiplayer Applications. The book presents architectura

  13. A Dual Decomposition Approach to Partial Crosstalk Cancelation in a Multiuser DMT-xDSL Environment

    Directory of Open Access Journals (Sweden)

    Verlinden Jan

    2007-01-01

    In modern DSL systems, far-end crosstalk is a major source of performance degradation. Crosstalk cancelation schemes have been proposed to mitigate the effect of crosstalk. However, the complexity of crosstalk cancelation grows with the square of the number of lines in the binder. Fortunately, most of the crosstalk originates from a limited number of lines and, for DMT-based xDSL systems, on a limited number of tones. As a result, a fraction of the complexity of full crosstalk cancelation suffices to cancel most of the crosstalk. The challenge is then to determine which crosstalk to cancel on which tones, given a complexity constraint. This paper presents an algorithm based on a dual decomposition to optimally solve this problem. The proposed algorithm naturally incorporates rate constraints, and its complexity compares favorably to a known resource allocation algorithm, where a multiuser extension is made to incorporate the rate constraints.
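    A toy dual-decomposition sketch of the budgeted selection problem is given below (the rate gains and costs are illustrative, not DSL data): dualizing the complexity budget with a price λ makes the per-tone subproblems independent, and a subgradient update drives total complexity toward the budget.

```python
import numpy as np

# Dual decomposition for budgeted crosstalk cancelation: choose on each
# tone the canceler option maximizing total rate gain subject to a
# global complexity budget.
rng = np.random.default_rng(0)
K, OPTIONS = 256, 4                        # tones, candidate sets per tone
gain = rng.random((K, OPTIONS)) * np.arange(OPTIONS)  # rate gain per option
cost = np.arange(OPTIONS, dtype=float)                # complexity per option
BUDGET = 150.0

lam, step = 1.0, 0.05
for _ in range(500):                       # subgradient ascent on the dual
    choice = np.argmax(gain - lam * cost, axis=1)     # per-tone subproblem
    used = cost[choice].sum()
    lam = max(0.0, lam + step * (used - BUDGET))      # price update
# choice[k] is the (near-)optimal cancelation option for tone k under
# the final complexity price lam.
```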

  14. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    Science.gov (United States)

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.

  15. Healthcare Expenditures Associated with Depression Among Individuals with Osteoarthritis: Post-Regression Linear Decomposition Approach.

    Science.gov (United States)

    Agarwal, Parul; Sambamoorthi, Usha

    2015-12-01

    Depression is common among individuals with osteoarthritis and leads to increased healthcare burden. The objective of this study was to examine the excess total healthcare expenditures associated with depression among individuals with osteoarthritis in the US. Adults with self-reported osteoarthritis (n = 1881) were identified using data from the 2010 Medical Expenditure Panel Survey (MEPS). Among those with osteoarthritis, chi-square tests and ordinary least squares (OLS) regressions were used to examine differences in healthcare expenditures between those with and without depression. A post-regression linear decomposition technique was used to estimate the relative contribution of the different constructs of Andersen's behavioral model, i.e., predisposing, enabling, need, personal healthcare practices, and external environment factors, to the excess expenditures associated with depression among individuals with osteoarthritis. All analyses accounted for the complex survey design of MEPS. Depression coexisted in 20.6 % of adults with osteoarthritis. The average total healthcare expenditures were $13,684 among adults with depression compared to $9284 among those without depression. Multivariable OLS regression revealed that adults with depression had 38.8 % higher healthcare expenditures; post-regression linear decomposition analysis indicated that 50 % of the difference in expenditures between adults with and without depression can be explained by differences in need factors. Among individuals with coexisting osteoarthritis and depression, excess healthcare expenditures associated with depression were mainly due to comorbid anxiety, chronic conditions and poor health status. These expenditures may potentially be reduced by providing timely intervention for need factors or by providing care under a collaborative care model.

  16. What drives the change in China's energy intensity: Combining decomposition analysis and econometric analysis at the provincial level

    International Nuclear Information System (INIS)

    Song, Feng; Zheng, Xinye

    2012-01-01

    We employ decomposition analysis and econometric analysis to investigate the driving forces behind China's changing energy intensity using a provincial-level panel data set for the period from 1995 to 2009. The decomposition analysis indicates that: (a) all but a few provinces experienced efficiency improvement, while around three-fourths of the provinces' economies became more energy intensive or remained unchanged; (b) consequently, efficiency improvement accounts for more than 90% of China's energy intensity change, as opposed to economic structural change. The econometric analysis shows that rising income plays a significant role in the reduction of energy intensity, while the effect of energy price is relatively limited. The result may reflect the urgency of deregulating prices and establishing a market-oriented pricing system in China's energy sector. The implementation of the energy intensity reduction policies in the Eleventh Five-Year Plan (FYP) has helped reverse the increasing trend of energy intensity since 2002. Although the Chinese Government intended to change the industry-led economic growth pattern, it seems that most of the policy effects flow through efficiency improvement as opposed to economic structure adjustment. More fundamental changes to the economic structure are needed to achieve more sustainable progress in energy intensity reduction. - Highlights: ► We examine the determinants of China's energy intensity change at the provincial level. ► Rising income plays a significant role in reducing China's energy intensity. ► Policy effects mainly flow through efficiency improvement. ► Fundamental structural changes are needed to further reduce China's energy intensity.

  17. Understanding determinants of unequal distribution of stillbirth in Tehran, Iran: a concentration index decomposition approach.

    Science.gov (United States)

    Almasi-Hashiani, Amir; Sepidarkish, Mahdi; Safiri, Saeid; Khedmati Morasae, Esmaeil; Shadi, Yahya; Omani-Samani, Reza

    2017-05-17

    The present inquiry set out to determine the economic inequality in history of stillbirth and to understand the determinants of the unequal distribution of stillbirth in Tehran, Iran. A population-based cross-sectional study was conducted on 5170 pregnancies in Tehran, Iran, in 2015. Principal component analysis (PCA) was applied to measure asset-based economic status. The concentration index was used to measure socioeconomic inequality in stillbirth and was then decomposed into its determinants. The concentration index and its 95% CI for stillbirth was -0.121 (-0.235 to -0.002). Decomposition of the concentration index showed that mother's education (50%), mother's occupation (30%), economic status (26%) and father's age (12%) had the highest positive contributions to the measured inequality in stillbirth history in Tehran. Mother's age (17%) had the highest negative contribution to inequality. Stillbirth is unequally distributed among Iranian women and is mostly concentrated among people of low economic status. Mother-related factors had the highest positive and negative contributions to inequality, highlighting the need for specific interventions for mothers to redress inequality. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. A new approach for crude oil price analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shou-Yang; Lai, K.K.

    2008-01-01

    The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most fail to produce consistently good results. Empirical mode decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent and concretely interpretable intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or other market activities, the effect of a shock of a significant event, and a long-term trend. Finally, EEMD is shown to be a vital technique for crude oil price analysis. (author)

  19. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  20. APPROACHES TO QUALITY MANAGEMENT AT EUROPEAN LEVEL

    Directory of Open Access Journals (Sweden)

    Salagean Horatiu Catalin

    2013-07-01

    In the current economic context, quality has become a source of competitive advantage and organizations must perceive quality as something natural and human in order to achieve excellence. A proper question in the context of the internationalisation of the economy is whether the culture of regions, states or nations affects development in the quality management field and the quality approach. The present study offers a theoretical approach to how culture influences the quality approach at the European level. The study deals only with the European quality approach, because at the European level one encounters a great variety of models and methodologies. In the U.S.A. and Japan one can identify a specific cultural approach regarding quality. At the European level, we cannot speak in the same terms, because each country has different cultural specifics in terms of quality. In order to determine the cultural specificity of the countries surveyed, the study used the most popular analysis tool of cultural dimensions, namely the model of the Dutch professor Geert Hofstede. The model illustrates, according to a survey, the organizational behavior of several countries and was able to identify a set of variables and fundamental dimensions that differentiate one culture from another. An attempt was made to see if there are connections between the values of Hofstede's cultural dimensions and the quality characteristics in the analysed countries. The study describes, on the one hand, the evolution of quality from quality control to Total Quality Management and, on the other hand, focuses on the modalities of the quality approach at the European level. The second part of the paper is structured into two parts, addressing the quality approach in countries of Western Europe, such as the United Kingdom, France and Germany, because these three countries are considered to be the exponents of quality development in Western Europe. At the same time, the

  1. Foreign Policy: Approaches, Levels Of Analysis, Dimensions

    OpenAIRE

    Nina Šoljan

    2012-01-01

    This paper provides an overview of key issues related to foreign policy and foreign policy theories in the wider context of political science. Discussing the origins and development of foreign policy analysis (FPA), as well as scholarly work produced over time, it argues that today FPA encompasses a variety of theoretical approaches, models and tools. These share the understanding that foreign policy outputs cannot be fully explained if analysis is confined to the systemic level. Furthermore,...

  2. A solution approach to the ROADEF/EURO 2010 challenge based on Benders' Decomposition

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    them satisfy the constraints not part of the mixed integer program. A number of experiments are performed on the available benchmark instances. These experiments show that the approach is competitive on the smaller instances, but not for the larger ones. We believe the exact approach gives insight into the problem and additionally makes it possible to find lower bounds on the problem, which is typically not the case for the competing heuristics.

  3. Developing State Level Approaches under the State Level Concept

    International Nuclear Information System (INIS)

    Budlong Sylvester, K.; Murphy, C.L.; Boyer, B.; Pilat, J.F.

    2015-01-01

    With the pursuit of the State-Level Concept (SLC), the IAEA has sought to further evolve the international safeguards system in a manner which maintains (or improves) the effectiveness of the system in an environment of expanding demands and limited resources. The IAEA must not remain static and should continuously examine its practices to ensure it can capture opportunities for cost reductions while adapting to, and staying ahead of, emerging proliferation challenges. Contemporary safeguards have been focused on assessing the nuclear programme of the State as a whole, rather than on the basis of individual facilities. Since the IAEA's integrated safeguards program, State-Level Approaches (SLAs) have been developed that seek to optimally combine the measures provided for by the Additional Protocol with those of traditional safeguards. This process resulted in facility-specific approaches that, while making use of a State's broader conclusion, were nonetheless prescriptive. Designing SLAs on a State-by-State basis would avoid the shortcomings of a one-size-fits-all system. It would also enable the effective use of the Agency's information analysis and State evaluation efforts by linking this analysis to safeguards planning efforts. Acquisition Path Analysis (APA), along with the State evaluation process, can be used to prioritize paths in a State in terms of their attractiveness for proliferation. While taking advantage of all safeguards-relevant information, and tailoring safeguards to the individual characteristics of the State, paths of the highest priority in all States will necessarily meet the same standard of coverage. Similarly, lower priority paths will have lower performance targets, thereby promoting nondiscrimination. Such an approach would improve understanding of safeguards implementation under the SLC and the rationale for safeguards resource allocation. The potential roles of APA and performance targets in SLA development are reviewed.

  4. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    Science.gov (United States)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets such as those that occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the Geosciences is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for the model discrimination are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
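
    The core TT construction can be sketched compactly. Below is a minimal NumPy implementation of the standard TT-SVD scheme (sequential truncated SVDs of the tensor unfoldings); the toy tensor and truncation tolerance are illustrative assumptions and are unrelated to the channel-flow data above.

        import numpy as np

        def tt_svd(tensor, rel_tol=1e-10):
            """Decompose an n-way array into Tensor-Train cores by sequential SVDs."""
            shape = tensor.shape
            cores, rank = [], 1
            mat = tensor.reshape(shape[0], -1)
            for k in range(len(shape) - 1):
                mat = mat.reshape(rank * shape[k], -1)
                u, s, vt = np.linalg.svd(mat, full_matrices=False)
                new_rank = max(1, int(np.sum(s > rel_tol * s[0])))  # rank truncation
                cores.append(u[:, :new_rank].reshape(rank, shape[k], new_rank))
                mat = s[:new_rank, None] * vt[:new_rank]
                rank = new_rank
            cores.append(mat.reshape(rank, shape[-1], 1))
            return cores

        def tt_reconstruct(cores):
            """Contract the TT cores back into the full tensor."""
            out = cores[0]
            for core in cores[1:]:
                out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
            return out.reshape(out.shape[1:-1])

        # A smooth, hence TT-compressible, toy tensor: the cores hold far fewer
        # entries than the 8**4 = 4096 values of the full array.
        x = np.fromfunction(lambda i, j, k, l: np.sin(i + j + k + l), (8, 8, 8, 8))
        assert np.allclose(tt_reconstruct(tt_svd(x)), x)

    The compact-storage claim above comes precisely from this structure: a d-way tensor with n points per axis and maximal TT-rank r needs roughly O(d n r^2) stored entries instead of n^d.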

  5. CONFAC Decomposition Approach to Blind Identification of Underdetermined Mixtures Based on Generating Function Derivatives

    NARCIS (Netherlands)

    de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre

    This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the

  6. Network-constrained AC unit commitment under uncertainty: A Benders' decomposition approach

    DEFF Research Database (Denmark)

    Nasri, Amin; Kazempour, Seyyedjalal; Conejo, Antonio J.

    2015-01-01

    . The proposed model is formulated as a two-stage stochastic programming problem, whose first-stage refers to the day-ahead market, and whose second-stage represents real-time operation. The proposed Benders’ approach allows decomposing the original problem, which is mixed-integer nonlinear and generally...... intractable, into a mixed-integer linear master problem and a set of nonlinear, but continuous subproblems, one per scenario. In addition, to temporally decompose the proposed ac unit commitment problem, a heuristic technique is used to relax the inter-temporal ramping constraints of the generating units...

  7. Regional level approach for increasing energy efficiency

    International Nuclear Information System (INIS)

    Viholainen, Juha; Luoranen, Mika; Väisänen, Sanni; Niskanen, Antti; Horttanainen, Mika; Soukka, Risto

    2016-01-01

    Highlights: • Comprehensive snapshot of regional energy system for decision makers. • Connecting regional sustainability targets and energy planning. • Involving local players in energy planning. - Abstract: Actions for increasing the renewable share in the energy supply and improving both production and end-use energy efficiency are often built into regional level sustainability targets. Because of this, many local stakeholders such as local governments, energy producers and distributors, industry, and public and private sector operators require information on the current state and development aspects of regional energy efficiency. The drawback is that an overall view of the focal energy system operators, their energy interests, and future energy service needs in the region is often not available to the stakeholders. To support local energy planning and the management of regional energy services, an approach for increasing regional energy efficiency is introduced. The presented approach can be seen as a solid framework for gathering the required data for energy efficiency analysis and for evaluating the energy system development, planned improvement actions, and the required energy services in the region. This study defines the theoretical structure of the energy efficiency approach and the steps required to reveal energy system improvement actions that support the regional energy plan. To demonstrate the use of the approach, a case study of the Finnish small town of Lohja is presented. In the case example, possible actions linked to the regional energy targets were evaluated with energy efficiency analysis. The results of the case example are system specific, but the conducted study can be seen as a justified example of generating easily attainable and transparent information on the impacts of different improvement actions on a regional energy system.

  8. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve

  9. Tailor-made Design of Chemical Blends using Decomposition-based Computer-aided Approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Manan, Zainuddin Abd.; Gernaey, Krist

    (properties). In this way, first the systematic computer-aided technique establishes the search space, and then narrows it down in subsequent steps until a small number of feasible and promising candidates remain and then experimental work may be conducted to verify if any or all the candidates satisfy......Computer aided technique is an efficient approach to solve chemical product design problems such as design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product attributes...... is decomposed into two stages. The first stage investigates the mixture stability where all unstable mixtures are eliminated and the stable blend candidates are retained for further testing. In the second stage, the blend candidates have to satisfy a set of target properties that are ranked according...

  10. Chemical decomposition of high-level nuclear waste storage/disposal glasses under irradiation. 1997 annual progress report

    International Nuclear Information System (INIS)

    Griscom, D.L.; Merzbacher, C.I.

    1997-01-01

    'The objective of this research is to use the sensitive technique of electron spin resonance (ESR) to look for evidence of radiation-induced chemical decomposition of vitreous forms contemplated for immobilization of plutonium and/or high-level nuclear wastes, to interpret this evidence in terms of existing knowledge of glass structure, and to recommend certain materials for further study by other techniques, particularly electron microscopy and measurements of gas evolution by high-vacuum mass spectroscopy. Previous ESR studies had demonstrated that one effect of γ rays on a simple binary potassium silicate glass was to induce superoxide (O₂⁻) and ozonide (O₃⁻) as relatively stable products of long-term irradiation. Accordingly, some of the first experiments performed as a part of the present effort involved repeating this work. A glass of composition 44 K₂O : 56 SiO₂ was prepared from reagent grade K₂CO₃ and SiO₂ powders melted in a Pt crucible in air at 1,200 °C for 1.5 hr. A sample irradiated to a dose of 1 MGy (1 MGy = 10⁸ rad) indeed yielded the same ESR results as before. To test the notion that the complex oxygen ions detected may be harbingers of radiation-induced phase separation or bubble formation, a small-angle neutron scattering (SANS) experiment was performed. SANS is theoretically capable of detecting voids or bubbles as small as 10 Å in diameter. A preliminary experiment was carried out with the collaboration of Dr. John Barker (NIST). The SANS spectra for the irradiated and unirradiated samples were indistinguishable. A relatively high incoherent background (probably due to the presence of protons) may obscure scattering from small gas bubbles and therefore decrease the effective resolution of this technique. No further SANS experiments are planned at this time.'

  11. Oxidative degradation of low and intermediate level Radioactive organic wastes 2. Acid decomposition on spent Ion-Exchange resins

    International Nuclear Information System (INIS)

    Ghattas, N.K.; Eskander, S.B.

    1995-01-01

    The present work provides a simplified, effective and economic method for the chemical decomposition of radioactively contaminated solid organic waste, especially spent ion-exchange resins. The goal is to achieve volume reduction and to avoid the technical problems encountered in processes used for similar purposes (incineration, pyrolysis). Factors affecting the efficiency and kinetics of the oxidation of the ion-exchange resins in acid medium using hydrogen peroxide as oxidant, namely the duration of treatment and the acid-to-resin ratio, were studied systematically on a laboratory scale. Moreover, the percent composition of the off-gas evolved during the decomposition process was analysed. 3 figs., 5 tabs

  12. Oxidative degradation of low and intermediate level Radioactive organic wastes 2. Acid decomposition on spent Ion-Exchange resins

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, N K; Eskander, S B [Radioisotope dept., atomic energy authority, (Egypt)

    1995-10-01

    The present work provides a simplified, effective and economic method for the chemical decomposition of radioactively contaminated solid organic waste, especially spent ion-exchange resins. The goal is to achieve volume reduction and to avoid the technical problems encountered in processes used for similar purposes (incineration, pyrolysis). Factors affecting the efficiency and kinetics of the oxidation of the ion-exchange resins in acid medium using hydrogen peroxide as oxidant, namely the duration of treatment and the acid-to-resin ratio, were studied systematically on a laboratory scale. Moreover, the percent composition of the off-gas evolved during the decomposition process was analysed. 3 figs., 5 tabs.

  13. Socioeconomic inequality of unintended pregnancy in the Iranian population: a decomposition approach.

    Science.gov (United States)

    Omani-Samani, Reza; Amini Rarani, Mostafa; Sepidarkish, Mahdi; Khedmati Morasae, Esmaeil; Maroufizadeh, Saman; Almasi-Hashiani, Amir

    2018-05-09

    There are several studies regarding the predictors or risk factors of unintended pregnancy, but only a small number of studies have been carried out concerning the socio-economic factors influencing the unintended pregnancy rate. This study aimed to determine the socioeconomic inequality of unintended pregnancy in Tehran, Iran, as a developing country. In this hospital-based cross-sectional study, 5152 deliveries from 103 hospitals in Tehran (the capital of Iran) were included in the analysis in July 2015. Socioeconomic status (SES) was measured through an asset-based method and principal component analysis was carried out to calculate the household SES. The concentration index and curve were used to measure SES inequality in unintended pregnancy, which was then decomposed into its determinants. The data were analyzed with Stata statistical software. The Wagstaff normalized concentration index of unintended pregnancy (−0.108; 95% Confidence Interval (CI): −0.119 to −0.054) confirms that unintended pregnancy is more concentrated among poorer mothers. The results showed that SES accounted for 27% of unintended pregnancy inequality, followed by the mother's nationality (19%), father's age (16%), mother's age (10%), father's education level (7%) and Body Mass Index (BMI) groups (5%). Unintended pregnancy is unequally distributed among Iranian women and is more concentrated among poor women. Economic status made the largest positive contribution, explaining 27% of the inequality in unintended pregnancy.

  14. Insights from a Regime Decomposition Approach on CERES and CloudSat-inferred Cloud Radiative Effects

    Science.gov (United States)

    Oreopoulos, L.; Cho, N.; Lee, D.

    2015-12-01

    Our knowledge of the Cloud Radiative Effect (CRE) not only at the Top-of-the-Atmosphere (TOA), but also (with the help of some modeling) at the surface (SFC) and within the atmospheric column (ATM) has been steadily growing in recent years. Not only do we have global values for these CREs, but we can now also plot global maps of their geographical distribution. The next step in our effort to advance our knowledge of CRE is to systematically assess the contributions of prevailing cloud systems to the global values. The presentation addresses this issue directly. We identify the world's prevailing cloud systems, which we call "Cloud Regimes" (CRs) via clustering analysis of MODIS (Aqua-Terra) daily joint histograms of Cloud Top Pressure and Cloud Optical Thickness (TAU) at 1 degree scales. We then composite CERES diurnal values of CRE (TOA, SFC, ATM) separately for each CR by averaging these values for each CR occurrence, and thus find the contribution of each CR to the global value of CRE. But we can do more. We can actually decompose vertical profiles of inferred instantaneous CRE from CloudSat/CALIPSO (2B-FLXHR-LIDAR product) by averaging over Aqua CR occurrences (since A-Train formation flying allows collocation). Such an analysis greatly enhances our understanding of the radiative importance of prevailing cloud mixtures at different atmospheric levels. We can, for example, in addition to examining whether the CERES findings on which CRs contribute to radiative cooling and warming of the atmospheric column are consistent with CloudSat, also gain insight on why and where exactly this happens from the shape of the full instantaneous CRE vertical profiles.
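
    The regime bookkeeping described above can be sketched in a few lines. In the snippet below, plain k-means stands in for the clustering of the flattened CTP-TAU joint histograms, and CRE values are composited per regime so that occurrence frequency times in-regime mean gives each regime's contribution to the global value; all arrays are synthetic assumptions rather than MODIS/CERES data.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_cells, n_bins = 5000, 42         # 1-degree cells, 7x6 CTP-TAU histogram bins
        histograms = rng.dirichlet(np.ones(n_bins), size=n_cells)  # toy cloud fractions
        cre = rng.normal(-20.0, 15.0, n_cells)                     # toy TOA CRE, W m-2

        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(histograms)
        for r in range(8):
            mask = labels == r
            # Contribution of regime r = relative frequency of occurrence x mean CRE.
            contribution = mask.mean() * cre[mask].mean()
            print(f"CR{r}: RFO={mask.mean():.2%}, mean CRE={cre[mask].mean():+.1f}, "
                  f"contribution={contribution:+.2f} W m-2")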

  15. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  16. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.

  17. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
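
    The contrast between the two decompositions can be made concrete. The sketch below, on synthetic two-class "spectra" standing in for the TSFS data, performs the conventional eigenanalysis of the covariance matrix (principal components) and the dissimilarity-based eigenanalysis of the double-centred pairwise squared-distance matrix (classical multidimensional scaling); everything here is an illustrative assumption, not the authors' data or exact preprocessing.

        import numpy as np

        rng = np.random.default_rng(0)
        # Two toy "classes" of spectra (e.g. cumin vs. non-cumin preparations).
        class_a = rng.normal(0.0, 1.0, (20, 50)) + np.linspace(0, 1, 50)
        class_b = rng.normal(0.0, 1.0, (20, 50)) - np.linspace(0, 1, 50)
        X = np.vstack([class_a, class_b])
        X = X - X.mean(axis=0)

        # (1) Conventional route: eigenanalysis of the covariance matrix.
        cov = np.cov(X, rowvar=False)
        eigval_c, eigvec_c = np.linalg.eigh(cov)          # ascending eigenvalues
        scores_cov = X @ eigvec_c[:, ::-1][:, :2]         # top-2 principal components

        # (2) Dissimilarity route: eigenanalysis of the double-centred squared
        # Euclidean distance matrix (classical MDS).
        sq_dist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        n = sq_dist.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n               # centring matrix
        B = -0.5 * J @ sq_dist @ J
        eigval_d, eigvec_d = np.linalg.eigh(B)
        top = np.maximum(eigval_d[::-1][:2], 0.0)         # guard tiny negatives
        scores_dis = eigvec_d[:, ::-1][:, :2] * np.sqrt(top)

    Plotting scores_cov against scores_dis on such grouped data illustrates the paper's point that the dissimilarity-based coordinates can separate the two classes more cleanly than the covariance-based ones.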

  18. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  19. Understanding determinants of socioeconomic inequality in mental health in Iran's capital, Tehran: a concentration index decomposition approach.

    Science.gov (United States)

    Morasae, Esmaeil Khedmati; Forouzan, Ameneh Setareh; Majdzadeh, Reza; Asadi-Lari, Mohsen; Noorbala, Ahmad Ali; Hosseinpoor, Ahmad Reza

    2012-03-26

    Mental health is of special importance regarding socioeconomic inequalities in health. On the one hand, mental health status mediates the relationship between economic inequality and health; on the other hand, mental health as an "end state" is affected by social factors and socioeconomic inequality. In spite of this, in examining socioeconomic inequalities in health, mental health has attracted less attention than physical health. As a first attempt in Iran, the objectives of this paper were to measure socioeconomic inequality in mental health, and then to untangle and quantify the contributions of potential determinants of mental health to the measured socioeconomic inequality. In a cross-sectional observational study, mental health data were taken from an Urban Health Equity Assessment and Response Tool (Urban HEART) survey, conducted on 22,300 Tehran households in 2007 and covering people aged 15 and above. Principal component analysis was used to measure the economic status of households. As a measure of socioeconomic inequality, a concentration index of mental health was applied and decomposed into its determinants. The overall concentration index of mental health in Tehran was −0.0673 (95% CI: −0.070 to −0.057). Decomposition of the concentration index revealed that economic status made the largest contribution (44.7%) to socioeconomic inequality in mental health. Educational status (13.4%), age group (13.1%), district of residence (12.5%) and employment status (6.5%) also proved to be important contributors to the inequality. Socioeconomic inequalities exist in mental health status in Iran's capital, Tehran. Since the root of this avoidable inequality lies in sectors outside the health system, a holistic mental health policy approach which includes social and economic determinants should be adopted to redress the inequitable distribution of mental health.
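
    The machinery used here can be sketched briefly. The snippet below computes a covariance-based concentration index and Wagstaff-style determinant contributions (elasticity of each determinant times that determinant's own concentration index, as a share of the total); the variables and coefficients are synthetic assumptions for illustration, not the Urban HEART survey data.

        import numpy as np

        def concentration_index(y, rank):
            """Covariance formula: C = 2 * cov(y, fractional SES rank) / mean(y)."""
            return 2.0 * np.cov(y, rank, bias=True)[0, 1] / y.mean()

        rng = np.random.default_rng(1)
        n = 2000
        ses = rng.normal(5.0, 1.0, n)                 # latent economic status score
        educ = 0.5 * ses + rng.normal(8.0, 1.0, n)    # determinant correlated with SES
        health = 1.0 + 0.8 * ses + 0.4 * educ + rng.normal(size=n)

        rank = ses.argsort().argsort() / (n - 1)      # fractional rank, poorest = 0
        X = np.column_stack([np.ones(n), ses, educ])  # linear health model
        beta, *_ = np.linalg.lstsq(X, health, rcond=None)

        total_ci = concentration_index(health, rank)
        for name, k in [("economic status", 1), ("education", 2)]:
            elasticity = beta[k] * X[:, k].mean() / health.mean()
            share = elasticity * concentration_index(X[:, k], rank) / total_ci
            print(f"{name}: {share:.0%} of the measured inequality")
        # Any share not accounted for by the determinants is the residual term.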

  20. Approaches to teaching primary level mathematics

    Directory of Open Access Journals (Sweden)

    Caroline Long

    2014-12-01

    Full Text Available In this article we explore approaches to curriculum in the primary school in order to map and manage the omissions implicit in the current unfolding of the Curriculum and Assessment Policy Statement for mathematics. The focus of school-based research has been on curriculum coverage and cognitive depth. To address the challenges of teaching mathematics from the perspective of the learner, we ask whether the learners engage with the subject in such a way that they build foundations for more advanced mathematics. We first discuss three approaches that inform the teaching of mathematics in the primary school and which may be taken singly or in combination when organising the curriculum: the topics approach, the process approach, and the conceptual fields approach. Each of the approaches is described and evaluated by presenting both its advantages and disadvantages. We then expand on the conceptual fields approach by means of an illustrative example. The planning of an instructional design integrates both a topics and a process approach into a conceptual fields approach. To address conceptual depth within this approach, we draw on five dimensions required for understanding a mathematical concept. In conclusion, we reflect on an approach to curriculum development that draws on the integrated theory of conceptual fields to support teachers and learners in the quest for improved teaching and learning.

  1. A Unified Approach towards Decomposition and Coordination for Multi-level Optimization

    NARCIS (Netherlands)

    De Wit, A.J.

    2009-01-01

    Complex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physical characteristics of the system. Consequently, a hierarchy of

  2. Decomposing the causes of socioeconomic-related health inequality among urban and rural populations in China: a new decomposition approach.

    Science.gov (United States)

    Cai, Jiaoli; Coyte, Peter C; Zhao, Hongzhong

    2017-07-18

    In recent decades, China has experienced tremendous economic growth and has also witnessed growing socioeconomic-related health inequality. The study aims to explore the potential causes of socioeconomic-related health inequality in urban and rural areas of China over the past two decades. This study used six waves of the China Health and Nutrition Survey (CHNS) from 1991 to 2006. The recentered influence function (RIF) regression decomposition method was employed to decompose socioeconomic-related health inequality in China. Health status was derived from self-rated health (SRH) scores. The analyses were conducted on urban and rural samples separately. We found that the average level of health status declined over the study period for both urban and rural populations. Average health scores were greater for the rural population than for the urban population. We also found that there exists pro-rich health inequality in China. While income and secondary education were the main factors reducing health inequality, older age, unhealthy lifestyles and a poor home environment increased inequality. Health insurance had opposite effects on health inequality for urban and rural populations, resulting in lower inequality for urban populations and higher inequality for their rural counterparts. These findings suggest that an effective way to reduce socioeconomic-related health inequality is not only to increase income and improve access to health care services, but also to focus on improvements in lifestyles and the home environment. Specifically, for rural populations, it is particularly important to improve the design of health insurance and implement a more comprehensive insurance package that can effectively target the rural poor. Moreover, it is necessary to promote flush toilets and tap water in rural areas. For urban populations, in addition to promoting universal secondary education, healthy lifestyles should be promoted.

  3. Prokaryotic regulatory systems biology: Common principles governing the functional architectures of Bacillus subtilis and Escherichia coli unveiled by the natural decomposition approach.

    Science.gov (United States)

    Freyre-González, Julio A; Treviño-Quintanilla, Luis G; Valtierra-Gutiérrez, Ilse A; Gutiérrez-Ríos, Rosa María; Alonso-Pavón, José A

    2012-10-31

    Escherichia coli and Bacillus subtilis are two of the best-studied prokaryotic model organisms. Previous analyses of their transcriptional regulatory networks have shown that they exhibit high plasticity during evolution and suggested that both converge to scale-free-like structures. Nevertheless, beyond this suggestion, no analyses have been carried out to identify the common systems-level components and principles governing these organisms. Here we show that these two phylogenetically distant organisms follow a set of common novel biologically consistent systems principles revealed by the mathematically and biologically founded natural decomposition approach. The discovered common functional architecture is a diamond-shaped, matryoshka-like, three-layer (coordination, processing, and integration) hierarchy exhibiting feedback, which is shaped by four systems-level components: global transcription factors (global TFs), locally autonomous modules, basal machinery and intermodular genes. The first mathematical criterion to identify global TFs, the κ-value, was reassessed on B. subtilis and confirmed its high predictive power by identifying all the previously reported, plus three potential, master regulators and eight sigma factors. The functionally conserved cores of modules, basal cell machinery, and a set of non-orthologous common physiological global responses were identified via both orthologous genes and non-orthologous conserved functions. This study reveals novel common systems principles maintained between two phylogenetically distant organisms and provides a comparison of their lifestyle adaptations. Our results shed new light on the systems-level principles and the fundamental functions required by bacteria to sustain life. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Synthetic generation of myocardial blood-oxygen-level-dependent MRI time series via structural sparse decomposition modeling.

    Science.gov (United States)

    Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A

    2014-07-01

    This paper aims to identify approaches that generate appropriate synthetic (computer generated) data for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences, and on ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest, pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available, which affects the design and performance of ischemia detection algorithms. In this work, to enable algorithmic developments in ischemia detection irrespective of registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by 1) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and 2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate the development of tools for ischemia detection while markedly reducing experimental costs, so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.
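
    As a rough illustration of the generation principle (not the authors' exact pipeline), the sketch below learns a small sparse dictionary from toy "myocardial" time series with scikit-learn and synthesizes a new series as a sparse combination of the learned atoms plus noise; all signal parameters are assumptions.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 100)
        # Toy training series: cardiac-phase oscillation + slow trend + noise.
        train = np.array([np.sin(2 * np.pi * (3 + rng.normal(0, 0.2)) * t)
                          + 0.5 * t + rng.normal(0, 0.1, t.size)
                          for _ in range(40)])

        model = DictionaryLearning(n_components=8, alpha=1.0, random_state=0)
        codes = model.fit_transform(train)     # sparse coefficients per series
        atoms = model.components_              # learned dictionary atoms

        # Synthesize a new series: perturb the sparse code of a random training
        # series and recombine it with the atoms.
        new_code = codes[rng.integers(len(codes))] \
                   + rng.normal(0, codes.std(axis=0) * 0.1)
        synthetic = new_code @ atoms + rng.normal(0, 0.05, t.size)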

  5. An Efficient Approach for Pixel Decomposition to Increase the Spatial Resolution of Land Surface Temperature Images from MODIS Thermal Infrared Band Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2014-12-01

    Full Text Available Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) has limited their applicability in many studies requiring high spatial resolution, in comparison with the MODIS VNIR band data at a pixel scale of 250–500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is to keep the thermal radiance of the parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of the LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS data, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error

  6. An efficient approach for pixel decomposition to increase the spatial resolution of land surface temperature images from MODIS thermal infrared band data.

    Science.gov (United States)

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2014-12-25

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) has limited their applicability in many studies requiring high spatial resolution, in comparison with the MODIS VNIR band data at a pixel scale of 250-500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is to keep the thermal radiance of the parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of the LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS data, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error (RMSE) of 2
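
    The radiance-preserving constraint that defines DSPD can be illustrated in isolation. In the sketch below, initial sub-pixel temperatures come from an assumed linear LST-NDVI relation, and the sub-pixel radiances are then rescaled so that their mean equals the parent-pixel radiance; a Stefan-Boltzmann conversion stands in for the paper's Planck-equation step to keep the example short, and all inputs are synthetic assumptions.

        import numpy as np

        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

        def decompose_parent(parent_lst_k, subpixel_ndvi, slope=-8.0):
            """Split one coarse LST pixel into sub-pixels, preserving its radiance."""
            # Step 1: initial estimate from an assumed linear LST-NDVI relation.
            initial = parent_lst_k + slope * (subpixel_ndvi - subpixel_ndvi.mean())
            # Step 2: rescale sub-pixel radiances so their mean matches the parent.
            parent_radiance = SIGMA * parent_lst_k ** 4
            sub_radiance = SIGMA * initial ** 4
            sub_radiance *= parent_radiance / sub_radiance.mean()
            return (sub_radiance / SIGMA) ** 0.25

        sub_lst = decompose_parent(300.0, np.array([0.2, 0.4, 0.6, 0.8]))
        # Check: mean sub-pixel radiance equals the parent-pixel radiance.
        assert np.isclose((sub_lst ** 4).mean(), 300.0 ** 4)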

  7. Synthesis of SiOx@CdS core–shell nanoparticles by simple thermal decomposition approach and studies on their optical properties

    International Nuclear Information System (INIS)

    Kandula, Syam; Jeevanandam, P.

    2014-01-01

    Highlights: • SiOx@CdS nanoparticles have been synthesized by a novel thermal decomposition approach. • The method is easy and there is no need for surface functionalization of the silica core. • SiOx@CdS nanoparticles show different optical properties compared to pure CdS. - Abstract: SiOx@CdS core–shell nanoparticles have been synthesized by a simple thermal decomposition approach. The synthesis involves two steps. In the first step, SiOx spheres were synthesized using the Stöber process. Then, cadmium sulfide nanoparticles were deposited on the SiOx spheres by the thermal decomposition of cadmium acetate and thiourea in ethylene glycol at 180 °C. Electron microscopy results show uniform deposition of cadmium sulfide nanoparticles on the surface of the SiOx spheres. Electron diffraction patterns confirm the crystalline nature of the cadmium sulfide nanoparticles on silica, and high resolution transmission electron microscopy images clearly show the lattice fringes due to cubic cadmium sulfide. Diffuse reflectance spectroscopy results show a blue shift of the band gap absorption of SiOx@CdS core–shell nanoparticles with respect to bulk cadmium sulfide, which is attributed to the quantum size effect. Photoluminescence results show enhancement in the intensity of the band edge emission and weaker emission due to surface defects in SiOx@CdS core–shell nanoparticles compared to pure cadmium sulfide nanoparticles.

  8. Drift and transmission FT-IR spectroscopy of forest soils: an approach to determine decomposition processes of forest litter

    International Nuclear Information System (INIS)

    Haberhauer, G.; Gerzabek, M.H.

    1999-06-01

    A method is described to characterize organic soil layers using Fourier transform infrared (FT-IR) spectroscopy. The applicability of FT-IR, either diffuse reflectance (DRIFT) or transmission, to investigating decomposition processes of spruce litter in soil originating from three different forest sites in two climatic regions was studied. Spectral information of transmission and diffuse reflection FT-IR spectra was analyzed and compared. For data evaluation, the Kubelka-Munk (KM) transformation was applied to the DRIFT spectra. Sample preparation for DRIFT is simpler and less time consuming in comparison to transmission FT-IR, which uses KBr pellets. A variety of bands characteristic of molecular structures and functional groups has been identified for these complex samples. Analysis of both transmission FT-IR and DRIFT showed that the intensity of distinct bands is a measure of the decomposition of forest litter. Interferences due to water adsorption spectra were reduced in DRIFT measurements in comparison to transmission FT-IR spectroscopy. However, data analysis revealed that intensity changes of several bands of DRIFT and transmission FT-IR were significantly correlated with soil horizons. The application of regression models enables identification and differentiation of organic forest soil horizons and allows one to determine the decomposition status of soil organic matter in distinct layers. On the basis of the data presented in this study, it may be concluded that FT-IR spectroscopy is a powerful tool for the investigation of decomposition dynamics in forest soils. (author)

  9. APPROACHES TO QUALITY MANAGEMENT AT EUROPEAN LEVEL

    OpenAIRE

    Salagean Horatiu Catalin; Ilies Radu; Gherman Mihai; Cioban Bogdan

    2013-01-01

    In the current economic context, quality has become a source of competitive advantage and organizations must perceive quality as something natural and human in order to achieve excellence. The proper question for the context of the internationalisation of the economy is whether the culture of the regions, states or nations affects the development in the quality management field and the quality approach. The present study tries to give a theoretical approach of how culture influences the quali...

  10. BWR level estimation using Kalman Filtering approach

    International Nuclear Information System (INIS)

    Garner, G.; Divakaruni, S.M.; Meyer, J.E.

    1986-01-01

    Work is in progress on development of a system for Boiling Water Reactor (BWR) vessel level validation and failure detection. The levels validated include the liquid level both inside and outside the core shroud. This work is a major part of a larger effort to develop a complete system for BWR signal validation. The demonstration plant is the Oyster Creek BWR. Liquid level inside the core shroud is not directly measured during full power operation. This level must be validated using measurements of other quantities and analytic models. Given the available sensors, analytic models for level that are based on mass and energy balances can contain open integrators. When such a model is driven by noisy measurements, the model-predicted level will deviate from the true level over time. To validate the level properly and to avoid false alarms, the open integrator must be stabilized. In addition, plant parameters will change slowly with time. The respective model must either account for these plant changes or be insensitive to them to avoid false alarms and maintain sensitivity to true failures of level instrumentation. These problems are addressed here by combining an extended Kalman Filter with a Parity Space Decision/Estimator. The open integrator is stabilized by integrating from the validated estimate at the beginning of each sampling interval, rather than from the model-predicted value. The model is adapted to slow plant/sensor changes by updating model parameters on-line.
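
    A one-state sketch of the stabilization idea is given below: the mass-balance integrator is restarted each sampling interval from the validated (filtered) estimate rather than run open-loop, so measurement noise cannot accumulate in the predicted level. The plant numbers are illustrative assumptions, not Oyster Creek data, and the parity-space logic is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, q, r = 1.0, 1e-4, 0.05        # step, process and measurement variances
        x_hat, p = 10.0, 1.0              # initial validated level and covariance
        true_level = 10.0
        for k in range(100):
            net_flow = 0.01 * np.sin(0.1 * k)            # mass/energy balance term
            true_level += dt * net_flow + rng.normal(0, np.sqrt(q))
            z = true_level + rng.normal(0, np.sqrt(r))   # noisy level measurement
            # Predict: integrate from the previous *validated* estimate, not from
            # an open integrator driven directly by the raw measurements.
            x_pred = x_hat + dt * net_flow
            p_pred = p + q
            # Update with the new measurement.
            gain = p_pred / (p_pred + r)
            x_hat = x_pred + gain * (z - x_pred)
            p = (1.0 - gain) * p_pred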

  11. The Methane to Carbon Dioxide Ratio Produced during Peatland Decomposition and a Simple Approach for Distinguishing This Ratio

    Science.gov (United States)

    Chanton, J.; Hodgkins, S. B.; Cooper, W. T.; Glaser, P. H.; Corbett, J. E.; Crill, P. M.; Saleska, S. R.; Rich, V. I.; Holmes, B.; Hines, M. E.; Tfaily, M.; Kostka, J. E.

    2014-12-01

    Peatland organic matter is cellulose-like, with an oxidation state of approximately zero. When this material decomposes by fermentation, stoichiometry dictates that CH4 and CO2 should be produced in a ratio approaching one. While this is generally the case in temperate zones, this production ratio is often departed from in boreal peatlands, where the ratio of belowground CH4/CO2 production varies between 0.1 and 1, indicating CO2 production by a mechanism in addition to fermentation. The in situ CO2/CH4 production ratio may be ascertained by analysis of the 13C isotopic composition of these products, because CO2 production unaccompanied by methane production yields CO2 with an isotopic composition similar to the parent organic matter, while methanogenesis produces 13C-depleted methane and 13C-enriched CO2. The 13C enrichment in the subsurface CO2 pool is directly related to the fraction of it formed by methane production and to the isotopic composition of the methane itself. Excess CO2 production is associated with more acidic conditions, Sphagnum vegetation, high and low latitudes, methane production dominated by hydrogenotrophic methanogenesis, 13C-depleted methane, and, generally, more nutrient-depleted conditions. Three theories have been offered to explain these observations: 1) inhibition of acetate utilization, acetate build-up and diffusion to the surface and eventual aerobic oxidation; 2) the use of humic acids as electron acceptors; and 3) the utilization of organic oxygen to produce CO2. In support of #3, we find that 13C-NMR, Fourier transform infrared (FT-IR) spectroscopy, and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR-MS) clearly show the evolution of polysaccharides and cellulose towards more decomposed, humified alkyl compounds stripped of the organic oxygen utilized to form CO2. Such decomposition results in more negative carbon oxidation states, varying from -1 to -2. Coincident with this reduction in oxidation state is the
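
    The statement that the 13C enrichment of the CO2 pool fixes the fraction formed by methanogenesis is a two-endmember mass balance, sketched below with illustrative delta values that are assumptions rather than the authors' site data.

        # Two-endmember 13C mass balance for pore-water CO2 (per mil values assumed).
        d13c_om = -27.0           # parent peat organic matter endmember
        d13c_co2_methano = -5.0   # 13C-enriched CO2 coupled to methanogenesis
        d13c_co2_obs = -16.0      # measured pore-water CO2

        # d_obs = f * d_methano + (1 - f) * d_om  ->  solve for f.
        f_methano = (d13c_co2_obs - d13c_om) / (d13c_co2_methano - d13c_om)
        print(f"fraction of CO2 from methanogenesis ~ {f_methano:.2f}")
        # A CH4/CO2 production ratio well below 1 then implies excess CO2 from
        # the additional, non-fermentative pathways discussed above.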

  12. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    Science.gov (United States)

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection method based on a signal-derived Empirical Mode Decomposition (EMD)-based dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability
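
    The raw-dictionary and projection-feature steps can be sketched as follows, using the PyEMD package (installed as EMD-signal) on synthetic epochs; the dictionary learning/pruning stage and the SVM classification of the paper are omitted, and all signal parameters are assumptions.

        import numpy as np
        from PyEMD import EMD

        rng = np.random.default_rng(0)
        t = np.linspace(0, 2, 512)
        epochs = [np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=t.size)
                  for f in (3, 7, 12)]              # toy "training" EEG epochs

        # Raw dictionary: stack the IMFs of every training epoch as atoms.
        atoms = np.vstack([EMD().emd(sig) for sig in epochs])
        atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm atoms

        # Feature vector of a test epoch: its projection coefficients on the atoms.
        test = np.sin(2 * np.pi * 7 * t) + 0.3 * rng.normal(size=t.size)
        features = atoms @ test                     # one coefficient per atom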

  13. Disocclusion: a variational approach using level lines.

    Science.gov (United States)

    Masnou, Simon

    2002-01-01

    Object recognition, robot vision, image and film restoration may require the ability to perform disocclusion. We call disocclusion the recovery of occluded areas in a digital image by interpolation from their vicinity. It is shown in this paper how disocclusion can be performed by means of the level-lines structure, which offers a reliable, complete and contrast-invariant representation of images. Level-lines based disocclusion yields a solution that may have strong discontinuities. The proposed method is compatible with Kanizsa's amodal completion theory.

  14. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the

  15. NEW APPROACHES: Reading in Advanced level physics

    Science.gov (United States)

    Fagan, Dorothy

    1997-11-01

    Teachers often report that their A-level pupils are unwilling to read physics-related material. What is it about physics texts that deters pupils from reading them? Are they just too difficult for 16 - 18 year olds, or is it that pupils lack specific reading skills? This article describes some of the results from my research into pupils' reading of physics-related texts and tries to clarify the situation.

  16. Methodological approaches to the assessment level of social responsibility

    OpenAIRE

    Vorona, E.

    2010-01-01

    Current approaches to assessing the level of social responsibility are studied. A methodological approach to evaluating the performance of social responsibility in railway transport is proposed, together with a conceptual basis for social reporting in rail transport.

  17. Investigation of privatization by level crossing approach

    Science.gov (United States)

    Vahabi, M.; Jafari, G. R.

    2009-09-01

    Privatization - a political as well as an economic policy - is generally defined as the transfer of a property or the responsibility for it from the public to the private sector. But privatization is not merely the transfer of ownership; the efficiency of the market should also be considered. A successful privatization program induces better profitability and efficiency, higher output, more investment, etc. The main method of privatization is introducing new stocks to the market to motivate competition. However, for a successful privatization the capability of a market to absorb the new stock should also be considered. Without paying attention to this aspect, privatization through the introduction of new stocks may lead to reduced market efficiency. We study, based on complexity theory and in particular the concept of level crossing, the effects of the stage of development, activity, risk, and the waiting times for special events on privatization.

  18. A solution of nonlinear equation for the gravity wave spectra from Adomian decomposition method: a first approach

    Directory of Open Access Journals (Sweden)

    Antonio Gledson Goulart

    2013-12-01

    Full Text Available In this paper, the equation for the gravity wave spectra in the mean atmosphere is analytically solved without linearization by the Adomian decomposition method. As a consequence, the nonlinear nature of the problem is preserved and the errors found in the results are due only to the parameterization. The results, with the parameterization applied in the simulations, indicate that the linear solution of the equation is a good approximation only for heights below ten kilometers, because linearization of the equation leads to a solution that does not correctly describe the kinetic energy spectra.
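
    The mechanics of the Adomian series can be shown on a much simpler problem. The SymPy sketch below solves the toy nonlinear ODE u' = -u^2 with u(0) = 1, whose exact solution is 1/(1+t): the Adomian polynomials of the quadratic nonlinearity are A_n = sum over i of u_i * u_(n-i), and each new term is obtained by integrating the previous polynomial. The gravity wave spectra equation of the paper is of course far more involved.

        import sympy as sp

        t = sp.symbols('t')
        u = [sp.Integer(1)]                        # u_0 = u(0) = 1
        for n in range(6):
            # Adomian polynomial of the nonlinearity N(u) = u**2.
            A_n = sum(u[i] * u[n - i] for i in range(n + 1))
            # u' = -u**2  =>  u_(n+1) = -integral of A_n from 0 to t.
            u.append(sp.integrate(-A_n, (t, 0, t)))

        print(sp.expand(sum(u)))   # 1 - t + t**2 - ..., the series of 1/(1 + t)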

  19. Carbon dynamics in peatlands under changing hydrology. Effects of water level drawdown on litter quality, microbial enzyme activities and litter decomposition rates

    Energy Technology Data Exchange (ETDEWEB)

    Strakova, P.

    2010-07-01

    Pristine peatlands are carbon (C) accumulating wetland ecosystems sustained by a high water level (WL) and consequent anoxia that slows down decomposition. Persistent WL drawdown in response to climate and/or land-use change directly affects decomposition: increased oxygenation stimulates decomposition of the 'old C' (peat) sequestered under prior anoxic conditions. The responses of the 'new C' (plant litter) in terms of quality, production and decomposability, and the consequences for the whole C cycle of peatlands, are not fully understood. WL drawdown induces changes in the plant community, resulting in a shift in dominance from Sphagnum and graminoids to shrubs and trees. There is increasing evidence that the indirect effects of WL drawdown via the changes in plant communities will have more impact on ecosystem C cycling than any direct effects. The aim of this study is to disentangle the direct and indirect effects of WL drawdown on the 'new C' by measuring the relative importance of (1) environmental parameters (WL depth, temperature, soil chemistry) and (2) plant community composition on litter production, microbial activity, litter decomposition rates and, consequently, on C accumulation. This information is crucial for modelling the C cycle under changing climate and/or land use. The effects of WL drawdown were tested in a large-scale experiment with manipulated WL at two time scales and three nutrient regimes. Furthermore, the effect of climate on litter decomposability was tested along a north-south gradient. Additionally, a novel method for estimating litter chemical quality and decomposability was explored by combining near-infrared spectroscopy with multivariate modelling. WL drawdown had direct effects on litter quality, microbial community composition and activity, and litter decomposition rates. However, the direct effects of WL drawdown were overruled by the indirect effects via changes in litter type composition and

  20. Revisiting the Granger Causality Relationship between Energy Consumption and Economic Growth in China: A Multi-Timescale Decomposition Approach

    Directory of Open Access Journals (Sweden)

    Lei Jiang

    2017-12-01

    Full Text Available The past four decades have witnessed rapid growth in the rate of energy consumption in China. A great deal of energy consumption has led to two major issues. One is energy shortages and the other is environmental pollution caused by fossil fuel combustion. Since energy saving plays a substantial role in addressing both issues, it is of vital importance to study the intrinsic characteristics of energy consumption and its relationship with economic growth. The topic of the nexus between energy consumption and economic growth has been hotly debated for years. However, conflicting conclusions have been drawn. In this paper, we provide a novel insight into the characteristics of the growth rate of energy consumption in China from a multi-timescale perspective by means of adaptive time-frequency data analysis, namely the ensemble empirical mode decomposition method, which is suitable for the analysis of non-linear time series. Decomposition led to four intrinsic mode function (IMF) components and a trend component with different periods. Then, we repeated the same procedure for the growth rate of China's GDP and obtained four similar IMF components and a trend component. In the second stage, we performed the Granger causality test. The results demonstrated that, in the short run, there is a bidirectional causality relationship between economic growth and energy consumption, and in the long run a unidirectional relationship running from economic growth to energy consumption.
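
    A hedged sketch of this two-stage procedure is given below: EEMD splits each growth series into timescale components (IMFs), and Granger tests are then run on matched components. It needs the PyEMD package (installed as EMD-signal) and statsmodels; the series are synthetic stand-ins for the GDP and energy-consumption growth rates, and all tuning values are assumptions.

        import numpy as np
        from PyEMD import EEMD
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n = 120
        gdp = np.cumsum(rng.normal(0.02, 0.1, n))           # toy GDP growth series
        energy = np.roll(gdp, 2) + rng.normal(0, 0.05, n)   # toy lagged response

        eemd = EEMD(trials=50, noise_width=0.2)
        imfs_gdp = eemd.eemd(gdp)
        imfs_energy = eemd.eemd(energy)

        # Stage 2: test whether the GDP component (second column) Granger-causes
        # the energy component (first column) on the shortest common timescale.
        pair = np.column_stack([imfs_energy[0], imfs_gdp[0]])
        result = grangercausalitytests(pair, maxlag=4)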

  1. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results justify the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.

  2. Sea level rise and the geoid: factor analysis approach

    Directory of Open Access Journals (Sweden)

    Alexey Sadovski

    2013-08-01

    Full Text Available Sea levels are rising around the world, and this is a particular concern along most of the coasts of the United States. A 1989 EPA report shows that sea levels rose 5-6 inches more than the global average along the Mid-Atlantic and Gulf Coasts in the last century. The main reason for this is coastal land subsidence. This sea level rise is therefore considered relative sea level rise rather than global sea level rise. Thus, instead of studying sea level rise globally, this paper describes a statistical approach using factor analysis of regional sea level rates of change. Unlike physical models and semi-empirical models that attempt to estimate how much and how fast sea levels are changing, this methodology allows for a discussion of the factor(s) that statistically affect sea level rates of change, and seeks patterns to explain spatial correlations.
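
    A minimal version of the factor-analysis step might look as follows: yearly rates of sea level change at a set of gauge stations are reduced to a small number of latent factors whose loadings indicate spatially correlated drivers such as subsidence. The station matrix below is a synthetic assumption, not the tide-gauge data discussed in the paper.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        n_years, n_stations = 50, 12
        subsidence = rng.normal(0, 1, (n_years, 1))           # shared latent driver
        rates = subsidence @ rng.uniform(0.5, 2.0, (1, n_stations)) \
                + rng.normal(0, 0.3, (n_years, n_stations))   # station rates of change

        fa = FactorAnalysis(n_components=2, random_state=0).fit(rates)
        loadings = fa.components_      # factor loadings per station (spatial pattern)
        scores = fa.transform(rates)   # yearly factor scores (temporal pattern)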

  3. Socioeconomic inequality in abdominal obesity among older people in Purworejo District, Central Java, Indonesia - a decomposition analysis approach.

    Science.gov (United States)

    Pujilestari, Cahya Utamie; Nyström, Lennarth; Norberg, Margareta; Weinehall, Lars; Hakimi, Mohammad; Ng, Nawi

    2017-12-12

    Obesity has become a global health challenge as its prevalence has increased globally in recent decades. Studies in high-income countries have shown that obesity is more prevalent among the poor. In contrast, obesity is more prevalent among the rich in low- and middle-income countries, hence requiring different focal points when designing public health policies in the latter contexts. We examined socioeconomic inequalities in abdominal obesity in Purworejo District, Central Java, Indonesia and identified factors contributing to the inequalities. We utilised data from the WHO-INDEPTH Study on global AGEing and adult health (WHO-INDEPTH SAGE) conducted in the Purworejo Health and Demographic Surveillance System (HDSS) in Purworejo District, Indonesia in 2010. The study included 14,235 individuals aged 50 years and older. Inequalities in abdominal obesity across wealth groups were assessed separately for men and women using concentration indexes. Decomposition analysis was conducted to assess the determinants of socioeconomic inequalities in abdominal obesity. Abdominal obesity was five-fold more prevalent among women than among men (30% vs. 6.1%; p < 0.001). The concentration index (CI) analysis showed that socioeconomic inequalities in abdominal obesity were less prominent among women (CI = 0.26, SE = 0.02, p < 0.001) than among men (CI = 0.49, SE = 0.04, p < 0.001). Decomposition analysis showed that physical labour was the major determinant of socioeconomic inequalities in abdominal obesity among men, explaining 47% of the inequalities, followed by poor socioeconomic status (31%), ≤ 6 years of education (15%) and current smoking (11%). The three major determinants of socioeconomic inequalities in abdominal obesity among women were poor socio-economic status (48%), physical labour (17%) and no formal education (16%). Abdominal obesity was more prevalent among older women in a rural Indonesian setting. Socioeconomic inequality in
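
    A minimal sketch of the concentration index used above, via the standard covariance formulation CI = 2·cov(h, r)/mean(h), where r is the fractional wealth rank; all variable names and data are hypothetical.

    ```python
    import numpy as np

    def concentration_index(outcome, wealth):
        n = len(outcome)
        rank = np.empty(n)
        rank[np.argsort(wealth)] = (np.arange(1, n + 1) - 0.5) / n  # fractional rank
        cov = np.cov(outcome, rank, bias=True)[0, 1]
        return 2.0 * cov / outcome.mean()

    rng = np.random.default_rng(2)
    wealth = rng.random(1000)
    obese = (rng.random(1000) < 0.05 + 0.25 * wealth).astype(float)  # richer -> more obese
    print(concentration_index(obese, wealth))    # positive: concentrated among the rich
    ```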

  4. Decomposition of groundwater level fluctuations using transfer modelling in an area with shallow to deep unsaturated zones

    Science.gov (United States)

    Gehrels, J. C.; van Geer, F. C.; de Vries, J. J.

    1994-05-01

    Time series analysis of the fluctuations in shallow groundwater levels in the Netherlands lowlands has revealed a large-scale decline in head during recent decades as a result of an increase in land drainage and groundwater withdrawal. The situation is more ambiguous in large groundwater bodies located in the eastern part of the country, where the thickness of the unsaturated zone increases from near zero along the edges to about 40 m in the centre of the area. As the depth of the unsaturated zone increases, the groundwater level reacts with an increasing delay to fluctuations in climate and to the influences of human activities. The aim of the present paper is to model groundwater level fluctuations in these areas using a linear stochastic transfer function model, relating groundwater levels to estimated precipitation excess, and to separate artificial components from the natural groundwater regime. In this way, the impact of groundwater withdrawal and of the reclamation of a 1000 km² polder area on the groundwater levels in the adjoining higher ground could be assessed. It became evident that the linearity assumption of the transfer functions becomes a serious drawback in areas with the deepest groundwater levels, because of non-linear processes in the deep unsaturated zone and the non-synchronous arrival of recharge in the saturated zone. Comparison of the results from modelling the influence of reclamation with an analytical solution showed that the lowering of the groundwater level is partly compensated by reduced discharge and is therefore less than expected.

  5. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method is given to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach. By using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained. The calculation has been carried out for s-wave resonances and the results compared with other work.

  6. A novel thermal decomposition approach to synthesize hydroxyapatite-silver nanocomposites and their antibacterial action against GFP-expressing antibiotic resistant E. coli.

    Science.gov (United States)

    Sahni, Geetika; Gopinath, P; Jeevanandam, P

    2013-03-01

    A novel thermal decomposition approach to synthesize hydroxyapatite-silver (Hap-Ag) nanocomposites has been reported. The nanocomposites were characterized by X-ray diffraction, field emission scanning electron microscopy coupled with energy dispersive X-ray analysis, transmission electron microscopy and diffuse reflectance spectroscopy techniques. Antibacterial activity studies for the nanocomposites were explored using a new rapid access method employing recombinant green fluorescent protein (GFP) expressing antibiotic resistant Escherichia coli (E. coli). The antibacterial activity was studied by visual turbidity analysis, optical density analysis, fluorescence spectroscopy and microscopy. The mechanism of bactericidal action of the nanocomposites on E. coli was investigated using atomic force microscopy, and TEM analysis. Excellent bactericidal activity at low concentration of the nanocomposites was observed which may allow their use in the production of microbial contamination free prosthetics. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Losses of soil organic carbon by converting tropical forest to plantations: Assessment of erosion and decomposition by new δ13C approach

    Science.gov (United States)

    Guillaume, Thomas; Muhammad, Damris; Kuzyakov, Yakov

    2015-04-01

    Indonesia lost more tropical forest than all of Brazil in 2012, mainly driven by the rubber, oil palm and timber industries. Nonetheless, the effects of converting forest to oil palm and rubber plantations on soil organic carbon (SOC) stocks remain unclear. We analyzed SOC losses after lowland rainforest conversion to oil palm, intensive rubber and extensive rubber plantations in Jambi province on Sumatra Island. We developed and applied a new δ13C based approach to assess and separate two processes: 1) erosion and 2) decomposition. Carbon contents in the Ah horizon under oil palm and rubber plantations were strongly reduced: up to 70% and 62%, respectively. The decrease was lower under extensive rubber plantations (41%). The C content in the subsoil was similar in the forest and the plantations. We therefore assumed that a shift to higher δ13C values in the subsoil of the plantations corresponds to the losses of the upper soil layer by erosion. Erosion was estimated by comparing the δ13C profiles in the undisturbed soils under forest with the disturbed soils under plantations. The estimated erosion was the strongest in oil palm (35±8 cm) and rubber (33±10 cm) plantations. The 13C enrichment of SOC used as a proxy of its turnover indicates a decrease of SOC decomposition rate in the Ah horizon under oil palm plantations after forest conversion. SOC availability, measured by microbial respiration rate and Fourier Transformed Infrared Spectroscopy, was lower under oil palm plantations. Despite similar trends in C losses and erosion in intensive plantations, our results indicate that microorganisms in oil palm plantations mineralized mainly the old C stabilized prior to conversion, whereas microorganisms under rubber plantations mineralized the fresh C from the litter, leaving the old C pool mainly untouched. Based on the lack of C input from litter, we expect further losses of SOC under oil palm plantations, which therefore are a less sustainable land

  8. Evaluation of the Factors of Russian Regions’ Convergence / Divergence in the Level of Budget Provision Based on the Decomposition of the Theil–Bernoulli Index

    Directory of Open Access Journals (Sweden)

    Marina Yuryevna Malkina

    2016-09-01

    Full Text Available The study focuses on the Russian regions’ disparities in the level of budget expenditures per capita and their dynamics. The paper assesses the contribution of the main factors and their correlation, as well as of the stages of the budget process, to the regional imbalances in the public sector. The author also presents regions’ budget expenditures per capita in the form of a five-factor multiplicative model which at the same time reflects the sequence of the stages of the budget process. To estimate regions’ inequality in budget expenditures and other related variables, the researcher employs the Theil–Bernoulli index, which is sensitive to excessive poverty. Its decomposition, made on the basis of the Duro and Esteban technique, allows evaluating the structure of inter-regional disparities in the public sector. The results include the following: 1) static assessments of the factors’ contribution to the regions’ convergence in budget expenditure per capita at the stages of GRP production, receipt and distribution of taxes among the levels of the budget system, and the stages of attraction of inter-budgetary support and budget deficit financing; 2) dynamic assessments of the factors’ contribution to regions’ convergence / divergence in the level of budgetary expenditure per capita over 9 years. The findings may be useful in optimizing the policy of inter-budgetary equalization in Russia
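
    For illustration of the decomposition logic, a minimal sketch using the standard Theil T index split into between- and within-group components; the paper's Theil–Bernoulli index and the Duro–Esteban technique differ in detail, and the data here are synthetic.

    ```python
    import numpy as np

    def theil_t(x):
        mu = x.mean()
        return np.mean((x / mu) * np.log(x / mu))

    def theil_decomposition(x, groups):
        mu = x.mean()
        between = within = 0.0
        for g in np.unique(groups):
            xg = x[groups == g]
            share = xg.sum() / x.sum()            # group's share of the total
            between += share * np.log(xg.mean() / mu)
            within += share * theil_t(xg)
        return between, within                    # between + within == theil_t(x)

    rng = np.random.default_rng(3)
    x = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # e.g. regional expenditures
    groups = rng.integers(0, 4, size=200)              # e.g. federal districts
    b, w = theil_decomposition(x, groups)
    assert np.isclose(b + w, theil_t(x))
    ```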

  9. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
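
    A minimal sketch of the space decomposition idea: damped additive Schwarz with two overlapping subdomains for a 1D Poisson model problem (not the paper's algorithms, which also cover nonlinear and hybrid variants).

    ```python
    import numpy as np

    # -u'' = f on (0,1), zero boundary values, standard finite differences
    n = 99
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = -np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    f = np.ones(n)

    subdomains = [np.arange(0, 60), np.arange(40, n)]   # overlap on 40..59
    u = np.zeros(n)
    for _ in range(60):
        r = f - A @ u
        du = np.zeros(n)
        for idx in subdomains:
            Ai = A[np.ix_(idx, idx)]                    # local Dirichlet problem
            du[idx] += np.linalg.solve(Ai, r[idx])
        u += 0.5 * du                                   # damping for convergence
    print(np.linalg.norm(f - A @ u))                    # residual shrinks per sweep
    ```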

  10. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives and of the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation that take into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry, rather than an empirical approach.

  11. Forecasting daily lake levels using artificial intelligence approaches

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Nikoofar, Bagher

    2012-04-01

    Accurate prediction of lake-level variations is important for planning, design, construction, and operation of lakeshore structures and also in the management of freshwater lakes for water supply purposes. In the present paper, three artificial intelligence approaches, namely artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and gene expression programming (GEP), were applied to forecast daily lake-level variations up to 3 days ahead. The measurements at Lake Iznik in Western Turkey, for the period January 1961-December 1982, were used for training, testing, and validating the employed models. The results indicated that GEP performs better than ANFIS and ANNs in predicting lake-level variations. A comparison was also made between these artificial intelligence approaches and conventional autoregressive moving average (ARMA) models, which demonstrated the superiority of the GEP, ANFIS, and ANN models over ARMA models.
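
    A minimal sketch of one ingredient, an ANN forecaster trained on lagged lake levels, assuming scikit-learn; the data, lag depth and network size are placeholders, and the ANFIS/GEP/ARMA comparisons are not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_lagged(series, n_lags=3, horizon=1):
        X, y = [], []
        for t in range(n_lags, len(series) - horizon + 1):
            X.append(series[t - n_lags:t])        # previous n_lags levels
            y.append(series[t + horizon - 1])     # level `horizon` days ahead
        return np.array(X), np.array(y)

    rng = np.random.default_rng(4)
    levels = np.cumsum(rng.normal(0, 0.01, 2000)) + 85.0   # synthetic daily levels

    X, y = make_lagged(levels, n_lags=3, horizon=1)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print(model.score(X[split:], y[split:]))               # R^2 on the held-out block
    ```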

  12. Uma Abordagem para a Decomposição de Processos de Negócio para Execução em Nuvens Computacionais (in Portuguese; An approach to business process decomposition for cloud deployment)

    NARCIS (Netherlands)

    Povoa, Lucas Venezian; Lopes de Souza, Wanderley; Ferreira Pires, Luis; Duipmans, Evert F.; do Prado, Antonio Francisco

    Due to safety requirements, certain data or activities of a business process should be kept within the user's premises, while others can be allocated to a cloud environment. This paper presents a generic approach to business process decomposition that takes into account the allocation of activities and

  13. A Fusion Approach to Feature Extraction by Wavelet Decomposition and Principal Component Analysis in Transient Signal Processing of SAW Odor Sensor Array

    Directory of Open Access Journals (Sweden)

    Prashant SINGH

    2011-03-01

    Full Text Available This paper presents a theoretical analysis of a new approach for the development of a surface acoustic wave (SAW) sensor array based odor recognition system. The sensor array employs a single polymer interface for selective sorption of odorant chemicals in the vapor phase; the individual sensors are, however, coated with different thicknesses. The idea behind varying the coating thickness is to terminate the solvation and diffusion kinetics of vapors into the polymer at different stages of equilibration on different sensors. This is expected to generate diversity in the information content of the sensors' transients. The analysis is based on wavelet decomposition of the transient signals. Single-sensor transients have been used earlier for generating odor identity signatures based on wavelet approximation coefficients. In the present work, however, we exploit the variability in diffusion kinetics due to polymer thickness for making odor signatures. This is done by fusing the wavelet coefficients from the different sensors in the array and then applying principal component analysis. We find that the present approach substantially enhances vapor class separability in feature space. Validation is done by generating synthetic sensor array data based on well-established SAW sensor theory.
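
    A minimal sketch of the fusion pipeline, assuming the PyWavelets package; the transients are synthetic stand-ins for SAW sensor signals.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def features(transient, wavelet="db4", level=4):
        coeffs = pywt.wavedec(transient, wavelet, level=level)
        return coeffs[0]                          # wavelet approximation coefficients

    rng = np.random.default_rng(5)
    n_samples, n_sensors, n_points = 30, 4, 256
    # one fused feature vector per odor sample: concatenate across sensors
    fused = np.array([
        np.concatenate([features(rng.normal(size=n_points)) for _ in range(n_sensors)])
        for _ in range(n_samples)
    ])
    scores = PCA(n_components=2).fit_transform(fused)   # projected 2D feature space
    print(scores.shape)
    ```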

  14. Structure versus level: A unified approach to campaign evaluation

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Grunert, Klaus G.

    2001-01-01

    Based on a modified version of the theory of planned behavior (Ajzen, 1985), a general model for the evaluation of social interventions is developed. Whilst common practice defines campaign success in terms of absolute levels of the target variables, the present approach stresses changes...

  15. The state-level approach: moving beyond integrated safeguards

    International Nuclear Information System (INIS)

    Tape, James W.

    2008-01-01

    The concept of a State-Level Approach (SLA) for international safeguards planning, implementation, and evaluation was contained in the Conceptual Framework for Integrated Safeguards (IS) agreed in 2002. This paper describes briefly the key elements of the SLA, including State-level factors and high-level safeguards objectives, and considers different cases in which application of the SLA methodology could address safeguards for 'suspect' States, 'good' States, and Nuclear Weapons States hosting fuel cycle centers. The continued use and further development of the SLA to customize safeguards for each State, including for States already under IS, is seen as central to effective and efficient safeguards for an expanding nuclear world.

  16. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react
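
    A minimal sketch of the structure just described: goals with activation conditions and ordered, gated decompositions, executed recursively; all names and the toy world model are hypothetical.

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, List, Union

    World = dict  # shared state passed between goals

    @dataclass
    class Decomposition:
        gate: Callable[[World], bool]
        subgoals: List[Union["Goal", Callable[[World], bool]]]  # goals or primitives

    @dataclass
    class Goal:
        name: str
        activation: Callable[[World], bool]
        decompositions: List[Decomposition] = field(default_factory=list)

        def execute(self, world: World) -> bool:
            if not self.activation(world):
                return False                       # global condition not met
            for decomp in self.decompositions:     # gates evaluated in order
                if decomp.gate(world):
                    return all(
                        g.execute(world) if isinstance(g, Goal) else g(world)
                        for g in decomp.subgoals   # sequential execution
                    )
            return False

    # Usage: a goal that points an antenna, with a coarse and a fine strategy.
    slew = lambda w: w.update(angle=w["target"]) or True          # primitive action
    fine = Goal("fine_point", lambda w: True,
                [Decomposition(lambda w: True, [slew])])
    point = Goal("point", lambda w: "target" in w,
                 [Decomposition(lambda w: abs(w["angle"] - w["target"]) < 5, [fine]),
                  Decomposition(lambda w: True, [slew, fine])])
    print(point.execute({"angle": 40.0, "target": 42.0}))
    ```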

  17. Multi-level approach for parametric roll analysis

    Science.gov (United States)

    Kim, Taeyoung; Kim, Yonghwan

    2011-03-01

    The present study considers a multi-level approach for the analysis of parametric roll phenomena. Three computation methods, GM variation, impulse response functions (IRF), and a Rankine panel method, are applied in the multi-level approach. The IRF and Rankine panel methods are based on a weakly nonlinear formulation which includes nonlinear Froude-Krylov and restoring forces. In computations of parametric roll occurrence in regular waves, the IRF and Rankine panel methods show similar tendencies; although the GM variation approach predicts the occurrence of parametric roll at twice the roll natural frequency, its frequency criterion differs slightly. Nonlinear roll motion in bichromatic waves is also considered in this study. To demonstrate the unstable roll motion in bichromatic waves, theoretical and numerical approaches are applied. The occurrence of parametric roll is examined theoretically by introducing the quasi-periodic Mathieu equation, and the instability criteria are well predicted by the stability analysis in the theoretical approach. Fourier analysis verifies that difference-frequency effects create the unstable roll motion. The occurrence of unstable roll motion in bichromatic waves is also observed in the experiment.

  18. Comparing structural decomposition analysis and index

    International Nuclear Information System (INIS)

    Hoekstra, Rutger; Van den Bergh, Jeroen C.J.M.

    2003-01-01

    To analyze and understand historical changes in economic, environmental, employment or other socio-economic indicators, it is useful to assess the driving forces or determinants that underlie these changes. Two techniques for decomposing indicator changes at the sector level are structural decomposition analysis (SDA) and index decomposition analysis (IDA). For example, SDA and IDA have been used to analyze changes in indicators such as energy use, CO₂ emissions, labor demand and value added. The changes in these variables are decomposed into determinants such as technological, demand, and structural effects. SDA uses information from input-output tables while IDA uses aggregate data at the sector level. The two methods have developed quite independently, which has resulted in each method being characterized by specific, unique techniques and approaches. This paper has three aims. First, the similarities and differences between the two approaches are summarized. Second, the possibility of transferring specific techniques and indices is explored. Finally, a numerical example is used to illustrate differences between the two approaches
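
    As a compact illustration of the IDA side of this comparison, a minimal additive LMDI-style example decomposing a change in sectoral energy use E = activity × intensity into activity and intensity effects; the figures are invented.

    ```python
    import numpy as np

    def logmean(a, b):
        return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

    Q0, Q1 = np.array([100.0, 50.0]), np.array([120.0, 60.0])   # sector activity
    I0, I1 = np.array([0.50, 0.80]), np.array([0.45, 0.70])     # energy intensity
    E0, E1 = Q0 * I0, Q1 * I1                                   # sector energy use

    L = logmean(E1, E0)
    activity_effect = float((L * np.log(Q1 / Q0)).sum())
    intensity_effect = float((L * np.log(I1 / I0)).sum())
    # the two effects add up exactly to the total change in energy use
    assert np.isclose(activity_effect + intensity_effect, E1.sum() - E0.sum())
    print(activity_effect, intensity_effect)
    ```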

  19. A novel approach for fault detection and classification of the thermocouple sensor in Nuclear Power Plant using Singular Value Decomposition and Symbolic Dynamic Filter

    International Nuclear Information System (INIS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-01-01

    Highlights: • A novel approach to classify fault patterns using data-driven methods. • Application of a robust reconstruction method (SVD) to identify the faulty sensor. • Analysis of fault patterns for a large number of sensors using SDF with low time complexity. • An efficient data-driven model designed to reduce false and missed alarms. - Abstract: A mathematical model with two layers is developed using data-driven methods for thermocouple sensor fault detection and classification in Nuclear Power Plants (NPP). In the first layer, a Singular Value Decomposition (SVD) based method is applied to detect the faulty sensor from a data set of all sensors. In the second layer, the Symbolic Dynamic Filter (SDF) is employed to classify the fault pattern. If the SVD stage raises a false fault, it is re-evaluated by the SDF, i.e., the model has two layers of checking to balance false alarms. The proposed fault detection and classification method is compared with Principal Component Analysis. Two case studies are taken from the Fast Breeder Test Reactor (FBTR) to prove the efficiency of the proposed method.
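
    A minimal sketch of the first layer's idea: rank-k SVD reconstruction of a multi-sensor record, flagging the sensor with the largest residual; the data and the injected fault are synthetic, and the SDF layer is not reproduced.

    ```python
    import numpy as np

    def faulty_sensor(X, k=1):
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        Xhat = (U[:, :k] * s[:k]) @ Vt[:k]         # rank-k (common-mode) reconstruction
        residual = np.abs(Xc - Xhat).mean(axis=0)  # per-sensor mean residual
        return int(residual.argmax()), residual

    rng = np.random.default_rng(6)
    base = rng.normal(size=(500, 1))               # common process seen by all sensors
    X = base + 0.05 * rng.normal(size=(500, 8))    # 8 correlated thermocouples
    X[200:, 3] += 0.5                              # inject a bias fault in sensor 3
    print(faulty_sensor(X)[0])                     # flags index 3
    ```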

  20. Light-quarkonium spectra and orbital-angular-momentum decomposition in a Bethe-Salpeter-equation approach

    Energy Technology Data Exchange (ETDEWEB)

    Hilger, T.; Krassnigg, A. [University of Graz, NAWI Graz, Institute of Physics, Graz (Austria); Gomez-Rocha, M. [ECT*, Villazzano, Trento (Italy)

    2017-09-15

    We investigate the light-quarkonium spectrum using a covariant Dyson-Schwinger-Bethe-Salpeter-equation approach to QCD. We discuss splittings among as well as orbital-angular-momentum properties of various states in detail and analyze common features of mass splittings with regard to properties of the effective interaction. In particular, we predict the mass of exotic s̄s 1⁻⁺ states, and identify the orbital-angular-momentum content in the excitations of the ρ meson. Comparing our covariant model results, the ρ and its second excitation being predominantly S-wave, the first excitation being predominantly D-wave, to corresponding conflicting lattice-QCD studies, we investigate the pion-mass dependence of the orbital-angular-momentum assignment and find a crossing at a scale of m_π ≈ 1.4 GeV. If this crossing turns out to be a feature of the spectrum generated by lattice-QCD studies as well, it may reconcile the different results, since they have been obtained at different values of m_π. (orig.)

  1. Antecedents of Organisational Creativity: A Multi-Level Approach

    Directory of Open Access Journals (Sweden)

    Ritu Gupta

    2016-06-01

    Full Text Available The purpose of this literature review is to provide a better understanding of the antecedents of organisational creativity with a multi-level approach. Organisational creativity is a sum total of the creativity accounted for by the individual employees of the organisation, the cumulative creativity of a team or group and creativity arising out of different structural components of an organisation. Some of the antecedents identified from the literature include personality, intrinsic motivation, group cohesion, social inhibition, cognitive interference, leader member exchange, organisational culture and climate, amongst others at individual, group and organisational level. Based on the literature review, suggestions for future research and research propositions have been proposed.

  2. Threshold Levels of Infant and Under-Five Mortality for Crossover between Life Expectancies at Ages Zero, One and Five in India: A Decomposition Analysis.

    Science.gov (United States)

    Dubey, Manisha; Ram, Usha; Ram, Faujdar

    2015-01-01

    Under the prevailing conditions of an imbalanced life table and historic gender discrimination in India, our study examines the crossover between life expectancies at ages zero, one and five years for India and quantifies the relative share of infant and under-five mortality towards this crossover. We estimate threshold levels of infant and under-five mortality required for crossover using age-specific death rates during 1981-2009 for 16 Indian states by sex (comprising 90% of India's population in 2011). Kitagawa decomposition equations were used to analyse the relative share of infant and under-five mortality towards the crossover. India experienced the crossover between life expectancies at ages zero and five in 2004 for men and in 2009 for women; eleven and nine Indian states have experienced this crossover for men and women, respectively. Men usually experienced the crossover four years earlier than women. Improvements in mortality below age five have mostly contributed towards this crossover. Life expectancy at age one exceeds that at age zero for both men and women in India except in Kerala (the only state to experience this crossover, in 2000 for men and 1999 for women). For India, using life expectancy at age zero and the under-five mortality rate together may be more meaningful for measuring the overall health of its people until the crossover. The delayed crossover for women, despite their higher life expectancy at birth than men, reiterates that Indian women are still disadvantaged, and hence the use of life expectancies at ages zero, one and five becomes important for India. Greater programmatic efforts to control the leading causes of death during the first month and at 1-59 months in high child-mortality areas can help India attain this crossover earlier.
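
    A minimal sketch of the Kitagawa decomposition used above, splitting the difference between two crude rates into a rate effect and a composition effect; the rates and population shares below are illustrative, not Indian data.

    ```python
    import numpy as np

    def kitagawa(rates_a, rates_b, shares_a, shares_b):
        rate_effect = (((shares_a + shares_b) / 2) * (rates_a - rates_b)).sum()
        comp_effect = (((rates_a + rates_b) / 2) * (shares_a - shares_b)).sum()
        return rate_effect, comp_effect            # their sum equals the crude gap

    rates_a = np.array([0.040, 0.002, 0.015])      # ages <1, 1-4, 5+ (population A)
    rates_b = np.array([0.060, 0.004, 0.015])      # population B
    shares_a = np.array([0.02, 0.08, 0.90])        # age composition of A
    shares_b = np.array([0.03, 0.10, 0.87])        # age composition of B

    crude_gap = (rates_a * shares_a).sum() - (rates_b * shares_b).sum()
    re, ce = kitagawa(rates_a, rates_b, shares_a, shares_b)
    assert np.isclose(re + ce, crude_gap)
    print(re, ce)
    ```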

  3. Threshold Levels of Infant and Under-Five Mortality for Crossover between Life Expectancies at Ages Zero, One and Five in India: A Decomposition Analysis.

    Directory of Open Access Journals (Sweden)

    Manisha Dubey

    Full Text Available Under the prevailing conditions of an imbalanced life table and historic gender discrimination in India, our study examines the crossover between life expectancies at ages zero, one and five years for India and quantifies the relative share of infant and under-five mortality towards this crossover. We estimate threshold levels of infant and under-five mortality required for crossover using age-specific death rates during 1981-2009 for 16 Indian states by sex (comprising 90% of India's population in 2011). Kitagawa decomposition equations were used to analyse the relative share of infant and under-five mortality towards the crossover. India experienced the crossover between life expectancies at ages zero and five in 2004 for men and in 2009 for women; eleven and nine Indian states have experienced this crossover for men and women, respectively. Men usually experienced the crossover four years earlier than women. Improvements in mortality below age five have mostly contributed towards this crossover. Life expectancy at age one exceeds that at age zero for both men and women in India except in Kerala (the only state to experience this crossover, in 2000 for men and 1999 for women). For India, using life expectancy at age zero and the under-five mortality rate together may be more meaningful for measuring the overall health of its people until the crossover. The delayed crossover for women, despite their higher life expectancy at birth than men, reiterates that Indian women are still disadvantaged, and hence the use of life expectancies at ages zero, one and five becomes important for India. Greater programmatic efforts to control the leading causes of death during the first month and at 1-59 months in high child-mortality areas can help India attain this crossover earlier.

  4. Interconnected levels of multi-stage marketing: A triadic approach

    OpenAIRE

    Vedel, Mette; Geersbro, Jens; Ritter, Thomas

    2012-01-01

    Multi-stage marketing gains increasing attention as knowledge of and influence on the customer's customer become more critical for the firm's success. Despite this increasing managerial relevance, systematic approaches for analyzing multi-stage marketing are still missing. This paper conceptualizes different levels of multi-stage marketing and illustrates these stages with a case study. In addition, a triadic perspective is introduced as an analytical tool for multi-stage marketing research. ...

  5. Prospects for regional safeguards systems - State-level Approach

    International Nuclear Information System (INIS)

    Peixoto, O.J.M.

    2013-01-01

    Increased co-operation with Regional Safeguards Systems (RSAC) is a relevant tool for strengthening the effectiveness and improving the efficiency of international safeguards. The new safeguards system that emerges from the application of the Additional Protocol (INFCIRC/540) and the full use of the State-level Concept is both a challenge and an opportunity for effectively incorporating RSAC into the international safeguards scheme. The challenge is to determine how the co-operation and coordination will be implemented in this new safeguards scheme. This paper presents some discussions and prospects on the issues to be faced by RSAC and the IAEA during the implementation of the State-level Approach (SLA) using all information available. It also discusses how different levels of co-operation could be achieved when the SLA is applied by IAEA safeguards. The paper is followed by the slides of the presentation. (authors)

  6. An approach for the accurate measurement of social morality levels.

    Science.gov (United States)

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the "5∶1 rewards-to-punishment rule," which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials.

  7. Short-term cover crop decomposition in organic and conventional soils: Soil microbial and nutrient cycling indicator variables associated with different levels of soil suppressiveness to Pythium aphanidermatum

    NARCIS (Netherlands)

    Grünwald, N.J.; Hu, S.; Bruggen, van A.H.C.

    2000-01-01

    Stages of oat–vetch cover crop decomposition were characterized over time in terms of carbon and nitrogen cycling, microbial activity and community dynamics in organically and conventionally managed soils in a field experiment and a laboratory incubation experiment. We subsequently determined which

  8. Multi-domain/multi-method numerical approach for neutron transport equation; Couplage de methodes et decomposition de domaine pour la resolution de l'equation du transport des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E

    2004-12-15

    A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together in a whole-core calculation: a variational nodal method, a discrete ordinates nodal method and a method of characteristics. These new developments permit the use of independent spatial and angular expansions and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a modelling flexibility that is not available in today's codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method brings more accuracy along with an important reduction in computing time.

  9. Erbium hydride decomposition kinetics.

    Energy Technology Data Exchange (ETDEWEB)

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
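
    A minimal sketch of a first-order Redhead analysis, using the standard approximation E = R·Tp·(ln(ν·Tp/β) − 3.64) with an assumed prefactor ν = 1e13 s⁻¹; the peak temperature and heating rate below are illustrative, not the report's data.

    ```python
    import numpy as np

    R = 1.987e-3                                   # gas constant, kcal/(mol K)

    def redhead_energy(Tp, beta, nu=1e13):
        # Tp: TDS peak temperature (K); beta: heating rate (K/s); nu: prefactor (1/s)
        return R * Tp * (np.log(nu * Tp / beta) - 3.64)

    # e.g. a desorption peak at 1050 K with a 2 K/s ramp
    print(redhead_energy(Tp=1050.0, beta=2.0))     # activation energy in kcal/mol
    ```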

  10. A systems-level approach for investigating organophosphorus pesticide toxicity.

    Science.gov (United States)

    Zhu, Jingbo; Wang, Jing; Ding, Yan; Liu, Baoyue; Xiao, Wei

    2018-03-01

    A full understanding of the single and joint toxicity of the variety of organophosphorus (OP) pesticides is still unavailable because of their extremely complex mechanisms of action. This study established a systems-level approach, based on systems toxicology, to investigate OP pesticide toxicity by incorporating ADME/T properties, protein prediction, and network and pathway analysis. The results showed that most OP pesticides are highly toxic according to the ADME/T parameters and can interact with significant receptor proteins to cooperatively lead to various diseases, as shown by the established OP pesticide-protein and protein-disease networks. Furthermore, the finding that multiple OP pesticides potentially act on the same receptor proteins and/or on functionally diverse proteins explains how multiple OP pesticides can act synergistically or additively at a molecular/systems level. Finally, the integrated pathways revealed the mechanism of toxicity of the interaction of OP pesticides and elucidated the pathogenesis induced by OP pesticides. This study demonstrates a systems-level approach for investigating OP pesticide toxicity that can be further applied to risk assessments of various toxins, which is of significant interest to food security and environmental protection. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach, the proposed method elucidates the conceptual understanding of what goes on and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by 'attributes', involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization

  12. Approaches to emergency management teaching at the master's level.

    Science.gov (United States)

    Alexander, David

    2013-01-01

    Training and education enable emergency managers to deal with complex situations, create durable networks of people with appropriate expertise, and ensure that knowledge is utilized to improve resilience in the face of disaster risk. Although there is a discrete literature on emergency management training, few attempts have been made to create an overview that discusses the key issues and proposes a standardized approach. This article examines the nature of training and education in emergency and disaster management. It analyzes the composition and requirements of courses at the master's degree level, which is considered to be the most appropriate tier for in-depth instruction in this field. This article defines "training" and "education" in the context of emergency management courses. It reviews the developing profile of the emergency manager in the light of training requirements. This article examines the question of whether emergency management is a branch of management science or whether it is something distinct and separate. Attention is given to the composition of a core curriculum and to the most appropriate pedagogical forms of delivering it. The article reviews the arguments for and against standardization of the curriculum and describes some of the pedagogical methods for delivering courses. Briefly, it considers the impact on training and education of new pedagogic methods based on information technology. It is concluded that the master's level is particularly suited to emergency and crisis management education, as it enables students to complement the in-depth knowledge they acquired in their disciplinary first degrees with a broader synthetic approach at the postgraduate level. Some measures of standardization of course offerings are desirable, in favor of creating a core curriculum that will ensure that essential core knowledge is imparted. Education and training in this field should include problem-solving approaches that enable students to learn

  13. Attention-level transitory response: a novel hybrid BCI approach

    Science.gov (United States)

    Diez, Pablo F.; Garcés Correa, Agustina; Orosco, Lorena; Laciar, Eric; Mut, Vicente

    2015-10-01

    Objective. People with disabilities may control devices such as a computer or a wheelchair by means of a brain-computer interface (BCI). A BCI based on steady-state visual evoked potentials (SSVEP) requires visual stimulation of the user. However, such SSVEP-based BCIs suffer from the 'Midas touch effect': the BCI can detect an SSVEP even when the user is not gazing at the stimulus. These incorrect detections deteriorate the performance of the system, especially in asynchronous BCIs, because ongoing EEG is classified. In this paper, a novel transitory response of the user's attention level is reported and used to develop a hybrid BCI (hBCI). Approach. Three methods are proposed to detect the user's attention level, based on the alpha rhythm and the theta/beta ratio. The proposed hBCI scheme is presented along with these methods. The hBCI sends a command only when the user is at a high level of attention, in other words, when the user is really focused on the task being performed. The hBCI was tested on two different EEG datasets. Main results. The performance of the hybrid approach is superior to the standard one; improvements of 20% in accuracy and 10 bits min⁻¹ are reported. Moreover, the attention level is extracted from the same EEG channels used for SSVEP detection, so no extra hardware is needed. Significance. A transitory response of the EEG signal is used to develop the attention-SSVEP hBCI, which is capable of reducing the Midas touch effect.
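
    A minimal sketch of one of the attention measures mentioned, the theta/beta power ratio from a single EEG channel, assuming SciPy; the sampling rate, threshold and data are placeholders.

    ```python
    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def band_power(x, fs, lo, hi):
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))   # power spectral density
        mask = (f >= lo) & (f <= hi)
        return trapezoid(pxx[mask], f[mask])

    def theta_beta_ratio(x, fs=256):
        return band_power(x, fs, 4, 8) / band_power(x, fs, 13, 30)

    rng = np.random.default_rng(7)
    eeg = rng.normal(size=10 * 256)                     # 10 s of synthetic EEG
    attentive = theta_beta_ratio(eeg, fs=256) < 1.0     # lower ratio -> more attentive
    print(attentive)
    ```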

  14. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...

  15. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  16. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic FeS₂) has been investigated using X-ray diffraction and ⁵⁷Fe Mössbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  17. Multimodal biometric system using rank-level fusion approach.

    Science.gov (United States)

    Monwar, Md Maruf; Gavrilova, Marina L

    2009-08-01

    In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove to be highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single-biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system possesses a number of unique qualities, from utilizing principal component analysis and Fisher's linear discriminant methods for identity authentication in the individual matchers (face, ear, and signature) to utilizing a novel rank-level fusion method to consolidate the results obtained from the different biometric matchers. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
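
    A minimal sketch of rank-level fusion with the Borda count, one of the three combination rules mentioned; the matcher rankings are invented.

    ```python
    import numpy as np

    def borda_fusion(ranks):
        ranks = np.asarray(ranks)                  # shape: (matchers, candidates)
        n = ranks.shape[1]
        scores = (n - ranks).sum(axis=0)           # each matcher gives n - rank points
        return np.argsort(-scores)                 # fused candidate ordering

    face = [1, 3, 2, 4]                            # rank of candidates 0..3 per matcher
    ear = [2, 1, 3, 4]
    signature = [1, 2, 4, 3]
    print(borda_fusion([face, ear, signature]))    # candidate 0 wins overall
    ```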

  18. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
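
    A minimal sketch of estimating first-order variance-based sensitivities with a pick-freeze (Saltelli-type) estimator, in the spirit of the Sobol-Hoeffding decomposition discussed above; the toy model and sample sizes are illustrative, and the paper's Poisson-process reformulation is not reproduced.

    ```python
    import numpy as np

    def sobol_first_order(f, d, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.concatenate([fA, fB]).var()
        S = []
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                    # swap in column i only
            S.append(np.mean(fB * (f(ABi) - fA)) / var)
        return S

    f = lambda X: X[:, 0] + 2.0 * X[:, 1]          # toy model, independent U(0,1) inputs
    print(sobol_first_order(f, d=2))               # ~[0.2, 0.8] analytically
    ```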

  19. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  20. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  1. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  3. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability

  4. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation

    Directory of Open Access Journals (Sweden)

    Phuoc Tran

    2016-01-01

    Full Text Available Chinese and Vietnamese are both isolating languages; that is, words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into a different language (typically English) and vice versa. However, it is a matter for consideration whether words should be segmented when translating between two languages in which spaces are not used between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse-data problem is evident in the translation system of this language pair. Therefore, whether the text should be segmented while translating becomes all the more important. In this paper, we propose a new method for translating Chinese to Vietnamese based on a combination of the advantages of character-level and word-level translation. In addition, a hybrid approach that combines statistics and rules is used for translation at the word level, while at the character level a statistical translation is used. The experimental results showed that our method improved the performance of machine translation over that of character-level or word-level translation alone.

  5. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation.

    Science.gov (United States)

    Tran, Phuoc; Dinh, Dien; Nguyen, Hien T

    2016-01-01

    Chinese and Vietnamese are both isolating languages; that is, words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into a different language (typically English) and vice versa. However, it is a matter for consideration whether words should be segmented when translating between two languages in which spaces are not used between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse-data problem is evident in the translation system of this language pair. Therefore, whether the text should be segmented while translating becomes all the more important. In this paper, we propose a new method for translating Chinese to Vietnamese based on a combination of the advantages of character-level and word-level translation. In addition, a hybrid approach that combines statistics and rules is used for translation at the word level, while at the character level a statistical translation is used. The experimental results showed that our method improved the performance of machine translation over that of character-level or word-level translation alone.

  6. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    Full Text Available The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD)-like decomposition of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute the eigenvalues of structured matrices such as the Hamiltonian matrix JAA^T.

  7. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    International Nuclear Information System (INIS)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T.

    2014-01-01

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N₂. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire the spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N₂, NH₃, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics to confirm the identity of the species observed in experiments and to identify the elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N₂

  8. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    This presentation analyses the azimuthal decomposition of optical modes. Decomposing azimuthal modes requires two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...
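
    The abstract is truncated, but the decomposition it refers to has a simple numerical analogue: on a ring of fixed radius, the azimuthal mode weights are Fourier coefficients in the angle. A hedged sketch with a synthetic field, not the ring-slit optical implementation:

```python
import numpy as np

# Azimuthal decomposition of a field sampled on a ring:
# c_l = (1/2π) ∮ E(φ) exp(-ilφ) dφ, evaluated here with an FFT.
N = 256
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
E = 0.8 * np.exp(1j * 2 * phi) + 0.3 * np.exp(-1j * 5 * phi)   # synthetic beam

c = np.fft.fft(E) / N                            # mode weight in bin l (mod N)
l = np.rint(np.fft.fftfreq(N) * N).astype(int)   # signed azimuthal indices
for mode in (2, -5):
    print(mode, round(abs(c[l == mode][0]), 3))  # -> 0.8 and 0.3
```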

  9. Empirically Examining the Performance of Approaches to Multi-Level Matching to Study the Effect of School-Level Interventions

    Science.gov (United States)

    Hallberg, Kelly; Cook, Thomas D.; Figlio, David

    2013-01-01

    The goal of this paper is to provide guidance for applied education researchers in using multi-level data to study the effects of interventions implemented at the school level. Two primary approaches are currently employed in observational studies of the effect of school-level interventions. One approach employs intact school matching: matching…

  10. Robot Learning from Demonstration: A Task-level Planning Approach

    Directory of Open Access Journals (Sweden)

    Staffan Ekvall

    2008-09-01

    In this paper, we deal with the problem of learning by demonstration, task-level learning and planning for robotic applications that involve object manipulation. Preprogramming robots for the execution of complex domestic tasks such as setting a dinner table is of little use, since the same order of subtasks may not be feasible at run time due to the changed state of the world. In our approach, we aim to learn the goal of the task and use a task planner to reach the goal given different initial states of the world. For some tasks there are underlying constraints that must be fulfilled, and knowing just the final goal is not sufficient. We propose two techniques for constraint identification. In the first case, the teacher can directly instruct the system about the underlying constraints. In the second case, the constraints are identified by the robot itself based on multiple observations. The constraints are then considered in the planning phase, allowing the task to be executed without violating any of them. We evaluate our work on a real robot performing pick-and-place tasks.

  11. A framework for bootstrapping morphological decomposition

    CSIR Research Space (South Africa)

    Joubert, LJ

    2004-11-01

    The need for a bootstrapping approach to the morphological decomposition of words in agglutinative languages such as isiZulu is motivated, and the complexities of such an approach are described. The authors then introduce a generic framework which...

  12. A first level trigger approach for the CBM experiment

    International Nuclear Information System (INIS)

    Steinle, Christian Alexander

    2012-01-01

    The CBM experiment places very demanding constraints on the first level trigger, so that no conventional trigger is applicable. Hence a fast trigger algorithm, aimed at implementation in reconfigurable hardware, had to be developed to fulfil all requirements of the experiment. The algorithm is based on the generalized Hough transform, which is already used in several other experiments. This global tracking method transforms all particle interaction points measured in the detector stations, by means of a defined formula, into a parameter space corresponding to the momentum of the particle tracks. This formula is developed specifically for the CBM environment and thus defines the core of the applied three-dimensional Hough transform. Since the main focus is on achieving the required data throughput, the complex formula evaluations are outsourced into precomputed look-up tables. This also makes it possible to use any other sufficiently precise method, for example a fourth-order Runge-Kutta integration, to compute these look-up tables, because that computation is done offline and has no effect on the Hough transform's processing speed. For algorithm simulation purposes the CBMROOT framework provides the module 'track', written in the programming language C++. This module includes many analyses for determining algorithm parameters, some of which can be executed automatically, as well as analyses for measuring the algorithm's quality and for rating each partial step of the algorithm. Consequently, the milestone of a customizable level-one tracking algorithm, which can be used without any specific knowledge, has now been reached. Besides this, the investigated concepts are explicitly considered in the...
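
    The essence of the look-up-table Hough transform is easy to see in a toy setting. The sketch below votes over straight-line track slopes with a precomputed table of predicted hit positions; it is a deliberately simplified 2D stand-in, not the three-dimensional momentum-space transform developed for CBM:

```python
import numpy as np

# Toy Hough transform with a precomputed look-up table (LUT): each detector
# hit votes for all track parameters (here: slopes of straight tracks from
# the origin) that could have produced it; accumulator peaks are tracks.
n_slopes = 200
slopes = np.linspace(-1.0, 1.0, n_slopes)       # candidate track parameters

stations_z = np.array([1.0, 2.0, 3.0])          # detector station positions
lut = np.outer(stations_z, slopes)              # LUT[i, j]: predicted x at station i

hits = [(0, 0.40), (1, 0.80), (2, 1.20)]        # (station index, measured x)
accumulator = np.zeros(n_slopes, dtype=int)
tol = 0.02                                      # hit/track association window
for i, x in hits:
    accumulator += np.abs(lut[i] - x) < tol     # vote for compatible slopes

best = slopes[accumulator.argmax()]
print(f"best slope ≈ {best:.3f}, votes = {accumulator.max()}")  # ≈ 0.4, 3 votes
```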

  13. A first level trigger approach for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Steinle, Christian Alexander

    2012-07-01

    The CBM experiment places very demanding constraints on the first level trigger, so that no conventional trigger is applicable. Hence a fast trigger algorithm, aimed at implementation in reconfigurable hardware, had to be developed to fulfil all requirements of the experiment. The algorithm is based on the generalized Hough transform, which is already used in several other experiments. This global tracking method transforms all particle interaction points measured in the detector stations, by means of a defined formula, into a parameter space corresponding to the momentum of the particle tracks. This formula is developed specifically for the CBM environment and thus defines the core of the applied three-dimensional Hough transform. Since the main focus is on achieving the required data throughput, the complex formula evaluations are outsourced into precomputed look-up tables. This also makes it possible to use any other sufficiently precise method, for example a fourth-order Runge-Kutta integration, to compute these look-up tables, because that computation is done offline and has no effect on the Hough transform's processing speed. For algorithm simulation purposes the CBMROOT framework provides the module 'track', written in the programming language C++. This module includes many analyses for determining algorithm parameters, some of which can be executed automatically, as well as analyses for measuring the algorithm's quality and for rating each partial step of the algorithm. Consequently, the milestone of a customizable level-one tracking algorithm, which can be used without any specific knowledge, has now been reached. Besides this, the investigated concepts are explicitly considered in the...

  15. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (λ = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining these conversions, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglecting the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  16. Fate of mercury in tree litter during decomposition

    Directory of Open Access Journals (Sweden)

    A. K. Pokharel

    2011-09-01

    We performed a controlled laboratory litter incubation study to assess changes in dry mass, carbon (C) mass and concentration, mercury (Hg) mass and concentration, and stoichiometric relations between elements during decomposition. Twenty-five surface litter samples each, collected from four forest stands, were placed in incubation jars open to the atmosphere, and were harvested sequentially at 0, 3, 6, 12, and 18 months. Using a mass balance approach, we observed significant mass losses of Hg during decomposition (5 to 23 % of initial mass after 18 months), which we attribute to gaseous losses of Hg to the atmosphere through a gas-permeable filter covering the incubation jars. Percentage mass losses of Hg generally were less than the observed dry mass and C mass losses (48 to 63 % Hg loss per unit dry mass loss), although one litter type showed similar losses. A field control study using the same litter types exposed at the original collection locations for one year showed that field litter samples were enriched in Hg concentrations by 8 to 64 % compared to samples incubated for the same time period in the laboratory, indicating strong additional sorption of Hg in the field, likely from atmospheric deposition. Solubility of Hg, assessed by exposure of litter to water upon harvest, was very low (<0.22 ng Hg g−1 dry mass) and decreased with increasing stage of decomposition for all litter types. Our results indicate potentially large gaseous emissions, or re-emissions, of Hg originally associated with plant litter upon decomposition. Results also suggest that Hg accumulation in litter and surface layers in the field is driven mainly by additional sorption of Hg, with minor contributions from "internal" accumulation due to preferential loss of C over Hg. Litter types showed highly species-specific differences in Hg levels during decomposition, suggesting that emissions, retention, and sorption of Hg are dependent on litter type.
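
    The mass-balance bookkeeping behind the reported percentages reduces to simple ratios; a minimal sketch with hypothetical numbers, not the study's measurements:

```python
# Fractional Hg loss and Hg loss per unit dry-mass loss for one litter sample.
# All values are invented, to illustrate the mass-balance arithmetic only.
dry0, dry18 = 10.00, 6.50        # g dry mass at 0 and 18 months
hg0, hg18 = 520.0, 430.0         # ng Hg (concentration times dry mass)

hg_loss = 1.0 - hg18 / hg0       # fraction of initial Hg mass lost
dry_loss = 1.0 - dry18 / dry0    # fraction of initial dry mass lost
print(f"Hg loss: {hg_loss:.0%}; per unit dry-mass loss: {hg_loss / dry_loss:.0%}")
```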

  17. Sea level rise and the geoid: factor analysis approach

    OpenAIRE

    Song, Hongzhi; Sadovski, Alexey; Jeffress, Gary

    2013-01-01

    Sea levels are rising around the world, and this is a particular concern along most of the coasts of the United States. A 1989 EPA report shows that sea levels rose 5-6 inches more than the global average along the Mid-Atlantic and Gulf Coasts in the last century. The main reason for this is coastal land subsidence. This sea level rise is considered more as relative sea level rise than global sea level rise. Thus, instead of studying sea level rise globally, this paper describes a statistical...

  18. A green approach towards adoption of chemical reaction model on 2,5-dimethyl-2,5-di-(tert-butylperoxy)hexane decomposition by differential isoconversional kinetic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Das, Mitali; Shu, Chi-Min, E-mail: shucm@yuntech.edu.tw

    2016-01-15

    Highlights: • Thermally degraded DBPH products are identified. • An appropriate mathematical model was selected for the decomposition study. • Differential isoconversional analysis was performed to obtain kinetic parameters. • Simulation on a thermal analysis model was conducted to find the best storage conditions. - Abstract: This study investigated the thermal degradation products of 2,5-dimethyl-2,5-di-(tert-butylperoxy)hexane (DBPH) by TG/GC/MS to identify runaway reaction and thermal safety parameters. It also included the determination of the time to maximum rate under adiabatic conditions (TMRad) and the self-accelerating decomposition temperature, obtained using Advanced Kinetics and Technology Solutions. The apparent activation energy (Ea) was calculated by the differential isoconversional kinetic analysis method from differential scanning calorimetry experiments. The Ea value obtained by Friedman analysis is in the range of 118.0–149.0 kJ mol−1. The TMRad was 24.0 h with an apparent onset temperature of 82.4 °C. This study has also established an efficient benchmark for a thermal hazard assessment of DBPH that can be applied to assure safer storage conditions.
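
    Friedman's differential isoconversional method fits, at a fixed conversion α, ln(dα/dt) against 1/T across runs; the slope gives −Ea/R. A minimal sketch with hypothetical DSC readings, not the paper's data:

```python
import numpy as np

# Friedman analysis at one conversion level α: the slope of ln(dα/dt) vs 1/T
# equals -Ea/R. Temperatures and rates below are invented for illustration.
R = 8.314                                  # J mol^-1 K^-1
T = np.array([420.0, 430.0, 440.0])        # K, at α = 0.5 in three runs
rate = np.array([1.1e-4, 2.4e-4, 5.0e-4])  # dα/dt in s^-1 at the same α

slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * R / 1000.0                   # kJ mol^-1
print(f"apparent activation energy ≈ {Ea:.0f} kJ/mol")   # ≈ 116 kJ/mol
```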

  19. A Knowledge Level Approach to Collaborative Problem Solving

    OpenAIRE

    Jennings, N. R.

    1992-01-01

    This paper proposes, characterizes and outlines the benefits of a new computer level specifically for multi-agent problem solvers. This level is called the cooperation knowledge level and involves describing and developing richer and more explicit models of common social phenomena. We then focus on one particular form of social interaction in which groups of agents decide they wish to work together, in a collaborative manner, to tackle a common problem. A domain independent model (called join...

  20. Spectral Tensor-Train Decomposition

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.

    2016-01-01

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT...... adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online (http://pypi.python.org/pypi/TensorToolbox/)....
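
    The spectral construction in this record extends the discrete tensor-train format; the discrete TT-SVD it builds on can be sketched in a few lines (the simple relative truncation threshold eps is an assumption of this sketch):

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    """Decompose an order-d array into tensor-train cores via sequential SVDs."""
    dims = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))          # truncated TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))   # core G_k
        mat = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

# Round-trip check: contract the cores back into the full tensor.
X = np.random.default_rng(1).standard_normal((4, 5, 6))
cores = tt_svd(X)
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=([-1], [0]))
print(np.allclose(rec.reshape(X.shape), X))   # True
```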

  1. Regional Approach to Building Operational Level Capacity for ...

    African Journals Online (AJOL)

    In order to strengthen public health disaster management capacities at the operational level in six countries of the Eastern Africa region, the USAID-funded leadership project worked through the HEALTH Alliance, a network of seven schools of public health from six countries in the region to train district-level teams.

  2. Rostering from staffing levels: a branch-and-price approach

    NARCIS (Netherlands)

    van der Veen, Egbert; Veltman, Bart

    Many rostering methods first create shifts from given staffing levels, and then create rosters from the set of created shifts. Although such a method has some nice properties, it also has some drawbacks. In this paper we outline a method that creates rosters directly from staffing levels.

  3. Understanding high-level radwaste disposal through the historical approach

    International Nuclear Information System (INIS)

    Cramer, E.N.

    1993-01-01

    As an example of different needs in communication to persons other than physical scientists, a teacher needs an approach somewhere between assigning the book Understanding Radioactive Waste to an advanced student for review and administering to a science class the several-week module "Science, Society and America's Nuclear Waste." When a nuclear professional reviews for the class the lengthy US research and development (R&D) program, a broader approach is provided that can focus on geology or management aspects, or can be made suitable for middle and elementary schools' earth or environmental science studies.

  4. Developing a Dual-Level Capabilities Approach: Using Constructivist Grounded Theory and Feminist Ethnography to Enhance the Capabilities Approaches

    Science.gov (United States)

    Hall, Kia M. Q.

    2014-01-01

    In this study, a dual-level capabilities approach to development is introduced. This approach intends to improve upon individual-focused capabilities approaches developed by Amartya Sen and Martha Nussbaum. Based upon seven months of ethnographic research in the Afro-descendant, autochthonous Garifuna community of Honduras, constructivist grounded…

  5. An approach for determining the acceptable levels of nuclear risk

    International Nuclear Information System (INIS)

    1978-03-01

    The objective of this study was to develop a methodology for determining the acceptable levels of risk with respect to nuclear energy. It was concluded that the Atomic Energy Control Board should identify the interest groups that affect its choice of an acceptable level of risk, determine their expectations, and balance the expectations of the various groups such that the resulting acceptable level of risk is still acceptable to the Board. This would be done by interviewing experts on the subject of nuclear safety, developing and pretesting a public questionnaire, and surveying the public on acceptable cost-risk combinations

  6. Model ecosystem approach to estimate community level effects of radiation

    Energy Technology Data Exchange (ETDEWEB)

    Masahiro, Doi; Nobuyuki, Tanaka; Shoichi, Fuma; Nobuyoshi, Ishii; Hiroshi, Takeda; Zenichiro, Kawabata [National Institute of Radiological Sciences, Environmental and Toxicological Sciences Research Group, Chiba (Japan)

    2004-07-01

    A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that incorporates demographic stochasticity and environmental stochasticity by dividing the aquatic environment into patches. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level, in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) related to the probability of population extinction. (author)

  7. Model ecosystem approach to estimate community level effects of radiation

    International Nuclear Information System (INIS)

    Masahiro, Doi; Nobuyuki, Tanaka; Shoichi, Fuma; Nobuyoshi, Ishii; Hiroshi, Takeda; Zenichiro, Kawabata

    2004-01-01

    A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that incorporates demographic stochasticity and environmental stochasticity by dividing the aquatic environment into patches. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level, in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) related to the probability of population extinction. (author)

  8. Transgenic approaches in potato: effects on glycoalkaloids levels

    African Journals Online (AJOL)

    Sayyar

    2013-02-20

    ... effects of plant transformation on known toxic compounds has been an area of immense interest. Compositional ... cells carrying rDNA molecules are induced to regenerate ... Glycoalkaloids in toxic levels disrupt membrane...

  9. Managing low-level radioactive wastes: a proposed approach

    International Nuclear Information System (INIS)

    1980-08-01

    This document is a consensus report of the Low-Level Waste Strategy Task Force. It covers system-wide issues; generation, treatment, and packaging; transportation; and disposal. Recommendations are made.

  10. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  11. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  12. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
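
    For binary forms (n = 2) the Hankel characterization mentioned here is especially transparent: the moments of a rank-r decomposition fill a catalecticant (Hankel) matrix of rank r. A hedged numerical illustration with made-up points and weights:

```python
import numpy as np
from scipy.linalg import hankel

# Moments a_k = Σ_i w_i t_i^k of a rank-2 binary form f = Σ_i w_i (x + t_i y)^d.
# The Hankel (catalecticant) matrix built from them has rank 2, revealing the
# symmetric rank; the points t and weights w are invented for the example.
t = np.array([2.0, -1.0])
w = np.array([1.0, 3.0])
d = 4
a = np.array([np.sum(w * t**k) for k in range(d + 1)])

H = hankel(a[:3], a[2:])              # H[i, j] = a_{i+j}
print(np.linalg.matrix_rank(H))       # -> 2
```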

  13. Teaching Epidemiology at the Undergraduate Level: Considerations and Approaches.

    Science.gov (United States)

    Goldmann, Emily; Stark, James H; Kapadia, Farzana; McQueen, Matthew B

    2018-06-01

    The rapid growth in undergraduate public health education has offered training in epidemiology to an increasing number of undergraduate students. Epidemiology courses introduce undergraduate students to a population health perspective and provide opportunities for these students to build essential skills and competencies such as ethical reasoning, teamwork, comprehension of scientific methods, critical thinking, quantitative and information literacy, ability to analyze public health information, and effective writing and oral communication. Taking a varied approach and incorporating active learning and assessment strategies can help engage students in the material, improve comprehension of key concepts, and further develop key competencies. In this commentary, we present examples of how epidemiology may be taught in the undergraduate setting. Evaluation of these approaches and others would be a valuable next step.

  14. A systems biology approach for pathway level analysis

    OpenAIRE

    Draghici, Sorin; Khatri, Purvesh; Tarca, Adi Laurentiu; Amin, Kashyap; Done, Arina; Voichita, Calin; Georgescu, Constantin; Romero, Roberto

    2007-01-01

    A common challenge in the analysis of genomics data is trying to understand the underlying phenomenon in the context of all complex interactions taking place on various signaling pathways. A statistical approach using various models is universally used to identify the most relevant pathways in a given experiment. Here, we show that the existing pathway analysis methods fail to take into consideration important biological aspects and may provide incorrect results in certain situations. By usin...

  15. A Distributed Approach to System-Level Prognostics

    Science.gov (United States)

    2012-09-01

    the end of (useful) life (EOL) and/or the remaining useful life (RUL) of components, subsystems, or systems. The prognostics problem itself can be...system state estimate, computes EOL and/or RUL. In this paper, we focus on a model-based prognostics approach (Orchard & Vachtsevanos, 2009; Daigle...been focused on individual components, and determining their EOL and RUL, e.g., (Orchard & Vachtsevanos, 2009; Saha & Goebel, 2009; Daigle & Goebel

  16. Approaches of the competitiveness at the macroeconomic level

    Directory of Open Access Journals (Sweden)

    Viorica MIREA,

    2011-06-01

    In this article we approach competitiveness step by step. In the introduction we present some definitions of the concept, although there is no widely accepted definition of the term. We then present how this indicator can be measured and used in national and European strategies. Taking into account the current economic and financial crisis, we also present the measures approved by the Romanian Government at this stage and their effects on productivity and competitiveness.

  17. Approaches towards establishing of clearance levels in Japan

    International Nuclear Information System (INIS)

    Sumitani, N.; Watanabe, I.; Okoshi, M.

    1998-01-01

    It is important to establish the necessary regulatory systems for decommissioning waste management, and especially to establish clearance levels from regulatory control. To establish these regulatory systems, the Nuclear Safety Commission (NSC) has been discussing unconditional clearance levels for materials from nuclear reactors since May 1997. The NSC aims to derive unconditional clearance levels for materials, such as concrete and ferrous metal, arising from nuclear reactor decommissioning. In the derivation, both disposal and recycle/reuse of the materials are considered. Typical scenarios and parameter values for dose estimation are selected considering Japanese natural and social conditions. Preliminary clearance levels were derived from an individual dose criterion of 10 μSv/yr and deterministic analysis. For most radionuclides, the preliminary results are of the same order of magnitude as those recommended in IAEA-TECDOC-855. Some radionuclides, such as β emitters, however, differ by an order of magnitude from the values recommended in IAEA-TECDOC-855. It is necessary that international organizations lead the discussions on clearance levels in order to reach a final consensus. (author)

  18. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of an analysis of the decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%.

  19. Generalized decompositions of dynamic systems and vector Lyapunov functions

    Science.gov (United States)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  20. Levels of processing and Eye Movements: A Stimulus driven approach

    DEFF Research Database (Denmark)

    Mulvey, Fiona Bríd

    2014-01-01

    The aim of this research is to investigate the explication of levels of attention through eye movement parameters. Previous research from disparate fields has suggested that eye movements are related to cognitive processing; however, the exact nature of the relationship is unclear. Since eye movements can be controlled either by bottom-up stimulus properties or by top-down cognitive control, studies have compared eye movements in real-world tasks and searched for indicators of cognitive load or level of attention when task demands increase. Extracting the effects of cognitive processing on eye... ...to investigate individual differences in levels of processing within the normal population using existing constructs and tests of cognitive style. Study 4 investigates these stimuli and the eye movements of a clinical group with known interruption to the dorsal stream of processing, and subsequent isolated...

  1. Managing low-level radioactive wastes: a proposed approach

    International Nuclear Information System (INIS)

    1983-04-01

    Chapters are devoted to the following: introduction; a brief description of low-level radioactive wastes and their management; system-wide issues; waste reduction and packaging; transportation; disposal; issues for further study; and summary of recommendations. Nine appendices are included.

  2. An Empirical Approach to Determining Advertising Spending Level.

    Science.gov (United States)

    Sunoo, D. H.; Lin, Lynn Y. S.

    To assess the relationship between advertising and consumer promotion and to determine the optimal short-term advertising spending level for a product, a research project was undertaken by a major food manufacturer. One thousand homes subscribing to a dual-system cable television service received either no advertising exposure to the product or…

  3. A Situational Approach to Middle Level Teacher Leadership.

    Science.gov (United States)

    White, George P.; Greenwood, Scott C.

    2002-01-01

    Examines the emerging concept of teacher as leader in the classroom, and offers a useful framework for practice. Finds that to exercise situational leadership in the classroom, teachers vary supportive behavior and directive behavior in response to four levels of student task development: telling, consulting, participating, and delegating. (SD)

  4. Managing low-level radioactive wastes: a proposed approach

    International Nuclear Information System (INIS)

    Peel, J.W.; Levin, G.B.

    1980-01-01

    In 1978, President Carter established the Interagency Review Group on Nuclear Waste Management (IRG) to review the nation's plans and progress in managing radioactive wastes. In its final report, issued in March 1979, the group recommended that the Department of Energy (DOE) assume responsibility for developing a national plan for the management of low-level wastes. Toward this end, DOE directed that a strategy be developed to guide federal and state officials in resolving issues critical to the safe management of low-level wastes. EG&G Idaho, Inc. was selected as the lead contractor for the Low-Level Waste Management Program and was given responsibility for developing the strategy. A 25-member task force was formed which included individuals from federal agencies, states, industry, universities, and public interest groups. The task force identified nineteen broad issues covering the generation, treatment, packaging, transportation, and disposal of low-level wastes. Alternatives for the resolution of each issue were proposed and recommendations were made which, taken together, form the draft strategy. These recommendations are summarized in this document.

  5. A NEW APPROACH TO FINANCIAL REGULATION AT THE EUROPEAN LEVEL

    Directory of Open Access Journals (Sweden)

    Ioana Laura VALEANU

    2015-12-01

    With the onset of the recent financial and economic crisis, the fragility of the financial system became apparent, in the form of a series of vulnerabilities and failures with a strong destabilizing impact on the economy. In this context, a new approach to financial stability has taken shape, calling for more extensive financial regulation and for macro-prudential supervision complementary to the micro-prudential kind. The objectives of this article are to highlight the context of, and need for, a new financial regulatory framework and to underline the main problems of the banking system that the new European regulations address.

  6. Management of high level nuclear waste - the nordic approach

    International Nuclear Information System (INIS)

    Engstrom, S.; Aikas, T.

    2000-01-01

    Both the Swedish and the Finnish nuclear waste programmes are aimed at the disposal of encapsulated spent nuclear fuel in the crystalline bedrock. In both countries, research and development work has been performed since the 1970s. The focus of the programme in both countries is now shifting to the practical demonstration of encapsulation technology. In parallel, a site-selection programme is being carried out. Finland has selected a site at Eurajoki and is currently waiting for the Government to agree to the choice of the site. In Sweden, at least two sites will be selected by the year 2001 with the goal, after drilling, of selecting one of them around 2008. Site selection for the deep repository is probably the most difficult and most sensitive part of the whole programme. The repository will be sited at a suitable place in Sweden and Finland, respectively, where high safety requirements will be met with the consent of the municipality concerned. If there is a Nordic approach to tackling this issue, it would probably be: - a stepwise approach in which the disposal is implemented gradually, each step having a decision-making stage leading to a commitment of the various parties involved to the following stage; - total transparency of the work performed and the decision-making process; - a genuine will from the industry to establish a dialogue with the public in the involved communities; - the will, time and patience necessary to establish a constructive working relationship with the communities participating in the site selection. (authors)

  7. Level dynamics: An approach to the study of avoided level crossings and transition to chaos

    International Nuclear Information System (INIS)

    Wang, S.; Chu, S.Y.

    1993-01-01

    The Dyson-Pechukas level dynamics has been reformulated and made suitable for studying avoided level crossings and transition to chaos. The N-level dynamics is converted into a many-body problem of one-dimensional Coulomb gas with N-constituent particles having intrinsic excitations. It is shown that local fluctuation of the level distribution is generated by a large number of avoided level crossings. The role played by avoided level crossings in generating chaoticity in level dynamics is similar to the role played by short-range collisions in causing thermalization in many-body dynamics. Furthermore, the effect of level changing rates in producing avoided level crossings is the same as particle velocities in causing particle-particle collisions. A one-dimensional su(2) Hamiltonian has been constructed as an illustration of the level dynamics, showing how the avoided level crossings cause the transition from a regular distribution to the chaotic Gaussian orthogonal ensemble (GOE) distribution of the levels. The existence of the one-dimensional su(2) Hamiltonian which can show both GOE and Poisson level statistics is remarkable and deserves further investigation

  8. Multi-level and hybrid modelling approaches for systems biology.

    Science.gov (United States)

    Bardini, R; Politano, G; Benso, A; Di Carlo, S

    2017-01-01

    During the last decades, high-throughput techniques have allowed for the extraction of a huge amount of data from biological systems, unveiling more of their underlying complexity. Biological systems encompass a wide range of space and time scales, functioning according to flexible hierarchies of mechanisms that form an intertwined and dynamic interplay of regulations. This becomes particularly evident in processes such as ontogenesis, where regulative assets change according to process context and timing, making structural phenotype and architectural complexities emerge from a single cell through local interactions. The information collected from biological systems is naturally organized according to the functional levels composing the system itself. In systems biology, biological information often comes from overlapping but different scientific domains, each one having its own way of representing the phenomena under study. That is, the different parts of the system to be modelled may be described with different formalisms. For a model to be more accurate and to provide a good knowledge base, it should encompass different system levels and suitably handle the corresponding formalisms. Models which are both multi-level and hybrid satisfy both these requirements, making them a very useful tool in computational systems biology. This paper reviews some of the main contributions in this field.

  9. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (ITK library, developed in the medical image processing community), extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.
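
    A full sparse-field implementation is beyond a snippet, but the governing update is compact: the surface moves with an orientation-dependent etch rate V under φ_t + V(θ)|∇φ| = 0. Below is a deliberately naive explicit sketch with a toy four-fold anisotropy; it uses neither the paper's experimental KOH rates nor the upwinding a production solver would require:

```python
import numpy as np

# Minimal 2D level set etch step: phi = 0 is the surface. V(theta) is a toy
# anisotropic rate, not measured KOH data, and the central-difference scheme
# omits the upwinding that a robust solver (e.g. the ITK sparse field
# implementation) would use.
N = 128
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = Y - 0.5                                 # initially flat surface at y = 0.5

dt = 2e-4
for _ in range(200):
    gx, gy = np.gradient(phi, h)
    theta = np.arctan2(gy, gx)                # local surface-normal direction
    V = 1.0 + 0.5 * np.cos(4.0 * theta)       # four-fold anisotropic etch rate
    phi -= dt * V * np.hypot(gx, gy)          # phi_t = -V(theta) |grad phi|

print(float(phi.min()), float(phi.max()))     # the zero level has advanced
```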

  10. Simulation approaches to probabilistic structural design at the component level

    International Nuclear Information System (INIS)

    Stancampiano, P.A.

    1978-01-01

    In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply, and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure mode models development, and statistical data models development. (Auth.)

  11. Micro-Level Management of Agricultural Inputs: Emerging Approaches

    Directory of Open Access Journals (Sweden)

    Jonathan Weekley

    2012-12-01

    Through the development of superior plant varieties that benefit from high agrochemical inputs and irrigation, the agricultural Green Revolution has doubled crop yields, yet introduced unintended impacts on the environment. An expected 50% growth in world population during the 21st century demands novel integration of advanced technologies and low-input production systems based on soil and plant biology, targeting precision delivery of inputs synchronized with the growth stages of crop plants. Further, successful systems will integrate subsurface water, air and nutrient delivery, real-time soil parameter data and computer-based decision-making to mitigate plant stress and actively manipulate the microbial rhizosphere communities that stimulate productivity. Such an approach will ensure food security and mitigate the impacts of climate change.

  12. Can differences in soil community composition after peat meadow restoration lead to different decomposition and mineralization rates?

    NARCIS (Netherlands)

    Dijk, van J.; Didden, W.A.M.; Kuenen, F.; Bodegom, van P.M.; Verhoef, H.A.; Aerts, R.

    2009-01-01

    Reducing decomposition and mineralization of organic matter by increasing groundwater levels is a common approach to reduce plant nutrient availability in many peat meadow restoration projects. The soil community is the main driver of these processes, but how community composition is affected by

  13. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
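
    The partition-of-unity idea underlying the construction is simple to state concretely: subdomain weights χᵢ ≥ 0 with Σχᵢ = 1 split the problem between subdomains. A minimal one-dimensional sketch (the overlap region is an arbitrary choice of this illustration):

```python
import numpy as np

# Partition of unity on [0, 1] for two overlapping subdomains: chi1 + chi2 = 1.
# In the vector schemes, each subdomain problem carries its chi-weighted share
# of the operator; the overlap region [0.4, 0.6] is chosen arbitrarily here.
x = np.linspace(0.0, 1.0, 101)
ramp = np.clip((x - 0.4) / 0.2, 0.0, 1.0)   # linear transition across the overlap
chi1, chi2 = 1.0 - ramp, ramp

assert np.allclose(chi1 + chi2, 1.0)        # partition-of-unity property
print(chi1[40], chi2[60])                   # chi1 = 1 left of overlap, chi2 = 1 right
```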

  14. Priming of soil carbon decomposition in two inner Mongolia grassland soils following sheep dung addition: A study using13C natural abundance approach

    DEFF Research Database (Denmark)

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152-day incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C3 plant with δ13C = -26.8‰; dung δ13C = -26.2‰) or Cleistogenes squarrosa (C4 plant with δ13C = -14.6‰; dung δ13C = -15.7‰). Fresh C3 and C4 sheep dung was mixed... ...-amended controls. In both grassland soils, ca. 60% of the evolved CO2 originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg-1 dry soil had been emitted as CO2 for the L. chinensis...
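
    The 13C natural abundance approach partitions respired CO2 between dung-derived and soil-derived carbon with a two-pool mixing model; a minimal sketch using the record's end-member values and an invented measured δ13C:

```python
# Two-pool δ13C mixing model: fraction of evolved CO2-C derived from dung.
# End-member values follow the record (C3 soil ≈ -26.8‰, C4 dung ≈ -15.7‰);
# the "measured" CO2 value of -20.0‰ is invented for illustration.
delta_co2 = -20.0
delta_soil = -26.8
delta_dung = -15.7

f_dung = (delta_co2 - delta_soil) / (delta_dung - delta_soil)
print(f"{f_dung:.0%} of CO2-C from dung, {1 - f_dung:.0%} from native soil C")
```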

  15. Approach to evaluating health level and adaptation possibilities in schoolchildren

    Directory of Open Access Journals (Sweden)

    O.V. Andrieieva

    2014-02-01

    Purpose: to substantiate the results of theoretical and practical investigations aimed at improving the health of students. Material: the study involved 187 children, including 103 boys and 84 girls aged 7-10 years. Results: through a rapid assessment of physical health it was found that pupils of primary school age have an average level of functional state of the organism, with minimal resistance to risk factors (chronic non-infectious diseases, etc.). For the first time, a technique for determining the level of adaptation and reserve capacity of school students, proposed by Ukrainian hygienists, was used in physical culture and sports practice. Conclusions: the technique reveals strain in adaptation mechanisms that corresponds to a prenosological condition. It is proposed that Nordic walking, through the positive impact of its aerobic mode of energy supply on the body, can increase the reserve and adaptive capabilities of primary school students by improving their health, and can address the problems of health promotion and health care in the physical education of youth.

  16. Immigration and Firm Performance: a city-level approach

    Directory of Open Access Journals (Sweden)

    Mercedes Teruel Carrizosa

    2009-10-01

    Full Text Available This article analyses the effect of immigration flows on the growthand efficiency of manufacturing firms in Spanish cities. While most studies werefocusing on the effect immigrants have on labour markets at an aggregate level,here, we argue that the impact of immigration on firm performance should not onlybe considered in terms of the labour market, but also in terms of how city’s amenitiescan affect the performance of firms. Implementing a panel data methodology,we show that the immigrants’ increasing pressure has a positive effect on labourproductivity and wages and a negative effect on the job evolution of these manufacturingfirms. In addition, both small and new firms are more sensitive to thepressures of immigrant inflow, while foreign market oriented firms report higherproductivity levels and a less marked impact of immigration than their counterparts.We also present a set of instruments to control for endogeneity. It allows us toconfirm the effect of local immigration flows on the performance of manufacturingfirms.

  17. A panchayat level primary-care approach for adolescent services.

    Science.gov (United States)

    Nair, M K C; Leena, M L; George, Babu; Sunitha, R M; Prasanna, G L; Russell, P S

    2012-01-01

    To develop a model for providing community adolescent care services in the primary care setting, a needs assessment was done among adolescents, and the perceived problems of adolescents were studied using qualitative and quantitative methods. Based on the results of these studies, a Family Life Education (FLE) module was prepared. Awareness programs on adolescent issues were organized for all stakeholders in the community. All anganwadi workers in the panchayat were trained to conduct interactive sessions for all the adolescents in the panchayat using the FLE module. Ward-based Teen Clubs were formed in all 13 wards of the panchayat, separately for boys and girls, and FLE classes were given to them through anganwadi workers. An Adolescent Clinic was set up to provide the necessary medical and counseling facilities. An Adolescent Health Card was distributed to all Teen Club members and to those who attended the adolescent clinics. The present approach stresses the need for, and feasibility of, adolescent-centered, community-based interventions. The authors' experience showed that before starting any adolescent program, community awareness generation about the need for and content of the program is very important for its success. The experience of this model has made it possible to scale up the program to seven districts of southern Kerala as a service model. The experiences of the program gave a realistic picture of the needs and problems of adolescents, and a simple, feasible model for providing services to adolescents in the primary care setting that can be easily replicated in other parts of India.

  18. Prediction of groundwater levels from lake levels and climate data using ANN approach

    OpenAIRE

    Dogan, Ahmet; Demirpence, Husnu; Cobaner, Murat

    2008-01-01

    There are many environmental concerns relating to the quality and quantity of surface and groundwater. It is very important to estimate the quantity of water by using readily available climate data for managing water resources of the natural environment. As a case study an artificial neural network (ANN) methodology is developed for estimating the groundwater levels (upper Floridan aquifer levels) as a function of monthly averaged precipitation, evaporation, and measured levels of Magnolia an...
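
    The abstract is truncated, but the modelling pattern it describes is standard: climate and lake-level inputs, groundwater level as the target. A hedged stand-in with synthetic data; the actual Floridan-aquifer records are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# ANN sketch: predict groundwater level from monthly precipitation, evaporation
# and a lake level. The 120 synthetic "months" below stand in for real data.
rng = np.random.default_rng(0)
X = rng.random((120, 3))                      # columns: precip, evap, lake level
y = 2.0 * X[:, 2] + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * rng.standard_normal(120)

ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0)
ann.fit(X[:100], y[:100])                     # train on the first 100 months
print(f"held-out R^2: {ann.score(X[100:], y[100:]):.2f}")
```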

  19. A Feasible Approach for Implementing Greater Levels of Satellite Autonomy

    Science.gov (United States)

    Lindsay, Steve; Zetocha, Paul

    2002-01-01

    In this paper, we propose a means for achieving increasingly autonomous satellite operations. We begin with a brief discussion of the current state-of-the-art in satellite ground operations and flight software, as well as the real and perceived technical and political obstacles to increasing the levels of autonomy on today's satellites. We then present a list of system requirements that address these hindrances and include the artificial intelligence (AI) technologies with the potential to satisfy these requirements. We conclude with a discussion of how the space industry can use this information to incorporate increased autonomy. From past experience we know that autonomy will not just "happen," and we know that the expensive course of manually intensive operations simply cannot continue. Our goal is to present the aerospace industry with an analysis that will begin moving us in the direction of autonomous operations.

  20. Multilevel domain decomposition for electronic structure calculations

    International Nuclear Information System (INIS)

    Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.

    2007-01-01

    We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure

  1. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    SUMMARY: Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics.

  2. A Systems-Level Approach to Characterizing Effects of ENMs ...

    Science.gov (United States)

    Engineered nanomaterials (ENMs) represent a new regulatory challenge because of their unique properties and their potential to interact with ecological organisms at various developmental stages, in numerous environmental compartments. Traditional toxicity tests have proven to be unreliable due to their short-term nature and the subtle responses often observed following ENM exposure. In order to fully assess the potential for various ENMs to affect responses in organisms and ecosystems, we are using a systems-level framework to link molecular initiating events with changes in whole-organism responses, and to identify how these changes may translate across scales to disrupt important ecosystem processes. This framework utilizes information from nanoparticle characteristics and exposures to help make linkages across scales. We have used Arabidopsis thaliana as a model organism to identify potential transcriptome changes in response to specific ENMs. In addition, we have focused on plant species of agronomic importance to follow multi-generational changes in physiology and phenology, as well as epigenetic markers to identify possible mechanisms of inheritance. We are employing and developing complementary analytical tools (plasma-based and synchrotron spectroscopies, microscopy, and molecular and stable-isotopic techniques) to follow movement of ENMs and ENM products in plants as they develop. These studies have revealed that changes in gene expression do not a

  3. Fate of mercury in tree litter during decomposition

    Science.gov (United States)

    Pokharel, A. K.; Obrist, D.

    2011-09-01

    We performed a controlled laboratory litter incubation study to assess changes in dry mass, carbon (C) mass and concentration, mercury (Hg) mass and concentration, and stoichiometric relations between elements during decomposition. Twenty-five surface litter samples each, collected from four forest stands, were placed in incubation jars open to the atmosphere, and were harvested sequentially at 0, 3, 6, 12, and 18 months. Using a mass balance approach, we observed significant mass losses of Hg during decomposition (5 to 23 % of initial mass after 18 months), which we attribute to gaseous losses of Hg to the atmosphere through a gas-permeable filter covering incubation jars. Percentage mass losses of Hg generally were less than observed dry mass and C mass losses (48 to 63 % Hg loss per unit dry mass loss), although one litter type showed similar losses. A field control study using the same litter types exposed at the original collection locations for one year showed that field litter samples were enriched in Hg concentrations by 8 to 64 % compared to samples incubated for the same time period in the laboratory, indicating strong additional sorption of Hg in the field likely from atmospheric deposition. Solubility of Hg, assessed by exposure of litter to water upon harvest, was very low, indicating that Hg remains strongly associated with plant litter upon decomposition. Results also suggest that Hg accumulation in litter and surface layers in the field is driven mainly by additional sorption of Hg, with minor contributions from "internal" accumulation due to preferential loss of C over Hg. Litter types showed highly species-specific differences in Hg levels during decomposition suggesting that emissions, retention, and sorption of Hg are dependent on litter type.

  4. Generalized first-order kinetic model for biosolids decomposition and oxidation during hydrothermal treatment.

    Science.gov (United States)

    Shanableh, A

    2005-01-01

    The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k0·e^(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease at which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
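
    As a toy illustration of the lumped first-order Arrhenius model described above, the following Python sketch computes the fraction of COD (or PCOD) remaining as a function of time and temperature. The rate parameters below are invented for illustration and are not the values fitted in the paper.

      import numpy as np

      R = 8.314  # universal gas constant, J/(mol K)

      def arrhenius_rate(k0, Ea, T):
          """First-order rate constant k = k0 * exp(-Ea / (R * T))."""
          return k0 * np.exp(-Ea / (R * T))

      def remaining_fraction(k0, Ea, T, t):
          """Fraction of COD (or PCOD) remaining after time t under first-order decay."""
          return np.exp(-arrhenius_rate(k0, Ea, T) * t)

      # Illustrative values only (not taken from the paper):
      k0 = 1.0e8           # pre-exponential factor, 1/min
      Ea = 80_000.0        # activation energy, J/mol
      T = 300.0 + 273.15   # 300 degrees C in kelvin
      t = np.linspace(0.0, 60.0, 7)  # reaction time, min
      print(remaining_fraction(k0, Ea, T, t))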

  5. Meddling with middle modalities: a decomposition approach to mental health inequalities between intersectional gender and economic middle groups in northern Sweden

    Directory of Open Access Journals (Sweden)

    Per E. Gustafsson

    2016-11-01

    Background: Intersectionality has received increased interest within population health research in recent years, as a concept and framework for understanding entangled dimensions of health inequalities, such as gender and socioeconomic inequalities in health. However, little attention has been paid to the intersectional middle groups, referring to those occupying positions of mixed advantage and disadvantage. Objective: This article aimed to (1) examine mental health inequalities between intersectional groups reflecting structural positions of gender and economic affluence, and (2) decompose any observed health inequalities among middle groups into contributions from experiences and conditions representing processes of privilege and oppression. Design: Participants (N=25,585) came from the cross-sectional ‘Health on Equal Terms’ survey covering 16- to 84-year-olds in the four northernmost counties of Sweden. Six intersectional positions were constructed from gender (women vs. men) and tertiles (low vs. medium vs. high) of disposable income. Mental health was measured through the General Health Questionnaire-12. Explanatory variables covered areas of material conditions, job relations, violence, domestic burden, and healthcare contacts. Analysis of variance (Aim 1) and Blinder-Oaxaca decomposition analysis (Aim 2) were used. Results: Significant mental health inequalities were found between dominant (high-income women and middle-income men) and subordinate (middle-income women and low-income men) middle groups. The health inequalities between adjacent middle groups were mostly explained by violence (mid-income women vs. men comparison); material conditions (mid- vs. low-income men comparison); and material needs, job relations, and unmet medical needs (high- vs. mid-income women comparison). Conclusions: The study suggests complex processes whereby dominant middle groups in the intersectional space of economic affluence and gender can leverage strategic
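
    For readers unfamiliar with Blinder-Oaxaca decomposition, the sketch below shows the twofold variant on synthetic data (the survey's actual covariates and GHQ-12 outcome are not reproduced here): the mean outcome gap between two groups is split into an "explained" part due to different characteristics and an "unexplained" part due to different coefficients.

      import numpy as np

      def ols(X, y):
          """OLS coefficients via least squares (X must include an intercept column)."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return beta

      def oaxaca_twofold(X_a, y_a, X_b, y_b):
          """Twofold Blinder-Oaxaca decomposition of the mean outcome gap between
          group A and group B, using group B's coefficients as the reference."""
          beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
          xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
          explained = (xbar_a - xbar_b) @ beta_b    # differences in characteristics
          unexplained = xbar_a @ (beta_a - beta_b)  # differences in coefficients
          return explained, unexplained

      # Synthetic illustration (not the survey data):
      rng = np.random.default_rng(0)
      n = 500
      X_a = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
      X_b = np.column_stack([np.ones(n), rng.normal(0.5, 1.0, n)])
      y_a = X_a @ np.array([1.0, 0.8]) + rng.normal(0, 0.5, n)
      y_b = X_b @ np.array([0.6, 0.5]) + rng.normal(0, 0.5, n)
      explained, unexplained = oaxaca_twofold(X_a, y_a, X_b, y_b)
      print(f"gap = {y_a.mean() - y_b.mean():.3f}, "
            f"explained = {explained:.3f}, unexplained = {unexplained:.3f}")

    Because OLS with an intercept satisfies ȳ = x̄·β in each group, the two parts sum exactly to the observed gap.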

  6. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range of the forward operator).

  7. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  8. Nested grids ILU-decomposition (NGILU)

    NARCIS (Netherlands)

    Ploeg, A. van der; Botta, E.F.F.; Wubs, F.W.

    1996-01-01

    A preconditioning technique is described which shows, in many cases, grid-independent convergence. This technique only requires an ordering of the unknowns based on the different levels of multigrid, and an incomplete LU-decomposition based on a drop tolerance. The method is demonstrated on a
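
    The core ingredient named in the abstract — an incomplete LU factorization with a drop tolerance used as a preconditioner — can be sketched with SciPy. The multigrid-level ordering of the unknowns that gives NGILU its grid-independent behaviour is omitted here; this is a plain drop-tolerance ILU applied to a model Poisson problem, as a minimal illustration.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spilu, LinearOperator, gmres

      # Model problem: 2D Poisson matrix on an n-by-n grid (5-point stencil).
      n = 50
      I = sp.identity(n)
      T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
      b = np.ones(A.shape[0])

      # Incomplete LU with a drop tolerance, wrapped as a preconditioner.
      ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
      M = LinearOperator(A.shape, ilu.solve)

      x, info = gmres(A, b, M=M)
      print("converged" if info == 0 else f"gmres info = {info}",
            "residual =", np.linalg.norm(b - A @ x))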

  9. Direct observation of nanowire growth and decomposition

    DEFF Research Database (Denmark)

    Rackauskas, Simas; Shandakov, Sergey D; Jiang, Hua

    2017-01-01

    To our knowledge, this has so far been only postulated, but never observed at the atomic level. By means of in situ environmental transmission electron microscopy we monitored and examined the atomic layer transformation at the conditions of the crystal growth and its decomposition using CuO nanowires selected

  10. Priming of soil carbon decomposition in two Inner Mongolia grassland soils following sheep dung addition: a study using ¹³C natural abundance approach.

    Science.gov (United States)

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping; Wang, Yanfen; Wang, Chengjie

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152 days incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C3 plant with δ¹³C = -26.8‰; dung δ¹³C = -26.2‰) or Cleistogenes squarrosa (C₄ plant with δ¹³C = -14.6‰; dung δ¹³C = -15.7‰). Fresh C₃ and C₄ sheep dung was mixed with the two grassland soils and incubated under controlled conditions for analysis of ¹³C-CO₂ emissions. Soil samples were taken at days 17, 43, 86, 127 and 152 after sheep dung addition to detect the δ¹³C signal in soil and dung components. Analysis revealed that 16.9% and 16.6% of the sheep dung C had decomposed, of which 3.5% and 2.8% was sequestrated in the soils of L. chinensis and A. frigida grasslands, respectively, while the remaining decomposed sheep dung was emitted as CO₂. The cumulative amounts of C respired from dung treated soils during 152 days were 7-8 times higher than in the un-amended controls. In both grassland soils, ca. 60% of the evolved CO₂ originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg⁻¹ dry soil had been emitted as CO₂ for the L. chinensis and A. frigida soils, respectively. Hence, the net C losses from L. chinensis and A. frigida soils were 0.6 g and 0.9 g C kg⁻¹ soil, which was 2.6% and 7.0% of the total C in L. chinensis and A. frigida grasslands soils, respectively. Our results suggest that grazing of degraded Inner Mongolian pastures may cause a net soil C loss due to the positive priming effect, thereby accelerating soil deterioration.

  11. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n c+1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  13. Using an Ecosystem Approach to complement protection schemes based on organism-level endpoints

    International Nuclear Information System (INIS)

    Bradshaw, Clare; Kapustka, Lawrence; Barnthouse, Lawrence; Brown, Justin; Ciffroy, Philippe; Forbes, Valery; Geras'kin, Stanislav; Kautsky, Ulrik; Bréchignac, François

    2014-01-01

    Radiation protection goals for ecological resources are focussed on ecological structures and functions at population-, community-, and ecosystem-levels. The current approach to radiation safety for non-human biota relies on organism-level endpoints, and as such is not aligned with the stated overarching protection goals of international agencies. Exposure to stressors can trigger non-linear changes in ecosystem structure and function that cannot be predicted from effects on individual organisms. From the ecological sciences, we know that important interactive dynamics related to such emergent properties determine the flows of goods and services in ecological systems that human societies rely upon. A previous Task Group of the IUR (International Union of Radioecology) has presented the rationale for adding an Ecosystem Approach to the suite of tools available to manage radiation safety. In this paper, we summarize the arguments for an Ecosystem Approach and identify next steps and challenges ahead pertaining to developing and implementing a practical Ecosystem Approach to complement organism-level endpoints currently used in radiation safety. - Highlights: • An Ecosystem Approach to radiation safety complements the organism-level approach. • Emergent properties in ecosystems are not captured by organism-level endpoints. • The proposed Ecosystem Approach better aligns with management goals. • Practical guidance with respect to system-level endpoints is needed. • Guidance on computational model selection would benefit an Ecosystem Approach

  14. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
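
    A compact sketch of the described pipeline using scikit-learn, with an RBF kernel standing in for the paper's non-parametric density estimator and the cross-validation step omitted; all data and parameter values below are illustrative.

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.decomposition import NMF

      # Toy data: three Gaussian blobs.
      X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

      # Affinity matrix from a Gaussian (RBF) kernel -- a stand-in for the
      # non-parametric density estimator used in the paper.
      K = rbf_kernel(X, gamma=0.5)

      # Nonnegative factorization of the affinity matrix; normalized rows of W
      # act as soft (posterior-like) class memberships.
      W = NMF(n_components=3, init="nndsvda", max_iter=500,
              random_state=0).fit_transform(K)
      posteriors = W / W.sum(axis=1, keepdims=True)
      labels = posteriors.argmax(axis=1)
      print(posteriors[:5].round(3), labels[:10])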

  15. Processing approaches to cognition: the impetus from the levels-of-processing framework.

    Science.gov (United States)

    Roediger, Henry L; Gallo, David A; Geraci, Lisa

    2002-01-01

    Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.

  16. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied, and the chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined, as was the dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles. The kinetics of danburite decomposition by sulfuric acid was studied as well, and the apparent activation energy of the process was calculated. A flowsheet for danburite processing by sulfuric acid was elaborated.

  17. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous propionate … °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O …

  18. Domain decomposition multigrid for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Yair

    1997-01-01

    A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.

  19. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  20. Interactive Approach for Multi-Level Multi-Objective Fractional Programming Problems with Fuzzy Parameters

    Directory of Open Access Journals (Sweden)

    M.S. Osman

    2018-03-01

    In this paper, an interactive approach for solving multi-level multi-objective fractional programming (ML-MOFP) problems with fuzzy parameters is presented. The proposed interactive approach extends the work of Shi and Xia (1997). In the first phase, a numerical crisp model of the ML-MOFP problem is developed at a given confidence level without changing the fuzzy gist of the problem. Then, the linear model for the ML-MOFP problem is formulated. In the second phase, the interactive approach simplifies the linear multi-level multi-objective model by converting it into separate multi-objective programming problems. Each separate multi-objective programming problem of the linear model is then solved by the ∊-constraint method and the concept of satisfactoriness. Finally, illustrative examples and comparisons with previous approaches demonstrate the feasibility of the proposed approach.

  1. Synthesis of carbon nanotubes by catalytic vapor decomposition ...

    Indian Academy of Sciences (India)

    Carbon nanotubes (CNTs); catalytic vapor decomposition; soap bubble mass flowmeter.

  2. Mobility Modelling through Trajectory Decomposition and Prediction

    OpenAIRE

    Faghihi, Farbod

    2017-01-01

    The ubiquity of mobile devices with positioning sensors makes it possible to derive the user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors or due to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking users' movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...

  3. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.

  4. A Coordinated Approach to Communicating Pediatric-Related Information on Pandemic Influenza at the Community Level

    Energy Technology Data Exchange (ETDEWEB)

    HCTT CHE

    2009-12-16

    The purpose of this document is to provide a suggested approach, based on input from pediatric stakeholders, to communicating pediatric-related information on pandemic influenza at the community level in a step-by-step manner.

  5. Fast modal decomposition for optical fibers using digital holography.

    Science.gov (United States)

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
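
    The decomposition step itself — projecting the holographically measured field onto orthonormal basis modes — reduces to inner products. A sketch using 1D Hermite-Gauss functions as stand-ins for the fiber's eigenmodes (the paper works with the actual step-index fiber modes and 2D fields):

      import numpy as np

      x = np.linspace(-5, 5, 1024)
      dx = x[1] - x[0]

      def hermite_gauss(n, x):
          """1D Hermite-Gauss mode (a simple stand-in for fiber eigenmodes)."""
          Hn = np.polynomial.hermite.Hermite.basis(n)(x)
          psi = Hn * np.exp(-x**2 / 2)
          return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # numerical normalization

      modes = np.array([hermite_gauss(n, x) for n in range(4)])

      # A synthetic "measured" complex field: a known superposition of modes.
      true_c = np.array([0.8, 0.4 + 0.3j, 0.0, 0.2j])
      field = true_c @ modes

      # Modal coefficients from orthonormality: c_k = <mode_k | field>.
      coeffs = np.array([np.sum(np.conj(m) * field) * dx for m in modes])
      print(np.round(coeffs, 3))  # recovers true_c up to numerical error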

  6. Endorsing the Practical Endorsement? OCR's Approach to Practical Assessment in Science A-Levels

    Science.gov (United States)

    Evans, Steve; Wade, Neil

    2015-01-01

    This article summarises the practical requirements for new science A-levels in biology, chemistry and physics for first teaching from September 2015. It discusses the background to how the new approach was reached and how OCR has seen this taking shape in our assessment models. The opportunities presented by this new approach to practical…

  7. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    Science.gov (United States)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
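
    A sketch of the geometric construction described above, under the assumption that the fluctuation tensor is formed as G = C̄^(-1/2) C C̄^(-1/2) and measured by its log-distance from the identity (the paper's exact definitions and notation may differ); the tensors below are illustrative, not DNS data.

      import numpy as np

      def spd_func(M, f):
          """Apply a scalar function to a symmetric positive-definite matrix
          via its eigendecomposition."""
          w, V = np.linalg.eigh(M)
          return (V * f(w)) @ V.T

      def fluctuation_tensor(C, C_mean):
          """Polymer deformation relative to the mean configuration:
          G = C_mean^(-1/2) C C_mean^(-1/2), positive-definite by construction."""
          S = spd_func(C_mean, lambda w: w ** -0.5)
          return S @ C @ S

      def log_distance(G):
          """Scalar measure: geodesic distance of G from the identity in the
          non-Euclidean geometry of positive-definite tensors."""
          return np.linalg.norm(spd_func(G, np.log), "fro")

      C_mean = np.diag([4.0, 1.0, 1.0])     # mean conformation, stretched in x
      C = np.array([[5.0, 0.5, 0.0],
                    [0.5, 1.2, 0.1],
                    [0.0, 0.1, 0.9]])       # one sample of the conformation tensor
      G = fluctuation_tensor(C, C_mean)
      print(np.round(G, 3))
      print("fluctuation magnitude:", round(log_distance(G), 3))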

  8. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Jae-Hyung Yoo; Eung-Ho Kim

    1999-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, experimental work on the photochemical decomposition of oxalate was carried out to prove its feasibility as a step of the partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed first in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found to be 0.003 mole/hr at 0.5 M HNO3 and room temperature when a mercury lamp was used as a light source. (author)

  9. Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients

    Directory of Open Access Journals (Sweden)

    Vlasta Bari

    2014-09-01

    Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via the KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
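
    A minimal sketch of the described low-pass step, assuming the third-party PyEMD package (installed as EMD-signal) is available; the fastest intrinsic mode function is estimated and subtracted from a synthetic series standing in for HP or QT variability.

      import numpy as np
      from PyEMD import EMD  # pip install EMD-signal (assumed available)

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 2000)
      # Slow "physiological" variability plus fast noise-like fluctuations.
      signal = np.sin(2 * np.pi * 0.3 * t) + 0.3 * rng.standard_normal(t.size)

      # Empirical mode decomposition; imfs[0] is the fastest intrinsic mode function.
      imfs = EMD()(signal)

      # Low-pass filtering as described: subtract the fastest IMF from the series.
      filtered = signal - imfs[0]
      print(f"{imfs.shape[0]} IMFs; variance removed: "
            f"{signal.var() - filtered.var():.4f}")

    Sample entropy would then be computed on `filtered` rather than on the raw series.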

  10. Proton mass decomposition

    Science.gov (United States)

    Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei

    2018-03-01

    We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)% respectively in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.

  11. Art of spin decomposition

    International Nuclear Information System (INIS)

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-01-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  12. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has its own unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  13. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into both computational and statistical problems. We propose a decomposition algorithm for structure construction that avoids having to learn the complete network at once. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.

  14. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding. … A relation between decomposition methods and clustering problems is derived, both in terms of classical point clustering and in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given, ranging from multi-media analysis of image and sound data to analysis of biomedical data such as electroencephalography …

  15. Material elemental decomposition in dual and multi-energy CT via a sparsity-dictionary approach for proton stopping power ratio calculation.

    Science.gov (United States)

    Shen, Chenyang; Li, Bin; Chen, Liyuan; Yang, Ming; Lou, Yifei; Jia, Xun

    2018-04-01

    Accurate calculation of proton stopping power ratio (SPR) relative to water is crucial to proton therapy treatment planning, since SPR affects prediction of beam range. Current standard practice derives SPR using a single CT scan. Recent studies showed that dual-energy CT (DECT) offers advantages in accurately determining SPR. One method to further improve accuracy is to incorporate prior knowledge on human tissue composition through a dictionary approach. In addition, it is also suggested that using CT images with multiple (more than two) energy channels, i.e., multi-energy CT (MECT), can further improve accuracy. In this paper, we proposed a sparse dictionary-based method to convert CT numbers of DECT or MECT to elemental composition (EC) and relative electron density (rED) for SPR computation. A dictionary was constructed to include materials generated based on human tissues of known compositions. For a voxel with CT numbers of different energy channels, its EC and rED are determined subject to a constraint that the resulting EC is a linear non-negative combination of only a few tissues in the dictionary. We formulated this as a non-convex optimization problem. A novel algorithm was designed to solve the problem. The proposed method has a unified structure to handle both DECT and MECT with different numbers of channels. We tested our method in both simulation and experimental studies. Average errors of SPR in experimental studies were 0.70% in DECT, 0.53% in MECT with three energy channels, and 0.45% in MECT with four channels. We also studied the impact of parameter values and established appropriate parameter values for our method. The proposed method can accurately calculate SPR using DECT and MECT. The results suggest that using more energy channels may improve the SPR estimation accuracy. © 2018 American Association of Physicists in Medicine.
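
    The key constraint — each voxel's CT numbers explained as a non-negative combination of only a few dictionary tissues — can be illustrated with a brute-force sparse non-negative least-squares search. This is a simplified stand-in for the paper's non-convex optimization algorithm, and the dictionary values below are invented, not calibrated data.

      import numpy as np
      from itertools import combinations
      from scipy.optimize import nnls

      def sparse_nnls(D, c, max_atoms=2):
          """Best non-negative fit of c using at most `max_atoms` dictionary
          columns (brute-force over supports; fine for small tissue dictionaries)."""
          n = D.shape[1]
          best = (np.inf, None, None)
          for k in range(1, max_atoms + 1):
              for S in combinations(range(n), k):
                  w, res = nnls(D[:, list(S)], c)
                  if res < best[0]:
                      best = (res, S, w)
          x = np.zeros(n)
          x[list(best[1])] = best[2]
          return x

      # Toy dictionary: columns hold (low-kVp, high-kVp) CT numbers of three
      # reference tissues (illustrative values only).
      D = np.array([[  50.0, 1000.0,  -80.0],
                    [  55.0,  990.0,  -90.0]])
      c = np.array([520.0, 518.0])  # measured CT-number pair for one voxel
      print(sparse_nnls(D, c).round(3))  # sparse tissue weights for this voxel

    The recovered weights would then be mapped to elemental composition and rED, from which SPR follows.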

  16. Time space domain decomposition methods for reactive transport - Application to CO2 geological storage

    International Nuclear Information System (INIS)

    Haeberlein, F.

    2011-01-01

    Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods that have proved to be high-performing algorithms in many fields of applications is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for the use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method on the presented multi-species reactive transport system. (author)
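
    As a minimal illustration of the Schwarz-type domain decomposition idea, the sketch below runs classical alternating Schwarz on a steady 1D Poisson model problem with two overlapping subdomains; the optimized waveform-relaxation variant studied in the thesis additionally exchanges interface data in time and uses tuned transmission conditions.

      import numpy as np

      def solve_poisson(f, a, b, ua, ub, n):
          """Direct solve of -u'' = f on (a, b) with Dirichlet data ua, ub."""
          x = np.linspace(a, b, n)
          h = x[1] - x[0]
          A = (np.diag(2 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
               - np.diag(np.ones(n - 3), -1)) / h**2
          rhs = f(x[1:-1])
          rhs[0] += ua / h**2
          rhs[-1] += ub / h**2
          u = np.empty(n)
          u[0], u[-1] = ua, ub
          u[1:-1] = np.linalg.solve(A, rhs)
          return x, u

      f = lambda x: np.ones_like(x)  # -u'' = 1 on (0, 1), u(0) = u(1) = 0
      g1 = g2 = 0.0                  # interface values, updated each sweep
      for it in range(20):           # alternating Schwarz on (0, 0.6) and (0.4, 1)
          x1, u1 = solve_poisson(f, 0.0, 0.6, 0.0, g2, 61)
          g1 = np.interp(0.4, x1, u1)  # trace passed to the right subdomain
          x2, u2 = solve_poisson(f, 0.4, 1.0, g1, 0.0, 61)
          g2 = np.interp(0.6, x2, u2)  # trace passed back to the left subdomain
      print("u(0.5) ≈", np.interp(0.5, x2, u2), "(exact value 0.125)")

    The overlap (0.4, 0.6) drives the convergence of the interface values, mirroring how the subdomain solves in the thesis exchange information.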

  17. A stochastic approach to the derivation of exemption and clearance levels

    International Nuclear Information System (INIS)

    Deckert, A.

    1997-01-01

    Deciding what clearance levels are appropriate for a particular waste stream inherently involves a number of uncertainties. Some of these uncertainties can be quantified using stochastic modeling techniques, which can aid the process of decision making. In this presentation the German approach to dealing with the uncertainties involved in setting clearance levels is addressed. (author)

  18. Environmental health risk assessment of ambient lead levels in Lisbon, Portugal: A full chain study approach

    DEFF Research Database (Denmark)

    Casimiro, E.; Philippe Ciffroy, P.; Serpa, P.

    2011-01-01

    to calculate the Pb levels in the various body systems. Our results showed a low health risk from Pb exposures. It also identified that ingestion of leafy vegetables (i.e. lettuce, cabbage, and spinach) and fruits contribute the most to total Pb blood levels. This full chain assessment approach of the 2FUN...

  19. Effects of Brain-Based Learning Approach on Students' Motivation and Attitudes Levels in Science Class

    Science.gov (United States)

    Akyurek, Erkan; Afacan, Ozlem

    2013-01-01

    The purpose of the study was to examine the effect of a brain-based learning approach on attitude and motivation levels in 8th grade students' science classes. The main reason for examining both attitudes and motivation levels is that motivation reflects short-term effects, whereas attitude reflects long-term effects. The pre/post-test control group research model…

  20. Dealing with Phrase Level Co-Articulation (PLC) in speech recognition: a first approach

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; van Leeuwen, David A.; Robinson, Tony; Renals, Steve

    1999-01-01

    Whereas nowadays within-word co-articulation effects are usually sufficiently dealt with in automatic speech recognition, this is not always the case with phrase level co-articulation effects (PLC). This paper describes a first approach in dealing with phrase level co-articulation by applying these

  1. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems involving large-scale data have recently gained attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.

  2. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    Science.gov (United States)

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  3. Defining acceptable levels for ecological indicators: an approach for considering social values.

    Science.gov (United States)

    Smyth, Robyn L; Watzin, Mary C; Manning, Robert E

    2007-03-01

    Ecological indicators can facilitate an adaptive management approach, but only if acceptable levels for those indicators have been defined so that the data collected can be interpreted. Because acceptable levels are an expression of the desired state of the ecosystem, the process of establishing acceptable levels should incorporate not just ecological understanding but also societal values. The goal of this research was to explore an approach for defining acceptable levels of ecological indicators that explicitly considers social perspectives and values. We used a set of eight indicators that were related to issues of concern in the Lake Champlain Basin. Our approach was based on normative theory. Using a stakeholder survey, we measured respondent normative evaluations of varying levels of our indicators. Aggregated social norm curves were used to determine the level at which indicator values shifted from acceptable to unacceptable conditions. For seven of the eight indicators, clear preferences were interpretable from these norm curves. For example, closures of public beaches because of bacterial contamination and days of intense algae bloom went from acceptable to unacceptable at 7-10 days in a summer season. Survey respondents also indicated that the number of fish caught from Lake Champlain that could be safely consumed each month was unacceptably low and the number of streams draining into the lake that were impaired by storm water was unacceptably high. If indicators that translate ecological conditions into social consequences are carefully selected, we believe the normative approach has considerable merit for defining acceptable levels of valued ecological system components.
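
    The normative step described above — reading the acceptable level off the aggregated norm curve — amounts to locating a zero crossing. A sketch with invented survey numbers (the study's own data are not reproduced here):

      import numpy as np

      # Hypothetical survey data: beach-closure days per summer (indicator levels)
      # and mean acceptability ratings on a -4 (very unacceptable) to +4
      # (very acceptable) scale; values are illustrative, not from the study.
      days_closed = np.array([0, 3, 7, 10, 14, 21, 28])
      mean_rating = np.array([3.6, 2.1, 0.9, -0.4, -1.5, -2.8, -3.5])

      # The acceptable level is where the aggregated norm curve crosses zero.
      # np.interp needs increasing x, so interpolate days as a function of rating.
      order = np.argsort(mean_rating)
      threshold = np.interp(0.0, mean_rating[order], days_closed[order])
      print(f"conditions become unacceptable above ~{threshold:.1f} closure days")

    With these illustrative numbers the crossing falls between 7 and 10 days, consistent with the range reported in the abstract.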

  4. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level set approach is efficient, robust, and easy to implement.

  5. Evaluation of the molecular level visualisation approach for teaching and learning chemistry in Thailand

    Science.gov (United States)

    Phenglengdi, Butsari

    This research evaluates the use of a molecular level visualisation approach in Thai secondary schools. The goal is to obtain insights about the usefulness of this approach, and to examine possible improvements in how the approach might be applied in the future. The methodology combined qualitative and quantitative approaches. Data were collected in the form of pre- and post-intervention multiple choice questions, open-ended questions, drawing exercises, one-to-one interviews and video recordings of class activity. The research was conducted in two phases, involving a total of 261 students from the 11th Grade in Thailand. The use of VisChem animations was evaluated in three studies in Phase I. Study 1 was a pilot study exploring the benefits of incorporating VisChem animations to portray the molecular level. Study 2 compared test results between students exposed to these animations of molecular level events and those not exposed. Finally, in Study 3, test results were gathered from different types of schools (a rural school, a city school, and a university school). The results showed that students (and teachers) held misconceptions at the molecular level, and that VisChem animations could help students understand chemistry concepts at the molecular level across all three types of schools. While the animation treatment group scored better on the topic of states of water, the non-animation group scored better on the topic of dissolving sodium chloride in water. The molecular level visualisation approach as a learning design was evaluated in Phase II. This approach involved a combination of VisChem animations, pictures, and diagrams together with the seven-step VisChem learning design. The study involved three classes of students, each with a different treatment, described as Class A - traditional approach; Class B - VisChem animations with traditional approach; and Class C - molecular level visualisation approach

  6. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils, based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system to assign a score at a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach involves using the computer models to back-calculate soil contaminant levels in the vadose zone that would create a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels, since the models use the soil clean-up levels as ''input'' and the groundwater levels as ''output.'' The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington

  7. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
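
    To make the Siegel-Shapley formulae concrete, here is a small Python sketch of the Shapley decomposition for a multiplicative aggregate V = x1·x2·…·xr, the form underlying the generalized Fisher approach discussed above. The three-factor energy example at the bottom uses illustrative numbers, not data from the paper.

      import numpy as np
      from itertools import combinations
      from math import factorial

      def shapley_decomposition(x0, x1):
          """Decompose V1 - V0, where V = product of the factors, into additive
          contributions of each factor (Siegel-Shapley / generalized Fisher)."""
          n = len(x0)

          def V(S):
              """Aggregate with factors in S at end-period values, rest at start."""
              return np.prod([x1[j] if j in S else x0[j] for j in range(n)])

          contrib = np.zeros(n)
          for i in range(n):
              others = [j for j in range(n) if j != i]
              for k in range(n):
                  for S in combinations(others, k):
                      w = factorial(k) * factorial(n - k - 1) / factorial(n)
                      contrib[i] += w * (V(set(S) | {i}) - V(S))
          return contrib

      # Example: energy = activity * structure * intensity (illustrative numbers).
      x0 = np.array([100.0, 0.40, 2.0])  # base year
      x1 = np.array([120.0, 0.35, 1.8])  # end year
      effects = shapley_decomposition(x0, x1)
      print(effects.round(3), "sum =", effects.sum().round(3),
            "target =", (x1.prod() - x0.prod()).round(3))

    The factor contributions sum exactly to the aggregate change, which is the defining property of this class of decompositions.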

  8. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron-containing ores of the Ak-Arkhar Deposit of Tajikistan with mineral acids, including hydrochloric acid, was studied. The optimal conditions for extraction of valuable components from the danburite composition were determined. The chemical composition of danburite of the Ak-Arkhar Deposit was determined as well. The kinetics of decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the process of danburite decomposition by hydrochloric acid was calculated.

  9. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
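    The core idea, derivative-based initial guesses followed by a least-squares refinement, can be illustrated compactly. The sketch below is a toy illustration of that idea, not the AGD code itself; the synthetic spectrum, smoothing width and amplitude cut are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

# Toy illustration of the derivative-spectroscopy idea behind AGD, not the AGD
# code itself: initial guesses for Gaussian components come from negative-
# curvature minima of a smoothed spectrum, then least squares refines them.
x = np.linspace(-50, 50, 500)
truth = (1.0 * np.exp(-0.5 * ((x + 10) / 4) ** 2)
         + 0.6 * np.exp(-0.5 * ((x - 8) / 6) ** 2))
y = truth + np.random.default_rng(0).normal(0, 0.02, x.size)

smooth = gaussian_filter1d(y, 5)
d2 = np.gradient(np.gradient(smooth, x), x)
guesses = [i for i in range(1, x.size - 1)
           if d2[i] < 0 and smooth[i] > 0.1          # concave and above noise
           and d2[i] < d2[i - 1] and d2[i] < d2[i + 1]]

def model(x, *p):  # sum of Gaussians, parameters (amp, center, width) each
    return sum(p[i] * np.exp(-0.5 * ((x - p[i + 1]) / p[i + 2]) ** 2)
               for i in range(0, len(p), 3))

p0 = [v for i in guesses for v in (smooth[i], x[i], 3.0)]
popt, _ = curve_fit(model, x, y, p0=p0)
print(np.round(popt, 2))  # refined (amp, center, width) per component
```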

  10. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes

  11. Networks as a Privileged Way to Develop Mesoscopic Level Approaches in Systems Biology

    OpenAIRE

    Alessandro Giuliani

    2014-01-01

    The methodologies advocated in computational biology are in many cases proper system-level approaches. These methodologies are variously connected to the notion of “mesosystem” and thus focus on the relational structures that are at the basis of biological regulation. Here, I describe how the formalization of biological systems by means of graph theory constitutes an extremely fruitful approach to biology. I suggest that the epistemological relevance of the notion of graph resides in its multil...

  12. Cellulose decomposition in a 50 MVA transformer

    International Nuclear Information System (INIS)

    Piechalak, B.W.

    1992-01-01

    Dissolved gas-in-oil analysis for carbon monoxide and carbon dioxide has been used for years to predict cellulose decomposition in a transformer. However, the levels at which these gases become significant have not been widely agreed upon. This paper evaluates the gas analysis results from the nitrogen blanket and the oil of a 50 MVA unit auxiliary transformer in terms of whether accelerated thermal breakdown or normal aging of the paper is occurring. Furthermore, this paper presents additional data on carbon monoxide and carbon dioxide levels in unit and system auxiliary transformers at generating stations and explains why their levels differ

  13. Primary decomposition of zero-dimensional ideals over finite fields

    Science.gov (United States)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
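    A univariate toy version of the key fact, that the dimension of the Frobenius-fixed subspace equals the number of irreducible (here: primary) components, can be checked with Berlekamp's Q-matrix. The sketch below illustrates that classical univariate idea on assumed data; it is not the paper's multivariate algorithm:

```python
# Toy univariate analogue of the invariant-subspace idea (Berlekamp): for a
# monic squarefree f over GF(p), the fixed space of the Frobenius map g -> g^p
# on GF(p)[x]/(f) has dimension equal to the number of irreducible factors.
# Polynomials are coefficient lists, lowest degree first.
p = 3
f = [1, 1, 1, 1]  # f(x) = x^3 + x^2 + x + 1 = (x + 1)(x^2 + 1) over GF(3)
n = len(f) - 1

def polymod(a):
    a = [c % p for c in a]
    while len(a) > n:
        c, d = a[-1], len(a) - 1
        for i in range(n + 1):               # subtract c * x^(d-n) * f
            a[d - n + i] = (a[d - n + i] - c * f[i]) % p
        a.pop()
    return a + [0] * (n - len(a))

def polymul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return polymod(res)

def rank_gf(M):                              # Gaussian elimination over GF(p)
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(vi - M[i][c] * vr) % p for vi, vr in zip(M[i], M[r])]
        r += 1
    return r

xp = polymod([0] * p + [1])                  # x^p mod f
row, Q = [1] + [0] * (n - 1), []
for _ in range(n):                           # rows of Q: x^(i*p) mod f
    Q.append(row[:])
    row = polymul(row, xp)

QmI = [[(Q[i][j] - (i == j)) % p for j in range(n)] for i in range(n)]
print("irreducible factors:", n - rank_gf(QmI))  # -> 2
```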

  14. Strategic Uncertainty in Markets for Nonrenewable Resources: A Level-k Approach

    Directory of Open Access Journals (Sweden)

    Ingmar Vierhaus

    2017-01-01

    Full Text Available Existing models of nonrenewable resources assume that sophisticated agents compete with other sophisticated agents. This study instead uses a level-k approach to examine cases where the focal agent is uncertain about the strategy of his opponent or predicts that the opponent will act in a nonsophisticated manner. Level-0 players randomize uniformly across all possible actions, and level-k players best respond to the play of level-(k-1). We study a dynamic nonrenewable resource game with a large number of actions. We are able to solve for the level-1 strategy by reducing the averaging problem to an optimization problem against a single action. We show that lower levels of strategic reasoning are close to the Walras and collusive benchmarks, whereas higher-level strategies converge to the Nash-Hotelling equilibrium. These results are then fitted to experimental data, suggesting that the level of sophistication of participants increased over the course of the experiment.
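    The level-k recursion itself is easy to state in code. The sketch below applies it to a one-shot quantity-setting game, a deliberate simplification rather than the paper's dynamic resource game; demand and cost parameters are hypothetical:

```python
import numpy as np

# Level-k best-response recursion in a one-shot quantity game (a deliberate
# simplification, not the paper's dynamic resource game). Level-0 randomizes
# uniformly; level-k best responds to level-(k-1). Parameters are hypothetical.
actions = np.arange(0, 21)          # discrete quantity choices
a, b, c = 20.0, 1.0, 2.0            # inverse demand a - b*Q, unit cost c

def payoff(qi, qj):
    return qi * np.maximum(0.0, a - b * (qi + qj)) - c * qi

def best_response(opponent_dist):
    expected = np.array([payoff(qi, actions) @ opponent_dist for qi in actions])
    return int(actions[np.argmax(expected)])

dist = np.full(actions.size, 1.0 / actions.size)   # level-0: uniform play
for k in range(1, 6):
    q = best_response(dist)                        # level-k's choice
    print(f"level-{k} plays q = {q}")
    dist = (actions == q).astype(float)            # level-(k+1) faces this action
```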

  15. Methodical approaches to value assessment and determination of the capitalization level of high-rise construction

    Science.gov (United States)

    Smirnov, Vitaly; Dashkov, Leonid; Gorshkov, Roman; Burova, Olga; Romanova, Alina

    2018-03-01

    The article presents an analysis of the methodological approaches to cost estimation and determination of the capitalization level of high-rise construction objects. Factors determining the value of real estate were considered, and three main approaches for estimating the value of real estate objects are given. The main methods of capitalization estimation were analyzed, and the most reasonable method for determining the level of capitalization of high-rise buildings was proposed. In order to increase the value of real estate objects, the authors propose measures that can significantly increase the capitalization of the enterprise through more efficient use of intangible assets and goodwill.

  16. Analysis of factors affecting satisfaction level on problem based learning approach using structural equation modeling

    Science.gov (United States)

    Hussain, Nur Farahin Mee; Zahid, Zalina

    2014-12-01

    Nowadays, in the job market, graduates are expected not only to perform well academically but also to excel in soft skills. Problem-Based Learning (PBL) has a number of distinct advantages as a learning method, as it can deliver graduates who will be highly prized by industry. This study attempts to determine the satisfaction level of engineering students with the PBL approach and to evaluate its determinant factors. Structural Equation Modeling (SEM) was used to investigate how the factors Good Teaching Scale, Clear Goals, Student Assessment and Levels of Workload affected student satisfaction with the PBL approach.

  17. An integrated approach to strategic planning in the civilian high-level radioactive waste management program

    International Nuclear Information System (INIS)

    Sprecher, W.M.; Katz, J.; Redmond, R.J.

    1992-01-01

    This paper describes the approach that the Office of Civilian Radioactive Waste Management (OCRWM) of the Department of Energy (DOE) is taking to the task of strategic planning for the civilian high-level radioactive waste management program. It highlights selected planning products and activities that have emerged over the past year. It demonstrates that this approach is an integrated one, both in the sense of being systematic on the program level but also as a component of DOE strategic planning efforts. Lastly, it indicates that OCRWM strategic planning takes place in a dynamic environment and consequently is a process that is still evolving in response to the demands placed upon it

  18. A Decomposition Approach for Shipboard Manpower Scheduling

    Science.gov (United States)

    2009-01-01

    generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower... to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
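    To make the underlying problem concrete, here is a minimal first-fit-decreasing heuristic for bin packing with conflicts, the constrained variant this record alludes to. The instance data is made up for illustration:

```python
# First-fit-decreasing sketch for bin packing with conflicts: items have sizes
# and a conflict graph; conflicting items may not share a bin. Data is
# hypothetical.
sizes = {"a": 4, "b": 5, "c": 3, "d": 6, "e": 2}
conflicts = {("a", "b"), ("c", "d")}   # unordered conflict pairs
capacity = 10

def conflict(i, j):
    return (i, j) in conflicts or (j, i) in conflicts

bins = []  # each bin is a list of item names
for item in sorted(sizes, key=sizes.get, reverse=True):  # largest items first
    for b in bins:
        if (sum(sizes[x] for x in b) + sizes[item] <= capacity
                and not any(conflict(item, x) for x in b)):
            b.append(item)   # fits and no conflict: place in first such bin
            break
    else:
        bins.append([item])  # open a new bin

print(bins)  # -> [['d', 'a'], ['b', 'c', 'e']]
```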

  19. Bayesian approach to magnetotelluric tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Červ, Václav; Pek, Josef; Menvielle, M.

    2010-01-01

    Roč. 53, č. 2 (2010), s. 21-32 ISSN 1593-5213 R&D Projects: GA AV ČR IAA200120701; GA ČR GA205/04/0746; GA ČR GA205/07/0292 Institutional research plan: CEZ:AV0Z30120515 Keywords : galvanic distortion * telluric distortion * impedance tensor * basic procedure * inversion * noise Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.336, year: 2010

  20. The Methodological Approach to Determining the Level of Formation and Provision of Enterprise Personnel Security

    Directory of Open Access Journals (Sweden)

    Gavkalova Nataliia L.

    2016-11-01

    Full Text Available The aim of the article is to substantiate a methodical approach to determining the level of formation and provision of enterprise personnel security. By analyzing, systematizing and generalizing the scientific achievements of many scientists, approaches to the evaluation of personnel security at the enterprise were considered, and a set of indices for the evaluation of personnel security was defined. The urgency of creating a comprehensive approach to the evaluation of personnel security was justified; the approach includes the following stages: defining a list of indices corresponding to the level of formation and provision of personnel security with the help of the expert evaluation method; calculating integral indices of personnel security for each component and the corresponding level by means of taxonomic analysis; and grouping enterprises by the level of formation and provision of personnel security with the use of cluster and discriminant analysis. It is found that the implementation of this approach will allow not only determining the level of formation and provision of personnel security at the enterprise, but also developing appropriate recommendations on improving its state. Prospects for further research in this direction are the evaluation of conditions for the formation and provision of personnel security at the enterprise, which will enable revealing negative destabilizing factors that influence personnel security.

  1. Thermal decomposition of irradiated casein molecules

    International Nuclear Information System (INIS)

    Ali, M.A.; Elsayed, A.A.

    1998-01-01

    Non-isothermal studies were carried out using the derivatograph, where thermogravimetry (TG) and differential thermogravimetry (DTG) measurements were used to obtain the activation energies of the first and second reactions of casein (glyco-phospho-protein) decomposition before and after exposure to 1 Gy γ-rays and up to 40 x 10^4 μGy fast neutrons. 252Cf was used as a source of fast neutrons, associated with γ-rays; a 137Cs source was used as a pure γ-source. The activation energies for the first and second reactions of casein decomposition were found to be smaller at 400 μGy than those at lower and higher fast neutron doses. However, no change in activation energies was observed after γ-irradiation. It is concluded from the present study that destruction of casein molecules by low-level fast neutron doses may lead to changes in the shelf storage period of milk.

  2. NRSA enzyme decomposition model data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...

  3. Dynamic systems approaches and levels of analysis in the nervous system

    Science.gov (United States)

    Parker, David; Srivastava, Vipin

    2013-01-01

    Various analyses are applied to physiological signals. While epistemological diversity is necessary to address effects at different levels, there is often a sense of competition between analyses rather than integration. This is evidenced by the differences in the criteria needed to claim understanding in different approaches. In the nervous system, neuronal analyses that attempt to explain network outputs in cellular and synaptic terms are rightly criticized as being insufficient to explain global effects, emergent or otherwise, while higher-level statistical and mathematical analyses can provide quantitative descriptions of outputs but can only hypothesize on their underlying mechanisms. The major gap in neuroscience is arguably our inability to translate what should be seen as complementary effects between levels. We thus ultimately need approaches that allow us to bridge between different spatial and temporal levels. Analytical approaches derived from critical phenomena in the physical sciences are increasingly being applied to physiological systems, including the nervous system, and claim to provide novel insight into physiological mechanisms and opportunities for their control. Analyses of criticality have suggested several important insights that should be considered in cellular analyses. However, there is a mismatch between lower-level neurophysiological approaches and statistical phenomenological analyses that assume that lower-level effects can be abstracted away, which means that these effects are unknown or inaccessible to experimentalists. As a result experimental designs often generate data that is insufficient for analyses of criticality. This review considers the relevance of insights from analyses of criticality to neuronal network analyses, and highlights that to move the analyses forward and close the gap between the theoretical and neurobiological levels, it is necessary to consider that effects at each level are complementary rather than in

  4. A New Approach to Site Demand-Based Level Inventory Optimization

    Science.gov (United States)

    2016-06-01

    [Report documentation page residue; recoverable details:] A New Approach to Site Demand-Based Level Inventory Optimization, Master's thesis by Tacettin Ersoz, June 2016; Thesis Advisor: Javier Salmeron; Second Reader: Emily... A surviving methodological note indicates that when probability distributions are estimated from the mean and variance, the estimates x̂_qi and σ̂²_qi are used to generate these, with n_qi denoting a number of...

  5. Practice-level approaches for behavioral counseling and patient health behaviors.

    Science.gov (United States)

    Balasubramanian, Bijal A; Cohen, Deborah J; Clark, Elizabeth C; Isaacson, Nicole F; Hung, Dorothy Y; Dickinson, L Miriam; Fernald, Douglas H; Green, Larry A; Crabtree, Benjamin F

    2008-11-01

    There is little empirical evidence to show that a practice-level approach that includes identifying patients in need of health behavior advice and linking them to counseling resources either in the practice or in the community results in improvements in patients' behaviors. This study examined whether patients in primary care practices that had practice-level approaches for physical activity and healthy-diet counseling were more likely to have healthier behaviors than patients in practices without practice-level approaches. A cross-sectional study of 54 primary care practices was conducted from July 2005 to January 2007. Practices were categorized into four groups depending on whether they had both identification tools (health risk assessment, registry) and linking strategies (within practice or to community resources); identification tools but no linking strategies; linking strategies but no identification tools; or neither identification tools nor linking strategies. Controlling for patient and practice characteristics, practices that had both identification tools and linking strategies for physical activity counseling were 80% more likely (95% CI=1.25, 2.59) to have patients who reported exercising regularly compared to practices that lacked both. Also, practices that had either identification tools or linking strategies but not both were approximately 50% more likely to have patients who reported exercising regularly. The use of a greater number of practice-level approaches for physical activity counseling was associated with higher odds of patients' reporting exercising regularly (p for trend=0.0002). Use of identification tools and linking strategies for healthy-eating counseling was not associated with patients' reports of healthy diets. This study suggests that practice-level approaches may enable primary care practices to help patients improve physical activity. However, these approaches may have different effects on different behaviors, and merit further

  6. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
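    The matrix step the abstract reduces to, an interpolative decomposition via randomized projection, can be sketched directly. This is a generic illustration of a randomized matrix ID, not the CTD-ID code; the test matrix and rank are assumptions:

```python
import numpy as np
from scipy.linalg import qr

# Sketch of a randomized interpolative decomposition (ID) of a matrix: sample
# the matrix with a random projection, pick skeleton columns by pivoted QR of
# the sketch, and express the remaining columns in terms of the skeleton.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 50))  # rank 8
k = 8

Y = rng.standard_normal((k + 5, A.shape[0])) @ A     # randomized row sketch
_, _, piv = qr(Y, mode='economic', pivoting=True)    # column pivots of sketch
skel = piv[:k]                                       # skeleton column indices

# coefficients T such that the non-skeleton columns ~ A[:, skel] @ T
T = np.linalg.lstsq(A[:, skel], A[:, piv[k:]], rcond=None)[0]
approx = np.empty_like(A)
approx[:, skel] = A[:, skel]
approx[:, piv[k:]] = A[:, skel] @ T
print(np.linalg.norm(A - approx) / np.linalg.norm(A))  # ~0 at exact rank k
```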

  7. Tensor gauge condition and tensor field decomposition

    Science.gov (United States)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.

  8. DOE systems approach to a low-level waste management information system: summary paper

    International Nuclear Information System (INIS)

    Esparza, V.

    1987-01-01

    The LLWMP is performing an assessment of the waste information systems currently in use at each DOE site for recording LLW data. The assessment is being conducted to determine what changes to the waste information systems, if any, are desirable to support implementation of this systems approach to LLW management. Recommendations will be made to DOE from this assessment on what would be involved in modifying current DOE waste generator information practices to support an appropriately structured overall DOE LLW data system. In support of reducing the uncertainty of decision-making, DOE has selected a systems approach to keep pace with an evolving regulatory climate for low-level waste. This approach considers the effects of each stage of the entire low-level waste management process. The proposed systems approach starts with the disposal side of the waste management system and progresses towards the waste generation side. Using this approach allows quantitative performance targets to be specified; in addition, a systems approach provides a method for selecting appropriate technology based on engineering models.

  9. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  10. An Integrated Skills Approach Using Feature Movies in EFL at Tertiary Level

    Science.gov (United States)

    Tuncay, Hidayet

    2014-01-01

    This paper presents the results of a case study based on an integrated skills approach using feature movies (DVDs) in EFL syllabi at the tertiary level. 100 students took part in the study, and the data was collected through a three-section survey questionnaire: demographic items, 18 Likert-scale questions and an open-ended question. The data…

  11. Batter's Choice and Differentiated Pitch Levels in Softball: A Student-Centered Approach

    Science.gov (United States)

    Olsen, Edward B.

    2015-01-01

    Teaching youth softball presents several challenges to practitioners. Chief among these are the mixed ability levels, backgrounds and knowledge students have of certain games. The other problem is the "one size fits all" approach to pitching and hitting. In other words, many softball units allow for only one standard type of pitch…

  12. Research on the Field of Education Policy: Exploring Different Levels of Approach and Abstraction

    Science.gov (United States)

    Mainardes, Jefferson; Tello, César

    2016-01-01

    This paper, of theoretical nature, explores the levels of approach and abstraction of research in the field of education policy: description, analysis and understanding. Such categories were developed based on concepts of Bourdieu's theory and on the grounds of epistemological studies focused on education policy and meta-research. This paper…

  13. Improving Students' Chemical Literacy Levels on Thermochemical and Thermodynamics Concepts through a Context-Based Approach

    Science.gov (United States)

    Cigdemoglu, Ceyhan; Geban, Omer

    2015-01-01

    The aim of this study was to delve into the effect of a context-based approach (CBA), compared with traditional instruction (TI), on students' chemical literacy related to thermochemical and thermodynamics concepts. Four eleventh-grade classes with 118 students in total, taught by two teachers from a public high school in the 2012 fall semester, were enrolled…

  14. Evaluation of Service Level Agreement Approaches for Portfolio Management in the Financial Industry

    Science.gov (United States)

    Pontz, Tobias; Grauer, Manfred; Kuebert, Roland; Tenschert, Axel; Koller, Bastian

    The idea of service-oriented Grid computing seems to have the potential for fundamental paradigm change and a new architectural alignment in the design of IT infrastructures. A wide range of technical approaches from the scientific community describe basic infrastructures and middleware for integrating Grid resources, so that Grid applications are by now technically realizable. Hence, Grid computing needs viable business models and enhanced infrastructures to move from academic application to commercial application. For commercial usage of these developments, service level agreements are needed. The approaches developed so far are primarily of academic interest and have mostly not been put into practice. Based on a business use case from the financial industry, five service level agreement approaches are evaluated in this paper. Based on the evaluation, a management architecture has been designed and implemented as a prototype.

  15. A decomposition of local labour-market conditions and their relevance for inequalities in transitions to vocational training

    OpenAIRE

    Hillmert, Steffen; Hartung, Andreas; Weßling, Katarina

    2017-01-01

    We investigate to what extent individual transitions to vocational training in Germany have been affected by local labour-market conditions. A statistical decomposition approach is developed and applied, allowing for a systematic differentiation between long-term change, short-term fluctuations, and structural regional differences in labour-market conditions. To study individual-level consequences for transitions to vocational training, regionalized labour-market data are merged with longitud...

  16. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are then used by the level set algorithm, which reduces the influence of noise on each of the two algorithms, facilitates level set manipulation and leads to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
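    The FCM stage of such a hybrid pipeline is compact enough to sketch. Below is a minimal fuzzy C-means on grey levels whose memberships could seed a level set initialization; the 1-D synthetic intensities and all parameters are assumptions for illustration:

```python
import numpy as np

# Minimal fuzzy C-means (FCM) on grey levels, the first stage of the hybrid
# pipeline described above; the fuzzy memberships could then seed the level
# set initialization. Synthetic 1-D intensities, hypothetical parameters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(60, 8, 300), rng.normal(160, 10, 300)])

c, m = 2, 2.0                         # number of clusters, fuzzifier
centers = np.array([50.0, 200.0])     # rough initial guesses
for _ in range(100):
    d = np.abs(x[:, None] - centers[None, :]) + 1e-9   # distances, shape (N, c)
    u = d ** (-2.0 / (m - 1.0))                        # membership numerators
    u /= u.sum(axis=1, keepdims=True)                  # normalize per pixel
    new_centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    if np.allclose(new_centers, centers, atol=1e-6):
        break
    centers = new_centers

print(np.round(centers, 1))  # ~ the two intensity modes (about 60 and 160)
```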

  17. Operational intervention levels in a nuclear emergency, general concepts and a probabilistic approach

    International Nuclear Information System (INIS)

    Lauritzen, B.; Baeverstam, U.; Naadland Holo, E.; Sinkko, K.

    1997-12-01

    This report deals with Operational Intervention Levels (OILs) in a nuclear or radiation emergency. OILs are defined as the values of environmental measurements, in particular dose rate measurements, above which specific protective actions should be carried out in emergency exposure situations. The derivation and the application of OILs are discussed, and an overview of the presently adopted values is provided, with emphasis on the situation in the Nordic countries. A new, probabilistic approach to derive OILs is presented and the method is illustrated by calculating dose rate OILs in a simplified setting. Contrary to the standard approach, the probabilistic approach allows for optimization of OILs. It is argued that optimized OILs may be much larger than the presently adopted or suggested values. It is recommended that the probabilistic approach is further developed and employed in determining site-specific OILs and in optimizing environmental measuring strategies. (au)

  18. Multimodal Approach for Automatic Emotion Recognition Applied to the Tension Levels Study in TV Newscasts

    Directory of Open Access Journals (Sweden)

    Moisés Henrique Ramos Pereira

    2015-12-01

    Full Text Available This article addresses a multimodal approach to automatic emotion recognition in participants of TV newscasts (presenters, reporters, commentators and others), able to assist the study of tension levels in narratives of events in this television genre. The methodology applies state-of-the-art computational methods to process and analyze facial expressions, as well as speech signals. The proposed approach contributes to the semiodiscoursive study of TV newscasts and their enunciative praxis, assisting, for example, the identification of the communication strategy of these programs. To evaluate its effectiveness, the proposed approach was applied to a video related to a report displayed on a Brazilian TV newscast of great popularity in the state of Minas Gerais. The experimental results are promising on the recognition of emotions from the facial expressions of telejournalists and are in accordance with the distribution of audiovisual indicators extracted over a TV newscast, demonstrating the potential of the approach to support TV journalistic discourse analysis.

  19. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    ... of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  20. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.

  1. Real interest parity decomposition

    Directory of Open Access Journals (Sweden)

    Alex Luiz Ferreira

    2009-09-01

    Full Text Available The aim of this paper is to investigate the general causes of real interest rate differentials (rids) for a sample of emerging markets for the period of January 1996 to August 2007. To this end, two methods are applied. The first consists of breaking the variance of rids down into relative purchasing power parity and uncovered interest rate parity components, and shows that inflation differentials are the main source of rids variation; the second method breaks the rids and nominal interest rate differentials (nids) down into nominal and real shocks. Bivariate autoregressive models are estimated under particular identification conditions, having been adequately treated for the identified structural breaks. Impulse response functions and error variance decompositions point to real shocks as the likely cause of rids.

  2. Minimax terminal approach problem in two-level hierarchical nonlinear discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F., E-mail: afshorikov@mail.ru [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia; Institute of Mathematics and Mechanics, Ural Branch of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)]

    2015-11-30

    We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding nonlinear or linear discrete-time recurrent vector relations, and the control system has two levels: a basic (first, or I) level that is dominating, and a subordinate (second, or II) level; the two levels have different criteria of functioning and are united a priori by informational and control connections defined in advance. For the dynamical system in question, we propose a mathematical formalization in the form of solving a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and give a general scheme for its solving.

  3. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Yoo, J.H.; Kim, E.H.

    1998-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for the mutual separation of the actinide and lanthanide groups. In this study, therefore, the photochemical decomposition mechanism of oxalates in the presence of nitric acid was elucidated by experimental work. The decomposition of oxalates was proved to be dominated by the reaction with hydroxyl radicals generated from the nitric acid, rather than with nitrite ions also formed from nitrate ions. The decomposition rate of neodymium oxalate, which was chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was found to be 0.003 M/hr under the conditions of 0.5 M HNO3 and room temperature when a mercury lamp was used as the light source. (author)

  4. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
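    As a concrete illustration of the linearization idea, the sketch below runs a Frank-Wolfe iteration on a small convex transportation problem: each step solves an ordinary linear transportation LP obtained by linearizing the objective. The quadratic objective and all data are made up stand-ins for the stochastic transportation cost:

```python
import numpy as np
from scipy.optimize import linprog

# Frank-Wolfe sketch for a convex transportation problem, a stand-in for the
# stochastic transportation objective: each iteration linearizes the objective
# and solves an ordinary linear transportation LP. All data is hypothetical.
supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])
q = np.array([[2.0, 1.0, 3.0],
              [1.5, 2.5, 1.0]]).ravel()     # f(x) = 0.5 * sum(q * x**2)

m, n = supply.size, demand.size
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0        # row sums equal supply
for j in range(n):
    A_eq[m + j, j::n] = 1.0                 # column sums equal demand
b_eq = np.concatenate([supply, demand])

x = linprog(np.ones(m * n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).x
for _ in range(200):
    grad = q * x                            # gradient of the quadratic cost
    s = linprog(grad, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).x  # linearized LP
    d = s - x
    curv = (q * d * d).sum()
    step = 1.0 if curv <= 1e-12 else min(1.0, -(grad * d).sum() / curv)
    x = x + step * d                        # exact line search for quadratics

print("objective:", 0.5 * (q * x * x).sum())
print("flows:\n", x.reshape(m, n))
```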

  5. MADCam: The multispectral active decomposition camera

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille

    2001-01-01

    A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform... that utilised information drawn from the temporal dimension instead of the traditional spatial approach. Using the CIF format (352x288), frame rates up to 30 Hz are obtained, and in VGA mode (640x480) up to 15 Hz.

  6. Dynamic probability evaluation of safety levels of earth-rockfill dams using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Zi-wu Fan

    2009-06-01

    Full Text Available In order to accurately predict and control the aging process of dams, new information should be collected continuously to renew the quantitative evaluation of dam safety levels. Owing to the complex structural characteristics of dams, it is quite difficult to predict the time-varying factors affecting their safety levels. It is not feasible to employ dynamic reliability indices to evaluate the actual safety levels of dams. Based on the relevant regulations for dam safety classification in China, a dynamic probability description of dam safety levels was developed. Using the Bayesian approach and effective information mining, as well as real-time information, this study achieved more rational evaluation and prediction of dam safety levels. With the Bayesian expression of discrete stochastic variables, the a priori probabilities of the dam safety levels determined by experts were combined with the likelihood probability of the real-time check information, and the probability information for the evaluation of dam safety levels was renewed. The probability index was then applied to dam rehabilitation decision-making. This method helps reduce the difficulty and uncertainty of the evaluation of dam safety levels and complies with the current safe decision-making regulations for dams in China. It also enhances the application of current risk analysis methods for dam safety levels.
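    The discrete Bayesian renewal at the heart of this method fits in a few lines. The sketch below combines hypothetical expert priors over safety levels with the likelihood of a new monitoring observation; all numbers are invented for illustration:

```python
import numpy as np

# Minimal sketch of the discrete Bayesian renewal described above: expert
# a priori probabilities over dam safety levels are combined with the
# likelihood of new real-time check information. All numbers are hypothetical.
levels = ["A (safe)", "B (basically safe)", "C (unsafe)"]
prior = np.array([0.6, 0.3, 0.1])            # expert a priori probabilities

# assumed P(observed monitoring reading | safety level)
likelihood = np.array([0.1, 0.4, 0.8])

posterior = prior * likelihood
posterior /= posterior.sum()                 # renewed safety-level probabilities
for lvl, p in zip(levels, posterior):
    print(f"{lvl}: {p:.3f}")
```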

  7. Interdependence and contagion among industry-level US credit markets: An application of wavelet and VMD based copula approaches

    Science.gov (United States)

    Shahzad, Syed Jawad Hussain; Nor, Safwan Mohd; Kumar, Ronald Ravinesh; Mensi, Walid

    2017-01-01

    This study examines the interdependence and contagion among US industry-level credit markets. We use daily data for 11 industries from 17 December 2007 to 31 December 2014 for the time-frequency analysis, namely wavelet squared coherence. The empirical analysis reveals that the Basic Materials (Utilities) industry credit market has the highest (lowest) interdependence with other industries. The Basic Materials credit market passes a cyclical effect to all other industries. 'Shift-contagion' as defined by Forbes and Rigobon (2002) is examined using elliptical and Archimedean copulas on the short-run decomposed series obtained through Variational Mode Decomposition (VMD). The contagion effects between US industry-level credit markets mainly occurred during the global financial crisis of 2007-08.

  8. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders' decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems ... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  9. Technical approach to finalizing sensible soil cleanup levels at the Fernald Environmental Management Project

    International Nuclear Information System (INIS)

    Carr, D.; Hertel, B.; Jewett, M.; Janke, R.; Conner, B.

    1996-01-01

    The remedial strategy for addressing contaminated environmental media was recently finalized for the US Department of Energy's (DOE) Fernald Environmental Management Project (FEMP) following almost 10 years of detailed technical analysis. The FEMP represents one of the first major nuclear facilities to successfully complete the Remedial Investigation/Feasibility Study (RI/FS) phase of the environmental restoration process. A critical element of this success was the establishment of sensible cleanup levels for contaminated soil and groundwater both on and off the FEMP property. These cleanup levels were derived based upon a strict application of Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations and guidance, coupled with positive input from the regulatory agencies and the local community regarding projected future land uses for the site. The approach for establishing the cleanup levels was based upon a Feasibility Study (FS) strategy that examined a bounding range of viable future land uses for the site. Within each land use, the cost and technical implications of a range of health-protective cleanup levels for the environmental media were analyzed. Technical considerations in driving these cleanup levels included: direct exposure routes to viable human receptors; cross-media impacts to air, surface water, and groundwater; technical practicality of attaining the levels; volume of affected media; impact to sensitive environmental receptors or ecosystems; and cost. This paper will discuss the technical approach used to support the finalization of the cleanup levels for the site. The final cleanup levels provide the last remaining significant piece to the puzzle of establishing a final site-wide remedial strategy for the FEMP, and positions the facility for the expedient completion of site-wide remedial activities.

  10. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a cube-per-order index policy based heuristic, the standard tabu search (TS), greedy TS, and dynamic neighbourhood based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.

  11. How diverse are physics instructors’ attitudes and approaches to teaching undergraduate level quantum mechanics?

    International Nuclear Information System (INIS)

    Siddiqui, Shabnam; Singh, Chandralekha

    2017-01-01

    Understanding instructors’ attitudes and approaches to teaching undergraduate-level quantum mechanics can be helpful in developing effective instructional tools to help students learn quantum mechanics. Here we discuss the findings from a survey in which 12 university faculty members reflected on various issues related to undergraduate-level quantum mechanics teaching and learning. Topics included faculty members’ thoughts on the goals of a college quantum mechanics course, general challenges in teaching the subject matter, students’ preparation for the course, views about foundational issues and the difficulty in teaching certain topics, reflection on their own learning of quantum mechanics when they were students versus how they teach it to their students and the extent to which they incorporate contemporary topics into their courses. The findings related to instructors’ attitudes and approaches discussed here can be useful in improving teaching and learning of quantum mechanics. (paper)

  12. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means of this objective function contain a multiplicative factor that estimates the bias field in the transformed domain; the bias field prior is thereby fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  13. Adversarial risk analysis with incomplete information: a level-k approach.

    Science.gov (United States)

    Rothschild, Casey; McLay, Laura; Guikema, Seth

    2012-07-01

    This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.

  14. A probabilistic approach to the computation of the levelized cost of electricity

    International Nuclear Information System (INIS)

    Geissmann, Thomas

    2017-01-01

    This paper sets forth a novel approach to calculate the levelized cost of electricity (LCOE) using a probabilistic model that accounts for endogenous input parameters. The approach is applied to the example of a nuclear and gas power project. Monte Carlo simulation results show that a correlation between input parameters has a significant effect on the model outcome. By controlling for endogeneity, a statistically significant difference in the mean LCOE estimate and a change in the order of input leverages is observed. Moreover, the paper discusses the role of discounting options and external costs in detail. In contrast to the gas power project, the economic viability of the nuclear project is considerably weaker. - Highlights: • First model of levelized cost of electricity accounting for uncertainty and endogeneities in input parameters. • Allowance for endogeneities significantly affects results. • Role of discounting options and external costs is discussed and modelled.
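    The core computation is compact: a distribution of discounted lifetime cost divided by discounted lifetime generation. The sketch below is a generic Monte Carlo LCOE with correlated inputs; the distributions, correlation and plant figures are invented for illustration and are not the paper's data:

```python
import numpy as np

# Generic probabilistic LCOE sketch: discounted lifetime costs over discounted
# lifetime generation, with uncertain (and here correlated) inputs sampled by
# Monte Carlo. All figures are hypothetical.
rng = np.random.default_rng(42)
N, life, r = 100_000, 40, 0.05                    # draws, years, discount rate

capex = rng.normal(5000, 600, N)                  # $/kW overnight cost
# annual fuel and O&M costs ($/kW-yr) drawn with positive correlation
cov = [[0.04, 0.01], [0.01, 0.09]]
fuel, om = np.exp(rng.multivariate_normal([np.log(60), np.log(50)], cov, N).T)

disc = (1 + r) ** -np.arange(1, life + 1)         # discount factors per year
energy = 8760 * 0.9                               # kWh per kW-year, 90% load
lcoe = (capex + (fuel + om) * disc.sum()) / (energy * disc.sum())  # $/kWh

print(f"mean LCOE: {lcoe.mean():.3f} $/kWh")
print(f"5th-95th percentile: {np.percentile(lcoe, 5):.3f} "
      f"to {np.percentile(lcoe, 95):.3f}")
```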

  15. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that in average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  16. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that in average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force. (orig.)

  17. Disposal approach for long-lived low and intermediate-level radioactive waste

    International Nuclear Information System (INIS)

    Park, Jin Beak; Park, Joo Wan; Kim, Chang Lak

    2005-01-01

    There exists a radioactive waste inventory that exceeds the waste acceptance criteria for final disposal of low- and intermediate-level radioactive waste. In this paper, the current disposal status of long-lived radioactive waste in several nations is summarized and basic procedures for a disposal approach are suggested. With this suggestion, intensive discussion and research activities can hopefully be launched to set down possible resolutions for the disposal of long-lived radioactive waste.

  18. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting

    OpenAIRE

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-01-01

    Background Flipped Classroom is a model that's quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. Methods The participants in the study consisted of 67 Masters-level graduate student...

  19. Approaches to assign security levels for radioactive substances and radiation sources

    International Nuclear Information System (INIS)

    Ivanov, M.V.; Petrovskij, N.P.; Pinchuk, G.N.; Telkov, S.N.; Kuzin, V.V.

    2011-01-01

    The article analyzes provisions on the categorization of radioactive substances and radiation sources according to the extent of their potential danger. These provisions are used in IAEA documents and in Russian regulatory documents to differentiate regulatory requirements for physical security. It is demonstrated that, taking into account possible violator threats, the rules of physical protection of radiation sources and radioactive substances should be amended as regards the approaches used to assign their categories and security levels [ru]

  20. A scaling approach to project regional sea level rise and its uncertainties

    Directory of Open Access Journals (Sweden)

    M. Perrette

    2013-01-01

    Full Text Available Climate change causes global mean sea level to rise due to thermal expansion of seawater and loss of land ice from mountain glaciers, ice caps and ice sheets. Locally, sea level can strongly deviate from the global mean rise due to changes in wind and ocean currents. In addition, gravitational adjustments redistribute seawater away from shrinking ice masses. However, the land ice contribution to sea level rise (SLR) remains very challenging to model, and comprehensive regional sea level projections, which include appropriate gravitational adjustments, are still a nascent field (Katsman et al., 2011; Slangen et al., 2011). Here, we present an alternative approach to derive regional sea level changes for a range of emission and land ice melt scenarios, combining probabilistic forecasts of a simple climate model (MAGICC6) with the new CMIP5 general circulation models. The contribution from ice sheets varies considerably depending on the assumptions for the ice sheet projections, and thus represents sizeable uncertainties for future sea level rise. However, several consistent and robust patterns emerge from our analysis: at low latitudes, especially in the Indian Ocean and Western Pacific, sea level will likely rise more than the global mean (mostly by 10–20%). Around the northeastern Atlantic and the northeastern Pacific coasts, sea level will rise less than the global average or, in some rare cases, even fall. In the northwestern Atlantic, along the American coast, a strong dynamic sea level rise is counteracted by gravitational depression due to Greenland ice melt; whether sea level will be above- or below-average will depend on the relative contribution of these two factors. Our regional sea level projections and the diagnosed uncertainties provide an improved basis for coastal impact analysis and infrastructure planning for adaptation to climate change.

  1. The application of an industry level participatory ergonomics approach in developing MSD interventions.

    Science.gov (United States)

    Tappin, D C; Vitalis, A; Bentley, T A

    2016-01-01

    Participatory ergonomics projects are traditionally applied within one organisation. In this study, a participative approach was applied across the New Zealand meat processing industry, involving multiple organisations and geographical regions. The purpose was to develop interventions to reduce musculoskeletal disorder (MSD) risk. This paper considers the value of an industry level participatory ergonomics approach in achieving this. The main rationale for a participative approach included the need for industry credibility, and to generate MSD interventions that address industry level MSD risk factors. An industry key stakeholder group became the primary vehicle for formal participation. The study resulted in an intervention plan that included the wider work system and industry practices. These interventions were championed across the industry by the key stakeholder group and have extended beyond the life of the study. While this approach helped to meet the study aim, the existence of an industry-supported key stakeholder group and a mandate for the initiative are important prerequisites for success. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. An approach to local diagnostic reference levels (DRL's) in the context of national and international DRL's

    International Nuclear Information System (INIS)

    Rogers, A.T.

    2001-01-01

    In recent years there has been a greater focus on the management of patient doses. This effort has been driven by the realisation of both the increasing magnitude of patient doses and their variation intra- and inter-nationally. Legislators and guidance-issuing bodies have developed the idea of 'Diagnostic Reference Levels' (DRL's). In particular, the European Union, in Council Directive 97/43/Euratom, required Member States to develop DRL's. The UK Government, when transposing this EU Directive into UK legislation, extended the concept of DRL's from a national to an employer level. However, the methodologies used for the development of national and international DRL's do not translate to a local level, and hence a new approach is required. This paper describes one particular approach taken by a UK hospital to introduce 'Local DRL's' in such a manner as to aid the optimisation process. This approach utilises a dose index, based on the local patient population, which is monitored for trends. Any trend in patient dose triggers an investigation linked to the clinical audit system within the Clinical Radiology Department. It is the audit cycle that ensures a continuing move towards an optimised situation. Additional triggers may be employed, such as large patient dose variations. (author)

  3. A System of Systems Approach to Integrating Global Sea Level Change Application Programs

    Science.gov (United States)

    Bambachus, M. J.; Foster, R. S.; Powell, C.; Cole, M.

    2005-12-01

    The global sea level change application community has numerous disparate models used to make predictions over various regional and temporal scales. These models have typically been focused on limited sets of data and optimized for specific areas or questions of interest. Increasingly, decision makers at the national, international, and local/regional levels require access to these application data models and want to be able to integrate large disparate data sets, with new ubiquitous sensor data, and use these data across models from multiple sources. These requirements will force the Global Sea Level Change application community to take a new system-of-systems approach to their programs. We present a new technical architecture approach to the global sea level change program that provides external access to the vast stores of global sea level change data, provides a collaboration forum for the discussion and visualization of data, and provides a simulation environment to evaluate decisions. This architectural approach will provide the tools to support multi-disciplinary decision making. A conceptual system-of-systems approach is needed to address questions around the multiple approaches to tracking and predicting sea level change. A system-of-systems approach would include (1) a forum of data providers, modelers, and users, (2) a service-oriented architecture including interoperable web services with a backbone of Grid computing capability, and (3) discovery and access functionality for the information developed through this structure. Each of these three areas would be clearly designed to maximize communication, data use for decision making, and flexibility and extensibility for the evolution of technology and requirements. In contemplating a system-of-systems approach, it is important to highlight common understanding and coordination as foundational to success across the multiple systems. The workflow of science in different applications is often conceptually similar

  4. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), Excellent Classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  5. Presenting a Multi-level Superstructure Optimization Approach for Mechatronic System Design

    DEFF Research Database (Denmark)

    Pedersen, Henrik C.; Andersen, Torben Ole; Bech, Michael Møller

    2010-01-01

    Synergism and integration in the design process is what sets apart a mechatronic system from a traditional, multidisciplinary system. However, the typical design approach has been to divide the design problem into sub-problems for each technology area (mechanics, electronics and control) and describe the interfaces between the technologies, whereas the lack of well-established, systematic engineering methods to form the basic set-off in the analysis and design of complete mechatronic systems has been obvious. The focus of the current paper is therefore to present an integrated design approach for mechatronic system design, utilizing a multi-level superstructure optimization based approach. Finally, two design examples are presented and the possibilities and limitations of the approach are outlined.

  6. A multi-attribute approach to choosing adaptation strategies: Application to sea-level rise

    International Nuclear Information System (INIS)

    Smith, A.E.; Chu, H.Q.

    1994-01-01

    Selecting good adaptation strategies in anticipation of climate change is gaining attention as it becomes increasingly clear that much of the likely change is already committed, and could not be avoided even with aggressive and immediate emissions reductions. Adaptation decision making will place special requirements on regional and local planners in the US and other countries, especially developing countries. Approaches, tools, and guidance will be useful to assist in an effective response to the challenge. This paper describes the value of using a multi-attribute approach for evaluating adaptation strategies and its implementation as a decision-support software tool to help planners understand and execute this approach. The multi-attribute approach described here explicitly addresses the fact that many aspects of the decision cannot be easily quantified, that future conditions are highly uncertain, and that there are issues of equity, flexibility, and coordination that may be as important to the decision as costs and benefits. The approach suggested also avoids trying to collapse information on all of the attributes into a single metric. Such metrics can obliterate insights about the nature of the trade-offs that must be made in choosing among very dissimilar types of responses to the anticipated threat of climate change. Implementation of such an approach requires the management of much information, and an ability to easily manipulate its presentation while seeking acceptable trade-offs. The Adaptation Strategy Evaluator (ASE) was developed under funding from the US Environmental Protection Agency to provide user-friendly, PC-based guidance through the major steps of a multi-attribute evaluation. The initial application of ASE, and the focus of this paper, is adaptation to sea level rise. However, the approach can be easily adapted to any multi-attribute choice problem, including the range of other adaptation planning needs.
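    One way to keep trade-offs visible without collapsing all attributes into a single metric, as the abstract advocates, is to screen strategies by Pareto non-domination. The sketch below is illustrative only and is not part of the ASE tool; the strategy names and 0-10 attribute scores are hypothetical, and higher is assumed better on every attribute.

    ```python
    # Illustrative sketch: Pareto screening of adaptation strategies.
    # Strategy names and scores are hypothetical; higher is better.

    from typing import Dict, List

    def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
        """True if a is at least as good as b everywhere and better somewhere."""
        return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

    def pareto_front(strategies: Dict[str, Dict[str, float]]) -> List[str]:
        """Keep only strategies not dominated by any other strategy."""
        names = list(strategies)
        return [n for n in names
                if not any(dominates(strategies[m], strategies[n])
                           for m in names if m != n)]

    strategies = {
        "seawall":         {"cost": 2, "equity": 5, "flexibility": 3},
        "managed_retreat": {"cost": 6, "equity": 4, "flexibility": 8},
        "beach_nourish":   {"cost": 5, "equity": 6, "flexibility": 6},
        "do_nothing":      {"cost": 9, "equity": 2, "flexibility": 2},
    }

    print(pareto_front(strategies))  # the surviving trade-off candidates
    ```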

  7. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyls) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen 73 ± 2 kcal/mol; benzene 76 ± 2 kcal/mol; meta-triphenyl 53 ± 2 kcal/mol; biphenyl decomposition 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the determination of aromatic C-H bonds by infrared spectrophotometry. As a result, the existence in the tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)

  8. On the correspondence between data revision and trend-cycle decomposition

    NARCIS (Netherlands)

    Dungey, M.; Jacobs, J. P. A. M.; Tian, J.; van Norden, S.

    2013-01-01

    This article places the data revision model of Jacobs and van Norden (2011) within a class of trend-cycle decompositions relating directly to the Beveridge-Nelson decomposition. In both these approaches, identifying restrictions on the covariance matrix under simple and realistic conditions may

  9. Word-level recognition of multifont Arabic text using a feature vector matching approach

    Science.gov (United States)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.

  10. Theoretical and experimental study: the size dependence of decomposition thermodynamics of nanomaterials

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Li, Wenjiao; Xue, Yongqiang

    2015-01-01

    In the processes of preparation and application of nanomaterials, the decomposition reactions of nanomaterials are often involved. However, there is a dramatic difference in decomposition thermodynamics between nanomaterials and their bulk counterparts, and the difference depends on the size of the particles that compose the nanomaterials. In this paper, the decomposition model of a nanoparticle was built, a theory of the decomposition thermodynamics of nanomaterials was proposed, and relations for the size dependence of the thermodynamic quantities of decomposition reactions were deduced. In experiment, taking the thermal decomposition of nano-Cu2(OH)2CO3 with different particle sizes (radius range 8.95–27.4 nm) as a system, the reaction thermodynamic quantities were determined, and the regularities of the size dependence of the quantities were summarized. These experimental regularities are consistent with the above thermodynamic relations. The results show that there is a significant effect of the size of the particles composing a nanomaterial on the decomposition thermodynamics. When all the decomposition products are gases, the differences in the thermodynamic quantities of reaction between the nanomaterials and the bulk counterparts depend on the particle size; when one of the decomposition products is a solid, the differences depend on both the initial particle size of the nanoparticle and the decomposition ratio. When the decomposition ratio is very small, these differences are only related to the initial particle size; and when the radius of the nanoparticles approaches or exceeds 10 nm, the reaction thermodynamic functions and the logarithm of the equilibrium constant are linearly related to the reciprocal of the radius. The thermodynamic theory can quantitatively describe the regularities of the size dependence of thermodynamic quantities for decomposition reactions of nanomaterials, and contribute to the researches and the

  11. How to Track Adaptation to Climate Change: A Typology of Approaches for National-Level Application

    Directory of Open Access Journals (Sweden)

    James D. Ford

    2013-09-01

    The need to track climate change adaptation progress is being increasingly recognized but our ability to do the tracking is constrained by the complex nature of adaptation and the absence of measurable outcomes or indicators by which to judge if and how adaptation is occurring. We developed a typology of approaches by which climate change adaptation can be tracked globally at a national level. On the one hand, outcome-based approaches directly measure adaptation progress and effectiveness with reference to avoided climate change impacts. However, given that full exposure to climate change impacts will not happen for decades, alternative approaches focus on developing indicators or proxies by which adaptation can be monitored. These include systematic measures of adaptation readiness, processes undertaken to advance adaptation, policies and programs implemented to adapt, and measures of the impacts of these policies and programs on changing vulnerability. While these approaches employ various methods and data sources, and identify different components of adaptation progress to track at the national level, they all seek to characterize the current status of adaptation by which progress over time can be monitored. However, there are significant challenges to operationalizing these approaches, including an absence of systematically collected data on adaptation actions and outcomes, underlying difficulties of defining what constitutes "adaptation", and a disconnect between the timescale over which adaptation plays out and the practical need for evaluation to inform policy. Given the development of new adaptation funding streams, it is imperative that tools for monitoring progress are developed and validated for identifying trends and gaps in adaptation response.

  12. Interconnected levels of Multi-Stage Marketing – A Triadic approach

    DEFF Research Database (Denmark)

    Vedel, Mette; Geersbro, Jens; Ritter, Thomas

    2012-01-01

    Multi-stage marketing gains increasing attention as knowledge of and influence on the customer's customer become more critical for the firm's success. Despite this increasing managerial relevance, systematic approaches for analyzing multi-stage marketing are still missing. This paper conceptualizes different levels of multi-stage marketing and illustrates these stages with a case study. In addition, a triadic perspective is introduced as an analytical tool for multi-stage marketing research. The results from the case study indicate that multi-stage marketing exists on different levels. Thus, managers must not only decide in general on the merits of multi-stage marketing for their firm, but must also decide on which level they will engage in multi-stage marketing. The triadic perspective enables a rich and multi-dimensional understanding of how different business relationships influence each other.

  13. A hybrid multi-level optimization approach for the dynamic synthesis/design and operation/control under uncertainty of a fuel cell system

    International Nuclear Information System (INIS)

    Kim, Kihyung; Spakovsky, Michael R. von; Wang, M.; Nelson, Douglas J.

    2011-01-01

    During system development, large-scale, complex energy systems require multi-disciplinary efforts to achieve system quality, cost, and performance goals. As systems become larger and more complex, the number of possible system configurations and technologies, which meet the designer's objectives optimally, increases greatly. In addition, both transient and environmental effects may need to be taken into account. Thus, the difficulty of developing the system via the formulation of a single optimization problem in which the optimal synthesis/design and operation/control of the system are achieved simultaneously is great and rather problematic. This difficulty is further heightened with the introduction of uncertainty analysis, which transforms the problem from a purely deterministic one into a probabilistic one. Uncertainties, system complexity and nonlinearity, and large numbers of decision variables quickly render the single optimization problem unsolvable by conventional, single-level, optimization strategies. To address these difficulties, the strategy adopted here combines a dynamic physical decomposition technique for large-scale optimization with a response sensitivity analysis method for quantifying system response uncertainties to given uncertainty sources. The feasibility of such a hybrid approach is established by applying it to the synthesis/design and operation/control of a 5 kW proton exchange membrane (PEM) fuel cell system.

  14. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    Science.gov (United States)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.
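    The paper's algorithm targets tri-level problems with quadratic fractional objectives at each level; as a minimal illustration of the underlying fuzzy goal programming idea, the sketch below solves a single-level linear problem with Zimmermann-style max-min aggregation of goal memberships. All coefficients, aspiration intervals, and the resource constraint are hypothetical.

    ```python
    # Minimal sketch of max-min fuzzy goal programming (Zimmermann-style),
    # on a single-level linear problem; the paper extends the idea to three
    # nested levels with quadratic fractional objectives. Numbers are made up.

    import numpy as np
    from scipy.optimize import linprog

    # Two fuzzy goals g1 = 3*x1 + x2 and g2 = x1 + 2*x2 with aspiration
    # intervals [L_i, U_i]; membership mu_i = (g_i - L_i) / (U_i - L_i).
    c_goals = np.array([[3.0, 1.0],
                        [1.0, 2.0]])
    L = np.array([6.0, 4.0])    # fully unsatisfactory goal levels
    U = np.array([12.0, 10.0])  # fully satisfactory (aspiration) levels

    # Variables z = (x1, x2, lam); maximize lam subject to mu_i >= lam and a
    # resource constraint x1 + x2 <= 5. In A_ub z <= b_ub form:
    # lam - (c_i . x)/(U_i - L_i) <= -L_i/(U_i - L_i).
    A_ub, b_ub = [], []
    for i in range(2):
        scale = U[i] - L[i]
        A_ub.append([-c_goals[i, 0] / scale, -c_goals[i, 1] / scale, 1.0])
        b_ub.append(-L[i] / scale)
    A_ub.append([1.0, 1.0, 0.0])
    b_ub.append(5.0)

    res = linprog(c=[0.0, 0.0, -1.0],                 # minimize -lam
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (0, 1)])
    x1, x2, lam = res.x
    print(f"x = ({x1:.3f}, {x2:.3f}), overall satisfaction lambda = {lam:.3f}")
    ```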

  15. Energy level alignment at hybridized organic-metal interfaces from a GW projection approach

    Science.gov (United States)

    Chen, Yifeng; Tamblyn, Isaac; Quek, Su Ying

    Energy level alignments at organic-metal interfaces are of profound importance in numerous (opto)electronic applications. Standard density functional theory (DFT) calculations generally give incorrect energy level alignments and miss long-range polarization effects. Previous efforts to address this problem using the many-electron GW method have focused on physisorbed systems where hybridization effects are insignificant. Here, we use state-of-the-art GW methods to predict the level alignment at the amine-Au interface, where molecular levels do hybridize with metallic states. This non-trivial hybridization implies that the DFT result is a poor approximation to the quasiparticle states. However, we find that the self-energy operator is approximately diagonal in the molecular basis, allowing us to use a projection approach to predict the level alignments. Our results indicate that the metallic substrate reduces the HOMO-LUMO gap by 3.5–4.0 eV, depending on the molecular coverage/presence of Au adatoms. Our GW results are further compared with those of a simple image charge model that describes the level alignment in physisorbed systems. SYQ and YC acknowledge Grant NRF-NRFF2013-07 and the medium-sized centre program from the National Research Foundation, Singapore.
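    The simple image charge model mentioned at the end of the abstract can be evaluated on the back of an envelope: a charge at distance d above a perfect conductor is stabilized by e^2/(16*pi*eps0*d), so the occupied level shifts up and the unoccupied level shifts down by that amount. The sketch below uses this textbook formula; the molecule-to-image-plane distances are hypothetical, not values from the paper.

    ```python
    # Back-of-the-envelope image-charge gap renormalization.
    # e^2 / (16 * pi * eps0) expressed in eV * Angstrom:
    E_IMG_PREFACTOR = 3.6

    def gap_reduction_ev(d_ang: float) -> float:
        """Gap narrowing (eV) for a level at d_ang Angstroms above the image
        plane, in the classical point-charge approximation: the occupied
        level rises and the empty level falls by 3.6/d each."""
        return 2.0 * E_IMG_PREFACTOR / d_ang

    for d in (1.8, 2.0, 2.5):   # hypothetical image-plane distances
        print(f"d = {d:.1f} A -> gap reduced by {gap_reduction_ev(d):.2f} eV")
    ```

    For distances of roughly 1.8 to 2.0 Angstroms this simple model already gives reductions in the 3.6 to 4.0 eV range, the same order as the GW values quoted above for the physisorbed limit.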

  16. Lie bialgebras with triangular decomposition

    International Nuclear Information System (INIS)

    Andruskiewitsch, N.; Levstein, F.

    1992-06-01

    Lie bialgebras originated in a triangular decomposition of the underlying Lie algebra are discussed. The explicit formulas for the quantization of the Heisenberg Lie algebra and some motion Lie algebras are given, as well as the algebra of rational functions on the quantum Heisenberg group and the formula for the universal R-matrix. (author). 17 refs

  17. Decomposition of metal nitrate solutions

    International Nuclear Information System (INIS)

    Haas, P.A.; Stines, W.B.

    1982-01-01

    Oxides in powder form are obtained from aqueous solutions of one or more heavy metal nitrates (e.g. U, Pu, Th, Ce) by thermal decomposition at 300 to 800 deg C in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal. (author)

  18. Probability inequalities for decomposition integrals

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2017-01-01

    Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords : Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

  19. Thermal decomposition of ammonium hexachloroosmate

    DEFF Research Database (Denmark)

    Asanova, T I; Kantor, Innokenty; Asanov, I. P.

    2016-01-01

    Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reduction atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to meta...

  20. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  1. TG-FTIR, DSC and quantum chemical studies of the thermal decomposition of quaternary methylammonium halides

    International Nuclear Information System (INIS)

    Sawicka, Marlena; Storoniak, Piotr; Skurski, Piotr; Blazejowski, Jerzy; Rak, Janusz

    2006-01-01

    The thermal decomposition of quaternary methylammonium halides was studied using thermogravimetry coupled to FTIR (TG-FTIR) and differential scanning calorimetry (DSC) as well as the DFT, MP2 and G2 quantum chemical methods. There is almost perfect agreement between the experimental IR spectra and those predicted at the B3LYP/6-311G(d,p) level: this has demonstrated for the first time that an equimolar mixture of trimethylamine and a methyl halide is produced as a result of decomposition. The experimental enthalpies of dissociation are 153.4, 171.2, and 186.7 kJ/mol for chloride, bromide and iodide, respectively, values that correlate well with the calculated enthalpies of dissociation based on crystal lattice energies and quantum chemical thermodynamic barriers. The experimental activation barriers estimated from the least-squares fit of the F1 kinetic model (first-order process) to thermogravimetric traces - 283, 244 and 204 kJ/mol for chloride, bromide and iodide, respectively - agree very well with theoretically calculated values. The theoretical approach assumed in this work has been shown capable of predicting the relevant characteristics of the thermal decomposition of solids with experimental accuracy
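    As a hedged illustration of the F1 (first-order) kinetic fitting mentioned above: for isothermal traces the model gives -ln(1 - alpha) = k(T) t, so k follows from a linear fit at each temperature and the activation energy from an Arrhenius plot. The sketch below runs on synthetic data with a known barrier so the recovery can be checked; the paper itself fits non-isothermal thermogravimetric traces, which requires an additional heating-rate term.

    ```python
    # First-order (F1) kinetics on synthetic isothermal conversion curves;
    # A and Ea_true are arbitrary choices for self-checking the recovery.

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def k_from_trace(t: np.ndarray, alpha: np.ndarray) -> float:
        """Least-squares slope of -ln(1 - alpha) vs t (zero intercept)."""
        y = -np.log(1.0 - alpha)
        return float(np.dot(t, y) / np.dot(t, t))

    Ea_true, A = 200e3, 1e12
    temps = np.array([540.0, 560.0, 580.0])      # K
    t = np.linspace(1.0, 600.0, 50)              # s
    ks = []
    for T in temps:
        k = A * np.exp(-Ea_true / (R * T))
        alpha = 1.0 - np.exp(-k * t)             # exact F1 solution
        ks.append(k_from_trace(t, alpha))

    slope, _ = np.polyfit(1.0 / temps, np.log(ks), 1)   # Arrhenius plot
    print(f"recovered Ea = {-slope * R / 1e3:.1f} kJ/mol (true 200.0)")
    ```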

  2. Efficient morse decompositions of vector fields.

    Science.gov (United States)

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract the individual trajectories such as fixed points, periodic orbits, and separatrices that are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCG's, while fast, are overly conservative and usually result in MCG's that are too coarse to be useful for the applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCG's than existing techniques. Furthermore, the choice of tau provides a natural trade-off between the fineness of the MCG's and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach in constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCG's and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces including engine simulation data sets.
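    The core of such a construction can be sketched compactly: map sample points of each mesh cell forward under the flow for time tau, connect each cell to the cells its image lands in, and take strongly connected components of the resulting directed graph as candidate Morse sets (their condensation is the MCG). The toy below is not the paper's implementation: it uses a coarse uniform grid instead of a triangle mesh, a hard-coded spiral-sink field, and crude Euler integration.

    ```python
    # Toy Morse-decomposition sketch: cell-to-cell tau-map graph plus SCCs.

    import numpy as np
    import networkx as nx

    def velocity(p):
        """Toy vector field with a spiral sink at the origin."""
        x, y = p
        return np.array([-x - y, x - y])

    N, lo, hi = 16, -1.0, 1.0          # grid resolution and domain
    tau, steps = 0.3, 10               # map time and Euler sub-steps
    h = (hi - lo) / N

    def cell(p):
        return (min(int((p[0] - lo) / h), N - 1),
                min(int((p[1] - lo) / h), N - 1))

    G = nx.DiGraph()
    samples = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    for i in range(N):
        for j in range(N):
            for sx, sy in samples:     # a few sample points per cell
                p = np.array([lo + (i + sx) * h, lo + (j + sy) * h])
                for _ in range(steps): # integrate forward by tau
                    p = np.clip(p + (tau / steps) * velocity(p), lo, hi - 1e-9)
                G.add_edge((i, j), cell(p))

    # Recurrent strongly connected components are the candidate Morse sets;
    # the condensation of G over them would be the MCG.
    morse_sets = [c for c in nx.strongly_connected_components(G)
                  if len(c) > 1 or G.has_edge(next(iter(c)), next(iter(c)))]
    print(f"{len(morse_sets)} candidate Morse set(s), sizes:",
          sorted(len(c) for c in morse_sets))
    ```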

  3. A Single-Granule-Level Approach Reveals Ecological Heterogeneity in an Upflow Anaerobic Sludge Blanket Reactor.

    Directory of Open Access Journals (Sweden)

    Kyohei Kuroda

    Upflow anaerobic sludge blanket (UASB) reactor has served as an effective process to treat industrial wastewater such as purified terephthalic acid (PTA) wastewater. For optimal UASB performance, balanced ecological interactions between syntrophs, methanogens, and fermenters are critical. However, much of the interactions remain unclear because UASB have been studied at a "macro"-level perspective of the reactor ecosystem. In reality, such reactors are composed of a suite of granules, each forming individual micro-ecosystems treating wastewater. Thus, typical approaches may be oversimplifying the complexity of the microbial ecology and granular development. To identify critical microbial interactions at both macro- and micro-level ecosystem ecology, we perform community and network analyses on 300 PTA-degrading granules from a lab-scale UASB reactor and two full-scale reactors. Based on MiSeq-based 16S rRNA gene sequencing of individual granules, different granule-types co-exist in both full-scale reactors regardless of granule size and reactor sampling depth, suggesting that distinct microbial interactions occur in different granules throughout the reactor. In addition, we identify novel networks of syntrophic metabolic interactions in different granules, perhaps caused by distinct thermodynamic conditions. Moreover, unseen methanogenic relationships (e.g. "Candidatus Aminicenantes" and Methanosaeta) are observed in UASB reactors. In total, we discover unexpected microbial interactions in granular micro-ecosystems supporting UASB ecology and treatment through a unique single-granule level approach.

  4. A Single-Granule-Level Approach Reveals Ecological Heterogeneity in an Upflow Anaerobic Sludge Blanket Reactor

    Science.gov (United States)

    Mei, Ran; Narihiro, Takashi; Bocher, Benjamin T. W.; Yamaguchi, Takashi; Liu, Wen-Tso

    2016-01-01

    Upflow anaerobic sludge blanket (UASB) reactor has served as an effective process to treat industrial wastewater such as purified terephthalic acid (PTA) wastewater. For optimal UASB performance, balanced ecological interactions between syntrophs, methanogens, and fermenters are critical. However, much of the interactions remain unclear because UASB have been studied at a “macro”-level perspective of the reactor ecosystem. In reality, such reactors are composed of a suite of granules, each forming individual micro-ecosystems treating wastewater. Thus, typical approaches may be oversimplifying the complexity of the microbial ecology and granular development. To identify critical microbial interactions at both macro- and micro- level ecosystem ecology, we perform community and network analyses on 300 PTA–degrading granules from a lab-scale UASB reactor and two full-scale reactors. Based on MiSeq-based 16S rRNA gene sequencing of individual granules, different granule-types co-exist in both full-scale reactors regardless of granule size and reactor sampling depth, suggesting that distinct microbial interactions occur in different granules throughout the reactor. In addition, we identify novel networks of syntrophic metabolic interactions in different granules, perhaps caused by distinct thermodynamic conditions. Moreover, unseen methanogenic relationships (e.g. “Candidatus Aminicenantes” and Methanosaeta) are observed in UASB reactors. In total, we discover unexpected microbial interactions in granular micro-ecosystems supporting UASB ecology and treatment through a unique single-granule level approach. PMID:27936088

  5. Achieving Population-Level Change Through a System-Contextual Approach to Supporting Competent Parenting.

    Science.gov (United States)

    Sanders, Matthew R; Burke, Kylie; Prinz, Ronald J; Morawska, Alina

    2017-03-01

    The quality of parenting children receive affects a diverse range of child and youth outcomes. Addressing the quality of parenting on a broad scale is a critical part of producing a more nurturing society. To achieve a meaningful population-level reduction in the prevalence rates of child maltreatment and social and emotional problems that are directly or indirectly influenced by parenting practices requires the adoption of a broad ecological perspective in supporting families to raise children. We make the case for adopting a multilevel, whole of population approach to enhance competent parenting and describe the essential tasks that must be accomplished for the approach to be successful and its effects measurable. We describe how a theoretically integrated system of parenting support based on social learning and cognitive behavioral principles can be further strengthened when the broader community supports parental participation. Implications for policy and practice are discussed.

  6. Long-memory and the sea level-temperature relationship: a fractional cointegration approach.

    Science.gov (United States)

    Ventosa-Santaulària, Daniel; Heres, David R; Martínez-Hernández, L Catalina

    2014-01-01

    Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is only known with large uncertainties due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables providing an alternative measure on which to base potentially disrupting impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications had addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature which in our estimations has a statistically significant positive impact on global sea level.
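    A standard diagnostic behind such long-memory claims is the Geweke-Porter-Hudak (GPH) log-periodogram regression estimate of the fractional differencing parameter d. The sketch below, which is not the paper's estimator, demonstrates it on a synthetic fractionally integrated series so the answer can be checked against the known d.

    ```python
    # GPH log-periodogram estimate of the long-memory parameter d,
    # demonstrated on synthetic fractionally integrated noise.

    import numpy as np

    def gph_d(x, power=0.5):
        """GPH estimate of d using the first m ~ n**power Fourier frequencies."""
        n = len(x)
        m = int(n ** power)
        j = np.arange(1, m + 1)
        lam = 2.0 * np.pi * j / n
        fx = np.fft.fft(x - x.mean())
        periodogram = np.abs(fx[1:m + 1]) ** 2 / (2.0 * np.pi * n)
        regressor = np.log(4.0 * np.sin(lam / 2.0) ** 2)
        slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
        return -slope

    # Synthetic x = (1 - B)^(-d) eps via its MA(inf) weights,
    # psi_k = psi_{k-1} * (k - 1 + d) / k.
    rng = np.random.default_rng(0)
    d_true, n = 0.3, 4096
    k = np.arange(1, n)
    psi = np.cumprod(np.concatenate(([1.0], (k - 1 + d_true) / k)))
    x = np.convolve(rng.standard_normal(n), psi)[:n]
    print(f"GPH estimate of d: {gph_d(x):.2f} (true value {d_true})")
    ```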

  7. Risk newsboy: approach for addressing uncertainty in developing action levels and cleanup limits

    International Nuclear Information System (INIS)

    Cooke, Roger; MacDonell, Margaret

    2007-01-01

    Site cleanup decisions involve developing action levels and residual limits for key contaminants, to assure health protection during the cleanup period and into the long term. Uncertainty is inherent in the toxicity information used to define these levels, based on incomplete scientific knowledge regarding dose-response relationships across various hazards and exposures at environmentally relevant levels. This problem can be addressed by applying principles used to manage uncertainty in operations research, as illustrated by the newsboy dilemma. Each day a newsboy must balance the risk of buying more papers than he can sell against the risk of not buying enough. Setting action levels and cleanup limits involves a similar concept of balancing and distributing risks and benefits in the face of uncertainty. The newsboy approach can be applied to develop health-based target concentrations for both radiological and chemical contaminants, with stakeholder input being crucial to assessing 'regret' levels. Associated tools include structured expert judgment elicitation to quantify uncertainty in the dose-response relationship, and mathematical techniques such as probabilistic inversion and iterative proportional fitting. (authors)
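    The newsboy balance the abstract invokes has a classic closed form: with a per-unit regret cu for setting the limit too low and co for setting it too high, the optimal choice is the cu/(cu + co) quantile of the uncertain quantity. The sketch below applies this to a cleanup limit; the lognormal uncertainty model and all numbers are hypothetical, not taken from the paper.

    ```python
    # Classic newsvendor critical fractile applied to setting a cleanup limit.
    # cu, co, and the elicited threshold distribution are hypothetical.

    from scipy.stats import lognorm

    cu = 1.0   # regret per unit of setting the limit too low (excess cleanup)
    co = 3.0   # regret per unit of setting it too high (residual health risk)
    q = cu / (cu + co)                       # critical fractile = 0.25

    # Expert-elicited uncertainty on the true safe concentration (mg/kg).
    threshold = lognorm(s=0.5, scale=100.0)
    action_level = threshold.ppf(q)

    print(f"critical fractile = {q:.2f}")
    print(f"action level = {action_level:.1f} mg/kg")
    ```

    Because the health-risk regret dominates in this made-up example, the rule lands on a conservative low quantile of the elicited threshold distribution.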

  8. Testing for Level Shifts in Fractionally Integrated Processes: a State Space Approach

    DEFF Research Database (Denmark)

    Monache, Davide Delle; Grassi, Stefano; Santucci de Magistris, Paolo

    Short-memory models contaminated by level shifts have long-memory features similar to those of fractionally integrated processes. This makes it hard to verify whether the true data generating process is a pure fractionally integrated process when employing standard estimation methods based on the autocorrelation function or the periodogram. In this paper, we propose a robust testing procedure, based on an encompassing parametric specification that allows us to disentangle the level shifts from the fractionally integrated component. The estimation is carried out on the basis of a state-space methodology and it leads to a robust estimate of the fractional integration parameter also in the presence of level shifts. Once the memory parameter is correctly estimated, we use the KPSS test for the presence of level shifts. The Monte Carlo simulations show how this approach produces unbiased estimates of the memory parameter...

  9. A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.

    Science.gov (United States)

    Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin

    2017-02-01

    Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback. For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies. In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.

  10. TH-A-18C-07: Noise Suppression in Material Decomposition for Dual-Energy CT

    International Nuclear Information System (INIS)

    Dong, X; Petrongolo, M; Wang, T; Zhu, L

    2014-01-01

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filter-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future
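    A minimal sketch of the penalized least-squares idea described above, reduced to one dimension: minimize a fidelity term weighted by the inverse noise variance (a diagonal stand-in for the full variance-covariance matrix of the decomposed images) plus a quadratic smoothness penalty. The toy signal, noise level, and step-size rule below are illustrative, not the authors' implementation.

    ```python
    # 1-D penalized weighted least-squares (PWLS) denoising toy.

    import numpy as np

    def pwls_denoise(y, var, beta=10.0, iters=500):
        """Gradient descent on the (halved) PWLS objective
        0.5*(x-y)^T W (x-y) + 0.5*beta*|Dx|^2 with diagonal W = 1/var."""
        w = 1.0 / var
        x = y.copy()
        step = 1.0 / (w.max() + 4.0 * beta)      # safe step for this quadratic
        for _ in range(iters):
            grad_fid = w * (x - y)
            lap = np.diff(x, 2, prepend=x[:1], append=x[-1:])
            x -= step * (grad_fid - beta * lap)  # penalty gradient is -laplacian
        return x

    rng = np.random.default_rng(1)
    truth = np.concatenate([np.zeros(64), np.ones(64)])  # a material boundary
    var = np.full(128, 0.04)                             # estimated noise variance
    noisy = truth + rng.normal(0.0, np.sqrt(var))
    denoised = pwls_denoise(noisy, var)
    rmse = lambda x: float(np.sqrt(np.mean((x - truth) ** 2)))
    print(f"RMSE before: {rmse(noisy):.3f}, after: {rmse(denoised):.3f}")
    ```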

  11. Calculation approaches for grid usage fees to influence the load curve in the distribution grid level

    International Nuclear Information System (INIS)

    Illing, Bjoern

    2014-01-01

    Driven by energy policy, the decentralized German energy market is changing. One major target of the government is to increase the contribution of renewable generation to gross electricity consumption. Achieving this target brings disadvantages such as an increased need for capacity management. Load reduction and variable grid fees offer the grid operator ways to realize capacity management by influencing the load profile. To enable these approaches, the current grid fees must evolve towards stronger cost causality. Two calculation approaches are developed in this work. The first is multivariable grid fees, which keep the current components of demand and energy charges; in addition to grid costs, grid-load-dependent parameters such as the amount of decentralized feed-in, time and local circumstances, and grid capacities are considered. The second is a grid-fee flat rate, which represents a demand-based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study, the effects of the grid fees on the load profile in the low-voltage grid are simulated. The consumption is represented by different behaviour models and the results are scaled to the benchmark grid area. The resulting load curve is analyzed with respect to peak-load reduction as well as the integration of renewable energy sources. Additionally, the combined effect of grid fees and electricity tariffs is evaluated. Finally, the work discusses the introduction of such grid fees in the tense atmosphere of politics, legislation and grid operation. The results of this work are two calculation approaches designed for grid operators to define grid fees. Multivariable grid fees are based on the current calculation scheme, with demand and energy charges weighted by time-, location- and load-related dependencies. The grid-fee flat rate defines a limitation on demand extraction. Different demand levels

  12. MULTI-LEVEL SAMPLING APPROACH FOR CONTINOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    OpenAIRE

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using iterative window. The method defines LoSS based on Second Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate self-similarity parameter since it is fast and more accurate in comparison with other estimation methods known in the literature. Probability of LoSS detection is introduced to measure continuous LoSS detection performance...

  13. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC failure using a statistical approach. It is based on reliability methods from probabilistic engineering mechanics. A computation of the probability of failure (i.e. the probability of exceeding a threshold) of a current induced by crosstalk is established by taking into account uncertainties on the input parameters influencing the levels of interference in the context of transmission lines. The study has allowed us to evaluate the probability of failure of the induced current by using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)
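    For contrast with the reliability methods the paper favours, the brute-force baseline is plain Monte Carlo estimation of the exceedance probability. In the sketch below, the closed-form coupling law is a hypothetical stand-in for a transmission-line solver, and all distributions and the failure threshold are invented for illustration.

    ```python
    # Crude Monte Carlo estimate of P(induced current > threshold)
    # under uncertain line parameters; the coupling law is a toy model.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000

    # Uncertain inputs (all distributions are hypothetical)
    height = rng.normal(20e-3, 2e-3, n)           # wire height above ground [m]
    spacing = rng.normal(10e-3, 1e-3, n)          # wire separation [m]
    load = rng.lognormal(np.log(50.0), 0.2, n)    # termination resistance [ohm]
    source = 1.0                                  # driving voltage [V]

    # Toy coupling law standing in for the transmission-line solver:
    i_induced = source * (height / spacing) / load

    threshold = 0.055                             # failure criterion [A]
    p_fail = float(np.mean(i_induced > threshold))
    se = np.sqrt(p_fail * (1.0 - p_fail) / n)
    print(f"P(I > {threshold} A) = {p_fail:.4f} +/- {1.96 * se:.4f} (95% CI)")
    ```

    Methods such as FORM/SORM aim to approximate the same tail probability with orders of magnitude fewer model evaluations, which is the cost advantage the abstract reports.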

  14. Approach to estimation of level of information security at enterprise based on genetic algorithm

    Science.gov (United States)

    Stepanov, L. V.; Parinov, A. V.; Korotkikh, L. P.; Koltsov, A. S.

    2018-05-01

    The article considers a way of formalizing the different types of information security threats and the vulnerabilities of an enterprise information system. Given the complexity of ensuring the information security of any newly organized system, well-founded concepts and decisions in the sphere of information security are needed; one such approach is the genetic algorithm method. For enterprises in any field of activity, a comprehensive estimation of the level of security of their information systems, taking into account the quantitative and qualitative factors characterizing the components of information security, is a relevant question.
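    In the spirit of that approach (though not the authors' formalization), a genetic algorithm can search for a subset of security controls that maximizes weighted threat coverage under a budget. The sketch below encodes the portfolio as a bit string; all costs, benefit weights, and the infeasibility penalty are hypothetical.

    ```python
    # Minimal genetic algorithm: select security controls under a budget.
    # Costs, benefits, and the penalty weight are hypothetical.

    import numpy as np

    rng = np.random.default_rng(7)
    n_controls, budget = 12, 25.0
    cost = rng.uniform(2.0, 10.0, n_controls)      # per-control costs
    benefit = rng.uniform(1.0, 8.0, n_controls)    # threat-coverage scores

    def fitness(pop):
        """Weighted coverage, with a penalty for exceeding the budget."""
        over = np.maximum(pop @ cost - budget, 0.0)
        return pop @ benefit - 10.0 * over

    pop = rng.integers(0, 2, (60, n_controls))
    for _ in range(100):
        f = fitness(pop)
        a, b = rng.integers(0, len(pop), (2, len(pop)))
        parents = pop[np.where(f[a] > f[b], a, b)]     # tournament selection
        cut = rng.integers(1, n_controls, len(pop))    # one-point crossover
        mask = np.arange(n_controls) < cut[:, None]
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        flips = (rng.random(children.shape) < 0.02)    # bit-flip mutation
        children ^= flips.astype(children.dtype)
        pop = children

    best = pop[np.argmax(fitness(pop))]
    print("selected controls:", np.flatnonzero(best).tolist(),
          f"| score {best @ benefit:.2f} | cost {best @ cost:.2f}")
    ```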

  15. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  16. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received quite an interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to the optimal decompositions guaranteed for additive cost functions, the wavelet packet decomposition found by the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to EC methods.
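    For reference, with an additive cost the exactly optimal basis can be found by the classical Coifman-Wickerhauser bottom-up comparison of each node's cost against the best cost of its children. The sketch below implements that search with PyWavelets on a synthetic chirp; the wavelet, depth, and entropy cost are arbitrary choices, and this is the additive-cost baseline rather than the paper's near-best-basis scheme.

    ```python
    # Best-basis search over a wavelet packet tree with an additive cost.

    import numpy as np
    import pywt

    def cost(coeffs):
        """Additive Shannon-like entropy cost: -sum c^2 * log(c^2)."""
        c2 = coeffs[np.abs(coeffs) > 1e-12] ** 2
        return float(-np.sum(c2 * np.log(c2)))

    def best_basis(node):
        """Coifman-Wickerhauser search: (best cost, node paths) for a subtree."""
        if node.level == node.maxlevel:
            return cost(node.data), [node.path]
        child_cost, child_paths = 0.0, []
        for part in ("a", "d"):
            c, paths = best_basis(node[part])
            child_cost += c
            child_paths += paths
        own = cost(node.data)
        if own <= child_cost:
            return own, [node.path]
        return child_cost, child_paths

    t = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(2.0 * np.pi * 40.0 * t**2)       # a synthetic chirp
    wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=4)
    total, basis = best_basis(wp)
    print(f"best basis uses {len(basis)} node(s), total cost {total:.1f}")
    ```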

  17. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    Science.gov (United States)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
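    The basic split this decomposition rests on can be computed directly from a joint distribution: for a realisation (s, t), the pointwise mutual information factors as i(s;t) = h(s) - h(s|t), with the specificity h(s) = -log2 p(s) and the ambiguity h(s|t) = -log2 p(s|t) both non-negative even when i(s;t) is negative. The sketch below evaluates this split for a small hypothetical joint distribution.

    ```python
    # Specificity/ambiguity split of pointwise mutual information.

    import numpy as np

    # Joint distribution p(s, t) for binary S and T (hypothetical numbers).
    p_st = np.array([[0.4, 0.1],
                     [0.1, 0.4]])
    p_s = p_st.sum(axis=1)
    p_t = p_st.sum(axis=0)

    for s in range(2):
        for t in range(2):
            specificity = -np.log2(p_s[s])             # h(s)
            ambiguity = -np.log2(p_st[s, t] / p_t[t])  # h(s|t)
            pmi = specificity - ambiguity              # i(s;t), may be negative
            print(f"s={s}, t={t}: h(s)={specificity:.3f}, "
                  f"h(s|t)={ambiguity:.3f}, i(s;t)={pmi:+.3f}")
    ```

    The mismatched realisations in this example produce negative i(s;t) even though both unsigned components stay non-negative, which is exactly the property the two lattices exploit.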

  18. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products

  19. Best approaches to drug-resistance surveillance at the country level

    Directory of Open Access Journals (Sweden)

    A M Cabibbe

    2016-01-01

    Classical sequencing and NGS approaches have been successfully used in a recent study conducted in five countries with a high burden of TB and multidrug-resistant tuberculosis (MDR-TB) and aimed at investigating levels of resistance to pyrazinamide among patients with TB by pncA sequencing [doi: 10.1016/S1473-3099(16)30190-6]. This work innovatively demonstrated that the establishment of strong links between national (peripheral and reference) laboratories and supranational laboratories, with the former possibly processing indirect or direct samples and generating sequencing data, and the latter supporting them in bioinformatics analysis and data interpretation, will soon make WGS and targeted NGS the preferred tools for conducting public health surveillance in the TB field, thus supporting the strategies adopted by TB control programs at local and national levels.

  20. Acculturation attitudes and ethnic prejudice in different education levels: a comparative approach.

    Directory of Open Access Journals (Sweden)

    Álvaro Retortillo Osuna

    2008-08-01

    Immigration has important demographic, economic, social and educational consequences in receiving countries. Nowadays international migrations cause essential changes in host countries, both at the macro and micro levels and both collectively and individually. From a mainly psychosocial perspective, the present paper points out the need for research, from a comparative approach, on the way students (at different education levels, both public and private) regard newcomers living in their society. In addition, this research focuses on the preferences of the host population regarding immigrants' social inclusion, in other words, acculturation attitudes. Because we behave with others according to the way we perceive them, we consider it necessary to try to get to know what image native students have of newcomers.

  1. Modular Approach for Continuous Cell-Level Balancing to Improve Performance of Large Battery Packs: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Muneed ur Rehman, M.; Evzelman, M.; Hathaway, K.; Zane, R.; Plett, G. L.; Smith, K.; Wood, E.; Maksimovic, D.

    2014-10-01

    Energy storage systems require battery cell balancing circuits to avoid divergence of cell state of charge (SOC). A modular approach based on distributed continuous cell-level control is presented that extends the balancing function to higher level pack performance objectives such as improving power capability and increasing pack lifetime. This is achieved by adding DC-DC converters in parallel with cells and using state estimation and control to autonomously bias individual cell SOC and SOC range, forcing healthier cells to be cycled deeper than weaker cells. The result is a pack with improved degradation characteristics and extended lifetime. The modular architecture and control concepts are developed and hardware results are demonstrated for a 91.2-Wh battery pack consisting of four series Li-ion battery cells and four dual active bridge (DAB) bypass DC-DC converters.

  2. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  3. Universality of Schmidt decomposition and particle identity

    Science.gov (United States)

    Sciara, Stefania; Lo Franco, Rosario; Compagno, Giuseppe

    2017-03-01

    Schmidt decomposition is a widely employed tool of quantum theory which plays a key role for distinguishable particles in scenarios such as entanglement characterization, theory of measurement and state purification. Yet, its formulation for identical particles remains controversial, jeopardizing its application to analyze general many-body quantum systems. Here we prove, using a newly developed approach, a universal Schmidt decomposition which allows faithful quantification of the physical entanglement due to the identity of particles. We find that it is affected by single-particle measurement localization and state overlap. We study paradigmatic two-particle systems where identical qubits and qutrits are located in the same place or in separated places. For the case of two qutrits in the same place, we show that their entanglement behavior, whose physical interpretation is given, differs from that obtained before by different methods. Our results are generalizable to multiparticle systems and open the way for further developments in quantum information processing exploiting particle identity as a resource.
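
    For two distinguishable d-level systems, the Schmidt decomposition reduces to an SVD of the reshaped state vector; a minimal sketch follows (the paper's actual contribution, the extension to identical particles, is not covered by this standard construction):

        import numpy as np

        d = 3                                        # two qutrits
        rng = np.random.default_rng(0)
        psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
        psi /= np.linalg.norm(psi)

        M = psi.reshape(d, d)                        # coefficients c_ij of |i>|j>
        _, s, _ = np.linalg.svd(M)                   # s = Schmidt coefficients
        print("Schmidt coefficients:", s)            # squares sum to 1
        print("entanglement entropy:", -np.sum(s**2 * np.log(s**2 + 1e-15)))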

  4. Spectral decomposition of nonlinear systems with memory

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
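
    The Mittag-Leffler function that governs the average mode evolution can be evaluated, for modest arguments, by its defining power series; a small sketch (truncated series, adequate only for small |z|):

        import numpy as np
        from scipy.special import gamma

        def mittag_leffler(z, alpha, n_terms=100):
            """E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1), truncated series."""
            k = np.arange(n_terms)
            return np.sum(z**k / gamma(alpha * k + 1.0))

        t = 2.0
        print(mittag_leffler(-t, 1.0), np.exp(-t))   # alpha = 1 recovers exp
        print(mittag_leffler(-t**0.8, 0.8))          # anomalous relaxation E_a(-t^a)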

  5. Examination of the Current Approaches to State-Level Nuclear Security Evaluation

    International Nuclear Information System (INIS)

    Kim, Chan; Yim, Mansung; Kim, So Young

    2014-01-01

    An effective global nuclear materials security system will cover all materials, employ international standards and best practices, and reduce risks by reducing weapons-usable nuclear material stocks and the number of locations where they are found. Such a system must also encourage states to accept peer reviews by outside experts in order to demonstrate that effective security is in place. It is thus critically important to create an integrative framework of state-level evaluation of nuclear security as a basis for measuring the level and progress of international efforts to secure and control all nuclear materials. There have been studies to represent state-level nuclear security with a quantitative metric. A prime example is the Nuclear Materials Security Index (NMSI) by the Nuclear Threat Initiative (NTI). Another comprehensive study is the State Level Risk Metric by Texas A&M University (TAMU). This paper examines the current methods with respect to their strengths and weaknesses and identifies directions for future research to improve upon the existing approaches.

  6. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In Study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In Study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  7. Designing Leadership models in a Three Level Unlimited Supply Chain: Non-Cooperative Game Theory Approach

    Directory of Open Access Journals (Sweden)

    Ahmad Jaafarnehad

    2015-09-01

    Full Text Available Supply chain management faces many challenges and problems. Although no comprehensive model of supply chain issues has been established, topics such as the theoretical foundations of information systems, marketing, financial management, and logistical and organizational relations have been considered by many researchers. The objective of supply chain management is to improve individual activities and components so as to increase the benefits of the overall supply chain system. In pursuing these overall objectives, many conflicts may arise between components and between different levels of the supply chain, and over time such conflicts reduce the strength and competitiveness of the chain. They can arise throughout the supply chain life cycle, for example over marketing costs (advertising, pricing) and inventory. Game theory is an appropriate tool for analyzing such interactions, and it applies to decision making in both cooperative and non-cooperative supply chains. In the present study, assuming a lack of cooperation between the different levels of a supply chain, a dynamic game with complete information is constructed, and appropriate leaders for the various levels of the chain are identified. A non-cooperative dynamic game (Stackelberg game) is modeled for each of the three levels of the supply chain: retailers, suppliers and manufacturers. Depending on its bargaining power and market position, any level of the supply chain can act as the leader, with the remaining levels as followers. The study determines the Stackelberg equilibrium under each possible leader and ultimately identifies and models the appropriate leadership structure for the unlimited three-level supply chain.
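
    A textbook two-echelon Stackelberg illustration in Python (not the paper's three-level model; demand and cost parameters are invented): a manufacturer leader sets the wholesale price w, then a retailer follower sets the retail price p against linear demand q = a - b*p, and backward induction gives the equilibrium in closed form. A three-level chain (supplier -> manufacturer -> retailer) adds one more backward-induction stage in the same way.

        a, b, c = 100.0, 1.0, 10.0           # demand intercept/slope, unit cost

        w_star = (a + b * c) / (2 * b)       # leader's optimal wholesale price
        p_star = (a + b * w_star) / (2 * b)  # follower's best response at w_star
        q_star = a - b * p_star

        print("w* =", w_star, " p* =", p_star, " q* =", q_star)
        print("manufacturer profit:", (w_star - c) * q_star)
        print("retailer profit:", (p_star - w_star) * q_star)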

  8. Approach to mathematics in textbooks at tertiary level - exploring authors' views about their texts

    Science.gov (United States)

    Randahl, Mira

    2012-10-01

    The aim of this article is to present and discuss some results from an inquiry into mathematics textbook authors' visions about their texts and the approaches they choose when new concepts are introduced. Authors' responses are discussed in relation to results about students' difficulties with approaching calculus reported by previous research. A questionnaire was designed and sent to seven authors of the most used calculus textbooks in Norway, and four authors responded. The responses show that the authors mainly view teaching in terms of transmission, and thus focus on getting the mathematical content correct and 'clear'. The dominant view is that the textbook is intended to help the students learn by explaining and clarifying. The authors prefer to introduce new concepts based on the traditional way of perceiving mathematics as a system of definitions, examples and exercises. The results of this study may enhance our understanding of the role of the textbook at tertiary level. They may also form a foundation for further research.

  9. An optimized approach towards the treatment of high level liquid waste in the nuclear cycle

    International Nuclear Information System (INIS)

    Maio, V.; Todd, T.; Law, J.; Roach, J.; Sabharwall, P.

    2006-01-01

    Full text: One key long-standing issue that must be overcome to realize the successful growth of nuclear power is an economical, politically acceptable, stakeholder-compatible, and technically feasible resolution pertaining to the safe treatment and disposal of high-level liquid radioactive waste (HLLW). In addition to spent nuclear reactor fuel, HLLW poses a unique challenge in regard to environmental and security concerns, since future scenarios for a next generation of domestic and commercialized nuclear fuel cycle infrastructures must include reprocessing - the primary source of HLLW - to ensure the cost effectiveness of nuclear power as well as mitigate any threats related to proliferation. Past attempts to immobilize HLLW - generated by both the weapons complex and the commercial power sector - have been plagued by an inability to convince the public and some technical peer reviewers that any proposed geological disposal sites (e.g., Yucca Mountain) can accommodate and contain the HLLW for a period of geological time equivalent to tenfold the radiological half-life of the longest lived of the actinides remaining after reprocessing. The paper explores combined equipment and chemical processing approaches for advancing and economizing the immobilization of high level liquid waste to ensure its long term durability, its decoupling from the unknown behavior of the repository over long geological time periods, and its economical formulation as required for the nuclear fuel cycle of the future. One approach involves the investigation of crystalline based waste forms as opposed to glass/amorphous based waste forms, and how recent developments in crystalline forms show promise in sequestering the long lived actinides for over tens of millions of years. Another approach - compatible with the first - involves the use of an alternative melter technology, the Cold Crucible Induction Melter (CCIM), to overcome the engineering material problems of Joule-Heated Melters (JHM

  10. Speckle imaging using the principal value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an "optimal" filtering approach. This method is based on a nonlinear integral equation which is solved by principal value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures

  11. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of the sources extracted in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
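
    A sketch of the dictionary-constraint step in isolation (the surrounding ALS iterations of a full CPD algorithm are omitted, and this simple nearest-atom matching is only a stand-in for the paper's sparse-coding formulation): each column of an estimated factor is replaced by the dictionary atom with the highest absolute normalized correlation.

        import numpy as np

        def project_onto_dictionary(A, D):
            Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
            An = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
            idx = np.argmax(np.abs(Dn.T @ An), axis=0)   # best atom per column
            return D[:, idx], idx

        rng = np.random.default_rng(1)
        D = rng.standard_normal((50, 20))                # dictionary, 20 atoms
        A = D[:, [3, 7, 11]] + 0.05 * rng.standard_normal((50, 3))
        _, idx = project_onto_dictionary(A, D)
        print("selected atoms:", idx)                    # expect [3, 7, 11]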

  12. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of ¹⁴C-n-hexadecane-labelled diesel oil by special microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil an increase of the decomposition rate was found only in the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  13. A statistical approach to evaluate flood risk at the regional level: an application to Italy

    Science.gov (United States)

    Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea

    2016-04-01

    Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basins Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for the flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the relative impact. Model performance is evaluated by comparing the predicted flood prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess flood economic impacts. Furthermore, in the assumption of an appropriate

  14. One-Channel Surface Electromyography Decomposition for Muscle Force Estimation

    Directory of Open Access Journals (Sweden)

    Wentao Sun

    2018-05-01

    Full Text Available Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.
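
    A rough single-channel sketch of the two-step idea (FastICA on delay-embedded windows stands in for the paper's reconstruction ICA, and the data here are random placeholders): (1) learn a basis over sliding windows of the signal; (2) rank components by kurtosis, keeping the most spike-like ones as MUAP candidates.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        def one_channel_decompose(emg, win=64, n_comp=8):
            X = np.lib.stride_tricks.sliding_window_view(emg, win)
            sources = FastICA(n_components=n_comp, random_state=0).fit_transform(X)
            k = kurtosis(sources, axis=0)
            order = np.argsort(k)[::-1]        # most impulsive components first
            return sources[:, order], k[order]

        emg = np.random.default_rng(0).standard_normal(5000)  # placeholder signal
        _, k = one_channel_decompose(emg)
        print("component kurtosis:", np.round(k, 2))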

  15. A new approach to characterize very-low-level radioactive waste produced at hadron accelerators

    International Nuclear Information System (INIS)

    Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa

    2017-01-01

    Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators or associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in dedicated disposal facilities. The radiological characterization of waste includes the establishment of the list of produced radionuclides, called the “radionuclide inventory”, and the estimation of their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity, can be extended to other activated materials, and can be used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to the ones occurring at CERN. - Highlights: • We developed a radiological characterization process for radioactive waste produced at particle accelerators. • We used extensive numerical experimentation and statistical analysis to predict a complete list of radionuclides in activated metals. • We used the new approach to characterize and dispose of more than 420 t of very-low-level radioactive waste.
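
    A minimal sketch of the scaling-factor (SF) idea with invented numbers: the activity of a difficult-to-measure (DTM) nuclide is inferred as SF times the measured activity of an easy-to-measure key nuclide, with SF commonly taken as the geometric mean of activity ratios over previously characterized samples.

        import numpy as np

        ratios = np.array([0.12, 0.08, 0.15, 0.10, 0.09])  # DTM/key, measured
        sf = np.exp(np.mean(np.log(ratios)))               # geometric-mean SF

        a_key = 3.4e3          # key-nuclide activity in a new waste item (Bq/g)
        a_dtm = sf * a_key     # inferred DTM-nuclide activity (Bq/g)
        print(f"SF = {sf:.3f}, inferred DTM activity = {a_dtm:.1f} Bq/g")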

  16. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm², material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition results in two parts: one organic and volatile, the other inorganic, which remains and forms an ever-thickening screen to light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect.

  17. Indoor Semantic Modelling for Routing: The Two-Level Routing Approach for Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Liu Liu

    2017-11-01

    Full Text Available Humans perform many activities indoors and show a growing need for indoor navigation, especially in unfamiliar buildings such as airports, museums and hospitals. The complexity of such buildings poses many challenges for building managers and visitors. Indoor navigation services play an important role in supporting these indoor activities. Indoor navigation covers extensive topics such as: (1) indoor positioning and localization; (2) indoor space representation for navigation model generation; (3) indoor routing computation; (4) human wayfinding behaviours; and (5) indoor guidance (e.g., textual directories). So far, a large number of studies of pedestrian indoor navigation have presented diverse navigation models and routing algorithms/methods. However, a major challenge is rarely addressed: how to represent the complex indoor environment for pedestrians and conduct routing according to the different roles and sizes of users. Such complex buildings contain irregular shapes, large open spaces, complicated obstacles and different types of passages. A navigation model can be very complicated if the indoor space is represented accurately. Although most research demonstrates feasible indoor navigation models and related routing methods in regular buildings, the focus is still on a general navigation model for pedestrians, who are simplified as circles. In fact, pedestrians differ in size, motion ability and preferences (e.g., as described in user profiles), which should be reflected in navigation models and considered in indoor routing (e.g., relevant Spaces of Interest and Points of Interest). In order to address this challenge, this thesis proposes an innovative indoor modelling and routing approach – two-level routing. It specifically targets the case of routing in complex buildings for distinct users. The conceptual (first) level uses general free indoor spaces: this is represented by the logical network whose nodes represent the spaces and edges
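
    A minimal sketch of the conceptual (first) routing level only, with invented rooms and passage widths: spaces are nodes of a logical network, passages are edges, and a user profile filters out passages that are too narrow before a standard shortest-path query; the geometric second level is omitted.

        import networkx as nx

        G = nx.Graph()
        G.add_edge("entrance", "hall", width=2.0)
        G.add_edge("hall", "corridor_a", width=1.2)
        G.add_edge("hall", "corridor_b", width=0.7)   # narrow passage
        G.add_edge("corridor_a", "room_101", width=0.9)
        G.add_edge("corridor_b", "room_101", width=0.9)

        user_width = 0.8                               # e.g., wheelchair user
        usable = nx.Graph((u, v, d) for u, v, d in G.edges(data=True)
                          if d["width"] >= user_width)
        print(nx.shortest_path(usable, "entrance", "room_101"))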

  18. Gender Differences in Mental Well-Being: A Decomposition Analysis

    Science.gov (United States)

    Madden, David

    2010-01-01

    The General Health Questionnaire (GHQ) is frequently used as a measure of mental well-being. A consistent pattern across countries is that women report lower levels of mental well-being, as measured by the GHQ. This paper applies decomposition techniques to Irish data for 1994 and 2000 to examine the factors lying behind the gender differences in…

  19. The Use of Decompositions in International Trade Textbooks.

    Science.gov (United States)

    Highfill, Jannett K.; Weber, William V.

    1994-01-01

    Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)

  20. Dinitraminic acid (HDN) isomerization and self-decomposition revisited

    International Nuclear Information System (INIS)

    Rahm, Martin; Brinck, Tore

    2008-01-01

    Density functional theory (DFT) and the ab initio based CBS-QB3 method have been used to study possible decomposition pathways of dinitraminic acid HN(NO₂)₂ (HDN) in the gas phase. The proton transfer isomer of HDN, O₂NNN(O)OH, and its conformers can be formed and converted into each other through intra- and intermolecular proton transfer. The latter has been shown to proceed substantially faster via double proton transfer. The main mechanism for HDN decomposition is found to be initiated by a dissociation reaction, splitting off nitrogen dioxide from either HDN or the HDN isomer. This reaction has an activation enthalpy of 36.5 kcal/mol at the CBS-QB3 level, which is in good agreement with experimental estimates of the decomposition barrier.

  1. Microbial community assembly and metabolic function during mammalian corpse decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Metcalf, J. L.; Xu, Z. Z.; Weiss, S.; Lax, S.; Van Treuren, W.; Hyde, E. R.; Song, S. J.; Amir, A.; Larsen, P.; Sangwan, N.; Haarmann, D.; Humphrey, G. C.; Ackermann, G.; Thompson, L. R.; Lauber, C.; Bibat, A.; Nicholas, C.; Gebert, M. J.; Petrosino, J. F.; Reed, S. C.; Gilbert, J. A.; Lynne, A. M.; Bucheli, S. R.; Carter, D. O.; Knight, R.

    2015-12-10

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  2. TRUST MODEL FOR SOCIAL NETWORK USING SINGULAR VALUE DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Davis Bundi Ntwiga

    2016-06-01

    Full Text Available For effective interactions to take place in a social network, trust is important. We model the trust of agents using the peer-to-peer reputation ratings in the network, which form a real-valued matrix. Singular value decomposition discounts the reputation ratings to estimate the trust levels, since trust is the subjective probability of future expectations based on current reputation ratings. Reputation and trust are closely related, and singular value decomposition can estimate trust from the real-valued matrix of the reputation ratings of the agents in the network. Singular value decomposition is an ideal technique for error elimination when estimating trust from reputation ratings. Trust estimation from reputation ratings is optimal at a discounting of 20%.
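
    A minimal numpy sketch of the idea with an invented rating matrix: keep only the leading singular components of the peer-to-peer reputation matrix and rebuild it, treating the discarded part as rating noise; the retained fraction plays the role of the discounting level discussed above.

        import numpy as np

        rng = np.random.default_rng(2)
        R = np.clip(rng.normal(0.7, 0.15, (8, 8)), 0.0, 1.0)  # reputation ratings

        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        k = 2                                                  # retained components
        T = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]              # smoothed trust
        print(np.round(T, 2))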

  3. Microbial community assembly and metabolic function during mammalian corpse decomposition

    Science.gov (United States)

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  4. Management of Sustainable Energy Efficient Development at the Local Level: Stakeholder-Oriented Approach

    Directory of Open Access Journals (Sweden)

    Horban Vasylyna B.

    2016-11-01

    Full Text Available A theoretical rationale is presented for the expediency of using a stakeholder-oriented approach to improve the management of sustainable energy efficient development at the local level. The evolution of theories by the scientific schools that have studied the concepts of «stakeholders» and «interested parties» is analyzed and generalized. A classification of stakeholder types in the context of eighteen typological features is suggested, which allows their interests to be aligned more effectively and contributes to establishing constructive forms of cooperation in order to achieve efficient final results. An algorithm of interaction with interested parties in achieving the goals of sustainable energy efficient development at the local level is elaborated. Typical motivational interests of local-level stakeholders in the field of sustainable energy efficient development (on the example of Ukraine) are identified. Instruments for prioritizing stakeholders depending on the life cycle stage of energy efficiency projects are proposed. The results obtained in the course of the research can be used to develop local energy efficiency programs, business plans and feasibility studies for energy efficient projects.

  5. A hardware acceleration based on high-level synthesis approach for glucose-insulin analysis

    Science.gov (United States)

    Daud, Nur Atikah Mohd; Mahmud, Farhanahani; Jabbar, Muhamad Hairol

    2017-01-01

    In this paper, the research focuses on Type 1 Diabetes Mellitus (T1DM). Since this disease requires close attention to the blood glucose concentration, which is managed with the help of insulin injections, it is important to have a tool able to predict the glucose level when a certain amount of carbohydrate is consumed during a meal. To make this realizable, the Hovorka model, which targets T1DM, is chosen in this research. The mathematical model of the Hovorka system is constructed in a high-level language, C++. This code is then converted into an intellectual property (IP) block, also known as a hardware accelerator, using the high-level synthesis (HLS) approach, which improves the design and performance of the glucose-insulin analysis tool, as explained further in this paper. This is the first step in this research before implementing the design as a system-on-chip (SoC) to achieve a high-performance glucose-insulin analysis tool.

  6. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches to an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
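
    A minimal Beta-Binomial sketch of the three quantities, assuming a uniform Beta(1, 1) prior on p (the prior choice and the numbers are illustrative, not from the paper):

        from scipy.stats import beta

        n, x = 30, 2                   # patients observed, patients with the AE
        post = beta(1 + x, 1 + n - x)  # posterior for p under a Beta(1,1) prior

        thr = 0.10
        print("P(p > 0.10 | data) =", 1 - post.cdf(thr))   # (1) confidence level
        print("95% upper bound on p =", post.ppf(0.95))    # (2) credible bound

        # (3) minimum AE count (out of n) for >= 90% confidence that p > thr:
        for x_min in range(n + 1):
            if 1 - beta(1 + x_min, 1 + n - x_min).cdf(thr) >= 0.90:
                print("minimum AE count:", x_min)
                break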

  7. 60-year Nordic and arctic sea level reconstruction based on a reprocessed two decade altimetric sea level record and tide gauges

    OpenAIRE

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2015-01-01

    Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry calibration record into EOFs, and fitting these patterns to a historical tide gauge record.
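
    A schematic numpy sketch of the reconstruction idea on synthetic placeholders: EOFs come from an SVD of the altimetry anomaly matrix, and the leading EOF amplitudes are then fitted by least squares to tide-gauge anomalies at each historical time step.

        import numpy as np

        rng = np.random.default_rng(3)
        alt = rng.standard_normal((240, 500))      # 20 yr x 500 grid cells (fake)
        anom = alt - alt.mean(axis=0)
        _, _, Vt = np.linalg.svd(anom, full_matrices=False)
        eofs = Vt[:5]                              # leading spatial patterns

        gauges = rng.choice(500, 40, replace=False)     # tide-gauge grid cells
        obs = rng.standard_normal(40)                   # one historical time step
        amps, *_ = np.linalg.lstsq(eofs[:, gauges].T, obs, rcond=None)
        print((amps @ eofs).shape)                 # reconstructed field, (500,)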

  8. 60-year Nordic and arctic sea level reconstruction based on a reprocessed two decade altimetric sea level record and tide gauges

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry...

  9. IN SITU INFRARED STUDY OF CATALYTIC DECOMPOSITION OF NITRIC OXIDE (NO); FINAL

    International Nuclear Information System (INIS)

    Unknown

    1999-01-01

    The growing concern for the environment and increasingly stringent standards for NO emission have presented a major challenge in controlling NO emissions from electric utility plants and automobiles. Catalytic decomposition of NO is the most attractive approach for the control of NO emission because of its simplicity. Successful development of an effective catalyst for NO decomposition would greatly decrease the equipment and operation cost of NO control. Due to a lack of understanding of the mechanism of NO decomposition, efforts to find an effective catalyst have been unsuccessful. Scientific development of an effective catalyst requires fundamental understanding of the nature of the active site, the rate-limiting step, and an approach to prolong the life of the catalyst. The authors have investigated the feasibility of two novel approaches for improving catalyst activity and resistance to sintering. The first approach is the use of silanation to stabilize metal crystallites and supports for Cu-ZSM-5 and promoted Pt catalysts; the second is utilization of oxygen spillover and desorption to enhance NO decomposition activity. The silanation approach failed to stabilize Cu-ZSM-5 activity under hydrothermal conditions; silanation blocked oxygen migration and inhibited oxygen desorption. Oxygen spillover was found to be an effective approach for promoting NO decomposition activity on Pt-based catalysts. A detailed mechanistic study revealed oxygen inhibition in NO decomposition and reduction as the most critical issue in developing an effective catalytic approach for controlling NO emission.

  10. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
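
    A generic low-rank-plus-sparse baseline in a few lines (truncated SVD alternated with soft-thresholding); the paper's model adds the tree-structured sparsity and Laplacian terms on top of this basic split, which this sketch does not include:

        import numpy as np

        def low_rank_sparse(M, rank=1, lam=0.1, n_iter=50):
            S = np.zeros_like(M)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # background
                R = M - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # salient part
            return L, S

        rng = np.random.default_rng(4)
        M = np.outer(rng.random(60), rng.random(80))   # rank-1 "background"
        M[10:14, 20:26] += 1.5                         # a small "salient object"
        L, S = low_rank_sparse(M)
        print(np.count_nonzero(S), "entries flagged as salient")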

  11. Investigating the role of male advantage and female disadvantage in explaining the discrimination effect of the gender pay gap in the Cameroon labor market. Oaxaca-Ransom decomposition approach

    Directory of Open Access Journals (Sweden)

    Dickson Thomas NDAMSA

    2015-05-01

    Full Text Available The paper assesses the sources of gender-based wage differentials and investigates the relative importance of the endowment effect, female disadvantage and male advantage in explaining gender-based wage differentials in the Cameroon labor market. Use is made of the Ordinary Least Squares technique and the Oaxaca-Ransom decomposition. The Oaxaca-Ransom decomposition results show that primary education, secondary education, tertiary education and professional training are sources of the gender pay gap. Our results also underline the importance of working experience, formal sector employment and urban residency in explaining wage differentials between male and female workers in the Cameroon labor market. Our findings reveal that educational human capital explains a greater portion of the endowment effect and contributes little to the discrimination effect. Essentially, we observe that the discrimination effect widens the gender pay gap more than the endowment effect mitigates it. Again, our results show that a greater part of the discrimination effect of the gender pay gap is attributed to female disadvantage in the Cameroon labor market.
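
    A minimal two-fold Oaxaca-Blinder sketch on synthetic data (the Oaxaca-Ransom variant replaces the male coefficients below with pooled coefficients as the non-discriminatory benchmark): the mean log-wage gap splits exactly into an endowment part and a coefficients ("discrimination") part.

        import numpy as np

        def ols(X, y):
            return np.linalg.lstsq(X, y, rcond=None)[0]

        rng = np.random.default_rng(5)
        n = 500
        Xm = np.column_stack([np.ones(n), rng.normal(12, 3, n)])  # const, schooling
        Xf = np.column_stack([np.ones(n), rng.normal(11, 3, n)])
        ym = Xm @ np.array([1.0, 0.08]) + rng.normal(0, 0.3, n)   # log wages
        yf = Xf @ np.array([0.9, 0.07]) + rng.normal(0, 0.3, n)

        bm, bf = ols(Xm, ym), ols(Xf, yf)
        gap = ym.mean() - yf.mean()
        endowment = (Xm.mean(0) - Xf.mean(0)) @ bm                # explained part
        discrimination = Xf.mean(0) @ (bm - bf)                   # unexplained part
        print(f"gap={gap:.3f} endow={endowment:.3f} discrim={discrimination:.3f}")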

  12. Sounding the warning bells: the need for a systems approach to understanding behaviour at rail level crossings.

    Science.gov (United States)

    Read, Gemma J M; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    Collisions at rail level crossings are an international safety concern and have been the subject of considerable research effort. Modern human factors practice advocates a systems approach to investigating safety issues in complex systems. This paper describes the results of a structured review of the level crossing literature to determine the extent to which a systems approach has been applied. The measures used to determine if previous research was underpinned by a systems approach were: the type of analysis method utilised, the number of component relationships considered, the number of user groups considered, the number of system levels considered and the type of model described in the research. None of the research reviewed was found to be consistent with a systems approach. It is recommended that further research utilise a systems approach to the study of the level crossing system to enable the identification of effective design improvements.

  13. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    Science.gov (United States)

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-Modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% with a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
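
    A sketch of the first (KMC) step only, on simulated data: the multichannel observation vectors at each time instant are clustered with K-means, and the rarer, high-energy cluster provides the initial pulse-train estimate; the CKC refinement stage is omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        T, ch = 2000, 16
        emg = rng.normal(0.0, 0.1, (T, ch))                 # baseline noise
        firings = rng.choice(T, 40, replace=False)          # one simulated MU
        emg[firings] += rng.normal(0.0, 1.0, (40, ch))      # MUAP energy

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emg)
        spike_cluster = np.argmin(np.bincount(labels))      # rarer cluster
        ipt_init = np.flatnonzero(labels == spike_cluster)
        print(len(set(ipt_init) & set(firings)), "of 40 firings recovered")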

  14. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
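
    A sketch of the identification step applied to one already-isolated modal response (as VMD would supply): the Hilbert envelope's log-linear decay gives the damping, and the phase slope gives the damped frequency.

        import numpy as np
        from scipy.signal import hilbert

        fs, zeta, fn = 1000.0, 0.02, 12.0        # sample rate, damping, frequency
        t = np.arange(0.0, 4.0, 1.0 / fs)
        wn = 2.0 * np.pi * fn
        mode = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

        z = hilbert(mode)
        env, phase = np.abs(z), np.unwrap(np.angle(z))
        sl = slice(100, -100)                    # trim Hilbert edge effects
        decay = np.polyfit(t[sl], np.log(env[sl]), 1)[0]   # equals -zeta * wn
        wd = np.polyfit(t[sl], phase[sl], 1)[0]            # damped frequency
        wn_est = np.sqrt(wd**2 + decay**2)
        print("f_n ~", wn_est / (2 * np.pi), "Hz; zeta ~", -decay / wn_est)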

  15. Generalized Benders’ Decomposition for topology optimization problems

    DEFF Research Database (Denmark)

    Munoz Queupumil, Eduardo Javier; Stolpe, Mathias

    2011-01-01

    This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load carrying structures. The main objective is to present a Generalized Benders’ Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.

  16. Decomposition of childhood malnutrition in Cambodia.

    Science.gov (United States)

    Sunil, Thankam S; Sagna, Marguerite

    2015-10-01

    Childhood malnutrition is a major problem in developing countries; in Cambodia, it is estimated that approximately 42% of children are stunted, which is considered very high. In the present study, we examined the effects of proximate and socio-economic determinants on childhood malnutrition in Cambodia. In addition, we examined the effects of changes in these proximate determinants on childhood malnutrition between 2000 and 2005. Our analytical approach included descriptive, logistic regression and decomposition analyses, with separate analyses estimated for the 2000 and 2005 surveys. The primary component of the difference in stunting is attributable to the rates component, indicating that the decline in stunting is due mainly to the decrease in stunting rates between 2000 and 2005. While the majority of the difference in childhood malnutrition between 2000 and 2005 can be attributed to differences in the distribution of malnutrition determinants, differences in their effects also showed some significance.
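
    A minimal Kitagawa-style split of the change in overall stunting into a rates component and a composition component, with invented numbers for two population groups; the two components sum exactly to the total change.

        import numpy as np

        rates_2000 = np.array([0.50, 0.35])   # stunting rates: rural, urban
        rates_2005 = np.array([0.42, 0.30])
        share_2000 = np.array([0.80, 0.20])   # population shares
        share_2005 = np.array([0.75, 0.25])

        total = rates_2005 @ share_2005 - rates_2000 @ share_2000
        rate_comp = (rates_2005 - rates_2000) @ (share_2000 + share_2005) / 2
        comp_comp = (share_2005 - share_2000) @ (rates_2000 + rates_2005) / 2
        print(total, rate_comp + comp_comp)   # identical by construction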

  17. Comprehensive Approach for Monitoring and Analyzing the Activity Concentration Level of PET Isotopes

    International Nuclear Information System (INIS)

    Osovizky, A.; Paran, J.; Ankry, N.; Vulasky, E.; Ashkenazi, B.; Tal, N.; Dolev, E.; Gonen, E.

    2004-01-01

    A comprehensive approach for measuring and analyzing low concentration levels of positron emitter isotopes is introduced. The solution is based on a Continuous Air Monitoring Sampler (CAMS), a Stack Monitoring System (SMS) and a software package. Positron Emission Tomography (PET) is a major tool both for biochemical research and for non-invasive diagnostic medical imaging. The PET method utilizes short half-life β+ radioisotopes that are produced in cyclotron sites built especially for this purpose. The growing need for β+ isotopes has led to the widespread use of cyclotrons close to populated areas. Isotope production involves two possible radiation hazards deriving from the activity concentration: one concerns the nearby population, exposed to the activity released through the ventilation system, and the other concerns the personnel working in the nuclear facility. A comprehensive system providing a solution for both radiation hazards is introduced in this work.

  18. Object-oriented Approach to High-level Network Monitoring and Management

    Science.gov (United States)

    Mukkamala, Ravi

    2000-01-01

    An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing, investigating methods to build high-level monitoring systems that are built on top of existing monitoring tools. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former with the latter. The system is built using the APIs offered by the HP OpenView system.

  19. Comparative approaches to siting low-level radioactive waste disposal facilities

    International Nuclear Information System (INIS)

    Newberry, W.F.

    1994-07-01

    This report describes activities in nine States to select site locations for new disposal facilities for low-level radioactive waste. These nine States have completed processes leading to identification of specific site locations for onsite investigations. For each State, the status, legal and regulatory framework, site criteria, and site selection process are described. In most cases, States and compact regions decided to assign responsibility for site selection to agencies of government and to use top-down mapping methods for site selection. The report discusses quantitative and qualitative techniques used in applying top-down screenings, various approaches for delineating units of land for comparison, issues involved in excluding land from further consideration, and different positions taken by the siting organizations in considering public acceptance, land use, and land availability as factors in site selection

  20. Horizontal decomposition of data table for finding one reduct

    Science.gov (United States)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast on larger databases.

  1. Solving network design problems via decomposition, aggregation and approximation

    CERN Document Server

    Bärmann, Andreas

    2016-01-01

    Andreas Bärmann develops novel approaches for the solution of network design problems as they arise in various contexts of applied optimization. Using the example of an optimal expansion of the German railway network up to 2030, the author derives a tailor-made decomposition technique for multi-period network design problems. Next, he develops a general framework for the solution of network design problems via aggregation of the underlying graph structure. This approach is shown to save much computation time compared to standard techniques. Finally, the author devises a modelling framework for the approximation of the robust counterpart under ellipsoidal uncertainty, an often-studied case in the literature. Each of these three approaches opens up a fascinating branch of research which promises a better theoretical understanding of the problem and an increasing range of solvable application settings at the same time. Contents Decomposition for Multi-Period Network Design Solving Network Design Problems via Ag...

  2. Development and application of a conceptual approach for defining high-level waste

    International Nuclear Information System (INIS)

    Croff, A.G.; Forsberg, C.W.; Kocher, D.C.; Cohen, J.J.; Smith, C.F.; Miller, D.E.

    1986-01-01

    This paper presents a conceptual approach to defining high-level radioactive waste (HLW) and a preliminary quantitative definition obtained from an example implementation of the conceptual approach. On the basis of the description of HLW in the Nuclear Waste Policy Act of 1982, we have developed a conceptual model in which HLW has two attributes: HLW is (1) highly radioactive and (2) requires permanent isolation via deep geologic disposal. This conceptual model results in a two-dimensional waste categorization system in which one axis, related to "requires permanent isolation," is associated with long-term risks from waste disposal and the other axis, related to "highly radioactive," is associated with short-term risks from waste management and operations; this system also leads to the specification of categories of wastes that are not HLW. Implementation of the conceptual model for defining HLW was based primarily on health and safety considerations. Wastes requiring permanent isolation via deep geologic disposal were defined by estimating the maximum concentrations of radionuclides that would be acceptable for disposal using the next-best technology, i.e., greater confinement disposal (GCD) via intermediate-depth burial or engineered surface structures. Wastes that are highly radioactive were defined by adopting heat generation rate as the appropriate measure and examining levels of decay heat that necessitate special methods to control risks from operations in a variety of nuclear fuel-cycle situations. We determined that wastes having a power density >200 W/m³ should be considered highly radioactive. Thus, in the example implementation, the combination of maximum concentrations of long-lived radionuclides that are acceptable for GCD and a power density of 200 W/m³ provides boundaries for defining wastes that are HLW.

  3. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    Science.gov (United States)

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (sorin@wayne.edu). Supplementary data are available at Bioinformatics online.
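
    A sketch of the additive combination step with its CLT approximation (the building block only, not the full bi-level pipeline): under the null, p-values are iid Uniform(0,1) with mean 1/2 and variance 1/12, so the standardized sum of p-values is approximately standard normal.

        import numpy as np
        from scipy.stats import norm

        def additive_combine(pvalues):
            p = np.asarray(pvalues, dtype=float)
            n = p.size
            z = (p.sum() - n / 2.0) / np.sqrt(n / 12.0)
            return norm.cdf(z)       # small value = consistent evidence overall

        print(additive_combine([0.01, 0.04, 0.03, 0.20]))  # mostly small p-values
        print(additive_combine([0.01, 0.04, 0.03, 0.95]))  # tolerant of an outlier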

  4. A Neural Network Approach to Fluid Level Measurement in Dynamic Environments Using a Single Capacitive Sensor

    Directory of Open Access Journals (Sweden)

    Edin TERZIC

    2010-03-01

    Full Text Available A measurement system has been developed using a single-tube capacitive sensor to accurately determine the fluid level in vehicular fuel tanks. A novel approach to signal pre-processing and classification based on artificial neural networks is described in this article. A broad investigation of the Backpropagation neural network and some selected signal pre-processing filters, namely Moving Mean, Moving Median, and Wavelet Filter, is also presented. An on-field drive trial was conducted under normal driving conditions at various fuel volumes ranging from 5 L to 50 L to acquire training samples from the capacitive sensor. A second field trial was conducted to obtain test samples to verify the performance of the neural network. The neural network was trained and verified with 50% of the training and test samples. The results obtained using the neural network approach with different filtration methods are compared with the results obtained using simple Moving Mean and Moving Median functions. It is demonstrated that the Backpropagation neural network with a Moving Median filter produced the most accurate outcome compared with the other signal filtration methods.

  5. PHENOMENOLOGICAL APPROACHES TO STUDY LEARNING IN THE TERTIARY LEVEL CHEMISTRY LABORATORY

    Directory of Open Access Journals (Sweden)

    Santiago Sandi-Urena

    Despite the widespread notion amongst chemistry educators that the laboratory is essential to learning chemistry, it is often a neglected area of teaching and, arguably, of educational research. Research has typically focused on secondary education, single institutions, and isolated interventions that are mostly assessed quantitatively. It has also homed in on compartmentalised features instead of seeking an understanding of broader aspects of learning through experimentation. This paper contends there is a gap in subject-specific, tertiary-level research that is comprehensive and learning-centred instead of fragmented and instruction-based. A shift in focus requires consideration of methodological approaches that can effectively tackle the challenges of researching complex learning environments. This paper argues that qualitative approaches, specifically phenomenology, are better suited for this purpose. To illustrate this potential, it summarises an exemplar phenomenological study that investigated students' experience of a change in instructional style from an expository (traditional) laboratory program to one that was cooperative and project-based (reformed). The study suggests the experience was characterised by a transition from a learning environment that promoted mindless behaviour to one in which students were mindfully engaged in their learning. Thus, this work puts forth the use of Mindfulness Theory to investigate and support the design of laboratory experiences.

  6. Alternate approaches to verifying the structural adequacy of the Defense High Level Waste Shipping Cask

    International Nuclear Information System (INIS)

    Zimmer, A.; Koploy, M.

    1991-12-01

    In the early 1980s, the US Department of Energy/Defense Programs (DOE/DP) initiated a project to develop a safe and efficient transportation system for defense high level waste (DHLW). A long-standing objective of the DHLW transportation project is to develop a truck cask that represents the leading edge of cask technology and that fully complies with all applicable DOE, Nuclear Regulatory Commission (NRC), and Department of Transportation (DOT) regulations. General Atomics (GA) designed the DHLW Truck Shipping Cask using state-of-the-art analytical techniques verified by model testing performed by Sandia National Laboratories (SNL). The analytical techniques comprise two approaches, inelastic analysis and elastic analysis. This topical report presents the results of the two analytical approaches and of the model testing. The purpose of this work is to show that there are two viable analytical alternatives for verifying the structural adequacy of a Type B package and obtaining an NRC license. In addition, these data will help support the future acceptance by the NRC of inelastic analysis as a tool in packaging design and licensing.

  7. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid- and vapour-phase pyrolysis of very pure biphenyl, obtained by the methods described in the text, was carried out at 400 °C in sealed ampoules, the fraction transformed always remaining below 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls; small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with (a) the pyrolysis time, (b) the state (gas or liquid) of the biphenyl, and (c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that, in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than glass walls (Pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption appears to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)
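
    The closing mechanistic claim (Langmuir adsorption with a unimolecular rate-determining step) corresponds to the textbook unimolecular Langmuir-Hinshelwood rate law. The form below is a sketch consistent with that description, not necessarily the author's exact expression:

        % Coverage from the Langmuir isotherm; the rate is first order in
        % the adsorbed biphenyl (unimolecular surface step).
        \theta = \frac{KP}{1 + KP}, \qquad
        r = k\,\theta = \frac{kKP}{1 + KP}

    At low vapour pressure this reduces to r ≈ kKP (first order in P), while near saturation the rate levels off at k, which is the qualitative signature of decomposition confined to an adsorbed layer.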

  8. Kinetics of Roasting Decomposition of the Rare Earth Elements by CaO and Coal

    Directory of Open Access Journals (Sweden)

    Shuai Yuan

    2017-06-01

    The roasting of magnetic tailing mixed with CaO and coal was used to recycle the rare earth elements (REE) in the tailing. The phase transformation and decomposition process were investigated during roasting. The results showed that the decomposition of the REE in the magnetic tailing proceeds in two steps. The first step, from 380 to 431 °C, mainly entails the decomposition of bastnaesite (REFCO3); the second step, from 605 to 716 °C, mainly entails the decomposition of monazite (REPO4). The decomposition products were primarily RE2O3, Ce0.75Nd0.25O1.875, CeO2, Ca5F(PO4)3, and CaF2. Adding CaO reduced the decomposition temperatures of REFCO3 and REPO4, and its effect on the decomposition of bastnaesite and monazite was significant. The effects of roasting time, roasting temperature, and CaO addition level on the decomposition rate were also studied. The optimum conditions were a roasting time of 60 min, a roasting temperature of 750 °C, and a CaO addition level of 20% (w/w), giving a maximum decomposition rate of REFCO3 and REPO4 of 99.87%. Roasting time and temperature were the major factors influencing the decomposition rate. The decomposition kinetics of REFCO3 and REPO4 accorded with an interfacial reaction model, with the rate-controlling step divided into two regimes: at low temperature the process was controlled by chemical reaction, with an activation energy of 52.67 kJ/mol, and at high temperature by diffusion, with an activation energy of 8.5 kJ/mol.
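
    With the two reported activation energies, the Arrhenius equation shows how differently each regime responds to temperature. The sketch below takes temperatures from the step ranges above; the pre-exponential factors are unknown and cancel in the ratio.

        import numpy as np

        R = 8.314  # gas constant, J/(mol*K)

        def arrhenius_ratio(ea_j_mol, t1_k, t2_k):
            # k(T2)/k(T1) from k = A*exp(-Ea/(R*T)); A cancels in the ratio.
            return np.exp(-ea_j_mol / R * (1.0 / t2_k - 1.0 / t1_k))

        # Chemical-reaction-controlled step (380-431 C, Ea = 52.67 kJ/mol):
        print(arrhenius_ratio(52.67e3, 653.15, 704.15))  # ~2x faster
        # Diffusion-controlled step (605-716 C, Ea = 8.5 kJ/mol):
        print(arrhenius_ratio(8.5e3, 878.15, 989.15))    # ~1.1x, nearly flat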

  9. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector; therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach with respect to image noise and image bias (artifacts), and find that only a moderate noise increase is to be expected for a small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
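
    The projection-domain parameterization can be pictured as fitting a low-order polynomial surrogate from measured bin attenuations to basis line integrals on calibration data. The sketch below is a generic reading of that idea; the function name, the quadratic form, and the synthetic data are assumptions, not the paper's exact parameterization.

        import numpy as np

        def fit_empirical_decomposition(logI, A_true):
            # Least-squares fit of a quadratic surrogate mapping negative
            # log bin counts (n_samples x n_bins) to basis-material line
            # integrals (n_samples x n_basis), as done on calibration data.
            n, b = logI.shape
            cols = [np.ones(n)]
            cols += [logI[:, i] for i in range(b)]
            cols += [logI[:, i] * logI[:, j]
                     for i in range(b) for j in range(i, b)]
            X = np.stack(cols, axis=1)
            coef, *_ = np.linalg.lstsq(X, A_true, rcond=None)
            return coef

        rng = np.random.default_rng(0)
        logI = rng.uniform(0.0, 3.0, size=(200, 3))    # three energy bins
        A = logI @ rng.uniform(0.1, 1.0, size=(3, 2))  # two basis materials
        print(fit_empirical_decomposition(logI, A).shape)  # (10, 2)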

  10. Modeling approaches for concrete barriers used in low-level waste disposal

    International Nuclear Information System (INIS)

    Seitz, R.R.; Walton, J.C.

    1993-11-01

    A series of three NUREGs and several papers addressing different aspects of modeling the performance of concrete barriers for low-level radioactive waste disposal have been prepared previously for the Concrete Barriers Research Project. This document integrates the information from the previous documents into a general summary of models and approaches that can be used in performance assessments of concrete barriers. Models for concrete degradation, flow, and transport through cracked concrete barriers are discussed. The models for flow and transport assume that cracks have occurred and thus should only be used for later times in simulations, after fully penetrating cracks have formed. Most of the models have been implemented in a computer code, CEMENT, that was developed concurrently with this document. User documentation for CEMENT is provided separately from this report. To avoid duplication, the reader is referred to the three previous NUREGs for detailed discussions of each of the mathematical models; some additional information that was not presented in the previous documents is also included. Sections discussing lessons learned from applications to actual performance assessments of low-level waste disposal facilities are provided. Sensitive design parameters are emphasized to identify critical areas of performance for concrete barriers, and potential problems in performance assessments are also identified and discussed.

  11. Sentiment Analysis on Tweets about Diabetes: An Aspect-Level Approach

    KAUST Repository

    Salas-Zárate, María del Pilar

    2017-02-19

    In recent years, some methods of sentiment analysis have been developed for the health domain; however, the diabetes domain has not been explored yet. In addition, there is a lack of approaches that analyze the positive or negative orientation of each aspect contained in a document (a review, a piece of news, and a tweet, among others). Based on this understanding, we propose an aspect-level sentiment analysis method based on ontologies in the diabetes domain. The sentiment of the aspects is calculated by considering the words around the aspect, which are obtained through N-gram methods (N-gram after, N-gram before, and N-gram around). To evaluate the effectiveness of our method, we obtained a corpus from Twitter, which has been manually labelled at aspect level as positive, negative, or neutral. The experimental results show that the best result was obtained through the N-gram around method, with a precision of 81.93%, a recall of 81.13%, and an F-measure of 81.24%.
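
    The "N-gram around" scheme can be sketched as extracting the N tokens on either side of the aspect term and scoring them. The lexicon, tokenization, and scoring below are invented placeholders; the paper's method is ontology-based, and this dictionary is purely illustrative.

        def ngram_around(tokens, aspect, n=3):
            # The n tokens before and after each occurrence of the aspect.
            contexts = []
            for i, tok in enumerate(tokens):
                if tok.lower() == aspect.lower():
                    contexts.append(tokens[max(0, i - n):i]
                                    + tokens[i + 1:i + 1 + n])
            return contexts

        # Toy polarity lexicon; purely illustrative.
        LEXICON = {"great": 1, "stable": 1, "high": -1, "scary": -1}

        def score_aspect(tokens, aspect, n=3):
            words = [w for ctx in ngram_around(tokens, aspect, n) for w in ctx]
            return sum(LEXICON.get(w.lower(), 0) for w in words)

        tweet = "My glucose is high again which is scary".split()
        print(score_aspect(tweet, "glucose"))  # -1 ("high" is in the window)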

  12. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product or service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction; therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of the ECs to formulate the bargaining function, and the solution for the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.

  13. Siting Criteria for Low and Intermediate Level Radioactive Waste Disposal in Egypt (Proposal approach)

    International Nuclear Information System (INIS)

    Abdellatif, M.M.

    2012-01-01

    The objective of radioactive waste disposal is to isolate waste from the surrounding media so that it does not result in undue radiation exposure to humans and the environment. The required degree of isolation can be obtained by implementing various disposal methods and suitable criteria. Near surface disposal has been practiced for some decades, with a wide variation in sites, types and amounts of wastes, and facility designs. Experience has shown that the effective and safe isolation of waste depends on the performance of the overall disposal system, which is formed by three major components or barriers: the site, the disposal facility and the waste form. The site selection process for a low-level and intermediate-level radioactive waste disposal facility addresses a wide range of public health, safety, environmental, social and economic factors. Establishing site criteria is the first step in the siting process to identify a site that is capable of protecting public health, safety and the environment. This paper proposes an approach to the primary criteria for a near surface disposal facility that could be applicable in Egypt.

  14. Sentiment Analysis on Tweets about Diabetes: An Aspect-Level Approach

    Directory of Open Access Journals (Sweden)

    María del Pilar Salas-Zárate

    2017-01-01

    In recent years, some methods of sentiment analysis have been developed for the health domain; however, the diabetes domain has not been explored yet. In addition, there is a lack of approaches that analyze the positive or negative orientation of each aspect contained in a document (a review, a piece of news, and a tweet, among others). Based on this understanding, we propose an aspect-level sentiment analysis method based on ontologies in the diabetes domain. The sentiment of the aspects is calculated by considering the words around the aspect, which are obtained through N-gram methods (N-gram after, N-gram before, and N-gram around). To evaluate the effectiveness of our method, we obtained a corpus from Twitter, which has been manually labelled at aspect level as positive, negative, or neutral. The experimental results show that the best result was obtained through the N-gram around method, with a precision of 81.93%, a recall of 81.13%, and an F-measure of 81.24%.

  15. Storytelling as an approach to evaluate the child's level of speech development

    Directory of Open Access Journals (Sweden)

    Ljubica Marjanovič Umek

    2004-05-01

    Both in developmental psychology and in linguistics, children's storytelling is an interesting topic of research, from the point of view of evaluating the child's level of speech development, especially its pragmatic component, and from the point of view of teaching and learning in the preschool period. In the present study, children's storytelling in different situational contexts was analyzed and evaluated: with a picture book without any text, after listening to a text from a picture book, and after a suggested story beginning (i.e., with the introductory sentence given to them). The sample included children of three age groups, approximately 4, 6 and 8 years; each age group had approximately the same numbers of boys and girls. A total of over 300 stories were collected, which were subsequently analyzed and evaluated using a set of story developmental level criteria, with two key criteria: story coherence and cohesion. Comparisons by age and gender, as well as by context of storytelling, show significant developmental differences in story content and structure for the different age groups, and the important role of the storytelling context. Differences in storytelling between boys and girls did not prove statistically significant. The findings also suggest that new options and approaches for further stimulation of speech development within preschool and primary school curricula might be considered.

  16. MetricForensics: A Multi-Level Approach for Mining Volatile Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eliassi-Rad, Tina [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faloutsos, Christos [Carnegie Mellon Univ., Pittsburgh, PA (United States); Akoglu, Leman [Carnegie Mellon Univ., Pittsburgh, PA (United States); Li, Lei [Carnegie Mellon Univ., Pittsburgh, PA (United States); Maruhashi, Koji [Fujitsu Laboratories Ltd., Kanagawa (Japan); Prakash, B. Aditya [Carnegie Mellon Univ., Pittsburgh, PA (United States); Tong, H [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2010-02-08

    Advances in data collection and storage capacity have made it increasingly possible to collect highly volatile graph data for analysis. Existing graph analysis techniques are not appropriate for such data, especially in cases where streaming or near-real-time results are required. An example that has drawn significant research interest is the cyber-security domain, where internet communication traces are collected and real-time discovery of events, behaviors, patterns and anomalies is desired. We propose MetricForensics, a scalable framework for analysis of volatile graphs. MetricForensics combines a multi-level "drill down" approach, a collection of user-selected graph metrics and a collection of analysis techniques. At each successive level, more sophisticated metrics are computed and the graph is viewed at a finer temporal resolution. In this way, MetricForensics scales to highly volatile graphs by only allocating resources for computationally expensive analysis when an interesting event is discovered at a coarser resolution first. We test MetricForensics on three real-world graphs: an enterprise IP trace, a trace of legitimate and malicious network traffic from a research institution, and the MIT Reality Mining proximity sensor data. Our largest graph has ~3M vertices and ~32M edges, spanning 4.5 days. The results demonstrate the scalability and capability of MetricForensics in analyzing volatile graphs, and highlight four novel phenomena in such graphs: elbows, broken correlations, prolonged spikes, and strange stars.
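
    The coarse-to-fine resource allocation can be caricatured in a few lines: compute a cheap metric on coarse time windows and re-bin at a finer resolution only where that metric is anomalous. The sketch below uses edge counts and a z-score trigger; the real system tracks a whole collection of user-selected graph metrics.

        import numpy as np

        def drill_down(edge_times, coarse=3600.0, fine=60.0, z_thresh=3.0):
            # Level 1: cheap metric (edge count) per coarse window.
            t = np.sort(np.asarray(edge_times, dtype=float))
            bins = np.arange(t.min(), t.max() + coarse, coarse)
            counts, _ = np.histogram(t, bins)
            z = (counts - counts.mean()) / (counts.std() + 1e-12)
            # Level 2: finer temporal resolution only in flagged windows.
            fine_views = {}
            for i in np.where(np.abs(z) > z_thresh)[0]:
                lo, hi = bins[i], bins[i + 1]
                sub = t[(t >= lo) & (t < hi)]
                fine_views[(lo, hi)] = np.histogram(
                    sub, np.arange(lo, hi + fine, fine))[0]
            return fine_views

        rng = np.random.default_rng(0)
        times = np.concatenate([rng.uniform(0, 86400, 5000),
                                rng.uniform(40000, 43600, 5000)])  # a burst
        print(list(drill_down(times)))  # only the burst window drills down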

  17. Sentiment Analysis on Tweets about Diabetes: An Aspect-Level Approach

    KAUST Repository

    Salas-Zárate, María del Pilar; Medina-Moreira, José; Lagos-Ortiz, Katty; Luna-Aveiga, Harry; Rodriguez-Garcia, Miguel Angel; Valencia-García, Rafael

    2017-01-01

    In recent years, some methods of sentiment analysis have been developed for the health domain; however, the diabetes domain has not been explored yet. In addition, there is a lack of approaches that analyze the positive or negative orientation of each aspect contained in a document (a review, a piece of news, and a tweet, among others). Based on this understanding, we propose an aspect-level sentiment analysis method based on ontologies in the diabetes domain. The sentiment of the aspects is calculated by considering the words around the aspect, which are obtained through N-gram methods (N-gram after, N-gram before, and N-gram around). To evaluate the effectiveness of our method, we obtained a corpus from Twitter, which has been manually labelled at aspect level as positive, negative, or neutral. The experimental results show that the best result was obtained through the N-gram around method, with a precision of 81.93%, a recall of 81.13%, and an F-measure of 81.24%.

  18. Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image Recuperation Approach

    Directory of Open Access Journals (Sweden)

    K. Somasundaram

    2015-01-01

    Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detect the presence of exudation in fundus images and identify the true positive ratio of exudate detection, respectively, but they do not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain features based on the histogram value using a Group Sparsity Non-overlapping Function. Using a support vector model in the second phase, the DFIR method based on a Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, ranking efficiency, and feature selection time.

  19. Diagnosing and ranking retinopathy disease level using diabetic fundus image recuperation approach.

    Science.gov (United States)

    Somasundaram, K; Rajendran, P Alli

    2015-01-01

    Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detect the presence of exudation in fundus images and identify the true positive ratio of exudate detection, respectively, but they do not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain features based on the histogram value using a Group Sparsity Non-overlapping Function. Using a support vector model in the second phase, the DFIR method based on a Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, ranking efficiency, and feature selection time.
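
    The sliding-window feature stage can be pictured as harvesting local intensity histograms across the image. The window size, stride, and binning below are assumptions for illustration, and the group-sparsity selection and SVM ranking stages are omitted.

        import numpy as np

        def sliding_window_histograms(image, window=32, stride=16, bins=16):
            # One normalized intensity histogram per window position.
            feats = []
            h, w = image.shape
            for y in range(0, h - window + 1, stride):
                for x in range(0, w - window + 1, stride):
                    patch = image[y:y + window, x:x + window]
                    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
                    feats.append(hist / hist.sum())
            return np.array(feats)

        # Toy 8-bit grayscale "image".
        img = np.random.default_rng(1).integers(0, 256, size=(128, 128))
        print(sliding_window_histograms(img).shape)  # (49, 16)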

  20. SHORT-TERM SOLAR FLARE LEVEL PREDICTION USING A BAYESIAN NETWORK APPROACH

    International Nuclear Information System (INIS)

    Yu Daren; Huang Xin; Hu Qinghua; Zhou Rui; Wang Huaning; Cui Yanmei

    2010-01-01

    A Bayesian network approach for short-term solar flare level prediction has been proposed based on three sequences of photospheric magnetic field parameters extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. The magnetic measures (the maximum horizontal gradient, the length of the neutral line, and the number of singular points) do not have determinate relationships with solar flares, so solar flare level prediction is treated as an uncertainty reasoning process modeled by a Bayesian network. The qualitative network structure, which describes conditional independence relationships among the magnetic field parameters, and the quantitative conditional probability tables, which determine the probabilistic values for each variable, are learned from the data set. Seven sequential features (the maximum, the mean, the root mean square, the standard deviation, the shape factor, the crest factor, and the pulse factor) are extracted to reduce the dimensions of the raw sequences. Two Bayesian network models are built, using the raw sequential data (BN_R) and the feature-extracted data (BN_F), respectively. The explanations of these models are consistent with the physical analyses of experts. The performances of BN_R and BN_F are comparable with other methods; more importantly, the comprehensibility of the Bayesian network models is better than that of other methods.
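
    The seven sequence features have standard signal-processing definitions. The sketch below computes them for one magnetic-field parameter sequence using the usual textbook formulas, which may differ in detail from the paper's.

        import numpy as np

        def sequential_features(x):
            # Seven summary features for one parameter sequence.
            x = np.asarray(x, dtype=float)
            rms = np.sqrt(np.mean(x ** 2))
            mean_abs = np.mean(np.abs(x))
            peak = np.abs(x).max()
            return {
                "maximum": x.max(),
                "mean": x.mean(),
                "rms": rms,
                "std": x.std(),
                "shape_factor": rms / mean_abs,   # RMS over mean |x|
                "crest_factor": peak / rms,       # peak over RMS
                "pulse_factor": peak / mean_abs,  # peak over mean |x|
            }

        print(sequential_features([1.0, 2.0, 4.0, 2.0, 1.0]))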

  1. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product or service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction; therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of the ECs to formulate the bargaining function, and the solution for the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.
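
    The bargaining formulation can be caricatured as maximizing the product of the ECs' membership values over the target-setting variable, in the style of a Nash bargaining product. The membership functions and solver below are invented for illustration; the paper's actual bargaining function may differ.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Toy membership functions for two conflicting ECs, both driven
        # by a single design variable x in [0, 1].
        def mu1(x):  # this EC prefers larger x
            return float(np.clip(x, 0.0, 1.0))

        def mu2(x):  # this EC prefers smaller x
            return float(np.clip(1.0 - 0.8 * x, 0.0, 1.0))

        # Nash-product bargaining: maximize the product of memberships.
        res = minimize_scalar(lambda x: -mu1(x) * mu2(x),
                              bounds=(0.0, 1.0), method="bounded")
        print(res.x, mu1(res.x), mu2(res.x))  # ~0.625, 0.625, 0.5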

  2. Dialogic and integrated approach to promote soils at different school levels: a Brazilian experience

    Science.gov (United States)

    Muggler, Cristine Carole

    2017-04-01

    From ancient civilizations to present technological societies, soil has been the material and immaterial ground of our existence. Soil is as essential to life as water, air and sunlight. Nevertheless, it is overlooked, and its functions and importance are neither known nor recognized by most people. In formal education and in most school curricula, soil content is not approached with the same intensity as other environmental components. In essence, soil is an interdisciplinary subject that crosses different disciplines, and it has great potential as a unifying theme that links and synthesizes different contents and areas of knowledge, especially sciences such as physics, chemistry and biology. Furthermore, soils are familiar and tangible to everyone, making them a meaningful subject that supports an effective learning process. The challenge remains how to bring such teaching-learning possibilities to formal education at all levels. Soil education deals with the significance of soil to people. What makes soil meaningful? What are the bases for effective learning about soil? The answers are closely related to the subjective perceptions and life experiences carried by each individual. These dimensions have been considered in a pedagogical approach based on Paulo Freire's socio-constructivism, which emphasizes social inclusion, knowledge building, horizontal learning and collective action. This approach has been applied within the soil (science) education spaces of the Federal University of Viçosa, Minas Gerais, Brazil, both with university students and with basic education pupils. At the university, an average of 200 students per semester follow a 60-hour Soil Genesis course; with primary and secondary schools, the activities are developed through the Soil Education Programme (PES) of the Earth Sciences Museum. In the classes and activities, materials, methods and learning strategies are developed to stimulate involvement, dialogues and exchange of experiences.

  3. Do we need an integrative approach to food safety at the country level?

    Energy Technology Data Exchange (ETDEWEB)

    Ristic, G, E-mail: risticg@eunet.yu [Department of Nutrition, Medical Faculty, Belgrade (Yugoslavia)]

    2002-05-01

    Scientific data show increasing evidence of a relationship between food safety and food standards on the one hand and public health concerns on the other. In FR Yugoslavia, a system of reporting on food safety issues at the federal and republic levels was established in 1989. The system provides data on laboratory analysis of 22 food items (bread, milk, meat and meat products, vegetables, processed vegetables, etc.). These items were and still are tested against food quality and safety parameters, including microbiological, chemical and radionuclide parameters. Seldom are all the required chemical and radionuclide tests performed, so exact risk assessments for those contaminants are lacking. Further, during the war conflict in FR Yugoslavia, and also owing to industrial hazards in neighbouring countries (Romania, Hungary), high quantities of PCBs, dioxins, heavy metals, arsenic compounds and other toxic compounds contaminated the environment. In the soil and in some food products (predominantly animal fats), radionuclides originating from the Chernobyl accident can still be detected. In order to identify the level of exposure to chemical and radionuclide contaminants in the food chain, it is essential to test food of animal and plant origin intensively and systematically. In order to prevent contaminants from entering the food chain, new recommendations from the WHO, FAO and EU suggest implementation of an integrative approach to food safety and control over the whole chain of food production, from 'farm to table'. This approach provides control of contaminants in soil, water and air; control over primary food production (covering animal feed too); intensive control over processing, with implementation of the HACCP system; and also control over transportation, retail trade, street food and home-made food. In our country, creation of a map of the polluted areas, and actions to treat the pollution, should accompany implementation of this new food safety system. The need for assessment of the level of

  4. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting.

    Science.gov (United States)

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-02-28

    Flipped Classroom is a model that is quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold: to compare Flipped Classroom effectiveness ratings with 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. The participants in the study were 67 Masters-level graduate students in an introductory epidemiology class. Data were collected from students who completed surveys at three time points (beginning, middle and end) in each term. The Flipped Classroom was employed for the academic year 2012-2013 (two terms) using both pre-class and in-class activities. Among the 67 Masters-level graduate students, 80% found the Flipped Classroom model to be either somewhat effective or very effective (M = 4.1/5.0). International students rated the Flipped Classroom as significantly more effective than North American students did (χ² = 11.35, p …). Students' perceived effectiveness of the Flipped Classroom had no significant association with their academic performance in the course as measured by their final grades (r_s = 0.70). However, students who found the Flipped Classroom to be effective were also more likely to be satisfied with their course experience. Additionally, the SEEQ variable scores for students enrolled in the Flipped Classroom were significantly higher than those for students enrolled prior to its implementation (p = 0.003). Overall, the format of the Flipped Classroom provided more opportunities for students to engage in critical thinking, independently facilitate their own learning, and more effectively interact with and learn from their peers. Additionally, the instructor was given more flexibility to cover a wider range and depth of material, provide in-class applied learning

  5. Do we need an integrative approach to food safety at the country level?

    International Nuclear Information System (INIS)

    Ristic, G.

    2002-01-01

    Scientific data show increasing evidence of a relationship between food safety and food standards on the one hand and public health concerns on the other. In FR Yugoslavia, a system of reporting on food safety issues at the federal and republic levels was established in 1989. The system provides data on laboratory analysis of 22 food items (bread, milk, meat and meat products, vegetables, processed vegetables, etc.). These items were and still are tested against food quality and safety parameters, including microbiological, chemical and radionuclide parameters. Seldom are all the required chemical and radionuclide tests performed, so exact risk assessments for those contaminants are lacking. Further, during the war conflict in FR Yugoslavia, and also owing to industrial hazards in neighbouring countries (Romania, Hungary), high quantities of PCBs, dioxins, heavy metals, arsenic compounds and other toxic compounds contaminated the environment. In the soil and in some food products (predominantly animal fats), radionuclides originating from the Chernobyl accident can still be detected. In order to identify the level of exposure to chemical and radionuclide contaminants in the food chain, it is essential to test food of animal and plant origin intensively and systematically. In order to prevent contaminants from entering the food chain, new recommendations from the WHO, FAO and EU suggest implementation of an integrative approach to food safety and control over the whole chain of food production, from 'farm to table'. This approach provides control of contaminants in soil, water and air; control over primary food production (covering animal feed too); intensive control over processing, with implementation of the HACCP system; and also control over transportation, retail trade, street food and home-made food. In our country, creation of a map of the polluted areas, and actions to treat the pollution, should accompany implementation of this new food safety system. The need for assessment of the level of

  6. The Decomposition Analysis of CO2 Emission and Economic Growth in Pakistan India and China

    Directory of Open Access Journals (Sweden)

    Muhammad Irfan Javaid Attari

    2011-12-01

    The conflict between economic growth and keeping greenhouse gases (GHG) at controllable levels is one of the ultimate challenges of this century. The aim of the Kyoto Protocol is to keep the level of carbon dioxide (CO2) below a certain threshold. The purpose of this paper is to study the effect of CO2 emission on economic growth by conducting a regional analysis of the PIC nations, i.e., Pakistan, India and China. The study also provides detailed information regarding atmospheric emissions by applying decomposition analysis. It is suggested that environmental policies need more attention in the region, with existing differences set aside; emissions trading, a comparatively new concept here, should be introduced to tackle global warming in the region. It is time to respond, because the low-carbon economy is now a reality.
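
    As a concrete picture of the decomposition analysis invoked here, the sketch below performs a two-factor index decomposition (in LMDI form) of an emission change into an activity effect and an intensity effect. The numbers are invented, not statistics for Pakistan, India or China.

        import numpy as np

        def lmdi_two_factor(E0, E1, Q0, Q1):
            # Decompose E1 - E0 into an activity (output) effect and an
            # intensity (E/Q) effect using the logarithmic mean weight
            # L(a, b) = (a - b) / ln(a / b).
            L = (E1 - E0) / np.log(E1 / E0) if E1 != E0 else E0
            act = L * np.log(Q1 / Q0)                  # scale of the economy
            inten = L * np.log((E1 / Q1) / (E0 / Q0))  # emissions per output
            return act, inten

        # Toy numbers: emissions in Mt, output as an index.
        act, inten = lmdi_two_factor(E0=120.0, E1=150.0, Q0=100.0, Q1=140.0)
        print(act + inten, 150.0 - 120.0)  # effects sum exactly to the change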

  7. Institutional Disparities in the Cost Effectiveness of GCE A-Level Provision: A Multi-Level Approach.

    Science.gov (United States)

    Fielding, A.

    1995-01-01

    Reanalyzes H. Thomas's 1980s data, which used teaching group as the unit of analysis and illuminated some institutional disparities in provision of General Certificate of Education (GCE) A-levels. Uses multilevel analysis to focus on individual students in a hierarchical framework. Among the study institutions, school sixth forms appear less…

  8. Dolomite decomposition under CO2

    International Nuclear Information System (INIS)

    Guerfa, F.; Bensouici, F.; Barama, S.E.; Harabi, A.; Achour, S.

    2004-01-01

    Dolomite, MgCa(CO3)2, is one of the most abundant mineral species on the surface of the planet, occurring in sedimentary rocks. MgO, CaO and doloma (a phase mixture of MgO and CaO obtained from the mineral dolomite) based materials are attractive steel-making refractories because of their potential cost effectiveness and worldwide abundance; more recently, MgO has also been used as a protective layer in plasma screen manufacture. The crystal structure of dolomite is rhombohedral, with carbonate groups arranged between alternating layers of Mg2+ and Ca2+ ions. Dolomite dissociates, depending on the temperature, according to the following reactions: MgCa(CO3)2 → MgO + CaO + 2CO2, and MgCa(CO3)2 → MgO + CaCO3 + CO2. The latter reaction may be considered a first step for MgO production. Differential thermal analysis (DTA) was used to follow the dolomite decomposition, and X-ray diffraction (XRD) was used to elucidate the thermal decomposition products; samples were heated to specific temperatures for given holding times. The average particle size of the dolomite powders used was 0.3 mm, and the heating temperature was 700 °C with holding times of 90 and 120 minutes. Under CO2, dolomite decomposed directly to CaCO3 accompanied by the formation of MgO; no evidence was found for the formation of either CaO or MgCO3. Under air, dolomite decomposition was accompanied by the simultaneous formation of CaCO3 and CaO.

  9. Microbial community functional change during vertebrate carrion decomposition.

    Directory of Open Access Journals (Sweden)

    Jennifer L Pechal

    Microorganisms play a critical role in the decomposition of organic matter, which contributes to energy and nutrient transformation in every ecosystem. Yet, little is known about the functional activity of epinecrotic microbial communities associated with carrion. The objective of this study was to describe carrion-associated microbial community functional activity using differential carbon source use throughout decomposition, over seasons, between years, and when microbial communities were isolated from eukaryotic colonizers (e.g., necrophagous insects). Additionally, microbial communities were identified at the phyletic level using high-throughput sequencing during a single study. We hypothesized that carrion microbial community functional profiles would change over the duration of decomposition, and that this change would depend on season, year and the presence of necrophagous insect colonization. Biolog EcoPlates™ were used to measure the variation in epinecrotic microbial community function by the differential use of 29 carbon sources throughout vertebrate carrion decomposition. Pyrosequencing was used to describe the bacterial community composition in one experiment to identify key phyla associated with community functional changes. Overall, microbial functional activity increased throughout decomposition in spring, summer and winter, while it decreased in autumn. Additionally, microbial functional activity was higher in 2011, when necrophagous arthropod colonizer effects were tested. There were inconsistent trends in the microbial function of communities isolated from remains colonized by necrophagous insects between 2010 and 2011, suggesting a greater need for a mechanistic understanding of the process. These data indicate that functional analyses can be implemented in carrion studies and will be important in understanding the influence of microbial communities on an essential ecosystem process, carrion decomposition.

  10. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
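
    One simple way to picture subgame decomposition is to build a dependency graph over game elements and split it into connected components. The sketch below (networkx, with toy edges) is a caricature of that idea, not the authors' method for general game descriptions.

        import networkx as nx

        # Toy dependency edges between the fluents/moves of a composite
        # game: the two subgames share no edges, so they are independent.
        deps = [("left_cell_1", "left_cell_2"), ("left_cell_2", "left_win"),
                ("right_pile", "right_move"), ("right_move", "right_win")]
        G = nx.Graph(deps)
        subgames = list(nx.connected_components(G))
        print(subgames)  # two components, i.e. two independent subgames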

  11. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions
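
    The Cartan-decomposition building block behind the quantum Shannon decomposition is the cosine-sine decomposition. The following is a minimal numerical sketch using SciPy; the paper works at the level of Cartan involutions rather than this numerical routine.

        import numpy as np
        from scipy.linalg import cossin
        from scipy.stats import unitary_group

        # A random two-qubit gate: a 4x4 unitary.
        U = unitary_group.rvs(4, random_state=7)

        # Cosine-sine decomposition with a balanced 2+2 partition,
        # U = L @ CS @ R, where L and R are block-diagonal (2x2 blocks)
        # and CS is a multiplexed rotation; the quantum Shannon
        # decomposition recurses on exactly this structure.
        L, CS, R = cossin(U, p=2, q=2)
        print(np.allclose(L @ CS @ R, U))  # True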

  12. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  13. Ozone time scale decomposition and trend assessment from surface observations

    Science.gov (United States)

    Boleti, Eirini; Hueglin, Christoph; Takahama, Satoshi

    2017-04-01

    Emissions of ozone precursors have been regulated in Europe since around 1990, with control measures primarily targeting industry and traffic. In order to understand how these measures have affected air quality, it is now important to investigate concentrations of tropospheric ozone in different types of environments, based on their NOx burden, and in different geographic regions. In this study, we analyze high-quality data sets for Switzerland (NABEL network) and the whole of Europe (AirBase) for the last 25 years to calculate long-term trends of ozone concentrations. A sophisticated time scale decomposition method, the Ensemble Empirical Mode Decomposition (EEMD) (Huang, 1998; Wu, 2009), is used to separate the different time scales of the variation of ozone, namely the long-term trend, the seasonal cycle and the short-term variability. This allows subtraction of the seasonal pattern of ozone from the observations and estimation of long-term changes in ozone concentrations with lower uncertainty ranges compared to typical methodologies. We observe that, despite the implementation of regulations, ozone daily mean values at most measurement sites increased until around the mid-2000s. Afterwards, we observe a decline or a levelling off in the concentrations, certainly a late effect of limitations on ozone precursor emissions. Peak ozone concentrations, on the other hand, have been decreasing for almost all regions. The evolution of the trend exhibits some differences between the different types of measurement sites. In addition, ozone is known to be strongly affected by meteorology. In the applied approach, some of the meteorological effects are already captured by the seasonal signal and removed in the de-seasonalized ozone time series. For adjustment of the influence of meteorology on the higher-frequency ozone variation, a statistical approach based on Generalized Additive Models (GAM) (Hastie, 1990; Wood, 2006), which corrects for meteorological
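
    A minimal EEMD trend extraction along these lines can be sketched with the PyEMD package (an assumed tool; the study's own implementation details are not given). The lowest-frequency mode then plays the role of the long-term trend, with the annual cycle captured by an intermediate mode.

        import numpy as np
        from PyEMD import EEMD  # pip install EMD-signal (assumed tooling)

        # Synthetic daily "ozone-like" series: slow trend + annual cycle + noise.
        rng = np.random.default_rng(0)
        t = np.arange(5 * 365, dtype=float)
        series = (0.005 * t + 8.0 * np.sin(2 * np.pi * t / 365.25)
                  + rng.normal(0.0, 2.0, t.size))

        eemd = EEMD(trials=50)       # ensemble of noise-assisted EMD runs
        imfs = eemd.eemd(series, t)  # intrinsic modes, ordered fast to slow
        trend = imfs[-1]             # lowest-frequency mode ~ long-term trend
        print(imfs.shape, trend[:3])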

  14. Decomposition in pelagic marine ecosystems

    International Nuclear Information System (INIS)

    Lucas, M.I.

    1986-01-01

    During the decomposition of plant detritus, complex microbial successions develop which are dominated in the early stages by a number of distinct bacterial morphotypes. The microheterotrophic community rapidly becomes heterogeneous and may include cyanobacteria, fungi, yeasts and bacterivorous protozoans. Microheterotrophs in the marine environment may have a biomass comparable to that of all other heterotrophs, and their significance as a resource to higher trophic orders, and in the regeneration of nutrients, particularly nitrogen, that support 'regenerated' primary production, has aroused both attention and controversy. Numerous methods have been employed to measure heterotrophic bacterial production and activity. The most widely used involve estimates of 14C-glucose uptake, the frequency of dividing cells, the incorporation of 3H-thymidine, and exponential population growth in predator-reduced filtrates. Recent attempts to model decomposition processes and C and N fluxes in pelagic marine ecosystems are described. This review examines the most sensitive components and predictions of the models, with particular reference to estimates of bacterial production, net growth yield and predictions of N cycling determined by 15N methodology. Direct estimates of nitrogen (and phosphorus) flux through phytoplanktonic and bacterioplanktonic communities using 15N (and 32P) tracer methods are likely to provide more realistic measures of nitrogen flow through planktonic communities.

  15. Infrared multiphoton absorption and decomposition

    International Nuclear Information System (INIS)

    Evans, D.K.; McAlpine, R.D.

    1984-01-01

    The discovery of infrared laser induced multiphoton absorption (IRMPA) and decomposition (IRMPD) by Isenor and Richardson in 1971 generated a great deal of interest in these phenomena. This interest increased with the discovery by Ambartzumian, Letokhov, Ryabov and Chekalin that isotopically selective IRMPD was possible. One of the first speculations about these phenomena was that it might be possible to excite a particular mode of a molecule with the intense infrared laser beam and cause decomposition or chemical reaction through channels which do not predominate thermally, thus providing new synthetic routes for complex chemicals. The potential applications to isotope separation and novel chemistry stimulated efforts to understand the underlying physics and chemistry of these processes. At ICOMP I in 1977 and at ICOMP II in 1980, several authors reviewed the then-current understanding of IRMPA and IRMPD, as well as the particular aspect of isotope separation. There continues to be a great deal of effort devoted to understanding IRMPA and IRMPD, and we briefly review some aspects of these efforts, with particular emphasis on progress since ICOMP II. 31 references

  16. Decomposition of Diethylstilboestrol in Soil

    DEFF Research Database (Denmark)

    Gregers-Hansen, Birte

    1964-01-01

    The rate of decomposition of DES-monoethyl-1-C14 in soil was followed by measurement of the C14O2 released. From 1.6 to 16% of the added C14 was recovered as C14O2 during 3 months; after six months, as much as 12 to 28 per cent had been released as C14O2. Determination of C14 in the soil samples after the e… …did not inhibit the CO2 production from the soil. Experiments with γ-sterilized soil indicated that enzymes present in the soil are able to attack DES.

  17. Systems-level mechanisms of action of Panax ginseng: a network pharmacological approach.

    Science.gov (United States)

    Park, Sa-Yoon; Park, Ji-Hun; Kim, Hyo-Su; Lee, Choong-Yeol; Lee, Hae-Jeung; Kang, Ki Sung; Kim, Chang-Eop

    2018-01-01

    Panax ginseng has been used since ancient times based on traditional Asian medicine theory and clinical experience, and it is currently one of the most popular herbs in the world. To date, most studies concerning P. ginseng have focused on specific mechanisms of action of individual constituents. However, in spite of many studies on the molecular mechanisms of P. ginseng, it still remains unclear how multiple active ingredients of P. ginseng interact with multiple targets simultaneously, giving multidimensional effects on various conditions and diseases. In order to decipher the systems-level mechanism of the multiple ingredients of P. ginseng, a novel approach is needed beyond conventional reductive analysis. We aim to review the systems-level mechanism of P. ginseng by adopting a novel analytical framework, network pharmacology. Here, we constructed a compound-target network of P. ginseng using experimentally validated and machine learning-based prediction results. The targets of the network were analyzed in terms of related biological processes, pathways, and diseases. The majority of targets were found to be related to primary metabolic process, signal transduction, nitrogen compound metabolic process, blood circulation, immune system process, cell-cell signaling, biosynthetic process, and neurological system process. In pathway enrichment analysis of targets, mainly the terms related to neural activity showed significant enrichment and formed a cluster. Finally, relative degree analysis of the target-disease associations of P. ginseng revealed several categories of related diseases, including respiratory, psychiatric, and cardiovascular diseases.
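
    The compound-target network analysis can be illustrated with a toy bipartite graph. The compound and target names below are placeholders, not the study's curated data.

        import networkx as nx

        # Toy compound-target edges (illustrative only).
        edges = [
            ("ginsenoside_Rb1", "CHRM2"), ("ginsenoside_Rb1", "NOS2"),
            ("ginsenoside_Rg1", "NOS2"),  ("ginsenoside_Rg1", "AKT1"),
            ("ginsenoside_Re",  "AKT1"),  ("ginsenoside_Re",  "CHRM2"),
        ]
        G = nx.Graph(edges)

        # Rank targets by degree: highly connected targets hint at the
        # multi-ingredient, multi-target action discussed above.
        targets = {t for _, t in edges}
        ranking = sorted(targets, key=G.degree, reverse=True)
        print([(t, G.degree(t)) for t in ranking])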

  18. A novel approach for investigating the trends in nitrogen dioxide levels in UK cities

    International Nuclear Information System (INIS)

    Bell, Margaret Carol; Galatioto, Fabio; Chakravartty, Ayan; Namdeo, Anil

    2013-01-01

    This paper investigates the variations in levels of nitrogen dioxide (NO2) monitored over the decade 2001-2010 in Newcastle-upon-Tyne (UK) city centre, to develop a fundamental understanding of the periods during which NO2 levels persist above 40 μg m⁻³ (~21 ppb), defined as the air pollution event duration. The appropriateness of hazard theory as a mechanism for understanding the failure rate of air pollution event durations was explored. The results revealed two types of air quality events. The longer-duration events (between 24 and 68 h) were associated with extreme-weather conditions and were responsible for a small number of extremely long pollution events; these created bias in the results, and the analysis was therefore restricted to the 'normal-weather' pollution event durations, which conform to a geometric distribution. This novel approach shows promise as a mechanism to monitor and investigate year-on-year trends observed in air quality data. -- Highlights: • Innovative method for the analysis of the duration of pollution events. • Hazard theory to better understand why pollution events prevail year on year. • Duration of exceedances used as a parameter to assess trends in pollution. • Deteriorated or improved air quality from year to year over a decade investigated. -- Capsule: Explored the appropriateness of hazard theory for understanding the failure rate of air pollution events and for investigating year-on-year trends observed in air quality data in major cities
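
    The event-duration statistic and its geometric fit are easy to sketch: extract run lengths of hourly exceedances and estimate the geometric parameter as the reciprocal mean duration, i.e. a constant hourly "hazard" of an episode ending. The data below are invented.

        import numpy as np

        def exceedance_durations(no2, limit=40.0):
            # Lengths (in hours) of consecutive runs above the limit value,
            # i.e. the "air pollution event durations" described above.
            above = np.asarray(no2) > limit
            durations, run = [], 0
            for flag in above:
                if flag:
                    run += 1
                elif run:
                    durations.append(run)
                    run = 0
            if run:
                durations.append(run)
            return np.array(durations)

        # Geometric MLE on support {1, 2, ...}: p_hat = 1 / mean duration.
        d = exceedance_durations([35, 44, 46, 41, 30, 55, 43, 39], limit=40)
        print(d, 1.0 / d.mean())  # [3 2] 0.4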

  19. A basic design for a multicriteria approach to efficient bioenergy production at regional level

    Energy Technology Data Exchange (ETDEWEB)

    Hagen, Zoe [Technische Univ. Berlin (Germany). Environmental Assessment and Policy Research Group

    2012-12-01

    In Germany, government policies supporting the growth of renewable energies have led to a rapid increase in energy crop cultivation. This increase is linked to possible conflicts between different sustainability goals, which have so far rarely been considered in the planning procedure. This article looks at different assessment and planning approaches at a region-specific level. It describes the methodology of the project Efficient Bio-Energy in the Perspective of Nature Conservation - Assessment and Recommendations to Protect Biodiversity and Climate, conducted by the author, which aims to establish the basis for an integrated sustainability assessment of energy crop cultivation for decentralized energy production in Germany. The method takes into account the three main requirements of agricultural profitability, greenhouse gas (GHG) efficiency, and environmental sustainability of energy crop cultivation for decentralized energy production, and it has been applied to two sample regions. Using ArcGIS, the suitability of energy crops can be displayed, and regional aspects can be considered by overlaying and intersecting the individual outputs of all three requirements. This allows the definition of 'no-go' areas as well as an overall estimation of the maximum sustainable production capacity for each energy crop or energy path in a specific region. It enables an estimation of the profitability and GHG efficiency of energy crop cultivation paths at the regional or communal level under consideration of different indicators for environmental sustainability. The article closes with a discussion of the methodological challenges of this integrative method. The conclusion gives an outlook on the planning and policy processes in which it could be beneficial to apply such an integrative method in order to assess the suitability of certain landscape areas for energy production paths. (orig.)

  20. Water-sanitation-hygiene mapping: an improved approach for data collection at local level.

    Science.gov (United States)

    Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí

    2013-10-01

    Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resource allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at the local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) limited statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at the decentralised level in low-income settings, as an attempt to overcome the previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data are finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double-bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double-bounce, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with the Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal has high potential for distinguishing different vegetation types. Scattering components from the SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  2. Interplay between Selenium Levels and Replicative Senescence in WI-38 Human Fibroblasts: A Proteomic Approach.

    Science.gov (United States)

    Hammad, Ghania; Legrain, Yona; Touat-Hamici, Zahia; Duhieu, Stéphane; Cornu, David; Bulteau, Anne-Laure; Chavatte, Laurent

    2018-01-20

    Selenoproteins are essential components of antioxidant defense, redox homeostasis, and cell signaling in mammals, where selenium is found in the form of a rare amino acid, selenocysteine. Selenium, which is often limited both in food intake and cell culture media, is a strong regulator of selenoprotein expression and selenoenzyme activity. Aging is a slow, complex, and multifactorial process, resulting in a gradual and irreversible decline of various functions of the body. Several cellular aspects of organismal aging are recapitulated in the replicative senescence of cultured human diploid fibroblasts, such as embryonic lung fibroblast WI-38 cells. We previously reported that the long-term growth of young WI-38 cells with high (supplemented), moderate (control), or low (depleted) concentrations of selenium in the culture medium impacts their replicative lifespan, due to rapid changes in replicative senescence-associated markers and signaling pathways. In order to gain insight into the molecular link between selenium levels and replicative senescence, in the present work, we have applied a quantitative proteomic approach based on 2-Dimensional Differential in-Gel Electrophoresis (2D-DIGE) to the study of young and presenescent cells grown in selenium-supplemented, control, or depleted media. Applying a restrictive cut-off (spot intensity ±50% and a p value < 0.05) to the 2D-DIGE analyses revealed 81 differentially expressed protein spots, from which 123 proteins of interest were identified by mass spectrometry. We compared the changes in protein abundance for three different conditions: (i) spots varying between young and presenescent cells, (ii) spots varying in response to selenium concentration in young cells, and (iii) spots varying in response to selenium concentration in presenescent cells. Interestingly, a 72% overlap between the impact of senescence and selenium was observed in our proteomic results, demonstrating a strong interplay between selenium, selenoproteins, and replicative senescence.

  3. IS ROMANIA “GREEN” ENOUGH? – A MULTI-LEVEL APPROACH

    Directory of Open Access Journals (Sweden)

    Alina CIOMOŞ

    2014-06-01

    Full Text Available The effects of climate change are already present in our day-to-day life. The EU and national public authorities have established strategies and public policies to encourage more desirable behavior by companies and citizens with regard to recycling and the use of green energy. Companies try to make production choices that are more eco-friendly and to support campaigns helping cities become “greener”. Environmentalist citizens make up a growing share of the general population and are increasingly vocal in convincing others. But even with this fashionable eco-friendly attitude, is enough being done? The article will present some conclusions for Romania. The perspective taken is interdisciplinary and offers a short review of the state-of-the-art achievements in greening Romania's economy at the macroeconomic level, while proposing new marketing approaches for enhancing further sustainable development by promoting change in companies' and consumers' attitudes and behaviors.

  4. Establishing the concept of buffer for a high-level radioactive waste repository: An approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Owan; Lee, Min Soo; Choi, Heui Joo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

    The buffer is a key component of the engineered barrier system in a high-level radioactive waste (HLW) repository. The present study reviewed the requirements and functional criteria of the buffer reported in the literature and, based on the results, proposed an approach to establishing a buffer concept applicable to an HLW repository in Korea. The hydraulic conductivity, radionuclide-retarding capacity (equilibrium distribution coefficient and diffusion coefficient), swelling pressure, thermal conductivity, mechanical properties, organic carbon content, and initialization rate were considered as major technical parameters for the functional criteria of the buffer. Domestic bentonite (Ca-bentonite) and, as an alternative, MX-80 (Na-bentonite) were proposed for the buffer of an HLW repository in Korea. The technical specifications for the proposed bentonites were set to parameter values that conservatively satisfy Korea's functional criteria for the Ca-bentonite and the Swedish criteria for the Na-bentonite. The thickness of the buffer was determined by evaluating shear behavior, radionuclide release, and heat conduction, which resulted in a proper buffer thickness of 0.25 to 0.5 m. However, the final thickness of the buffer should be determined by considering a coupled thermal-hydraulic-mechanical evaluation as well as economic and engineering aspects.

  5. Successes of trade reorientation and expansion in post-communist transition: an enterprise-level approach

    Directory of Open Access Journals (Sweden)

    Jan Winiecki

    2000-06-01

    Full Text Available The article offers an approach to the westward reorientation of foreign trade by the post-communist economies of East-Central Europe at the micro (i.e. enterprise) level. Having presented the dynamics of reorientation and its theoretical/historical underpinnings, the writer then goes on to underline the surprisingly large number of microeconomic determinants behind the strong westbound export surge. The article starts with the most often cited factor, namely the distressed sale argument, and then shifts the focus to determinants that have received far less attention: an unusual extension of the "distressed sale" argument and another, more important one, namely the legacy of the oversized industrial sector and the resultant availability of firms ready (or forced) to test their mettle on the world markets. The following section extends the list of determinants to foreign direct investment and the growing export activity of domestic de novo firms. The linkages between the determinants are also pointed out. The final section sums up the observations.

  6. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
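    The Lloyd step that drives a partition toward a centroidal Voronoi tessellation can be sketched in a few lines; this is a generic CVT iteration under an assumed uniform density, not the paper's tailored Voronoi Particle dynamics:

```python
import numpy as np

def lloyd_cvt(points, n_parts, iters=50, seed=0):
    """Partition `points` by iterating Lloyd's algorithm:
    assign each point to its nearest generator, then move each
    generator to the centroid of its subdomain. Each sweep
    monotonically decreases the CVT energy."""
    rng = np.random.default_rng(seed)
    generators = points[rng.choice(len(points), n_parts, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest generator for every point.
        d2 = ((points[:, None, :] - generators[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Centroid step: move generators to subdomain centroids.
        for k in range(n_parts):
            members = points[labels == k]
            if len(members):
                generators[k] = members.mean(axis=0)
    return labels, generators

# Example: partition 10,000 computational elements into 8 subdomains.
pts = np.random.default_rng(1).random((10_000, 2))
labels, gens = lloyd_cvt(pts, 8)
print(np.bincount(labels))  # rough load balance across subdomains
```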

  7. A proposed alternative approach for protection of inadvertent human intruders from buried Department of Energy low level radioactive wastes

    International Nuclear Information System (INIS)

    Cochran, J.R.

    1995-01-01

    The burial of radioactive wastes creates a legacy. To limit the impact of this legacy on future generations, we establish and comply with performance objectives. This paper (1) reviews performance objectives for the long-term isolation of buried radioactive wastes; (2) identifies regulatorily defined performance objectives for protecting the inadvertent human intruder (IHI) from buried low-level radioactive waste (LLW); (3) discusses a shortcoming of the current approach; and (4) offers an alternative approach for protecting the IHI. This alternative approach is written specifically for the burial of US Department of Energy (DOE) wastes at the Nevada Test Site (NTS), although the approach might be applied at other DOE burial sites.

  8. Litter Decomposition Rate of Avicennia marina and Rhizophora apiculata in Pulau Dua Nature Reserve, Banten

    Directory of Open Access Journals (Sweden)

    Febriana Siska

    2016-05-01

    Full Text Available Litter decomposition rate is a useful method to determine forest fertility level. The aims of this study were to measure the decomposition rate and to analyze the nutrient content (organic carbon, nitrogen, and phosphorus) released from Avicennia marina and Rhizophora apiculata litters during the decomposition process. The research was conducted in the Pulau Dua Nature Reserve, Serang-Banten, in A. marina and R. apiculata forest communities. Litter decomposition rate measurements were performed in the field. Litter obtained with the trap system was inserted into litter bags and then tied to roots or trees to avoid drifting in sea water. The litter decomposition rate was measured every 15 days, accompanied by analysis of the content of organic C, total N, and P. Our results showed that the decomposition rate of A. marina (k = 0.83) was higher than that of R. apiculata (k = 0.41). Differences in leaf anatomical structure and sea water salinity influenced the rate of litter decomposition. The organic C released declined with longer litter decomposition, in contrast to the release of N and P nutrients.
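    Decomposition constants like the k values reported here are commonly obtained by fitting a single-exponential (Olson-type) mass-loss model to litter-bag data; a minimal sketch with made-up measurements follows (the model choice and numbers are illustrative, not the study's actual data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-exponential mass-loss model: m(t)/m0 = exp(-k * t).
def mass_remaining(t, k):
    return np.exp(-k * t)

# Hypothetical litter-bag data: fraction of initial dry mass remaining,
# sampled every 15 days as in the study design.
t_days = np.array([0, 15, 30, 45, 60, 75, 90])
frac = np.array([1.00, 0.95, 0.90, 0.87, 0.82, 0.79, 0.76])

t_years = t_days / 365.0
(k,), _ = curve_fit(mass_remaining, t_years, frac, p0=[0.5])
print(f"estimated k = {k:.2f} per year")
```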

  9. Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions

    Directory of Open Access Journals (Sweden)

    Gert Van Dijck

    2011-07-01

    Full Text Available For two decades, wavelet packet decompositions have been shown to be effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features.
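    Wavelet packet energy features of the kind analyzed here can be extracted with PyWavelets; the wavelet choice, depth, and test signal below are assumptions for illustration:

```python
import numpy as np
import pywt

def wp_energy_features(signal, wavelet="db4", level=4):
    """Compute one energy feature per wavelet packet node at `level`:
    the sum of squared coefficients in that frequency band."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(node.data ** 2) for node in nodes])

# Example: features for a noisy two-tone test signal.
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)
features = wp_energy_features(x)
print(features.round(1))  # 2**4 = 16 energy features
```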

  10. Decomposition kinetics of plutonium hydride

    Energy Technology Data Exchange (ETDEWEB)

    Haschke, J.M.; Stakebake, J.L.

    1979-01-01

    Kinetic data for the decomposition of PuH1.95 provide insight into a possible mechanism for the hydriding and dehydriding reactions of plutonium. The fact that the rate of the hydriding reaction, K_H, is proportional to P^(1/2) and the rate of the dehydriding process, K_D, is inversely proportional to P^(1/2) suggests that the forward and reverse reactions proceed by opposite paths of the same mechanism. The P^(1/2) dependence of hydrogen solubility in metals is characteristic of the dissociative absorption of hydrogen; i.e., the reactive species is atomic hydrogen. It is reasonable to assume that the rates of the forward and reverse reactions are controlled by the surface concentration of atomic hydrogen, (H_s), that K_H = c'(H_s), and that K_D = c/(H_s), where c' and c are proportionality constants. For this surface model, the pressure dependence of K_D is related to (H_s) by the reaction (H_s) ⇌ ½H2(g) and by its equilibrium constant K_e = (H2)^(1/2)/(H_s). In the pressure range of ideal gas behavior, (H_s) = K_e^(-1)(RT)^(-1/2)P^(1/2), and the decomposition rate is given by K_D = cK_e(RT)^(1/2)P^(-1/2). For an analogous treatment of the hydriding process with this model, it can readily be shown that K_H = c'K_e^(-1)(RT)^(-1/2)P^(1/2). The inverse pressure dependence and direct temperature dependence of the decomposition rate are correctly predicted by this mechanism, which is most consistent with the observed behavior of the Pu-H system.
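    The pressure dependence can be summarized compactly; the following restatement follows the abstract's surface model (illustrative notation, with (H2) taken as the ideal-gas concentration P/RT):

```latex
\begin{align*}
\mathrm{H_s} &\rightleftharpoons \tfrac{1}{2}\,\mathrm{H_2(g)},
  \qquad K_e = \frac{(\mathrm{H_2})^{1/2}}{(\mathrm{H_s})},
  \qquad (\mathrm{H_2}) = \frac{P}{RT} \\
(\mathrm{H_s}) &= K_e^{-1}\,(RT)^{-1/2}\,P^{1/2} \\
K_D &= \frac{c}{(\mathrm{H_s})} = c\,K_e\,(RT)^{1/2}\,P^{-1/2},
  \qquad K_H = c'(\mathrm{H_s}) = c'\,K_e^{-1}\,(RT)^{-1/2}\,P^{1/2}
\end{align*}
```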

  11. Decomposition of sugar cane crop residues under different nitrogen rates

    Directory of Open Access Journals (Sweden)

    Douglas Costa Potrich

    2014-09-01

    Full Text Available The deposition of organic residues through the mechanical harvesting of sugar cane is a growing practice in the sugarcane production system. The maintenance of these residues on the soil surface depends mainly on environmental conditions. Nitrogen fertilization of dry residues tends to retard their decomposition, providing benefits such as increased soil organic matter (SOM). Thus, the objective of this research was to evaluate the effect of different nitrogen doses on sugar cane crop residues with respect to their decomposition and contribution to carbon sequestration in soil. The experiment was conducted in Dourados-MS and consisted of a randomized complete block design. Dried residues were placed in litter bags and the treatments were arranged in a split plot, the four nitrogen rates (0, 50, 100 and 150 kg ha-1 N) being the plots and the seven sampling times (0, 30, 60, 90, 120, 150 and 180 days) the split plots. Decomposition rates of residues, total organic carbon, and labile carbon in soil were analysed. The application of increasing N doses resulted in an increase in residue decomposition rates. Despite this, mineral N application can also be noted as a strategy to obtain higher levels of labile carbon in soil.

  12. The status of the safeguards implementation under the State-Level Approach at the HANARO

    International Nuclear Information System (INIS)

    Kim, H. S.; Lee, B. D.; Kim, I. C.; Kim, H. J.; Jung, J. A.; Lee, S. H.

    2016-01-01

    The IAEA developed the SLA (State-Level Approach) for States in order to maximize the effectiveness of safeguards in an environment of constrained resources. The SLA has been implemented at the KAERI-Daejeon site in the ROK since 2015. The ten nuclear facilities and one LOF (Location Outside Facilities) of the KAERI-Daejeon site are grouped into three categories under the SLA. The HANARO (High flux Advanced Neutron Application ReactOr) and PIEF (Post Irradiation Examination Facility) are included in the category I “self-contained capability” facilities, which have at least one significant quantity of suitable nuclear material and could support undeclared plutonium production/separation activities without other supporting infrastructure. This paper describes the status of the safeguards implementation at the HANARO, a category I facility under the SLA. The status of a model inventory management system for a research reactor, developed in 2013, was also investigated. In this paper, the features and status of the safeguards implementation of the HANARO under the SLA were analyzed. Under the SLA, the monthly, quarterly and annual advanced facility operational information for the HANARO has been submitted to the IAEA in a timely manner. The IAEA inspection at HANARO has been successfully performed under the SLA. It is expected that the safeguards implementation work at HANARO under the SLA involves a level of effort similar to that under integrated safeguards (IS). Under the SLA, the data generated by the surveillance cameras and other equipment installed at HANARO can be transmitted remotely to the IAEA; the IAEA is targeting 2017-2018 to upgrade this equipment. In addition, the development status of a model inventory management system for a research reactor was investigated. It aims at controlling the material inventory for nuclear material accounting work and convenient facility operation. Its major functions are to trace the transfer history of nuclear materials and non-nuclear materials.

  13. The status of the safeguards implementation under the State-Level Approach at the HANARO

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H. S.; Lee, B. D.; Kim, I. C.; Kim, H. J.; Jung, J. A.; Lee, S. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The IAEA developed the SLA (State-Level Approach) for States in order to maximize the effectiveness of safeguards in an environment of constrained resources. The SLA has been implemented at the KAERI-Daejeon site in the ROK since 2015. The ten nuclear facilities and one LOF (Location Outside Facilities) of the KAERI-Daejeon site are grouped into three categories under the SLA. The HANARO (High flux Advanced Neutron Application ReactOr) and PIEF (Post Irradiation Examination Facility) are included in the category I “self-contained capability” facilities, which have at least one significant quantity of suitable nuclear material and could support undeclared plutonium production/separation activities without other supporting infrastructure. This paper describes the status of the safeguards implementation at the HANARO, a category I facility under the SLA. The status of a model inventory management system for a research reactor, developed in 2013, was also investigated. In this paper, the features and status of the safeguards implementation of the HANARO under the SLA were analyzed. Under the SLA, the monthly, quarterly and annual advanced facility operational information for the HANARO has been submitted to the IAEA in a timely manner. The IAEA inspection at HANARO has been successfully performed under the SLA. It is expected that the safeguards implementation work at HANARO under the SLA involves a level of effort similar to that under integrated safeguards (IS). Under the SLA, the data generated by the surveillance cameras and other equipment installed at HANARO can be transmitted remotely to the IAEA; the IAEA is targeting 2017-2018 to upgrade this equipment. In addition, the development status of a model inventory management system for a research reactor was investigated. It aims at controlling the material inventory for nuclear material accounting work and convenient facility operation. Its major functions are to trace the transfer history of nuclear materials and non-nuclear materials.

  14. Multi-hazard national-level risk assessment in Africa using global approaches

    Science.gov (United States)

    Fraser, Stuart; Jongman, Brenden; Simpson, Alanna; Murnane, Richard

    2016-04-01

    In recent years Sub-Saharan Africa has been characterized by unprecedented opportunity for transformation and sustained growth. However, natural disasters such as droughts, floods, cyclones, earthquakes, landslides, volcanic eruptions and extreme temperatures cause significant economic and human losses, and major development challenges. Quantitative disaster risk assessments are an important basis for governments to understand disaster risk in their country, and to develop effective risk management and risk financing solutions. However, the data-scarce nature of many Sub-Saharan African countries as well as a lack of financing for risk assessments has long prevented detailed analytics. Recent advances in globally applicable disaster risk modelling practices and data availability offer new opportunities. In December 2013 the European Union approved a €60 million contribution to support the development of an analytical basis for risk financing and to accelerate the effective implementation of comprehensive disaster risk reduction. The World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR) was selected as the implementing partner of the Program for Result Area 5: the "Africa Disaster Risk Assessment and Financing Program." As part of this effort, the GFDRR is overseeing the production of national-level multi-hazard risk profiles for a range of countries in Sub-Saharan Africa, using a combination of national and global datasets and state-of-the-art hazard and risk assessment methodologies. In this presentation, we will highlight the analytical approach behind these assessments, and show results for the first five countries for which the assessment has been completed (Kenya, Uganda, Senegal, Niger and Ethiopia). The presentation will also demonstrate how the risk assessments are visualized in understandable and visually attractive risk profile documents.

  15. Structured decision making as a proactive approach to dealing with sea level rise in Florida

    Science.gov (United States)

    Martin, Julien; Fackler, Paul L.; Nichols, James D.; Lubow, Bruce C.; Eaton, Mitchell J.; Runge, Michael C.; Stith, Bradley M.; Langtimm, Catherine A.

    2011-01-01

    Sea level rise (SLR) projections along the coast of Florida present an enormous challenge for management and conservation over the long term. Decision makers need to recognize and adopt strategies to adapt to the potentially detrimental effects of SLR. Structured decision making (SDM) provides a rigorous framework for the management of natural resources. The aim of SDM is to identify decisions that are optimal with respect to management objectives and knowledge of the system. Most applications of SDM have assumed that the managed systems are governed by stationary processes. However, in the context of SLR it may be necessary to acknowledge that the processes underlying managed systems may be non-stationary, such that systems will be continuously changing. Therefore, SLR brings some unique considerations to the application of decision theory for natural resource management. In particular, SLR is expected to affect each of the components of SDM. For instance, management objectives may have to be reconsidered more frequently than under more stable conditions. The set of potential actions may also have to be adapted over time as conditions change. Models have to account for the non-stationarity of the modeled system processes. Each of the important sources of uncertainty in decision processes is expected to be exacerbated by SLR. We illustrate our ideas about adaptation of natural resource management to SLR by modeling a non-stationary system using a numerical example. We provide additional examples of an SDM approach for managing species that may be affected by SLR, with a focus on the endangered Florida manatee.

  16. Interplay between Selenium Levels and Replicative Senescence in WI-38 Human Fibroblasts: A Proteomic Approach

    Directory of Open Access Journals (Sweden)

    Ghania Hammad

    2018-01-01

    Full Text Available Selenoproteins are essential components of antioxidant defense, redox homeostasis, and cell signaling in mammals, where selenium is found in the form of a rare amino acid, selenocysteine. Selenium, which is often limited both in food intake and cell culture media, is a strong regulator of selenoprotein expression and selenoenzyme activity. Aging is a slow, complex, and multifactorial process, resulting in a gradual and irreversible decline of various functions of the body. Several cellular aspects of organismal aging are recapitulated in the replicative senescence of cultured human diploid fibroblasts, such as embryonic lung fibroblast WI-38 cells. We previously reported that the long-term growth of young WI-38 cells with high (supplemented), moderate (control), or low (depleted) concentrations of selenium in the culture medium impacts their replicative lifespan, due to rapid changes in replicative senescence-associated markers and signaling pathways. In order to gain insight into the molecular link between selenium levels and replicative senescence, in the present work, we have applied a quantitative proteomic approach based on 2-Dimensional Differential in-Gel Electrophoresis (2D-DIGE) to the study of young and presenescent cells grown in selenium-supplemented, control, or depleted media. Applying a restrictive cut-off (spot intensity ±50% and a p value < 0.05) to the 2D-DIGE analyses revealed 81 differentially expressed protein spots, from which 123 proteins of interest were identified by mass spectrometry. We compared the changes in protein abundance for three different conditions: (i) spots varying between young and presenescent cells, (ii) spots varying in response to selenium concentration in young cells, and (iii) spots varying in response to selenium concentration in presenescent cells. Interestingly, a 72% overlap between the impact of senescence and selenium was observed in our proteomic results, demonstrating a strong interplay between selenium, selenoproteins, and replicative senescence.

  17. Spinodal decomposition in fluid mixtures

    International Nuclear Information System (INIS)

    Kawasaki, Kyozi; Koga, Tsuyoshi

    1993-01-01

    We study the late stage dynamics of spinodal decomposition in binary fluids by the computer simulation of the time-dependent Ginzburg-Landau equation. We obtain a temporary linear growth law of the characteristic length of domains in the late stage. This growth law has been observed in many real experiments of binary fluids and indicates that the domain growth proceeds by the flow caused by the surface tension of interfaces. We also find that the dynamical scaling law is satisfied in this hydrodynamic domain growth region. By comparing the scaling functions for fluids with that for the case without hydrodynamic effects, we find that the scaling functions for the two systems are different. (author)
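    A minimal sketch of this kind of phase-separation simulation, for the purely diffusive case without the hydrodynamic coupling studied in the paper (semi-implicit Fourier spectral scheme; grid size, time step, and interface parameter are assumptions):

```python
import numpy as np

# Cahn-Hilliard form of the TDGL equation for a conserved order parameter:
# dc/dt = laplacian(c**3 - c - gamma * laplacian(c)).
N, dt, gamma, steps = 128, 0.01, 1.0, 2000
rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal((N, N))          # near-critical quench

k = 2 * np.pi * np.fft.fftfreq(N)
k2 = k[:, None] ** 2 + k[None, :] ** 2         # |k|^2 on the grid

for _ in range(steps):
    # Explicit bulk term, implicit stiff interfacial term.
    nl_hat = np.fft.fft2(c ** 3 - c)
    c_hat = (np.fft.fft2(c) - dt * k2 * nl_hat) / (1.0 + dt * gamma * k2 ** 2)
    c = np.real(np.fft.ifft2(c_hat))

# The characteristic domain size can be tracked over time via the first
# moment of the structure factor S(k) = |c_hat|^2.
S = np.abs(np.fft.fft2(c)) ** 2
print("mean composition:", c.mean().round(4))
```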

  18. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  19. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and the rate of litter decomposition were investigated in a Leucaena leucocephala plantation at the University of Agriculture, Abeokuta, Ogun State, Nigeria. The litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  20. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  1. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  2. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The

  3. Calculation of the level density parameter using semi-classical approach

    International Nuclear Information System (INIS)

    Canbula, B.; Babacan, H.

    2011-01-01

    The level density parameters (the level density parameter a and the energy shift δ) of the back-shifted Fermi gas model have been determined for 1136 nuclei for which a complete level scheme is available. The level density parameter is calculated using the semi-classical single-particle level density, which can be obtained analytically for the spherical harmonic oscillator potential. This method also enables us to analyze the effect of the Coulomb potential on the level density parameter. The dependence of this parameter on energy has also been investigated. The other parameter, δ, is determined by fitting the experimental level scheme and the average resonance spacings for 289 nuclei. Only the level scheme is used in the optimization procedure for the remaining 847 nuclei. Level densities for some nuclei have been calculated using these parameter values. The results obtained have been compared with the experimental level scheme and the resonance spacing data.
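    For context, the back-shifted Fermi gas level density is commonly written as follows (a standard textbook form, not a formula quoted from this paper), where a is the level density parameter, δ the energy shift, and σ the spin cut-off parameter:

```latex
\rho(E) \;=\; \frac{\exp\!\bigl(2\sqrt{a\,(E-\delta)}\bigr)}
                   {12\sqrt{2}\,\sigma\,a^{1/4}\,(E-\delta)^{5/4}}
```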

  4. An iterative approach for the optimization of pavement maintenance management at the network level.

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios establishing the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  5. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    Directory of Open Access Journals (Sweden)

    Cristina Torres-Machí

    2014-01-01

    Full Text Available Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios establishing the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.
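    The difference between ranking-based selection and a holistic optimization can be seen in a toy budget allocation; the sections, costs, and condition gains below are made up for illustration:

```python
# Toy network-level selection: choose maintenance actions under a budget.
# Ranking (sequential) funds the highest-gain sections first; a
# knapsack-style search (holistic) maximizes total gain for the budget.
from itertools import combinations

actions = [  # (section, cost, condition_gain)
    ("S1", 40, 9), ("S2", 30, 7), ("S3", 25, 6), ("S4", 20, 3),
]
BUDGET = 60

# Sequential: rank by condition gain, fund down the list while money lasts.
spent, seq_gain, seq_pick = 0, 0, []
for name, cost, gain in sorted(actions, key=lambda a: -a[2]):
    if spent + cost <= BUDGET:
        seq_pick.append(name); spent += cost; seq_gain += gain

# Holistic: exhaustive search over all subsets (fine for small networks).
best_gain, best_pick = 0, []
for r in range(len(actions) + 1):
    for combo in combinations(actions, r):
        cost = sum(a[1] for a in combo)
        gain = sum(a[2] for a in combo)
        if cost <= BUDGET and gain > best_gain:
            best_gain, best_pick = gain, [a[0] for a in combo]

print("sequential:", seq_pick, seq_gain)   # ['S1', 'S4'], 12
print("holistic:  ", best_pick, best_gain) # ['S2', 'S3'], 13
```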

  6. Effect of petroleum on decomposition of shrub-grass litters in soil in Northern Shaanxi of China.

    Science.gov (United States)

    Zhang, Xiaoxi; Liu, Zengwen; Yu, Qi; Luc, Nhu Trung; Bing, Yuanhao; Zhu, Bochao; Wang, Wenxuan

    2015-07-01

    The impacts of petroleum contamination on the litter decomposition of shrub-grass land would directly influence nutrient cycling and the stability and function of the ecosystem. Ten common shrub and grass species from the Yujiaping oil deposits were studied. Litters from these species were placed into litterbags and buried in petroleum-contaminated soil with 3 levels of contamination (slight, moderate and serious pollution, with petroleum concentrations of 15, 30 and 45 g/kg, respectively). A decomposition experiment was then conducted in the lab to investigate the impacts of petroleum contamination on litter decomposition rates. Slight pollution did not inhibit the decomposition of any litters and significantly promoted the litter decomposition of Hippophae rhamnoides, Caragana korshinskii, Amorpha fruticosa, Ziziphus jujuba var. spinosa, Periploca sepium, Medicago sativa and Bothriochloa ischaemum. Moderate pollution significantly inhibited the litter decomposition of M. sativa, Coronilla varia, Artemisia vestita and Trifolium repens and significantly promoted the litter decomposition of C. korshinskii, Z. jujuba var. spinosa and P. sepium. Serious pollution significantly inhibited the litter decomposition of H. rhamnoides, A. fruticosa, B. ischaemum and A. vestita and significantly promoted the litter decomposition of Z. jujuba var. spinosa, P. sepium and M. sativa. In addition, the impacts of petroleum contamination did not exhibit a uniform increase or decrease as the petroleum concentration increased. The inhibitory effects of petroleum on litter decomposition may hinder substance cycling and result in the degradation of plant communities in contaminated areas. Copyright © 2015. Published by Elsevier B.V.

  7. Effects of a blended learning approach on student outcomes in a graduate-level public health course.

    Science.gov (United States)

    Kiviniemi, Marc T

    2014-03-11

    Blended learning approaches, in which in-person and online course components are combined in a single course, are rapidly increasing in health sciences education. Evidence for the relative effectiveness of blended learning versus more traditional course approaches is mixed. The impact of a blended learning approach on student learning in a graduate-level public health course was examined using a quasi-experimental, non-equivalent control group design. Exam scores and course point total data from a baseline, "traditional" approach semester (n = 28) was compared to that from a semester utilizing a blended learning approach (n = 38). In addition, student evaluations of the blended learning approach were evaluated. There was a statistically significant increase in student performance under the blended learning approach (final course point total d = 0.57; a medium effect size), even after accounting for previous academic performance. Moreover, student evaluations of the blended approach were very positive and the majority of students (83%) preferred the blended learning approach. Blended learning approaches may be an effective means of optimizing student learning and improving student performance in health sciences courses.

  8. Effects of a blended learning approach on student outcomes in a graduate-level public health course

    Science.gov (United States)

    2014-01-01

    Background Blended learning approaches, in which in-person and online course components are combined in a single course, are rapidly increasing in health sciences education. Evidence for the relative effectiveness of blended learning versus more traditional course approaches is mixed. Method The impact of a blended learning approach on student learning in a graduate-level public health course was examined using a quasi-experimental, non-equivalent control group design. Exam scores and course point total data from a baseline, “traditional” approach semester (n = 28) was compared to that from a semester utilizing a blended learning approach (n = 38). In addition, student evaluations of the blended learning approach were evaluated. Results There was a statistically significant increase in student performance under the blended learning approach (final course point total d = 0.57; a medium effect size), even after accounting for previous academic performance. Moreover, student evaluations of the blended approach were very positive and the majority of students (83%) preferred the blended learning approach. Conclusions Blended learning approaches may be an effective means of optimizing student learning and improving student performance in health sciences courses. PMID:24612923

  9. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  10. Decomposition Methods For a Piv Data Analysis with Application to a Boundary Layer Separation Dynamics

    OpenAIRE

    Václav URUBA

    2010-01-01

    Separation of the turbulent boundary layer (BL) on a flat plate under an adverse pressure gradient was studied experimentally using the time-resolved PIV technique. The results of a spatio-temporal analysis of the flow-field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) and its extension, the BOD (Bi-Orthogonal Decomposition), are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes...
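    Snapshot POD of PIV-style velocity data reduces to an SVD of the mean-subtracted snapshot matrix; a minimal sketch with synthetic data follows (grid size, snapshot count, and the test field are assumptions, not the experiment's data):

```python
import numpy as np

# Synthetic "PIV" data: n_t snapshots of a velocity field on n_x points.
rng = np.random.default_rng(0)
n_x, n_t = 2000, 300
x = np.linspace(0, 2 * np.pi, n_x)
t = np.linspace(0, 10, n_t)
# Two separable "coherent structures" plus measurement noise.
U = (np.outer(np.sin(x), np.cos(2 * t))
     + 0.5 * np.outer(np.sin(2 * x), np.sin(5 * t))
     + 0.05 * rng.standard_normal((n_x, n_t)))

U_mean = U.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(U - U_mean, full_matrices=False)

# Columns of Phi are spatial POD modes; rows of Vt are their temporal
# coefficients; s**2 ranks modes by captured fluctuation energy.
energy = s ** 2 / np.sum(s ** 2)
print("energy captured by first 4 modes:", energy[:4].round(3))
```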

  11. Detailed RIF decomposition with selection : the gender pay gap in Italy

    OpenAIRE

    Töpfer, Marina

    2017-01-01

    In this paper, we estimate the gender pay gap along the wage distribution using a detailed decomposition approach based on unconditional quantile regressions. Non-randomness of the sample leads to biased and inconsistent estimates of the wage equation as well as of the components of the wage gap. Therefore, the method is extended to account for sample selection problems. The decomposition is conducted by using Italian microdata. Accounting for labor market selection may be particularly rele...

  12. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

    The aim of this work is to understand the mechanisms involved during the decomposition of glasses by water and the consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass: the R7T7. The chemical composition of this glass being very complicated, it is difficult to know the influence of the different elements on the decomposition kinetics and on the resulting morphology, because several atoms behave in the same way. Glasses with a simplified composition (only 5 elements) have therefore been synthesized. The morphological and structural characteristics of these glasses are given. They have then been decomposed by water. The leaching curves do not reflect the decomposition kinetics but the solubility of the different elements at each moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) solubilization of heavy elements. Two types of decomposition layer have also been revealed, according to the glass's heavy element content. (O.M.)

  13. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
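    In NumPy terms, the two operators can be sketched as follows (a hedged reading of the definitions in the abstract; the function names are mine, not the report's notation):

```python
import numpy as np

def n_mode_product(T, M, mode):
    """Multiply tensor T by matrix M along `mode`: each mode-`mode`
    fiber of T is mapped through M."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def tucker_operator(G, matrices):
    """Tucker operator: n-mode multiply the core G by one matrix per mode."""
    T = G
    for mode, M in enumerate(matrices):
        T = n_mode_product(T, M, mode)
    return T

def kruskal_operator(matrices):
    """Kruskal operator: sum of outer products of matching columns,
    i.e. the tensor underlying a PARAFAC/CANDECOMP decomposition."""
    R = matrices[0].shape[1]
    T = np.zeros(tuple(M.shape[0] for M in matrices))
    for r in range(R):
        outer = matrices[0][:, r]
        for M in matrices[1:]:
            outer = np.multiply.outer(outer, M[:, r])
        T += outer
    return T

# Example: a rank-2 Kruskal tensor and a Tucker reconstruction.
A, B, C = (np.random.default_rng(0).random((d, 2)) for d in (4, 5, 6))
X = kruskal_operator([A, B, C])        # shape (4, 5, 6)
G = np.ones((2, 2, 2))                 # toy core tensor
Y = tucker_operator(G, [A, B, C])      # shape (4, 5, 6)
print(X.shape, Y.shape)
```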

  14. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage, or extent, of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  15. Approach to Mathematics in Textbooks at Tertiary Level--Exploring Authors' Views about Their Texts

    Science.gov (United States)

    Randahl, Mira

    2012-01-01

    The aim of this article is to present and discuss some results from an inquiry into mathematics textbooks authors' visions about their texts and approaches they choose when new concepts are introduced. Authors' responses are discussed in relation to results about students' difficulties with approaching calculus reported by previous research. A…

  16. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    NARCIS (Netherlands)

    Sözer, Hasan; Tekinerdogan, B.; Aksit, Mehmet; de Lemos, Rogerio; Gacek, Cristina

    2007-01-01

    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  17. FGP Approach for Solving Multi-level Multi-objective Quadratic Fractional Programming Problem with Fuzzy parameters

    Directory of Open Access Journals (Sweden)

    m. s. osman

    2017-09-01

    Full Text Available In this paper, we consider a fuzzy goal programming (FGP) approach for solving the multi-level multi-objective quadratic fractional programming (ML-MOQFP) problem with fuzzy parameters in the constraints. Firstly, the concept of the α-cut approach is applied to transform the set of fuzzy constraints into a common deterministic one. Then, the quadratic fractional objective functions in each level are transformed into quadratic objective functions based on a proposed transformation. Secondly, the FGP approach is utilized to obtain a compromise solution for the ML-MOQFP problem by minimizing the sum of the negative deviational variables. Finally, an illustrative numerical example is given to demonstrate the applicability and performance of the proposed approach.

  18. Influence of the Constructivist Learning Approach on Students' Levels of Learning Trigonometry and on Their Attitudes towards Mathematics

    OpenAIRE

    İNAN, CEMİL

    2014-01-01

    In this experimental study, the influence of the constructivist learning approach on students’ levels of learning trigonometry and on their attitudes towards mathematics was examined in comparison with the traditional methods of instruction. The constructivist learning approach was the independent variable, while mathematics achievement, the lessons of trigonometry and the attitudes towards mathematics constituted the dependent variables. The study was designed as the pretest-posttest control...

  19. A multi-modality approach to examine reward satisfaction amongst mid-level managers

    OpenAIRE

    Favotto, Alvise; Kominis, Georgios; Emmanuel, Clive

    2014-01-01

    Limited research addresses the perceptions of mid-level managers as recipients of desirable rewards. In contrast to CEO "tailor-made" compensation schemes, mid-level manager reward schemes are treated as homogeneously acceptable to motivate individuals. However, in large corporations, mid-level managers are organized in several echelons where size of business unit, functions or geographic locations create an organizational hierarchy. Data from 1,771 mid-level managers across five echelons in a...

  20. The Impact of School Environment and Grade Level on Student Delinquency: A Multilevel Modeling Approach

    Science.gov (United States)

    Lo, Celia C.; Kim, Young S.; Allen, Thomas M.; Allen, Andrea N.; Minugh, P. Allison; Lomuto, Nicoletta

    2011-01-01

    Effects on delinquency made by grade level, school type (based on grade levels accommodated), and prosocial school climate were assessed, controlling for individual-level risk and protective factors. Data were obtained from the Substance Abuse Services Division of Alabama's state mental health agency and analyzed via hierarchical linear modeling,…

  1. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2017-11-01

    Full Text Available Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  2. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    Science.gov (United States)

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  3. Nutrient-enhanced decomposition of plant biomass in a freshwater wetland

    Science.gov (United States)

    Bodker, James E.; Turner, Robert Eugene; Tweel, Andrew; Schulz, Christopher; Swarzenski, Christopher M.

    2015-01-01

    We studied soil decomposition in a Panicum hemitomon (Schultes)-dominated freshwater marsh located in southeastern Louisiana that was unambiguously changed by secondarily-treated municipal wastewater effluent. We used four approaches to evaluate how belowground biomass decomposition rates vary under different nutrient regimes in this marsh. The results of laboratory experiments demonstrated how nutrient enrichment enhanced the loss of soil or plant organic matter by 50%, and increased gas production. An experiment demonstrated that nitrogen, not phosphorus, limited decomposition. Cellulose decomposition at the field site was higher in the flowfield of the introduced secondarily treated sewage water, and the quality of the substrate (% N or % P) was directly related to the decomposition rates. We therefore rejected the null hypothesis that nutrient enrichment had no effect on the decomposition rates of these organic soils. In response to nutrient enrichment, plants respond through biomechanical or structural adaptations that alter the labile characteristics of plant tissue. These adaptations eventually change litter type and quality (where the marsh survives) as the % N content of plant tissue rises and is followed by even higher decomposition rates of the litter produced, creating a positive feedback loop. Marsh fragmentation will increase as a result. The assumptions and conditions underlying the use of unconstrained wastewater flow within natural wetlands, rather than controlled treatment within the confines of constructed wetlands, are revealed in the loss of previously sequestered carbon, habitat, public use, and other societal benefits.

  4. Decomposition of gender wage differentials among Portuguese top management jobs

    OpenAIRE

    Mendes, Raquel Vale

    2004-01-01

    This paper studies gender wage differentials among top managers in the Portuguese economy. The objective is to investigate whether men and women within the same occupational group, with relatively high levels of human capital, and who are evaluated basically on their performance, are treated unequally in relation to pay. The Oaxaca wage differential decomposition method is used, relying on 1999 micro data gathered by the Portuguese Ministry of Social Security and Employment. The main findings...
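
    For reference, the Oaxaca decomposition named above splits the mean log-wage gap into an endowments part and a coefficients part. One common variant, with the male wage structure as the reference, reads:

        \overline{\ln W}_{m} - \overline{\ln W}_{f}
            = \underbrace{(\bar{X}_{m} - \bar{X}_{f})'\,\hat{\beta}_{m}}_{\text{explained by endowments}}
            + \underbrace{\bar{X}_{f}'\,(\hat{\beta}_{m} - \hat{\beta}_{f})}_{\text{unexplained (coefficients)}}

    The paper's exact reference-group choice may differ; the split into explained and unexplained components is the method's defining feature.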

  5. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Abdeldjalil Aïssa-El-Bey

    2007-03-01

    This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source) using vector clustering. For the signal analysis, two existing algorithms are considered and compared: namely, the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.
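
    A minimal sketch of the analysis-synthesis pipeline described above, assuming the PyEMD package for the EMD step and a simple spectral-centroid feature for the vector-clustering stage; the ESPRIT-based variant and the authors' actual clustering features are not reproduced here.

        import numpy as np
        from PyEMD import EMD                    # pip install EMD-signal (assumed)
        from sklearn.cluster import KMeans

        def separate(mixtures, n_sources, fs):
            """EMD per channel, then k-means clustering of the modal components."""
            imfs, feats = [], []
            for x in mixtures:                   # one decomposition per sensor channel
                for imf in EMD().emd(np.asarray(x, dtype=float)):
                    spec = np.abs(np.fft.rfft(imf))
                    freqs = np.fft.rfftfreq(len(imf), 1.0 / fs)
                    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
                    imfs.append(imf)
                    feats.append([centroid, np.log(imf.var() + 1e-12)])
            labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(np.array(feats))
            sources = []                         # synthesis: sum components per cluster
            for k in range(n_sources):
                group = [m for m, lab in zip(imfs, labels) if lab == k]
                sources.append(np.sum(group, axis=0) if group else np.zeros(len(mixtures[0])))
            return sources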

  6. The application of an individual approach in conducting aerobics classes with students of different levels of preparedness

    Directory of Open Access Journals (Sweden)

    Barybina L.N.

    2012-09-01

    The aim of this work was to develop a system of aerobics classes in higher education based on an individual approach. About 15 references on this subject were analyzed, and 105 students took part in an experiment. A technique combining fitness aerobics and step aerobics was developed. The technique makes it possible to take into account physical capacity, functional differences, level of preparedness and the students' needs, and to apply an individual approach in the selection of means and methods of physical education. We propose an organization of classes in which up to 5-6 subgroups can train in the hall simultaneously: students with low physical fitness and a low level of coordination abilities, students with an average level of physical fitness and a low level of coordination abilities, students with a high level of physical fitness and a low level of coordination abilities, students with an average level of physical fitness and a high level of coordination abilities, and students with high levels of both physical fitness and coordination abilities.
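
    A toy version of the subgroup assignment described above; the categorical levels and the mapping are a hypothetical illustration, not the authors' protocol.

        # Hypothetical subgroup assignment mirroring the five groups listed above;
        # the categorical scores are invented for illustration.
        def subgroup(fitness: str, coordination: str) -> int:
            groups = {
                ("low", "low"): 1,
                ("average", "low"): 2,
                ("high", "low"): 3,
                ("average", "high"): 4,
                ("high", "high"): 5,
            }
            return groups.get((fitness, coordination), 0)    # 0 = assess individually

        print(subgroup("average", "high"))                   # -> 4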

  7. Expert vs. novice: Problem decomposition/recomposition in engineering design

    Science.gov (United States)

    Song, Ting

    The purpose of this research was to investigate differences in the use of problem decomposition and problem recomposition among dyads of engineering experts, dyads of engineering seniors, and dyads of engineering freshmen. Fifty participants took part in this study: 10 were engineering design experts, 20 were engineering seniors, and 20 were engineering freshmen. Participants worked in dyads to complete an engineering design challenge within an hour. The entire design process was video and audio recorded. After the design session, members participated in a group interview. This study used protocol analysis as the methodology. Video and audio data were transcribed, segmented, and coded. Two coding systems, the FBS ontology and "levels of the problem", were used in this study. A series of statistical techniques were used to analyze the data. Interview data and participants' design sketches served as supplemental data to help answer the research questions. Analysis of the quantitative and qualitative data showed that students used less problem decomposition and problem recomposition than expert engineers in engineering design. This result implies that engineering education should place more importance on teaching problem decomposition and problem recomposition. Students were found to spend less cognitive effort than expert engineers when considering the problem as a whole and the interactions between subsystems, and more cognitive effort when considering the details of subsystems. These results showed that students tended to use depth-first decomposition and experts tended to use breadth-first decomposition in engineering design. The use of Function (F), Behavior (B), and Structure (S) among engineering experts, engineering seniors, and engineering freshmen was compared on three levels: Level 1, where designers consider the problem as an integral whole; Level 2, where designers consider interactions between subsystems; and Level 3, where designers consider the details of subsystems.
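
    A toy illustration of the kind of tally such protocol analysis produces, assuming segments have already been coded with an FBS code and a problem level; the data are invented.

        from collections import Counter

        # Each coded segment carries an FBS code (F/B/S) and a problem level
        # (1 = whole, 2 = interactions, 3 = details). Real segments come from
        # transcribed design sessions.
        segments = [("F", 1), ("B", 2), ("S", 3), ("S", 3), ("B", 1), ("F", 2), ("S", 2)]

        by_level = Counter(level for _, level in segments)
        total = sum(by_level.values())
        for level in (1, 2, 3):
            print(f"level {level}: {by_level[level] / total:.0%} of coded segments")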

  8. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed by an exchange reaction of the perchloric acid appearing during the decomposition with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat-conductivity detector. The exchange reaction yielding CO2 is quantitative; it is not the limiting step and does not distort the kinetics of the perchlorate decomposition. The solid decomposition products were studied by infrared and NMR spectroscopy, X-ray diffraction, thermography and chemical analysis. The suggested decomposition mechanism involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage.
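
    The kinetics described above reduce to integrating the recorded CO2-release rate; a minimal sketch with a synthetic rate curve (the real signal comes from the heat-conductivity detector):

        import numpy as np

        # The extent of decomposition alpha(t) is recovered from the CO2-release
        # rate by cumulative integration.
        t = np.linspace(0, 600, 61)                     # time, s
        rate = np.exp(-((t - 300.0) / 90.0) ** 2)       # detector signal, arbitrary units

        alpha = np.concatenate(([0.0], np.cumsum((rate[1:] + rate[:-1]) / 2 * np.diff(t))))
        alpha /= alpha[-1]                              # normalize so alpha(t_end) = 1
        print(f"half-decomposition reached near t = {t[alpha >= 0.5][0]:.0f} s")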

  9. Approach to performance based regulation development

    International Nuclear Information System (INIS)

    Spogen, L.R.; Cleland, L.L.

    1977-06-01

    An approach to the development of performance-based regulations (PBRs) is described. Initially, a framework is constructed that consists of a function hierarchy and associated measures. The function at the top of the hierarchy is described in terms of societal objectives. Decomposition of this function into subordinate functions, and their subsequent decompositions, yields the function hierarchy. "Bottom" functions describe the roles of system components. When measures are identified for the performance of each function and means of aggregating performances to higher levels are established, the framework may be employed for developing PBRs. Consideration of system flexibility and performance uncertainty guides the determination of the hierarchical level at which regulations are formulated; ease of testing compliance is also a factor. To show the viability of the approach, the framework developed by Lawrence Livermore Laboratory for the Nuclear Regulatory Commission for the evaluation of material control systems at fixed facilities is presented
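
    A minimal sketch of the hierarchy-plus-aggregation idea, with a plain weighted mean standing in for whatever aggregation rules the framework actually prescribes; the names, weights and scores are invented.

        from dataclasses import dataclass, field

        # Leaf functions carry measured performance scores; parents aggregate
        # them upward toward the societal objective at the root.
        @dataclass
        class Function:
            name: str
            weight: float = 1.0
            score: float | None = None                  # set on leaf functions only
            children: list["Function"] = field(default_factory=list)

            def performance(self) -> float:
                if not self.children:
                    return self.score
                total = sum(c.weight for c in self.children)
                return sum(c.weight * c.performance() for c in self.children) / total

        system = Function("protect society", children=[
            Function("detect unauthorized access", weight=2.0, score=0.9),
            Function("control nuclear material", weight=1.0, score=0.7),
        ])
        print(f"top-level performance: {system.performance():.2f}")   # -> 0.83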

  10. Public administration of quality of education at the local level on the foundation of the competence approach

    Directory of Open Access Journals (Sweden)

    O. I. Popova

    2014-04-01

    The article examines the essence of the phenomenon of management and clarifies the scientific categories of public administration, public administration of education, and public administration of education quality at the local level. The personnel factor is identified as a priority in improving public administration of education quality at the local level, and implementing the competence approach in managing the education sector is shown to be a necessary condition for ensuring the quality of education.

  11. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.
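
    A sketch of the pixel-level fusion step, assuming a joint MEMD has already produced co-aligned IMF stacks for the two input images; the max-local-variance selection rule is a common choice, not necessarily the paper's.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_imfs(imfs_a, imfs_b, win=7):
            """Fuse two co-aligned IMF stacks of shape (n_scales, H, W): at each
            scale keep the coefficient with larger local variance, then sum scales."""
            fused = []
            for a, b in zip(imfs_a, imfs_b):
                var_a = uniform_filter(a**2, win) - uniform_filter(a, win)**2
                var_b = uniform_filter(b**2, win) - uniform_filter(b, win)**2
                fused.append(np.where(var_a >= var_b, a, b))
            return np.sum(fused, axis=0)

        # imfs_a and imfs_b would come from a joint MEMD of the two input images,
        # which aligns common frequency scales across channels; the MEMD step
        # itself is assumed to be provided by an external implementation.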

  12. Regional compacting for low-level waste management: an innovative approach in national problem solving

    International Nuclear Information System (INIS)

    Levin, G.B.; Nern, C.F.

    1983-01-01

    The nature of the current efforts by the states to institute a reliable national system for low-level radioactive waste management is analyzed. The history of low-level waste management over the last five years is not detailed. It is sufficient to say that there has been a seriously diminished availability of commercial disposal capacity for low-level waste. Some observations and insights into the process the nation has undertaken to solve this problem are offered

  13. Thermal decomposition of lanthanide and actinide tetrafluorides

    International Nuclear Information System (INIS)

    Gibson, J.K.; Haire, R.G.

    1988-01-01

    The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides TbF4, CmF4, and AmF4 have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to sublime congruently as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs
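
    Written as a reaction, the decomposition reported above is:

        \mathrm{MF_4(s)} \;\longrightarrow\; \mathrm{MF_3(s)} + \tfrac{1}{2}\,\mathrm{F_2(g)},
        \qquad \mathrm{M} = \mathrm{Tb},\ \mathrm{Cm},\ \mathrm{Am}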

  14. Decomposition of lake phytoplankton. 1

    International Nuclear Information System (INIS)

    Hansen, L.; Krog, G.F.; Soendergaard, M.

    1986-01-01

    Short-term (24 h) and long-term (4-6 d) decomposition of phytoplankton cells was investigated under in situ conditions in four Danish lakes. Carbon-14-labelled, dead algae were exposed to sterile or natural lake water and the dynamics of cell lysis and bacterial utilization of the leached products were followed. The lysis process was dominated by an initial fast water extraction. Within 2 to 4 h, from 4 to 34% of the labelled carbon leached from the algal cells. After 24 h, from 11 to 43% of the initial particulate carbon was found as dissolved carbon in the experiments with sterile lake water; after 4 to 6 d the leaching amounted to 67 to 78% of the initial 14C. The leached compounds were utilized by bacteria. A comparison of the incubations using sterile and natural water showed that a mean of 71% of the lysis products was metabolized by microorganisms within 24 h. In two experiments the uptake rate equalled the leaching rate. (author)
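
    The 71% figure follows from comparing dissolved 14C in the sterile and natural incubations; a back-of-envelope version with illustrative numbers:

        # Lysis products that accumulate in sterile water but not in natural water
        # are attributed to bacterial uptake. The numbers below are illustrative only.
        doc_sterile = 40.0   # % of initial 14C dissolved after 24 h, sterile water
        doc_natural = 11.6   # % of initial 14C still dissolved after 24 h, natural water

        metabolized = (doc_sterile - doc_natural) / doc_sterile
        print(f"fraction of lysis products metabolized: {metabolized:.0%}")   # -> 71%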

  15. Decomposition of lake phytoplankton. 2

    International Nuclear Information System (INIS)

    Hansen, L.; Krog, G.F.; Soendergaard, M.

    1986-01-01

    The lysis process of phytoplankton was followed in 24 h incubations in three Danish lakes. By means of gel-chromatography it was shown that the dissolved carbon leaching from different algal groups differed in molecular weight composition. Three distinct molecular weight classes (>10,000; 700 to 10,000 and < 700 Daltons) leached from blue-green algae in almost equal proportion. The lysis products of spring-bloom diatoms included only the two smaller size classes, and the molecules between 700 and 10,000 Daltons dominated. Measurements of cell content during decomposition of the diatoms revealed polysaccharides and low molecular weight compounds to dominate the lysis products. No proteins were leached during the first 24 h after cell death. By incubating the dead algae in natural lake water, it was possible to detect a high bacterial affinity towards molecules between 700 and 10,000 Daltons, although the other size classes were also utilized. Bacterial transformation of small molecules to larger molecules could be demonstrated. (author)

  16. Thermal decomposition of titanium deuteride thin films

    International Nuclear Information System (INIS)

    Malinowski, M.E.

    1983-01-01

    The thermal desorption spectra of deuterium from essentially clean titanium deuteride thin films were measured by ramp heating the films in vacuum; the film thicknesses ranged from 20 to 220 nm and the ramp rates varied from 0.5 to about 3 °C s⁻¹. Each desorption spectrum consisted of a low, nearly constant rate at low temperatures followed by a highly peaked rate at higher temperatures. The cleanliness and thinness of the films permitted a description of desorption rates in terms of a simple phenomenological model based on detailed balancing, in which the low-temperature pressure-composition characteristics of the two-phase (α-(α+β)-β) region of the Ti-D system were used as input data. At temperatures below 340 °C the model predictions were in excellent agreement with the experimentally measured desorption spectra. Interpretations of the spectra in terms of "decomposition trajectories" are possible using this model, and this approach is also used to explain deviations of the spectra from the model at temperatures of 340 °C and above. (Auth.)
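
    The detailed-balancing idea equates the desorption flux to the impingement flux the film would receive at its equilibrium pressure. A standard statement of that relation (not necessarily the paper's exact formulation), with P_eq(x, T) the plateau pressure from the pressure-composition isotherms at composition x and m the mass of a D2 molecule:

        J_{\mathrm{des}}(x, T) \;=\; \frac{P_{\mathrm{eq}}(x, T)}{\sqrt{2 \pi m k_{B} T}}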

  17. Structure for the decomposition of safeguards responsibilities

    International Nuclear Information System (INIS)

    Dugan, V.L.; Chapman, L.D.

    1977-01-01

    A major mission of safeguards is to protect against the use of nuclear materials by adversaries to harm society. A hierarchical structure of safeguards responsibilities and activities to assist in this mission is defined. The structure begins with the definition of international or multi-national safeguards and continues through domestic, regional, and facility safeguards. Facility safeguards are decomposed into physical protection and material control responsibilities. In addition, in-transit safeguards systems are considered. An approach to the definition of performance measures for a set of Generic Adversary Action Sequence Segments (GAASSs) is illustrated. These GAASSs begin outside facility boundaries and terminate at some adversary objective which could lead to eventual safeguards risks and societal harm. Societal harm is primarily the result of an adversary who is successful in the theft of special nuclear material or in the sabotage of vital systems which results in the release of material in situ. Within the facility safeguards system, GAASSs are defined in terms of authorized and unauthorized adversary access to materials and components, acquisition of material, unauthorized removal of material, and the compromise of vital components. Each GAASS defines a set of "paths" (ordered sets of physical protection components) and each component provides one or more physical protection "functions" (detection, assessment, communication, delay, neutralization). Functional performance is then developed based upon component design features, environmental factors, and adversary attributes. An example of this decomposition is presented
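
    A toy encoding of the path idea: an ordered sequence of physical-protection components, each providing functions with a performance value. The names and numbers are invented, and the aggregate shown (probability that detection succeeds somewhere along the path) is just one simple example of a path-level measure.

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            functions: dict[str, float]   # function -> performance, in [0, 1]

        path = [
            Component("perimeter sensor", {"detection": 0.90}),
            Component("CCTV station",     {"assessment": 0.80, "communication": 0.95}),
            Component("vault door",       {"delay": 0.85}),
        ]

        # Probability that at least one component along the path detects.
        p_miss = 1.0
        for c in path:
            p_miss *= 1.0 - c.functions.get("detection", 0.0)
        print(f"path-level detection probability: {1.0 - p_miss:.2f}")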

  18. Interactive plant functional group and water table effects on decomposition and extracellular enzyme activity in Sphagnum peatlands

    Science.gov (United States)

    Magdalena M. Wiedermann; Evan S. Kane; Lynette R. Potvin; Erik A. Lilleskov

    2017-01-01

    Peatland decomposition may be altered by hydrology and plant functional groups (PFGs), but exactly how the latter influences decomposition is unclear, as are potential interactions of these factors. We used a factorial mesocosm experiment with intact 1 m³ peat monoliths to explore how PFGs (sedges vs Ericaceae) and water table level individually...