WorldWideScience

Sample records for model-selection process comparing

  1. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for probabilistic wind power forecasting. This model provides non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time-dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on actual data collected from both single and aggregated wind farms.
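
    The online model selection idea in this record can be illustrated with a much simpler stand-in. The sketch below, in Python, keeps a recursively updated probability for each base forecaster and picks the most probable one at every step; the Gaussian base predictors, the forgetting factor and the simulated regime-switching series are illustrative assumptions, not the paper's warped-GP ensemble.

    ```python
    # Sketch: online selection among base forecasting models via recursively updated probabilities.
    # Gaussian base predictors stand in for the warped-GP base models of the paper; the data,
    # forgetting factor and model set are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    T = 300
    regime = (np.arange(T) // 100) % 2               # the generating process switches twice
    y = np.where(regime == 0, 0.3, 0.8) + 0.05 * rng.normal(size=T)

    # Two simple base models: each predicts a fixed mean with known noise level.
    means, sigma = np.array([0.3, 0.8]), 0.05
    log_w = np.log(np.array([0.5, 0.5]))             # model log-probabilities
    forgetting = 0.98                                # discount old evidence to track regime changes
    chosen = np.empty(T, dtype=int)

    for t in range(T):
        chosen[t] = int(np.argmax(log_w))            # select the currently most probable base model
        log_lik = stats.norm.logpdf(y[t], loc=means, scale=sigma)
        log_w = forgetting * log_w + log_lik         # recursive update with exponential forgetting
        log_w -= np.logaddexp.reduce(log_w)          # renormalize

    print("selected model per regime:", [np.bincount(chosen[regime == r]).argmax() for r in (0, 1)])
    ```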

  2. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

    Science.gov (United States)

    Austin, Peter C

    2008-10-01

    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection tended to result in an approximately equal proportion of selected models being equal to the true regression model compared with the use of conventional backward variable elimination. Bootstrap model selection performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
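
    The procedure in this record is easy to prototype. Below is a minimal Python sketch of bootstrap model selection with the 50% inclusion rule, compared against a single pass of backward elimination; the simulated data, the 0.05 elimination threshold, the continuous outcome and the use of statsmodels OLS are illustrative assumptions (the study itself examines a binary outcome).

    ```python
    # Sketch: bootstrap model selection vs. plain backward elimination (illustrative only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, p = 200, 8
    X = rng.normal(size=(n, p))
    beta = np.array([1.0, 0.8, 0.5, 0, 0, 0, 0, 0])   # 3 authentic, 5 noise variables
    y = X @ beta + rng.normal(size=n)

    def backward_eliminate(X, y, alpha=0.05):
        """Drop the least significant predictor until all p-values are below alpha."""
        keep = list(range(X.shape[1]))
        while keep:
            fit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
            pvals = fit.pvalues[1:]                   # skip the intercept
            worst = int(np.argmax(pvals))
            if pvals[worst] < alpha:
                break
            keep.pop(worst)
        return set(keep)

    # Bootstrap: select in each resample, then keep variables chosen in >= 50% of resamples.
    B = 200
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        for j in backward_eliminate(X[idx], y[idx]):
            counts[j] += 1

    bootstrap_model = {j for j in range(p) if counts[j] / B >= 0.5}
    single_pass_model = backward_eliminate(X, y)
    print("bootstrap selection :", sorted(bootstrap_model))
    print("backward elimination:", sorted(single_pass_model))
    ```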

  3. Social Influence Interpretation of Interpersonal Processes and Team Performance Over Time Using Bayesian Model Selection

    NARCIS (Netherlands)

    Johnson, Alan R.; van de Schoot, Rens; Delmar, Frédéric; Crano, William D.

    The team behavior literature is ambiguous about the relations between members’ interpersonal processes—task debate and task conflict—and team performance. From a social influence perspective, we show why members’ interpersonal processes determine team performance over time in small groups. Together,

  4. Chemometrics applications in biotech processes: assessing process comparability.

    Science.gov (United States)

    Bhushan, Nitish; Hadpe, Sandip; Rathore, Anurag S

    2012-01-01

    A typical biotech process starts with the vial of the cell bank, ends with the final product and has anywhere from 15 to 30 unit operations in series. The total number of process variables (input and output parameters) and other variables (raw materials) can add up to several hundred variables. As the manufacturing process is widely accepted to have significant impact on the quality of the product, the regulatory agencies require an assessment of process comparability across different phases of manufacturing (Phase I vs. Phase II vs. Phase III vs. Commercial) as well as other key activities during product commercialization (process scale-up, technology transfer, and process improvement). However, assessing comparability for a process with such a large number of variables is nontrivial and often companies resort to qualitative comparisons. In this article, we present a quantitative approach for assessing process comparability via use of chemometrics. To our knowledge this is the first time that such an approach has been published for biotech processing. The approach has been applied to an industrial case study involving evaluation of two processes that are being used for commercial manufacturing of a major biosimilar product. It has been demonstrated that the proposed approach is able to successfully identify the unit operations in the two processes that are operating differently. We expect this approach, which can also be applied toward assessing product comparability, to be of great use to both the regulators and the industry which otherwise struggle to assess comparability. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  5. Comparing composts formed by different technological processing

    Science.gov (United States)

    Lyckova, B.; Mudrunka, J.; Kucerova, R.; Glogarova, V.

    2017-10-01

    The presented article compares the quality of composts formed by different technological processes. A compost created in a closed fermenter, where ideal conditions for decomposition and conversion of organic substances were ensured, was compared with a compost produced in an open box by community composting. The composts were analysed to determine whether compliance with the composting conditions or better sorting of the raw materials needed for compost production is more important for the final compost. The results of the experiments carried out showed that the quality of the resulting compost cannot be determined unequivocally.

  6. Dissociable learning processes in comparative psychology.

    Science.gov (United States)

    Smith, J David; Church, Barbara A

    2017-08-10

    Comparative and cognitive psychologists interpret performance in different ways. Animal researchers invoke a dominant construct of associative learning. Human researchers acknowledge humans' capacity for explicit-declarative cognition. This article offers a way to bridge a divide that defeats productive cross-talk. We show that animals often challenge the associative-learning construct, and that it does not work to try to stretch the associative-learning construct to encompass these performances. This approach thins and impoverishes that important construct. We describe an alternative approach that restrains the construct of associative learning by giving it a clear operational definition. We apply this approach in several comparative domains to show that different task variants change-in concert-the level of awareness, the declarative nature of knowledge, the dimensional breadth of knowledge, and the brain systems that organize learning. These changes reveal dissociable learning processes that a unitary associative construct cannot explain but a neural-systems framework can explain. These changes define the limit of associative learning and the threshold of explicit cognition. The neural-systems framework can broaden empirical horizons in comparative psychology. It can offer animal models of explicit cognition to cognitive researchers and neuroscientists. It can offer simple behavioral paradigms for exploring explicit cognition to developmental researchers. It can enliven the synergy between human and animal research, promising a productive future for both.

  7. Comparing Binaural Pre-processing Strategies II

    Directory of Open Access Journals (Sweden)

    Regina M. Baumgärtel

    2015-12-01

    Full Text Available Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs. 50% speech reception thresholds (SRT50 were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users.

  8. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation, on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state of the art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  9. Aging in comparative perspective: processes and policies

    National Research Council Canada - National Science Library

    Cook, Ian G; Halsall, Jamie

    2012-01-01

    .... This timely volume analyzes the aging process in various countries, with special focus on the stresses placed on their economies as the numbers of elders increase with fewer young people available...

  10. Comparative analysis of accelerogram processing methods

    International Nuclear Information System (INIS)

    Goula, X.; Mohammadioun, B.

    1986-01-01

    The work described hereinafter is a brief account of an on-going research project concerning high-quality processing of strong-motion recordings of earthquakes. Several processing procedures have been tested, applied to synthetic signals simulating ground motion designed for this purpose. The methods of correction operating in the time domain are seen to be strongly dependent upon the sampling rate. Two methods of low-frequency filtering followed by an integration of accelerations yielded satisfactory results [fr]

  11. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using the GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors helps prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected as the best model the one with average complexity (10 parameters) and the best parameter estimates (model 3).
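
    For approach (3) in this record, the information criteria can be computed directly from each model's calibration residuals. The sketch below shows AIC, AICc and BIC for least-squares calibration under Gaussian errors; KIC would additionally require a Fisher information term, omitted here. The residual sums of squares and the observation count are made-up illustrative values, not results from the study.

    ```python
    # Sketch: ranking calibrated models by information criteria (illustrative values only).
    import numpy as np

    def criteria(ssr, n, k):
        """AIC, AICc and BIC for a least-squares calibration with Gaussian errors.
        ssr: sum of squared residuals, n: number of observations, k: number of parameters."""
        aic = n * np.log(ssr / n) + 2 * k
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)
        bic = n * np.log(ssr / n) + k * np.log(n)
        return aic, aicc, bic

    # Hypothetical calibration results for six models with 6-15 parameters and 50 head observations.
    n_obs = 50
    models = {"M1": (120.0, 6), "M2": (95.0, 10), "M3": (70.0, 10),
              "M4": (68.0, 13), "M5": (66.0, 13), "M6": (65.0, 15)}

    for name, (ssr, k) in models.items():
        aic, aicc, bic = criteria(ssr, n_obs, k)
        print(f"{name}: RMSE={np.sqrt(ssr / n_obs):.2f}  AIC={aic:.1f}  AICc={aicc:.1f}  BIC={bic:.1f}")
    ```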

  12. Chemical identification using Bayesian model selection

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom; Fry, H. A. (Herbert A.); McVey, B. D. (Brian D.); Sander, E. (Eric)

    2002-01-01

    Remote detection and identification of chemicals in a scene is a challenging problem. We introduce an approach that uses some of the image's pixels to establish the background characteristics while other pixels represent the target for which we seek to identify all chemical species present. This leads to a generalized least squares problem in which we focus on 'subset selection' to identify the chemicals thought to be present. Bayesian model selection allows us to approximate the posterior probability that each chemical in the library is present by adding the posterior probabilities of all the subsets which include the chemical. We present results using realistic simulated data for the case with 1 to 5 chemicals present in each target and compare performance to a hybrid of forward and backward stepwise selection procedure using the F statistic.
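
    The subset-summation step described in this record can be sketched for a small library. The toy Python example below enumerates all subsets of a five-component library, scores each with a BIC-style approximation to the marginal likelihood under a uniform model prior, and sums the normalized weights of the subsets containing each chemical to obtain posterior inclusion probabilities; the synthetic spectra and the BIC approximation are assumptions, not the authors' generalized least squares formulation.

    ```python
    # Sketch: posterior inclusion probabilities by summing over all subsets (BIC-approximated evidence).
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n_bands, n_lib = 60, 5
    S = rng.normal(size=(n_bands, n_lib))        # library spectra (columns)
    truth = np.array([1.0, 0.0, 0.6, 0.0, 0.0])  # chemicals 0 and 2 present
    y = S @ truth + 0.1 * rng.normal(size=n_bands)

    def log_evidence(cols):
        """Rough BIC-style approximation of the marginal likelihood of one subset model."""
        k = len(cols)
        if k == 0:
            resid = y
        else:
            beta, *_ = np.linalg.lstsq(S[:, cols], y, rcond=None)
            resid = y - S[:, cols] @ beta
        ssr = float(resid @ resid)
        return -0.5 * (n_bands * np.log(ssr / n_bands) + k * np.log(n_bands))

    subsets = [c for r in range(n_lib + 1) for c in itertools.combinations(range(n_lib), r)]
    logw = np.array([log_evidence(list(c)) for c in subsets])
    w = np.exp(logw - logw.max()); w /= w.sum()   # normalized posterior weights (uniform model prior)

    inclusion = np.zeros(n_lib)
    for weight, cols in zip(w, subsets):
        for j in cols:
            inclusion[j] += weight
    print("posterior inclusion probabilities:", np.round(inclusion, 3))
    ```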

  13. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  14. Graphical tools for model selection in generalized linear models.

    Science.gov (United States)

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Bayesian Model Selection in Geophysics: The evidence

    Science.gov (United States)

    Vrugt, J. A.

    2016-12-01

    Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site, in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method, and the Laplace-Metropolis method.
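
    The two reference methods mentioned at the end of this record can be illustrated on a toy problem where the evidence is tractable. The sketch below estimates log P(D) for a one-parameter Gaussian model both by brute-force Monte Carlo over the prior and by a Laplace approximation at the posterior mode; the data, prior standard deviation and sample sizes are illustrative assumptions, and the multi-dimensional numerical integration estimator introduced in the abstract is not reproduced here.

    ```python
    # Sketch: two estimators of the evidence P(D) for a toy model y_i ~ N(mu, 1), mu ~ N(0, 2^2).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    y = rng.normal(0.7, 1.0, size=20)
    prior_sd = 2.0

    def log_like(mu):
        return np.sum(stats.norm.logpdf(y, loc=mu, scale=1.0))

    # 1) Brute-force Monte Carlo: average the likelihood over draws from the prior.
    draws = rng.normal(0.0, prior_sd, size=20_000)
    ll = np.sum(stats.norm.logpdf(y[None, :], loc=draws[:, None], scale=1.0), axis=1)
    log_ev_mc = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()

    # 2) Laplace approximation around the posterior mode (the posterior is Gaussian here, so it is exact).
    post_var = 1.0 / (len(y) + 1.0 / prior_sd**2)
    post_mean = post_var * np.sum(y)
    log_ev_laplace = (log_like(post_mean) + stats.norm.logpdf(post_mean, 0.0, prior_sd)
                      + 0.5 * np.log(2 * np.pi * post_var))

    print(f"Monte Carlo log-evidence: {log_ev_mc:.3f}")
    print(f"Laplace     log-evidence: {log_ev_laplace:.3f}")
    ```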

  16. Comparative Study of Different Processing Methods for the ...

    African Journals Online (AJOL)

    The two processing methods reduced the cyanide concentration to the minimum level required by the World Health Organization (10 mg/kg). The mechanical pressing-fermentation method removed more cyanide when compared to the fermentation processing method. Keywords: Cyanide, Fermentation, Manihot ...

  17. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
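
    GAMLSS and the Box-Cox Power Exponential model are available in R rather than Python, but the role of a generalized Akaike information criterion with penalty 3, GAIC(3), can be sketched with simpler candidate distributions. The example below fits three scipy.stats families by maximum likelihood and selects the one with the smallest GAIC(3); the simulated scores and the candidate set are illustrative assumptions, not the stepwise GAMLSS procedure studied in the paper.

    ```python
    # Sketch: choosing a score distribution by a generalized AIC with penalty k=3 (GAIC(3)).
    # A simplified stand-in for GAMLSS/BCPE model selection, not the authors' procedure.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    scores = stats.skewnorm.rvs(a=4, loc=100, scale=15, size=400, random_state=rng)

    candidates = {"normal": stats.norm, "skew-normal": stats.skewnorm, "t": stats.t}

    def gaic(dist, data, penalty=3.0):
        params = dist.fit(data)                       # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        return -2.0 * loglik + penalty * len(params)

    results = {name: gaic(dist, scores) for name, dist in candidates.items()}
    for name, value in sorted(results.items(), key=lambda kv: kv[1]):
        print(f"GAIC(3) {name:12s}: {value:.1f}")
    print("selected model:", min(results, key=results.get))
    ```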

  18. Comparative analysis of genomic signal processing for microarray data clustering.

    Science.gov (United States)

    Istepanian, Robert S H; Sungoor, Ala; Nebel, Jean-Christophe

    2011-12-01

    Genomic signal processing is a new area of research that combines advanced digital signal processing methodologies for enhanced genetic data analysis. It has many promising applications in bioinformatics and next generation of healthcare systems, in particular, in the field of microarray data clustering. In this paper we present a comparative performance analysis of enhanced digital spectral analysis methods for robust clustering of gene expression across multiple microarray data samples. Three digital signal processing methods: linear predictive coding, wavelet decomposition, and fractal dimension are studied to provide a comparative evaluation of the clustering performance of these methods on several microarray datasets. The results of this study show that the fractal approach provides the best clustering accuracy compared to other digital signal processing and well known statistical methods.

  19. Magsonic™ Carbothermal Technology Compared with the Electrolytic and Pidgeon Processes

    Science.gov (United States)

    Prentice, Leon H.; Haque, Nawshad

    A broad technology comparison of carbothermal magnesium production with present technologies has not been previously presented. In this paper a comparative analysis of CSIRO's MagSonic™ process is made with the electrolytic and Pidgeon processes. The comparison covers energy intensity (GJ/tonne Mg), labor intensity (person-hours/tonne Mg), capital intensity (USD/tonne annual Mg installed capacity), and Global Warming Potential (GWP, tonnes CO2-equivalent/tonne Mg). Carbothermal technology is advantageous on all measures except capital intensity (where it is roughly twice the capital cost of a similarly-sized Pidgeon plant). Carbothermal and electrolytic production can have comparatively low environmental impacts, with typical emissions one-sixth those of the Pidgeon process. Despite recent progress, the Pidgeon process depends upon abundant energy and labor combined with few environmental constraints. Pressure is expected to increase on environmental constraints and labor and energy costs over the coming decade. Carbothermal reduction technology appears to be competitive for future production.

  20. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  1. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    Science.gov (United States)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    The current growth of data and information occurs rapidly in varying amounts and media. This type of development will eventually produce a large volume of data, better known as Big Data. Business Intelligence (BI) utilizes large amounts of data and information for analysis so that one can obtain important information. This type of information can be used to support the decision-making process. In practice, a process integrating existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). In practice, many applications have been developed to carry out the ETL process, but selecting which application is more time-, cost- and power-efficient can be a challenge. Therefore, the objective of the study was to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Services (SSIS) and Pentaho Data Integration (PDI).

  2. Comparative Study of Different Processing Methods for the ...

    African Journals Online (AJOL)

    MBI

    2014-06-07

    Bitter cassava (Manihot esculenta) is one of the most important staple root crops planted in Nigeria. It contains a substantial quantity of the anti-nutrient factor cyanogenic glucoside, which interferes with digestion and is injurious to human health. This work is aimed at comparing different processing ...

  3. A Comparative Study : Microprogrammed Vs Risc Architectures For Symbolic Processing

    Science.gov (United States)

    Heudin, J. C.; Metivier, C.; Demigny, D.; Maurin, T.; Zavidovique, B.; Devos, F.

    1987-05-01

    It is often claimed that conventional computers are not well suited for human-like tasks: Vision (Image Processing), Intelligence (Symbolic Processing) ... In the particular case of Artificial Intelligence, dynamic type-checking is one example of a basic task that must be improved. The solution implemented in most Lisp workstations consists in a microprogrammed architecture with a tagged memory. Another way to gain efficiency is to design a well suited instruction set for symbolic processing, which reduces the semantic gap between the high level language and the machine code. In this framework, the RISC concept provides a convenient approach to study new architectures for symbolic processing. This paper compares both approaches and describes our project of designing a compact symbolic processor for Artificial Intelligence applications.

  4. Comparing the Governance of Novel Products and Processes of Biotechnology

    DEFF Research Database (Denmark)

    Hansen, Janus

    The emergence of novel products and processes of biotechnology in medicine, industry and agriculture has been accompanied by promises of healthier, safer and more productive lives and societies. However, biotechnology has also served as cause and catalyst of social controversy about the physical...... to start to fill this gap and develop a conceptual framework for comparing and analysing new and emerging modes of governance affiliated with biotechnology in the light of more general approaches to governance. We aim for a framework that can facilitate comparative inquiries and learning across different...

  5. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one or neither is allowed to be correctly specified or misspecified, they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to...
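
    The authors' test is more general than anything shown here, but the basic ingredient of such comparisons can be sketched with a classical Vuong-style statistic: compare the pointwise log-likelihoods of two fitted parametric families and normalize by their standard deviation. The candidate families (normal vs. Laplace) and the simulated data below are illustrative assumptions, not the paper's test.

    ```python
    # Sketch: a classical Vuong-style comparison of two parametric likelihoods (normal vs. Laplace).
    # This only illustrates the idea of comparing pointwise log-likelihoods; the paper's test differs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = rng.standard_t(df=5, size=500)               # true model is neither candidate

    # Fit both candidates by maximum likelihood.
    mu_n, sd_n = stats.norm.fit(x)
    loc_l, scale_l = stats.laplace.fit(x)

    d = stats.norm.logpdf(x, mu_n, sd_n) - stats.laplace.logpdf(x, loc_l, scale_l)
    z = np.sqrt(len(x)) * d.mean() / d.std(ddof=1)   # asymptotically N(0,1) under equivalence
    p = 2 * stats.norm.sf(abs(z))
    print(f"z = {z:.2f}, two-sided p = {p:.3f}")
    print("favors normal" if z > 0 else "favors Laplace")
    ```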

  6. Model selection in statistical historical biogeography of Neotropical insects-The Exophthalmus genus complex (Curculionidae: Entiminae).

    Science.gov (United States)

    Zhang, Guanyang; Basharat, Usmaan; Matzke, Nicholas; Franz, Nico M

    2017-04-01

    Statistical historical biogeographic methods rely on models that represent various biogeographic processes. Until recently model selection in this domain was not widely used, and the impact of differential model selection on inferring biogeographic scenarios was not well understood. Focusing on Neotropical weevils in the Exophthalmus genus complex (EGC) (Insecta: Curculionidae: Entiminae), we compare three commonly used biogeographic models - DIVA (Dispersal-Vicariance Analysis), DEC (Dispersal-Extinction-Cladogenesis) and BayArea (Bayesian Analysis of Biogeography), and examine the impact of modeling founder-event jump dispersal on historical biogeographic reconstructions. We also investigate the biogeographic events that have shaped patterns of distribution, diversification, and endemism in this weevil lineage. We sample representatives of 65 species of the EGC and 26 outgroup terminals from the Neotropics, including Caribbean islands and the mainland. We reconstruct a molecular phylogeny based on six genes and apply molecular dating using a relaxed clock with three fossil calibration points. Historical biogeographic estimations and alternative biogeographic models are computed and compared with the R package BioGeoBEARS. Model selection strongly favors biogeographic models that include founder-event jump dispersal. Without modeling jump dispersal, estimations based on the three biogeographic models are dramatically different, especially for early-diverging nodes. When jump dispersal is included, the three biogeographic models perform similarly. Accordingly, we show that the Neotropical mainland was colonized by Caribbean species in the early Miocene, and that in situ diversification accounts for a majority (∼75%) of the biogeographic events in the EGC. Our study highlights the need to assess wide-ranging historical biogeographic processes - including founder-event jump dispersal - for best-fitting statistical Caribbean biogeographic reconstructions. Moreover

  7. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
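
    The parsimony argument in this record can be made concrete by fitting a few candidate ARMA orders to a series and comparing a small-sample information criterion. The sketch below uses statsmodels on a simulated ARMA(1,1) series; the candidate orders and the use of AICc are illustrative assumptions, not the Internet-server example from the article.

    ```python
    # Sketch: comparing candidate ARMA orders and favouring parsimony via AICc (illustrative only).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import arma_generate_sample

    np.random.seed(5)
    y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=300)  # ARMA(1,1) data

    candidates = [(1, 0, 0), (0, 0, 1), (1, 0, 1), (2, 0, 2), (3, 0, 3)]
    for order in candidates:
        res = ARIMA(y, order=order).fit()
        print(f"ARMA{order}: AICc = {res.aicc:.1f}, parameters = {len(res.params)}")
    ```

    Under this kind of comparison the over-parameterized ARMA(3,3) fit typically loses to the parsimonious ARMA(1,1), which is the point the authors make about mixed models versus long autoregressions.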

  8. Sparse model selection via integral terms

    Science.gov (United States)

    Schaeffer, Hayden; McCalla, Scott G.

    2017-08-01

    Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
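
    A simple way to see the "smallest number of active terms" idea in this record is sequentially thresholded least squares over a candidate feature library, as popularized for sparse identification of dynamics. The sketch below recovers the two active terms of a logistic-growth equation from simulated data; the library, threshold and test system are illustrative assumptions, and the paper's own method uses an integral formulation with a nonconvex penalty rather than this simple scheme.

    ```python
    # Sketch: sparse selection of dynamical terms by sequentially thresholded least squares.
    import numpy as np

    # Simulate logistic growth x' = x - x^2 and build a small feature library.
    dt, T = 0.01, 10.0
    t = np.arange(0, T, dt)
    x = np.empty_like(t); x[0] = 0.1
    for i in range(len(t) - 1):
        x[i + 1] = x[i] + dt * (x[i] - x[i] ** 2)

    dxdt = np.gradient(x, dt)                       # derivative estimate from the trajectory
    library = np.column_stack([np.ones_like(x), x, x**2, x**3])
    names = ["1", "x", "x^2", "x^3"]

    def stls(A, b, threshold=0.1, iters=10):
        """Sequentially thresholded least squares: fit, zero out small coefficients, refit, repeat."""
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        for _ in range(iters):
            small = np.abs(coef) < threshold
            coef[small] = 0.0
            big = ~small
            if big.any():
                coef[big], *_ = np.linalg.lstsq(A[:, big], b, rcond=None)
        return coef

    coef = stls(library, dxdt)
    print({n: round(c, 3) for n, c in zip(names, coef) if c != 0.0})
    ```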

  9. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  10. A comparative kinetic study of SNCR process using ammonia

    Directory of Open Access Journals (Sweden)

    M. Tayyeb Javed

    2008-03-01

    Full Text Available The paper presents comparative kinetic modelling of nitrogen oxides (NOx) removal from flue gases by the selective non-catalytic reduction process using ammonia as reducing agent. The computer code SENKIN is used in this study with three published chemical kinetic mechanisms: Zanoelo, Kilpinen and Skreiberg. Kinetic modeling was performed for an isothermal plug flow reactor at atmospheric pressure so as to compare it with the experimental results. A 500 ppm NOx background in the flue gas is considered and kept constant throughout the investigation. The ammonia performance was modeled in the range of 750 to 1250 ºC using the molar ratios NH3/NOx from 0.25 to 3.0 and residence times up to 1.5 seconds. The modeling using all the mechanisms exhibits and confirms a temperature window of NOx reduction with ammonia. It was observed that 80% NOx reduction efficiency could be achieved if the flue gas is given 300 msec to react with ammonia while it is passing through a section within a temperature range of 910 to 1060 ºC (Kilpinen mechanism), 925 to 1030 ºC (Zanoelo mechanism), or 890 to 1090 ºC (Skreiberg mechanism).

  11. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  12. Comparative study of hexamethyldisiloxane photofragmentation through multiphotonic and monophotonic processes

    Science.gov (United States)

    Quintella, Cristina M.; de Souza, G. Gerson B.; Mundim, M. S. P.

    1998-05-01

    A comparative study of the hexamethyldisiloxane (HMDSO) molecule photofragmentation induced by laser multiphotonic (MPI) and synchrotron monophotonic (SR) processes is presented. The HMDSO sample was effusively expanded into the vacuum chamber and fragmented by either laser or synchrotron irradiation. The resulting ions were detected by a time-of-flight spectrometer using both electron-ion and ion-ion coincidence techniques. The parent ion has not been observed in either process, suggesting its instability. MPI-induced fragmentation is characterized by a high ionic yield (IY) in the lighter fragments region. The MPI atomization is severe, generating ions like C+ and Si+ that are absent from the SR spectra. The doubly-charged ions SiOSi(CH3)2++ and SiOSi(CH3)4++ are observed in the SR spectra. SR and MPI fragmentation have a common main route: the methyl group ejection yielding m/q = 147, 148, 149 and m/q = 15. The former presents a higher IY, suggesting that the positive charge stays preferentially with the more massive fragment. Through MPI there is another route: the Si-O bond breakage yielding m/q = 73, 74, 75 and m/q = 89 (Si(CH3)3+ and OSi(CH3)3+). The metastable doubly charged ions were SiOSiC1,2,3,6Hn++ and OSiC3Hn++ in the SR case, and a wider fragment mass range was observed through MPI.

  13. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
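
    The contrast drawn in this record between a post-model-selection estimator and model averaging can be sketched in a few lines: estimate one coefficient either from the single AIC-best model or as an Akaike-weight average over candidate models. The data-generating process, the two candidate regressions and the use of AIC weights below are illustrative assumptions.

    ```python
    # Sketch: a post-model-selection estimate versus an AIC-weighted model average for one coefficient.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 100
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)        # correlated covariate
    y = 1.0 + 0.5 * x1 + 0.2 * x2 + rng.normal(size=n)

    X_small = sm.add_constant(np.column_stack([x1]))
    X_full = sm.add_constant(np.column_stack([x1, x2]))
    fits = [sm.OLS(y, X_small).fit(), sm.OLS(y, X_full).fit()]

    # Post-model-selection estimator: use the x1 coefficient from the AIC-best model only.
    best = min(fits, key=lambda f: f.aic)
    beta_pms = best.params[1]

    # Model averaging: weight both estimates of the x1 coefficient by Akaike weights.
    aics = np.array([f.aic for f in fits])
    w = np.exp(-0.5 * (aics - aics.min())); w /= w.sum()
    beta_avg = w[0] * fits[0].params[1] + w[1] * fits[1].params[1]

    print(f"selected-model estimate: {beta_pms:.3f}")
    print(f"model-averaged estimate: {beta_avg:.3f}  (weights {np.round(w, 2)})")
    ```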

  14. A comparative study on microwave and routine tissue processing

    Directory of Open Access Journals (Sweden)

    T Mahesh Babu

    2011-01-01

    Conclusions: The differences in the individual scores given by different observers for the various parameters included in the study were statistically insignificant; however, the overall quality of microwave-processed and microwave-stained slides appeared slightly better than that of conventionally processed and stained slides.

  15. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.

  16. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. The article presents a theoretical comparison of business rule and business process modeling languages, and compares sets of business process modeling languages and business rule representation languages according to selected modeling aspects. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  17. A comparative study of gender roles in indigenous processing and ...

    African Journals Online (AJOL)

    This study investigated the gender roles in indigenous processing and marketing of yam and cassava in central Benue State, Nigeria. The study was carried out in 3 Local Government Areas randomly selected out of 7 Local Government Areas in the zone. A sample of 115 (51 males and 64 females) farmers were randomly ...

  18. Education and Social Stratification Processes in Comparative Perspective.

    Science.gov (United States)

    Kerckhoff, Alan C.

    2001-01-01

    Discusses three characteristics of educational systems that have been used to explain social stratification processes: stratification, standardization, and vocational specificity. Describes how these characteristics affect the movement of students through school and into the labor force in France, Germany, Great Britain, and the United States.…

  19. A Comparative Study of Point Cloud Data Collection and Processing

    Science.gov (United States)

    Pippin, J. E.; Matheney, M.; Gentle, J. N., Jr.; Pierce, S. A.; Fuentes-Pineda, G.

    2016-12-01

    Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic data for scientific, environmental, engineering and planning purposes. These data sets are valuable for applications of interest across a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies and the expense of aerial data collection, it is often difficult to collect and distribute these datasets. Furthermore, the data can be technically challenging to process, requiring software and computing resources not readily available to many users. This study presents a comparison of advanced computing hardware and software that is used to collect and process point cloud datasets, such as LIDAR scans. Activities included implementation and testing of open source libraries and applications for point cloud data processing, such as Meshlab, Blender, PDAL, and PCL. Additionally, a suite of commercial scale applications, Skanect and Cloudcompare, was applied to raw datasets. Handheld hardware solutions, a Structure Scanner and Xbox 360 Kinect V1, were tested for their ability to scan at three field locations. The resultant data projects successfully scanned and processed subsurface karst features ranging from small stalactites to large rooms, as well as a surface waterfall feature. Outcomes support the feasibility of rapid sensing in 3D at field scales.

  20. The Bologna Process in Portugal and Poland: A comparative study

    Directory of Open Access Journals (Sweden)

    Eduardo Tomé

    2016-04-01

    Full Text Available We analyze the consequences of the introduction of the EU-directed Bologna Process in Portuguese and Polish universities. Specifically, we study how the Bologna Process has impacted the employment situations of graduates in Portugal and Poland. Concerning methodology, we use available official data on the implementation of the Bologna Process in Poland and Portugal. We have found that the investment in Higher Education (HE) stalled in both countries in the years since the implementation of the Bologna Process due to massive budgetary restrictions. Nevertheless, the stock of HE graduates increased massively, seemingly because the authorities thought that the free market should lead the HE market in the two countries. Employment prospects, unemployment prospects and wages of graduates continued to be much higher than those of non-graduates. But an unexpected divide appeared between graduates and Masters/PhDs, with important social consequences. While the former "saved" themselves and prospered, going into high-skilled jobs, the latter had to endure minimum-wage and under-skilled occupations. The low payment for these youngsters was also justified because the supply of HE graduates increased with Bologna but the demand by companies did not match it. In fact, both Portugal and Poland have stronger needs on the demand side of the market than on the supply side. Finally, both markets continue to be essentially public and the experiences of privatization did not succeed much. In terms of social implications, the Bologna Process faces in both countries the massive and decisive challenge of eliminating youth unemployment and emigration, but this can only be done with the cooperation of companies that should create high-paid and high-skilled jobs. Only when this occurs will the Bologna Process achieve its ultimate goal of transforming Portugal and Poland into high-skilled equilibria. Let us hope it happens, for the good of the two countries and particularly for the

  1. Family process and content: comparing families of suicide ...

    African Journals Online (AJOL)

    Subjects and Methods: This was a causal‑comparative study. Our study population included three groups of single men, including suicide attempters, HIV positive patients and general population in Southern Iran. Our sample size was 180 male individuals including 60 suicide attempters referring to one of hospitals in Shiraz ...

  2. A comparative life cycle assessment of process water treatment ...

    African Journals Online (AJOL)

    Two different raw water desalination technologies, an existing ion exchange plant and a proposed reverse osmosis intervention, are compared by life cycle assessment for the production of 1 M. of boiler feed water, in the context of the Secunda industrial complex situated in Mpumalanga, South Africa. The proposed reverse ...

  3. A Comparative Analysis of Short Time Series Processing Methods

    OpenAIRE

    Kiršners, A; Borisovs, A

    2012-01-01

    This article analyzes the traditional time series processing methods that are used to perform the task of short time series analysis in demand forecasting. The main aim of this paper is to scrutinize the ability of these methods to be used when analyzing short time series. The analyzed methods include exponential smoothing, exponential smoothing with the development trend and moving average method. The paper gives the description of the structure and main operating princi...

  4. Novel metrics for growth model selection.

    Science.gov (United States)

    Grigsby, Matthew R; Di, Junrui; Leroux, Andrew; Zipunnikov, Vadim; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-01-01

    Literature surrounding the statistical modeling of childhood growth data involves a diverse set of potential models from which investigators can choose. However, the lack of a comprehensive framework for comparing non-nested models leads to difficulty in assessing model performance. This paper proposes a framework for comparing non-nested growth models using novel metrics of predictive accuracy based on modifications of the mean squared error criteria. Three metrics were created: normalized, age-adjusted, and weighted mean squared error (MSE). Predictive performance metrics were used to compare linear mixed effects models and functional regression models. Prediction accuracy was assessed by partitioning the observed data into training and test datasets. This partitioning was constructed to assess prediction accuracy for backward (i.e., early growth), forward (i.e., late growth), in-range, and new-individual prediction. Analyses were done with height measurements from 215 Peruvian children with data spanning from near birth to 2 years of age. Functional models outperformed linear mixed effects models in all scenarios tested. In particular, prediction errors for functional concurrent regression (FCR) and functional principal component analysis models were approximately 6% lower when compared to linear mixed effects models. When we weighted subject-specific MSEs according to subject-specific growth rates during infancy, we found that FCR was the best performer in all scenarios. With this novel approach, we can quantitatively compare non-nested models and weight subgroups of interest to select the best performing growth model for a particular application or problem at hand.
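
    The three metrics named in this record can be sketched schematically: a normalized MSE scaled by the outcome variance, an age-adjusted MSE averaged over age bins, and a weighted MSE that averages per-child errors with child-specific weights. The definitions below are simplified illustrations, and the simulated heights, predictions and weights are assumptions rather than the paper's exact formulas.

    ```python
    # Sketch: schematic normalized, age-adjusted and weighted MSE for growth predictions.
    # Simplified illustrative definitions, not necessarily the paper's exact metrics.
    import numpy as np

    rng = np.random.default_rng(7)
    n_children, obs_per_child = 30, 10
    subject = np.repeat(np.arange(n_children), obs_per_child)
    age = rng.uniform(0, 24, size=subject.size)                  # age in months
    height = 50 + 1.1 * age - 0.015 * age**2 + rng.normal(0, 1.5, size=subject.size)
    pred = 50 + 1.05 * age - 0.013 * age**2                      # predictions from some growth model

    err2 = (height - pred) ** 2
    mse = err2.mean()
    normalized_mse = mse / height.var()                          # scale-free version of the MSE

    # Age-adjusted: average the MSE within age bins so densely sampled ages do not dominate.
    bins = np.digitize(age, np.arange(6, 25, 6))
    age_adjusted_mse = np.mean([err2[bins == b].mean() for b in np.unique(bins)])

    # Weighted: average per-child MSEs with child-specific weights (e.g. infancy growth rate).
    weights = rng.uniform(0.5, 1.5, size=n_children)             # placeholder per-child weights
    per_child_mse = np.array([err2[subject == k].mean() for k in range(n_children)])
    weighted_mse = np.average(per_child_mse, weights=weights)

    print(f"MSE={mse:.2f}  normalized={normalized_mse:.4f}  "
          f"age-adjusted={age_adjusted_mse:.2f}  weighted={weighted_mse:.2f}")
    ```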

  5. THOUGHTS ON THE REGIONALIZATION PROCESS IN COMPARATIVE LAW

    Directory of Open Access Journals (Sweden)

    CLAUDIA GILIA

    2013-05-01

    Full Text Available Regionalization is a complex and smooth process in terms of the territorial organization of the states that chose this form of organization, but also from the perspective of how the regional systems have understood how to exercise their competences. The regionalization process has not only yielded benefits for the inhabitants of those administrative entities; it has also caused problems for national governments (e.g. demands for the federalization of the state in question, proposals to gain independence from the state they are part of, issues related to state security such as terrorist attacks, etc.). Our paper assesses two states that chose this form of regionalization: the United Kingdom of Great Britain and Northern Ireland, and Spain. Given the context of the upcoming constitutional and legislative changes envisaged by the Romanian government, we consider that our research can be regarded as a useful tool, designed to help develop certain constitutional directives intended to draw the best solutions from European experiences.

  6. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    The kernels considered include polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based ... applicable, and we recommend their use instead of the popular polynomial kernels in general settings, in which no information on the data-generating process is available.

  7. Comparing Different Approaches for Processing GRACE Level-1 Data

    Science.gov (United States)

    Naeimi, Majid

    2010-05-01

    Three different approaches for determining global gravity field solutions from GRACE satellites are presented and compared. Gravity field solutions - the so-called GRACE level-2 data - are mainly spherical harmonic expansions of the Earth's gravitational potential and are widely used by the geosciences community. Level-2 data are obtained via the functional modeling of GRACE level-1 data which are in principle the GRACE orbit, observed by GPS high-low and K-band low-low satellite-to-satellite tracking as well as on-board accelerometry. There are several approaches to connect the Earth's gravitational potential to the level-1 observations. In this research study we compare three different approaches using simulated GRACE level-1 data. The methods being considered here are the acceleration approach, the energy balance approach and the integral equation method. This work is part of the cooperation between Institut für Erdmessung (IfE) and Albert Einstein Institut (AEI) at Leibniz Universität Hannover, Deutsches Geodätisches Forschungsinstitut (DGFI) and Bayerische Kommission für die Internationale Erdmessung (BEK) in Munich and Deutsches GeoforschungsZentrum (GFZ) in Potsdam. Each institution will apply one of the above mentioned methods. Features and typical characteristics of each approach are discussed.

  8. Statistical Model Selection for TID Hardness Assurance

    Science.gov (United States)

    Ladbury, R.; Gorelick, J. L.; McClure, S.

    2010-01-01

    Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments for data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data - that is, data for similar parts fabricated in the same process as the part under qualification. This is despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot and part-type-to-part-type variation. The methods we develop apply not just for qualification decisions, but also for quality control and detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic providing a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.

  9. Comparative assessment of licensing processes of uranium mines in Brazil

    International Nuclear Information System (INIS)

    Silva, K.M.; Menezes, R.M.; Mezrahi, A.

    2002-01-01

    Commercial operation of uranium mining and milling started in Brazil, at the Pocos de Caldas Unit, State of Minas Gerais, in 1982. The Pocos de Caldas Unit was licensed by the Brazilian Regulatory Body (CNEN) and is now in the decommissioning process. In 2000, a new mining and milling installation, the Caetite Unit, located in the State of Bahia, started operation. This paper will discuss how the Brazilian Nuclear Energy Commission is licensing the Caetite Unit based on the lessons learned from the Pocos de Caldas Unit. The objective is to draw attention to the importance of the safety assessment for a new unit, especially considering that some wrong decisions were taken for the Pocos de Caldas unit. These decisions led to less effective long term solutions to protect the environment. Notwithstanding the differences between the two units, it is of great value to use the acquired experience to avoid or minimize the short, medium and long term impacts to the environment and population in the new operation. (author)

  10. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  11. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  12. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter estimation in biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
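
    The core idea of the abstract, treating the unknown parameters as additional state variables and updating them recursively as measurements arrive, can be illustrated with a short sketch. The decay model, noise levels and function name below are assumptions chosen for illustration, not the authors' implementation:

      # Illustrative sketch: estimate an unknown decay rate k in dx/dt = -k*x by
      # augmenting the state with the parameter and running an extended Kalman filter.
      import numpy as np

      def ekf_parameter_estimate(y, dt=0.1, r=0.05 ** 2, q=1e-6):
          # State z = [x, k]; we observe x only and treat k as a constant to be learned.
          z = np.array([y[0], 0.5])            # crude initial guess for k
          P = np.diag([r, 1.0])                # initial state covariance
          Q = np.diag([q, q])                  # small process noise keeps k adaptable
          H = np.array([[1.0, 0.0]])
          for meas in y[1:]:
              x, k = z
              z_pred = np.array([x - k * x * dt, k])   # Euler step of dx/dt = -k*x
              F = np.array([[1.0 - k * dt, -x * dt],
                            [0.0, 1.0]])               # Jacobian of the transition
              P = F @ P @ F.T + Q
              S = (H @ P @ H.T)[0, 0] + r              # innovation variance (scalar)
              K = (P @ H.T) / S                        # Kalman gain, shape (2, 1)
              z = z_pred + K[:, 0] * (meas - z_pred[0])
              P = (np.eye(2) - K @ H) @ P
          return z

      t = np.arange(0.0, 5.0, 0.1)
      rng = np.random.default_rng(0)
      obs = 2.0 * np.exp(-1.3 * t) + rng.normal(0.0, 0.05, size=t.size)
      print(ekf_parameter_estimate(obs))   # second entry should move towards the true k = 1.3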

  13. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
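
    To make the contrast between the two analysis strategies concrete, a minimal sketch is given below. It compares a t-test restricted to the final visit with a linear mixed-effects model fitted to all visits; the file name and column names (id, week, treat, y) are hypothetical, and statsmodels' mixedlm is used only as a stand-in for the authors' model-selection procedure:

      import pandas as pd
      from scipy import stats
      import statsmodels.formula.api as smf

      df = pd.read_csv("trial_long.csv")   # hypothetical file: one row per subject per visit

      # (a) t-test at the final visit only: simple but discards all earlier measurements
      last = df[df["week"] == df["week"].max()]
      t_stat, p_val = stats.ttest_ind(last.loc[last["treat"] == 1, "y"],
                                      last.loc[last["treat"] == 0, "y"])
      print("last-visit t-test: t = %.2f, p = %.3f" % (t_stat, p_val))

      # (b) mixed-effects model on all visits: random intercept per subject, with the
      # treatment-by-time interaction carrying the longitudinal treatment effect
      fit = smf.mixedlm("y ~ week * treat", df, groups=df["id"]).fit()
      print(fit.summary())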

  14. The Governance of Higher Education Regionalisation: Comparative Analysis of the Bologna Process and MERCOSUR-Educativo

    Science.gov (United States)

    Verger, Antoni; Hermo, Javier Pablo

    2010-01-01

    The article analyses two processes of higher education regionalisation, MERCOSUR-Educativo in Latin America and the Bologna Process in Europe, from a comparative perspective. The comparative analysis is centered on the content and the governance of both processes and, specifically, on the reasons of their uneven evolution and implementation. We…

  15. The governance of higher education regionalisation: comparative analysis of the Bologna Process and MERCOSUR-Educativo

    NARCIS (Netherlands)

    Verger, A.; Hermo, J.P.

    2010-01-01

    The article analyses two processes of higher education regionalisation, MERCOSUR‐Educativo in Latin America and the Bologna Process in Europe, from a comparative perspective. The comparative analysis is centered on the content and the governance of both processes and, specifically, on the reasons of

  16. Nonclinical comparability studies of recombinant human arylsulfatase A addressing manufacturing process changes.

    Science.gov (United States)

    Wright, Teresa; Li, Aiqun; Lotterhand, Jason; Graham, Anne-Renee; Huang, Yan; Avila, Nancy; Pan, Jing

    2018-01-01

    Recombinant human arylsulfatase A (rhASA) is in clinical development for the treatment of patients with metachromatic leukodystrophy (MLD). Manufacturing process changes were introduced to improve robustness and efficiency, resulting in higher levels of mannose-6-phosphate and sialic acid in post-change (process B) compared with pre-change (process A) rhASA. A nonclinical comparability program was conducted to compare process A and process B rhASA. All doses were administered intrathecally. Pharmacodynamic comparability was evaluated in immunotolerant MLD mice, using immunohistochemical staining of lysosomal-associated membrane protein-1 (LAMP-1). Pharmacokinetic comparability was assessed in juvenile cynomolgus monkeys dosed once with 6.0 mg (equivalent to 100 mg/kg of brain weight) process A or process B rhASA. Biodistribution was compared by quantitative whole-body autoradiography in rats. Potential toxicity of process B rhASA was evaluated by repeated rhASA administration at doses of 18.6 mg in juvenile cynomolgus monkeys. The specific activities for process A and process B rhASA were 89 U/mg and 106 U/mg, respectively, which were both well within the target range for the assay. Pharmacodynamic assessments showed no statistically significant differences in LAMP-1 immunohistochemical staining in the spinal cord and in most of the brain areas assessed between process A and B rhASA-dosed mice. LAMP-1 staining was reduced with both process A and B rhASA compared with vehicle, supporting its activity. Concentration-time curves in cerebrospinal fluid and serum of cynomolgus monkeys were similar with process A and B rhASA. Process A and B rhASA were similar in terms of their pharmacokinetic parameters and biodistribution data. No process B rhASA-related toxicity was detected. In conclusion, manufacturing process changes did not affect the pharmacodynamic, pharmacokinetic or safety profiles of process B rhASA relative to process A rhASA.

  17. Discounting model selection with area-based measures: A case for numerical integration.

    Science.gov (United States)

    Gilroy, Shawn P; Hantula, Donald A

    2018-03-01

    A novel method for analyzing delay discounting data is proposed. This newer metric, a model-based Area Under Curve (AUC) combining approximate Bayesian model selection and numerical integration, was compared to the point-based AUC methods developed by Myerson, Green, and Warusawitharana (2001) and extended by Borges, Kuang, Milhorn, and Yi (2016). Using data from computer simulation and a published study, comparisons of these methods indicated that a model-based form of AUC offered a more consistent and statistically robust measurement of area than provided by using point-based methods alone. Beyond providing a form of AUC directly from a discounting model, numerical integration methods permitted a general calculation in cases when the Effective Delay 50 (ED50) measure could not be calculated. This allowed discounting model selection to proceed in conditions where data are traditionally more challenging to model and measure, a situation where point-based AUC methods are often enlisted. Results from simulation and existing data indicated that numerical integration methods extended both the area-based interpretation of delay discounting as well as the discounting model selection approach. Limitations of point-based AUC as a first-line analysis of discounting and additional extensions of discounting model selection were also discussed. © 2018 Society for the Experimental Analysis of Behavior.
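
    A minimal sketch of a model-based AUC is shown below, assuming a one-parameter hyperbolic discounting model and made-up indifference points; the actual approach in the paper couples this integration step with approximate Bayesian model selection across several candidate models:

      # Fit a candidate discounting model to observed indifference points, then
      # integrate the fitted curve numerically and normalize by the delay range.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.integrate import quad

      delays = np.array([1, 7, 30, 90, 180, 365])               # days (illustrative)
      values = np.array([0.95, 0.85, 0.60, 0.40, 0.30, 0.20])   # proportion of delayed amount

      def hyperbolic(d, k):          # Mazur's one-parameter hyperbolic model
          return 1.0 / (1.0 + k * d)

      (k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])

      area, _ = quad(hyperbolic, 0, delays.max(), args=(k_hat,))
      auc = area / delays.max()      # model-based AUC lies between 0 and 1
      print("k = %.4f, model-based AUC = %.3f" % (k_hat, auc))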

  18. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... multiversion programming; hardware can be repaired by spare modules, which is not the case for software; preventive maintenance is very important

  19. Identification of Distillation Process Dynamics Comparing Process Knowledge and Black Box Based Approaches

    DEFF Research Database (Denmark)

    Rasmussen, Knud H; Nielsen, C. S.; Jørgensen, Sten Bay

    1990-01-01

    A distillation plant equipped with a heat pump separates a mixture of isopropanol and methanol. The mixture contains some water as impurity. The model development aims at dual composition control design, where top and bottom compositions should follow the setpoints, and disturbances should...... obtained in closed loop operation of the distillation plant. In the present work, the two approaches are compared in terms of how well the model fits, and predicts the data, the conditioning of the model parameter estimation, and convenience of usage....

  20. COMPARATIVE RESEARCHES OF THE HIGH-STRENGTH CAST IRON MICROSTRUCTURE OF AFTER LASER AND PLASMA PROCESSING

    Directory of Open Access Journals (Sweden)

    V. I. Gurinovich

    2012-01-01

    Full Text Available Comparative studies of the microstructure of high-strength cast iron after laser and plasma processing were carried out. It is shown that the peculiarity of plasma processing is the formation of deeper hardened layers with a hardness of 9500-10000 MPa. With laser processing, the depth of the strengthened layers is smaller (about 0.5-0.8 mm) and their hardness is higher (up to 11000 MPa).

  1. Generating Artificial Event Logs with Sufficient Discriminatory Power to Compare Process Discovery Techniques

    OpenAIRE

    JOUCK, Toon; Depaire, Benoit

    2014-01-01

    Past research revealed issues with artificial event data used for comparative analysis of process mining algorithms. The aim of this research is to design, implement and validate a framework for producing artificial event logs which should increase the discriminatory power of artificial event logs when evaluating process discovery techniques. Keywords: artificial event logs; event log simulation; performance measurement of business processes

  2. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    as if only these had been included in the model from the outset. In particular, this implies that it is able to discriminate between stationary and nonstationary autoregressions and it thereby constitutes an addition to the set of unit root tests. Next, and important in practice, we show that choosing...... to perform conservative model selection it has power even against shrinking alternatives of this form and compare it to the plain Lasso....

  3. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  4. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection makes it difficult to apply in face recognition. To overcome this shortcoming, we utilize the advantage of uniform design (space-filling designs and uniform scattering theory) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on an SVM with the optimal model, obtained by replacing grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
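
    The idea of trading a dense grid search for a handful of well-spread design points can be sketched as follows; the candidate (C, gamma) points and the digits data set are illustrative stand-ins, not the uniform-design table used in the paper:

      # Evaluate an SVM over a small set of log-spaced design points and keep the
      # hyperparameters with the best cross-validated accuracy.
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.datasets import load_digits

      X, y = load_digits(return_X_y=True)

      # Few trial points spread over the (log C, log gamma) search region
      design = [(2.0 ** c, 2.0 ** g) for c, g in
                [(-1, -9), (1, -7), (3, -5), (5, -7), (7, -9), (9, -5)]]

      best = max(design, key=lambda p: cross_val_score(
          SVC(C=p[0], gamma=p[1]), X, y, cv=3).mean())
      print("selected C=%g, gamma=%g" % best)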

  5. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
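
    A much-simplified sketch of the design criterion is given below: candidate experiments are ranked by the divergence between the two models' predictive distributions. The paper estimates the Jensen-Shannon divergence with a k-nearest-neighbour estimator on multivariate predictive samples; the sketch substitutes one-dimensional histograms and hypothetical predictive models purely to show the idea:

      import numpy as np
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(1)

      def predictive_samples(model, experiment, n=5000):
          # Hypothetical stand-in: draw predictions of the measured quantity under a
          # given model and experimental condition (here just shifted Gaussians).
          loc = {"A": 1.0, "B": 1.2}[model] * experiment
          return rng.normal(loc, 0.5, size=n)

      def discriminability(experiment):
          a = predictive_samples("A", experiment)
          b = predictive_samples("B", experiment)
          bins = np.histogram_bin_edges(np.concatenate([a, b]), bins=50)
          pa, _ = np.histogram(a, bins=bins, density=True)
          pb, _ = np.histogram(b, bins=bins, density=True)
          return jensenshannon(pa, pb) ** 2    # squared distance = JS divergence

      experiments = [0.5, 1.0, 2.0, 4.0]
      print(max(experiments, key=discriminability))  # condition expected to separate the models best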

  6. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  7. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  8. Comparative assessment of TRU waste forms and processes. Volume II. Waste form data, process descriptions, and costs

    International Nuclear Information System (INIS)

    Ross, W.A.; Lokken, R.O.; May, R.P.; Roberts, F.P.; Thornhill, R.E.; Timmerman, C.L.; Treat, R.L.; Westsik, J.H. Jr.

    1982-09-01

    This volume contains supporting information for the comparative assessment of the transuranic waste forms and processes summarized in Volume I. Detailed data on the characterization of the waste forms selected for the assessment, process descriptions, and cost information are provided. The purpose of this volume is to provide additional information that may be useful when using the data in Volume I and to provide greater detail on particular waste forms and processes. Volume II is divided into two sections and two appendixes. The first section provides information on the preparation of the waste form specimens used in this study and additional characterization data in support of that in Volume I. The second section includes detailed process descriptions for the eight processes evaluated. Appendix A lists the results of MCC-1 leach test and Appendix B lists additional cost data. 56 figures, 12 tables

  9. Thermal versus high pressure processing of carrots: A comparative pilot-scale study on equivalent basis

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, Van der L.; Grauwet, T.; Verlinde, P.; Matser, A.M.; Hendrickx, M.; Loey, van A.

    2012-01-01

    This report describes the first study comparing different high pressure (HP) and thermal treatments at intensities ranging from mild pasteurization to sterilization conditions. To allow a fair comparison, the processing conditions were selected based on the principles of equivalence. Moreover,

  10. Investigation of near dry EDM compared with wet and dry EDM processes

    Energy Technology Data Exchange (ETDEWEB)

    Gholipoor, Ahad [Islamic Azad University of Tabriz, Tabriz (Iran, Islamic Republic of); Baseri, Hamid [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Shabgard, Mohammad Reza [University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2015-05-15

    Material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR) obtained by the near-dry EDM process were compared with wet and dry EDM at three levels of discharge energy in drilling of SPK steel. The integrity of surfaces machined by this process was studied and compared with the wet and dry EDM processes by scanning electron microscopy (SEM). The results showed that at a high level of discharge energy, wet EDM has the highest MRR, TWR and SR and dry EDM has the lowest MRR, TWR and SR, while at low discharge energy levels the near-dry EDM process has the highest MRR and the lowest SR. SEM micrographs showed that the quality of the surface obtained by the near-dry EDM process is better than that of the others, and the surfaces machined by near-dry EDM have relatively fewer micro-cracks and craters.

  11. Investigation of near dry EDM compared with wet and dry EDM processes

    International Nuclear Information System (INIS)

    Gholipoor, Ahad; Baseri, Hamid; Shabgard, Mohammad Reza

    2015-01-01

    Material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR) obtained by the near-dry EDM process were compared with wet and dry EDM at three levels of discharge energy in drilling of SPK steel. The integrity of surfaces machined by this process was studied and compared with the wet and dry EDM processes by scanning electron microscopy (SEM). The results showed that at a high level of discharge energy, wet EDM has the highest MRR, TWR and SR and dry EDM has the lowest MRR, TWR and SR, while at low discharge energy levels the near-dry EDM process has the highest MRR and the lowest SR. SEM micrographs showed that the quality of the surface obtained by the near-dry EDM process is better than that of the others, and the surfaces machined by near-dry EDM have relatively fewer micro-cracks and craters.

  12. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning.

    Science.gov (United States)

    Ponciano, José Miguel; Taper, Mark L; Dennis, Brian; Lele, Subhash R

    2009-02-01

    Hierarchical statistical models are increasingly being used to describe complex ecological processes. The data cloning (DC) method is a new general technique that uses Markov chain Monte Carlo (MCMC) algorithms to compute maximum likelihood (ML) estimates along with their asymptotic variance estimates for hierarchical models. Despite its generality, the method has two inferential limitations. First, it only provides Wald-type confidence intervals, known to be inaccurate in small samples. Second, it only yields ML parameter estimates, but not the maximized likelihood values used for profile likelihood intervals, likelihood ratio hypothesis tests, and information-theoretic model selection. Here we describe how to overcome these inferential limitations with a computationally efficient method for calculating likelihood ratios via data cloning. The ability to calculate likelihood ratios allows one to do hypothesis tests, construct accurate confidence intervals and undertake information-based model selection with hierarchical models in a frequentist context. To demonstrate the use of these tools with complex ecological models, we reanalyze part of Gause's classic Paramecium data with state-space population models containing both environmental noise and sampling error. The analysis results include improved confidence intervals for parameters, a hypothesis test of laboratory replication, and a comparison of the Beverton-Holt and the Ricker growth forms based on a model selection index.

  13. Waste water processing technology for Space Station Freedom - Comparative test data analysis

    Science.gov (United States)

    Miernik, Janie H.; Shah, Burt H.; Mcgriff, Cindy F.

    1991-01-01

    Comparative tests were conducted to choose the optimum technology for waste water processing on SSF. A thermoelectric integrated membrane evaporation (TIMES) subsystem and a vapor compression distillation subsystem (VCD) were built and tested to compare urine processing capability. Water quality, performance, and specific energy were compared for conceptual designs intended to function as part of the water recovery and management system of SSF. The VCD is considered the most mature and efficient technology and was selected to replace the TIMES as the baseline urine processor for SSF.

  14. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
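
    A minimal example of the p >> n setting described above, using the LASSO with cross-validation on synthetic data (the data and dimensions are invented for illustration):

      # Sparse model selection with a cross-validated LASSO when the number of
      # candidate variables far exceeds the number of samples.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)
      n, p = 80, 1000                          # far more candidate variables than samples
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]   # only five truly relevant predictors
      y = X @ beta + rng.normal(scale=0.5, size=n)

      fit = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(fit.coef_)
      print("selected variables:", selected)   # ideally a sparse set containing indices 0..4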

  15. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant....

  16. Benchmarking healthcare logistics processes: a comparative case study of Danish and US hospitals

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Andersen, Bjørn; Jacobsen, Peter

    2017-01-01

    initiatives prevalent in manufacturing industries such as lean, business process reengineering and benchmarking have seen an increase in use in healthcare. This study investigates how logistics processes in a hospital can be benchmarked to improve process performance. A comparative case study of the bed...... logistics process and the pharmaceutical distribution process was conducted at a Danish and a US hospital. The case study results identified decision criteria for designing efficient and effective healthcare logistics processes. The most important decision criteria were related to quality, security...... of supply and employee engagement. Based on these decision criteria, performance indicators were developed to enable benchmarking of logistics processes in healthcare. The study contributes to the limited literature on healthcare logistics benchmarking. Furthermore, managers in healthcare logistics...

  17. Neuronal oscillations reveal the processes underlying intentional compared to incidental learning in children and young adults.

    Directory of Open Access Journals (Sweden)

    Moritz Köster

    Full Text Available This EEG study investigated the neuronal processes during intentional compared to incidental learning in young adults and two groups of children aged 10 and 7 years. Theta (3-8 Hz) and alpha (10-16 Hz) neuronal oscillations were analyzed to compare encoding processes during an intentional and an incidental encoding task. In all three age groups, both encoding conditions were associated with an increase in event-related theta activity. Encoding-related alpha suppression increased with age. Memory performance was higher in the intentional compared to the incidental task in all age groups. Furthermore, intentional learning was associated with an improved encoding of perceptual features, which were relevant for the retrieval phase. Theta activity increased from incidental to intentional encoding. Specifically, frontal theta increased in all age groups, while parietal theta increased only in adults and older children. In younger children, parietal theta was similarly high in both encoding phases. While alpha suppression may reflect semantic processes during encoding, increased theta activity during intentional encoding may indicate perceptual binding processes, in accordance with the demands of the encoding task. Higher encoding-related alpha suppression in the older age groups, together with age differences in parietal theta activity during incidental learning in young children, is in line with recent theoretical accounts, emphasizing the role of perceptual processes in mnemonic processing in young children, whereas semantic encoding processes continue to mature throughout middle childhood.

  18. Comparative analysis of thermal-assisted high pressure and thermally processed mango pulp: Influence of processing, packaging, and storage.

    Science.gov (United States)

    Kaushik, Neelima; Rao, P Srinivasa; Mishra, H N

    2018-01-01

    Storage stability and shelf-life of mango pulp packed in three different packaging films and processed using an optimized thermal-assisted high pressure processing treatment 'HPP' (600 MPa/52 ℃/10 min) was analyzed during refrigerated (5 ℃) and accelerated (37 ℃) storage and compared with the conventional thermal treatment 'TT' (0.1 MPa/95 ℃/15 min). After processing, HPP resulted in relatively lower total color difference (3.5) and retained higher ascorbic acid (95%), total phenolics (106%) and total flavonoid content (118%) in mango pulp compared to TT, with values of 5.0, 62, 83, 73%, respectively. However, HPP led to ~50% enzyme inactivation (pectin methylesterase, polyphenol oxidase, peroxidase) in comparison to >90% obtained during TT. Both HPP and TT resulted in >5 log10 units reduction of the studied microorganisms to give a safe product. In contrast to the refrigerated storage, quality changes under accelerated conditions were found to be considerably rapid and dependent on packaging material irrespective of the method of processing. Shelf-life under refrigeration was limited by microbial growth and sensory quality; whereas, browning restricted the shelf-life during accelerated storage. HPP in aluminum-based retort pouch was adjudged the superior processing-packaging combination for maximizing the shelf-life of mango pulp to 120 and 58 days during refrigerated and accelerated storage, respectively. In comparison, TT led to higher quality changes upon processing than HPP and resulted in shelf-life of 110 and 58 days under the same packaging and storage conditions, respectively.

  19. Comparative energetic assessment of methanol production from CO2: Chemical versus electrochemical process

    International Nuclear Information System (INIS)

    Al-Kalbani, Haitham; Xuan, Jin; García, Susana; Wang, Huizhi

    2016-01-01

    Highlights: • We model two emission-to-fuel processes which convert CO2 to fuels. • We optimize the heat exchanger networks for the two processes. • We compare the two processes in terms of energy requirement and climate impact. • The process based on CO2 electrolysis is more energy efficient. • Both of the processes can reduce CO2 emissions if renewable energies are used. - Abstract: Emerging emission-to-liquid (eTL) technologies that produce liquid fuels from CO2 are a possible solution for both the global issues of greenhouse gas emissions and fossil fuel depletion. Among those technologies, CO2 hydrogenation and high-temperature CO2 electrolysis are two promising options suitable for large-scale applications. In this study, two CO2-to-methanol conversion processes, i.e., production of methanol by CO2 hydrogenation and production of methanol based on high-temperature CO2 electrolysis, are simulated using Aspen HYSYS. With Aspen Energy Analyzer, heat exchanger networks are optimized and minimal energy requirements are determined for the two different processes. The two processes are compared in terms of energy requirement and climate impact. It is found that the methanol production based on CO2 electrolysis has an energy efficiency of 41%, almost double that of the CO2 hydrogenation process provided that the required hydrogen is sourced from water electrolysis. The hydrogenation process produces more CO2 when fossil fuel energy sources are used, but can result in more negative CO2 emissions with renewable energies. The study reveals that both of the eTL processes can outperform the conventional fossil-fuel-based methanol production process in climate impacts as long as the renewable energy sources are implemented.

  20. MEASUREMENT OF COMPARATIVE ADVANTAGES OF PROCESSED FOOD SECTOR OF SERBIA IN THE INCREASING THE EXPORT

    OpenAIRE

    Ignjatijević, Svetlana; Čavlin, Miroslav; Đorđević, Dragomir

    2014-01-01

    The subject of this research is to analyse the comparative advantages of exports of the processed food sector, in order to define the position of the processed food sector in Serbia compared to the Danube region and to highlight the products that were and will be the main exported agricultural products of Serbia. In this research, we have applied the following indexes: RXA, RTA, ln RXA, RC, RCA, LFI, GL, Sm. We have examined the movement of the indexes for the period 2005-2011. We have inves...

  1. Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2007-01-01

    We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model...... for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may...

  2. COMPAR

    International Nuclear Information System (INIS)

    Kuefner, K.

    1976-01-01

    COMPAR works on FORTRAN arrays with four indices: A = A(i,j,k,l) where, for each fixed k0, l0, only the 'plane' [A(i,j,k0,l0), i = 1..i_max, j = 1..j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) - B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate the mean, standard deviation and maximum of the array epsilon (by several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses with the PLOTEASY system by creating input data blocks for that system. The main application of COMPAR is (so far) the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.) [de]
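
    The deviation statistics described above are straightforward to reproduce; a small NumPy sketch (not the original FORTRAN code), using B itself as one possible choice of the weight array GEW, might look like this:

      # Element-wise deviations epsilon = (A - B) / GEW, followed by mean,
      # standard deviation and maximum absolute deviation.
      import numpy as np

      def compare_fields(A, B, weights=None):
          GEW = B if weights is None else weights
          eps = (A - B) / GEW
          return eps.mean(), eps.std(), np.abs(eps).max()

      # Example with two small random 4-index "flux" arrays
      rng = np.random.default_rng(0)
      A = rng.random((3, 4, 5, 2)) + 1.0
      B = A * (1.0 + 0.01 * rng.normal(size=A.shape))   # B deviates from A by roughly 1%
      print(compare_fields(A, B))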

  3. Experimental Investigation of Comparative Process Capabilities of Metal and Ceramic Injection Molding for Precision Applications

    DEFF Research Database (Denmark)

    Islam, Aminul; Giannekas, Nikolaos; Marhöfer, David Maximilian

    2016-01-01

    and discussion presented in the paper will be useful for thorough understanding of the MIM and CIM processes and to select the right material and process for the right application or even to combine metal and ceramic materials by molding to produce metal–ceramic hybrid components.......The purpose of this paper is to make a comparative study on the process capabilities of the two branches of the powder injection molding (PIM) process—metal injection molding (MIM) and ceramic injection molding (CIM), for high-end precision applications. The state-of-the-art literature does...

  4. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research has been devoted to image processing techniques, driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques that can be used at the transmitter end in order to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, properties and complexity of implementation. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.

  5. Economics of recombinant antibody production processes at various scales: Industry-standard compared to continuous precipitation.

    Science.gov (United States)

    Hammerschmidt, Nikolaus; Tscheliessnig, Anne; Sommer, Ralf; Helk, Bernhard; Jungbauer, Alois

    2014-06-01

    Standard industry processes for recombinant antibody production employ protein A affinity chromatography in combination with other chromatography steps and ultra-/diafiltration. This study compares a generic antibody production process with a recently developed purification process based on a series of selective precipitation steps. The new process makes two of the usual three chromatographic steps obsolete and can be performed in a continuous fashion. Cost of Goods (CoGs) analyses were done for: (i) a generic chromatography-based antibody standard purification; (ii) the continuous precipitation-based purification process coupled to a continuous perfusion production system; and (iii) a hybrid process, coupling the continuous purification process to an upstream batch process. The results of this economic analysis show that the precipitation-based process offers cost reductions at all stages of the life cycle of a therapeutic antibody, (i.e. clinical phase I, II and III, as well as full commercial production). The savings in clinical phase production are largely attributed to the fact that expensive chromatographic resins are omitted. These economic analyses will help to determine the strategies that are best suited for small-scale production in parallel fashion, which is of importance for antibody production in non-privileged countries and for personalized medicine. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    Science.gov (United States)

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench-scale and clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), comparable product quality results at all scales makes this tool is an appropriate scale model to enable purification and product quality comparisons of HTPD bioreactors conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  8. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize by compromising more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied on two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
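
    The building block of SPRINT-Race, a sequential probability ratio test with an indifference zone for one pairwise comparison, can be sketched as follows. This shows only the sequential test for a single pair on a single objective; the ternary-decision extension and the family-wise error control of the full algorithm are omitted, and the win probabilities and thresholds are illustrative assumptions:

      # Wald SPRT with an indifference zone: decide from a stream of pairwise "wins"
      # whether model i dominates model j.
      import math, random

      def sprt_dominance(win_stream, delta=0.1, alpha=0.05, beta=0.05):
          p1, p0 = 0.5 + delta, 0.5 - delta      # H1: i dominates, H0: j dominates
          upper = math.log((1 - beta) / alpha)   # accept H1 above this boundary
          lower = math.log(beta / (1 - alpha))   # accept H0 below this boundary
          llr, n = 0.0, 0
          for n, win in enumerate(win_stream, 1):
              llr += math.log(p1 / p0) if win else math.log((1 - p1) / (1 - p0))
              if llr >= upper:
                  return "i dominates j", n
              if llr <= lower:
                  return "j dominates i", n
          return "undecided", n

      random.seed(3)
      stream = (random.random() < 0.65 for _ in range(10000))  # i truly wins 65% of comparisons
      print(sprt_dominance(stream))   # typically decides after a few dozen comparisons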

  9. Y-TZP ceramic processing from coprecipitated powders : A comparative study with three commercial dental ceramics

    NARCIS (Netherlands)

    Lazar, Dolores R. R.; Bottino, Marco C.; Ozcan, Mutlu; Valandro, Luiz Felipe; Amaral, Regina; Ussui, Valter; Bressiani, Ana H. A.

    2008-01-01

    Objectives. (1) To synthesize 3 mol% yttria-stabilized zirconia (3Y-TZP) powders via coprecipitation route, (2) to obtain zirconia ceramic specimens, analyze surface characteristics, and mechanical properties, and (3) to compare the processed material with three reinforced dental ceramics. Methods.

  10. A comparative review of recovery processes in rivers, lakes, estuarine and coastal waters

    NARCIS (Netherlands)

    Verdonschot, P.F.M.; Spears, B.M.; Feld, C.K.; Brucet, S.; Keizer-Vlek, H.E.; Borja, A.; Elliot, M.; Kernan, M.; Johnson, R.K.

    2013-01-01

    The European Water Framework Directive aims to improve ecological status within river basins. This requires knowledge of responses of aquatic assemblages to recovery processes that occur after measures have been taken to reduce major stressors. A systematic literature review comparatively assesses

  11. Higher Education Quality Assurance Processes in Latin America: A Comparative Perspective

    Science.gov (United States)

    Lamarra, Norberto Fernandez

    2009-01-01

    The article first considers a characterization of higher education in Latin America, the principal problems and the scenarios that have led to the inclusion of quality assessment and accreditation processes in higher education as a priority in the regional agenda. The following aspects are then developed from a comparative perspective: the main…

  12. Different Gestalt Processing for Different Actions? Comparing Object-Directed Reaching and Looking Time Measures

    Science.gov (United States)

    Vishton, P.M.; Ware, E.A.; Badger, A.N.

    2005-01-01

    Six experiments compared the Gestalt processing that mediates infant reaching and looking behaviors. Experiment 1 demonstrated that the positioning and timing of 8- and 9-month-olds' reaching was influenced by remembered relative motion. Experiment 2 suggested that a visible gap, without this relative motion, was not sufficient to produce these…

  13. Comparative analyses of diffusion coefficients for different extraction processes from thyme

    Directory of Open Access Journals (Sweden)

    Petrovic Slobodan S.

    2012-01-01

    Full Text Available This work aimed to analyze the kinetics and mass transfer phenomena of different extraction processes from thyme (Thymus vulgaris L.) leaves. Different extraction processes with ethanol were studied: Soxhlet extraction and ultrasound-assisted batch extraction on the laboratory scale as well as pilot plant batch extraction with mixing. The extraction processes with ethanol were compared to the process of supercritical carbon dioxide extraction performed at 10 MPa and 40°C. Experimental data were analyzed by a mathematical model derived from Fick's second law to determine and compare diffusion coefficients in the periods of constant and decreasing extraction rate. In the fast extraction period, values of diffusion coefficients were one to three orders of magnitude higher compared to those determined for the period of slow extraction. The highest diffusion coefficient was reported for the fast extraction period of supercritical fluid extraction. In the case of extraction processes with ethanol, ultrasound, stirring and an increase in extraction temperature enhanced the mass transfer rate in the washing phase. On the other hand, ultrasound contributed the most to the increase of the mass transfer rate in the period of slow extraction.

  14. Hazardous waste characterization among various thermal processes in South Korea: a comparative analysis.

    Science.gov (United States)

    Shin, Sun Kyoung; Kim, Woo-Il; Jeon, Tae-Wan; Kang, Young-Yeul; Jeong, Seong-Kyeong; Yeon, Jin-Mo; Somasundaram, Swarnalatha

    2013-09-15

    The Ministry of Environment, Republic of Korea (South Korea) is in the process of converting its current hazardous waste classification system to harmonize it with the international standard and to set up regulatory standards for toxic substances present in hazardous waste. In the present work, the concentrations along with the trends of 13 heavy metals, F(-), CN(-) and 19 PAHs present in the hazardous waste generated among various thermal processes (11 processes) in South Korea were analyzed along with their leaching characteristics. In all thermal processes, the median concentrations of Cu (3.58-209,000 mg/kg), Ni (BDL-1560 mg/kg), Pb (7.22-5132.25 mg/kg) and Zn (83.02-31419 mg/kg) were comparatively higher than those of the other heavy metals. The Iron & Steel thermal process showed the highest median values of the heavy metals Cd (14.76 mg/kg), Cr (166.15 mg/kg) and Hg (2.38 mg/kg). Low molecular weight PAHs (BDL-37.59 mg/kg) were predominant in sludge and filter cake samples present in most of the thermal processes. Comparatively, flue gas dust present in most of the thermal processing units resulted in higher leaching of the heavy metals. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Statistical power of model selection strategies for genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Zheyang Wu

    2009-07-01

    Full Text Available Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the

  16. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Ofori-Boateng, Cynthia; Keat Teong, Lee; JitKang, Lim

    2012-01-01

    Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvements. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy intensive process is considered to degrade the most useful part of energy that is available to produce work. This study uses literature values to compare the efficiencies and degradation of the useful energy within Jatropha curcas oil during oil extraction, taking into account solvent and mechanical extraction methods. According to this study, J. curcas seed, on processing into J. curcas oil, is upgraded by mechanical extraction but degraded by solvent extraction processes. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) for 1 ton of J. curcas oil produced. The pretreatment processes of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed for solvent extraction processes and 66% for mechanical extraction. The exergetic efficiencies recorded are 79.35% and 95.93% for solvent and mechanical extraction processes of J. curcas oil, respectively. Hence, mechanical oil extraction processes are more exergetically efficient than solvent extraction processes. Possible improvement methods are also elaborated in this study.

  17. Comparing biological and thermochemical processing of sugarcane bagasse: An energy balance perspective

    International Nuclear Information System (INIS)

    Leibbrandt, N.H.; Knoetze, J.H.; Goergens, J.F.

    2011-01-01

    The technical performance of lignocellulosic enzymatic hydrolysis and fermentation versus pyrolysis processes for sugarcane bagasse was evaluated, based on currently available technology. Process models were developed for bioethanol production from sugarcane bagasse using three different pretreatment methods, i.e. dilute acid, liquid hot water and steam explosion, at various solid concentrations. Two pyrolysis processes, namely fast pyrolysis and vacuum pyrolysis, were considered as alternatives to biological processing for the production of biofuels from sugarcane bagasse. For bioethanol production, a minimum of 30% solids in the pretreatment reactor was required to render the process energy self-sufficient, which led to a total process energy demand equivalent to roughly 40% of the feedstock higher heating value. Both vacuum pyrolysis and fast pyrolysis could be operated energy self-sufficiently, provided that 45% of the char produced by fast pyrolysis is used to fuel that process. No char energy is required to fuel the vacuum pyrolysis process due to its lower process energy demand (17% compared to 28% of the feedstock higher heating value). The process models indicated that effective process heat integration can result in a 10-15% increase in all process energy efficiencies. Process thermal efficiencies between 52 and 56% were obtained for bioethanol production at pretreatment solids contents of 30% and 50%, respectively, while the efficiencies were 70% for both pyrolysis processes. The liquid fuel energy efficiency of the best bioethanol process is 41%, while that of crude bio-oil production before upgrading is 67% and 56% via fast and vacuum pyrolysis, respectively. Efficiencies for pyrolysis processes are expected to decrease by up to 15% should upgrading to a transportation fuel of quality equivalent to bioethanol be taken into consideration. -- Highlights: → Liquid biofuels can be produced via lignocellulosic enzymatic hydrolysis and fermentation or pyrolysis. → A minimum of

  18. The Comparative Effect of Top-down Processing and Bottom-up Processing through TBLT on Extrovert and Introvert EFL

    Directory of Open Access Journals (Sweden)

    Pezhman Nourzad Haradasht

    2013-09-01

    Full Text Available This research examines the effect of two models of reading comprehension, namely top-down and bottom-up processing, on the reading comprehension of extrovert and introvert EFL learners. To do this, 120 learners out of a total number of 170 intermediate learners being educated at Iran Mehr English Language School were selected, all taking a PET (Preliminary English Test) first for homogenization prior to the study. They also answered the Eysenck Personality Inventory (EPI), which in turn categorized them into two subgroups within each reading model, consisting of introverts and extroverts. All in all, there were four subgroups: 30 introverts and 30 extroverts undergoing the top-down processing treatment, and 30 introverts and 30 extroverts experiencing the bottom-up processing treatment. The aforementioned PET was administered as the post-test of the study after each group was exposed to the treatment for 18 sessions over six weeks. After the instruction finished, the mean scores of all four groups on this post-test were computed and a two-way ANOVA was run to test all four hypotheses raised in this study. The results showed that while learners generally benefitted more from the bottom-up processing setting compared to the top-down processing one, the extrovert group was better off receiving top-down instruction. Furthermore, introverts outperformed extroverts in the bottom-up group; yet between the two personality subgroups in the top-down setting no difference was seen. A predictable pattern of benefitting from teaching procedures could not be drawn for introverts, as in both top-down and bottom-up settings they benefitted more than extroverts.

  19. Comparative Analysis of Processes for Recovery of Rare Earths from Bauxite Residue

    Science.gov (United States)

    Borra, Chenna Rao; Blanpain, Bart; Pontikes, Yiannis; Binnemans, Koen; Van Gerven, Tom

    2016-11-01

    Environmental concerns and lack of space suggest that the management of bauxite residue needs to be re-addressed. The utilization of the residue has thus become a topic high on the agenda for both academia and industry; yet, to date, it is only rarely used. Nonetheless, recovery of rare earth elements (REEs), with or without other metals, from bauxite residue, and utilization of the left-over residue in other applications such as building materials, may be a viable alternative to storage. Hence, different processes developed by the authors for recovery of REEs and other metals from bauxite residue were compared. In this study, preliminary energy and cost analyses were carried out to assess the feasibility of the processes. These analyses show that the combination of alkali roasting-smelting-quenching-leaching is a promising process for the treatment of bauxite residue and that it is justified to study this process at a pilot scale.

  20. Measurement of comparative advantages of processed food sector of Serbia in the increasing the export

    Directory of Open Access Journals (Sweden)

    Ignjatijević Svetlana

    2014-01-01

    Full Text Available The subject of this research is to analyse the comparative advantages of exports of the processed food sector, in order to define the position of the processed food sector in Serbia compared to the Danube region and to highlight the products that were and will be the main exported agricultural products of Serbia. In this research, we have applied the following indexes: RXA, RTA, ln RXA, RC, RCA, LFI, GL, Sm. We have examined the movement of the indexes for the period 2005-2011. We have investigated the existence of correlations between the RCA indexes of the processed food sectors by applying the Pearson and Spearman coefficients, treating the RCA variables as mutually covariant. We found that the following products showed an increase of comparative advantage in exports as measured by the Balassa index: milk products, cheese and curd, groats and meal of other cereals, preparations of cereals, flour, starch, vegetables, roots and tubers (processed), prepared fruit products, sugar, molasses and honey, chocolate and other food preparations with cocoa, animal food (including unmilled cereals), edible products and preparations, alcoholic beverages, non-alcoholic beverages, solid vegetable fats, oils, 'soft' and animal and vegetable fats.

  1. Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process.

    Science.gov (United States)

    Deverka, Patricia A; Lavallee, Danielle C; Desai, Priyanka J; Armstrong, Joanne; Gorman, Mark; Hole-Curry, Leah; O'Leary, James; Ruffner, B W; Watkins, John; Veenstra, David L; Baker, Laurence H; Unger, Joseph M; Ramsey, Scott D

    2012-07-01

    The Center for Comparative Effectiveness Research in Cancer Genomics completed a 2-year stakeholder-guided process for the prioritization of genomic tests for comparative effectiveness research studies. We sought to evaluate the effectiveness of engagement procedures in achieving project goals and to identify opportunities for future improvements. The evaluation included an online questionnaire, one-on-one telephone interviews and facilitated discussion. Responses to the online questionnaire were tabulated for descriptive purposes, while transcripts from key informant interviews were analyzed using a directed content analysis approach. A total of 11 out of 13 stakeholders completed both the online questionnaire and interview process, while nine participated in the facilitated discussion. Eighty-nine percent of questionnaire items received overall ratings of agree or strongly agree; 11% of responses were rated as neutral with the exception of a single rating of disagreement with an item regarding the clarity of how stakeholder input was incorporated into project decisions. Recommendations for future improvement included developing standard recruitment practices, role descriptions and processes for improved communication with clinical and comparative effectiveness research investigators. Evaluation of the stakeholder engagement process provided constructive feedback for future improvements and should be routinely conducted to ensure maximal effectiveness of stakeholder involvement.

  2. Optimization of Multiple Responses of Ultrasonic Machining (USM) Process: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Rina Chakravorty

    2013-04-01

    Full Text Available The ultrasonic machining (USM) process has multiple performance measures, e.g. material removal rate (MRR), tool wear rate (TWR), surface roughness (SR), etc., which are affected by several process parameters. Researchers have commonly attempted to optimize the USM process with respect to individual responses separately. In the recent past, several systematic procedures for dealing with multi-response optimization problems have been proposed in the literature. Although most of these methods use complex mathematics or statistics, there are some simple methods which can be comprehended and implemented by engineers to optimize the multiple responses of USM processes. However, the relative optimization performance of these approaches is unknown because the effectiveness of the different methods has been demonstrated using different sets of process data. In this paper, the computational requirements for four simple methods are presented, and two sets of past experimental data on USM processes are analysed using these methods. The relative performances of these methods are then compared. The results show that the weighted signal-to-noise (WSN) ratio method and the utility theory (UT) method usually give better overall optimization performance for the USM process than the other approaches.
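
    The weighted signal-to-noise (WSN) ratio method mentioned above can be illustrated with a short numerical sketch. The code below is a generic illustration only, not the implementation compared in the paper: the replicate response values, the weights, and the larger/smaller-the-better classification of each response are invented, and the per-response S/N ratios are combined directly without the normalization step that some WSN formulations apply.

```python
import numpy as np

# Hypothetical replicated responses for one USM parameter setting:
# MRR is larger-the-better; TWR and SR are smaller-the-better.
responses = {
    "MRR": (np.array([0.42, 0.45, 0.44]), "larger"),
    "TWR": (np.array([0.012, 0.014, 0.013]), "smaller"),
    "SR":  (np.array([1.8, 1.9, 1.7]), "smaller"),
}
weights = {"MRR": 0.5, "TWR": 0.25, "SR": 0.25}   # assumed response weights

def sn_ratio(y, kind):
    """Taguchi signal-to-noise ratio for one response."""
    if kind == "larger":                                  # larger-the-better
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))
    return -10.0 * np.log10(np.mean(y ** 2))              # smaller-the-better

# Weighted S/N: combine the per-response S/N ratios with the chosen weights;
# the parameter setting with the highest WSN would be preferred.
wsn = sum(w * sn_ratio(*responses[r]) for r, w in weights.items())
print(f"Weighted S/N ratio for this setting: {wsn:.2f} dB")
```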

  3. Comparative study on resistant starch, amylose content and glycaemic index after precooked process in white rice

    Science.gov (United States)

    Pratiwi, V. N.

    2018-03-01

    Rice is a staple food and regarded as a useful carbohydrate source. In general, rice has a high glycaemic index (GI) and low colonic fermentation. People are aware of the alterations in blood glucose levels, or glycaemic index, after consuming rice. Resistant starch (RS) and amylose content play an important role in controlling GI. GI and RS content have been established as important indicators of starch digestibility. The aim of this study was to evaluate a precooking process consisting of hydrothermal treatment (boiling at 80°C for 10 minutes) and low-temperature cooling (4°C for 1 h) as a way to increase the RS content and decrease the glycaemic index of white rice. The research had two stages: 1) preparation of white rice by the precooking process; 2) analysis of the precooked white rice characteristics (resistant starch, amylose content, and estimated glycaemic index). The analysis showed an increased RS content in precooked white rice (1.11%) compared with untreated white rice (0.99%), but the difference was not statistically significant. The amylose content increased significantly after the precooking process (24.70%) compared with untreated white rice (20.89%). The estimated glycaemic index (EGI) decreased after precooking (65.63%), but not significantly, compared with white rice (66.47%). From the present study it was concluded that the precooking process had no significant impact on increasing RS and decreasing EGI of white rice. This may be due to the relatively short cooling time (1 hour) at 4°C.

  4. Assessing environmental impacts using a comparative LCA of industrial and artisanal production processes: "Minas Cheese" case

    Directory of Open Access Journals (Sweden)

    Elbert Muller Nigri

    2014-09-01

    Full Text Available This study uses the Life Cycle Assessment (LCA) methodology to evaluate and compare the environmental impacts caused by the artisanal and the industrial manufacturing processes of "Minas cheese". This is a traditional cheese produced in the state of Minas Gerais (Brazil), and it is considered a "cultural patrimony" in the country. The high participation of artisanal producers in the market justifies this research, and the analysis can help identify opportunities to improve the environmental performance of several stages of the production system. The functional unit adopted was 1 kilogram (kg) of cheese. The system boundaries considered were the production process, conservation of the product (before sale), and transport to the consumer market. The milk production process was considered similar in both cases and therefore was not included in the assessment. The data were collected through interviews with the producers, observation, and a literature review; they were ordered and processed using the SimaPro 7 LCA software. According to the impact categories analyzed, the artisanal production exerted lower environmental impacts. This can be justified mainly because the industrial process includes a pasteurization stage, which uses dry wood as an energy source, as well as refrigeration.

  5. Electro-spun organic nanofibers elaboration process investigations using comparative analytical solutions.

    Science.gov (United States)

    Colantoni, A; Boubaker, K

    2014-01-30

    In this paper, an Enhanced Variational Iteration Method (EVIM) is proposed, along with the BPES, for solving the Bratu equation, which appears in the framework of the electrospun nanofiber fabrication process. Electrospun organic nanofibers, with diameters of less than 1/4 micron, have been used in the nonwovens and filtration industries for a broad range of filtration applications over the last decade. The electrospinning process has been associated with the Bratu equation through thermo-electro-hydrodynamic balance equations. Analytical solutions have been proposed, discussed and compared. Copyright © 2013 Elsevier Ltd. All rights reserved.
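
    For readers unfamiliar with the Bratu problem mentioned above, the sketch below solves its one-dimensional boundary-value form, u'' + λ·exp(u) = 0 with u(0) = u(1) = 0, by a finite-difference Newton iteration. This is a generic numerical illustration for comparison purposes, not the EVIM or BPES solution proposed by the authors, and the value of λ is arbitrary (the lower solution branch exists for λ below roughly 3.51).

```python
import numpy as np

def solve_bratu(lam=1.0, n=101, tol=1e-10, max_iter=50):
    """Finite-difference Newton solve of u'' + lam*exp(u) = 0, u(0)=u(1)=0."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)                                   # initial guess (lower branch)
    for _ in range(max_iter):
        # Residual of the discretized equation at the interior nodes.
        res = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h ** 2 + lam * np.exp(u[1:-1])
        # Tridiagonal Jacobian d(res_i)/d(u_j).
        J = np.zeros((n - 2, n - 2))
        np.fill_diagonal(J, -2.0 / h ** 2 + lam * np.exp(u[1:-1]))
        idx = np.arange(n - 3)
        J[idx, idx + 1] = 1.0 / h ** 2
        J[idx + 1, idx] = 1.0 / h ** 2
        du = np.linalg.solve(J, -res)
        u[1:-1] += du
        if np.max(np.abs(du)) < tol:
            break
    return x, u

x, u = solve_bratu(lam=1.0)
print(f"maximum of u on the lower branch for lambda = 1: {u.max():.4f}")
```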

  6. A comparative analysis of selected wastewater pretreatment processes in food industry

    Science.gov (United States)

    Jaszczyszyn, Katarzyna; Góra, Wojciech; Dymaczewski, Zbysław; Borowiak, Robert

    2018-02-01

    The article presents a comparative analysis of classical coagulation with iron sulphate and adsorption on bentonite for the pretreatment of wastewater in the food industry. As a result of the studies, chemical oxygen demand (COD) and total nitrogen (TN) reduction were found to be comparable for both technologies, and a 29% higher total phosphorus removal efficiency was observed for coagulation. After the coagulation and adsorption processes, a significant difference between the mineral and organic fractions in the sludge was found (49% and 51% for bentonite and 28% and 72% for iron sulphate, respectively).

  7. Demographic model selection using random forests and the site frequency spectrum.

    Science.gov (United States)

    Smith, Megan L; Ruffley, Megan; Espíndola, Anahí; Tank, David C; Sullivan, Jack; Carstens, Bryan C

    2017-09-01

    Phylogeographic data sets have grown from tens to thousands of loci in recent years, but extant statistical methods do not take full advantage of these large data sets. For example, approximate Bayesian computation (ABC) is a commonly used method for the explicit comparison of alternate demographic histories, but it is limited by the "curse of dimensionality" and issues related to the simulation and summarization of data when applied to next-generation sequencing (NGS) data sets. We implement here several improvements to overcome these difficulties. We use a Random Forest (RF) classifier for model selection to circumvent the curse of dimensionality and apply a binned representation of the multidimensional site frequency spectrum (mSFS) to address issues related to the simulation and summarization of large SNP data sets. We evaluate the performance of these improvements using simulation and find low overall error rates (~7%). We then apply the approach to data from Haplotrema vancouverense, a land snail endemic to the Pacific Northwest of North America. Fifteen demographic models were compared, and our results support a model of recent dispersal from coastal to inland rainforests. Our results demonstrate that binning is an effective strategy for the construction of a mSFS and imply that the statistical power of RF when applied to demographic model selection is at least comparable to traditional ABC algorithms. Importantly, by combining these strategies, large sets of models with differing numbers of populations can be evaluated. © 2017 John Wiley & Sons Ltd.
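
    As a rough illustration of random-forest model selection of the kind described above, the sketch below trains a classifier on simulated summary-statistic vectors (stand-ins for a binned multidimensional site frequency spectrum) labelled by the generating model, reports the out-of-bag error, and classifies a pseudo-observed vector. All inputs here are synthetic placeholders; a real analysis would draw the training vectors from coalescent simulations under each candidate demographic model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sims, n_bins = 2000, 20       # simulations per model and number of mSFS bins (assumed)

# Placeholder "simulations": two demographic models producing slightly different
# binned SFS shapes. Real input would come from a coalescent simulator.
model_a = rng.poisson(lam=np.linspace(5.0, 1.0, n_bins), size=(n_sims, n_bins))
model_b = rng.poisson(lam=np.linspace(4.0, 2.0, n_bins), size=(n_sims, n_bins))
X = np.vstack([model_a, model_b]).astype(float)
y = np.array([0] * n_sims + [1] * n_sims)            # 0 = model A, 1 = model B

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)
print("out-of-bag classification error:", round(1.0 - clf.oob_score_, 3))

# Classify a pseudo-observed data set and report the class votes as model support.
observed = rng.poisson(lam=np.linspace(5.0, 1.0, n_bins)).astype(float).reshape(1, -1)
print("model support (A, B):", clf.predict_proba(observed)[0])
```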

  8. Life Cycle Assessment (LCA) used to compare two different methods of ripe table olive processing

    Energy Technology Data Exchange (ETDEWEB)

    Russo, C.; Cappelletti, G. M.; Nicoletti, G. M.

    2010-07-01

    The aim of the present study is to analyze the most common method used for processing ripe table olives: the California style. Life Cycle Assessment (LCA) was applied to detect the hot spots of the system under examination. The LCA results also allowed us to compare the traditional California style, here called method A, with another California style, here called method B. We were interested in this latter method, because the European Union is considering introducing it into the product specification of the Protected Denomination of Origin (PDO) La Bella della Daunia. It was also possible to compare the environmental impacts of the two California style methods with those of the Spanish style method. From the comparison it is clear that method B has a greater environmental impact than method A because greater amounts of water and electricity are required, whereas Spanish style processing has a lower environmental impact than the California style methods. (Author)

  9. Comparing an FPGA to a Cell for an Image Processing Application

    Directory of Open Access Journals (Sweden)

    Robert W. Ives

    2010-01-01

    Full Text Available Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
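
    The iris-matching kernel parallelized in this study is, in typical iris-recognition systems, essentially a masked Hamming-distance computation between binary iris codes; the scalar reference version below is the kind of loop such FPGA and Cell implementations accelerate. The code length, occlusion mask, bit-disagreement rate and decision threshold are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)   # enrolled binary iris code (length assumed)
code_b = code_a.copy()
code_b[rng.random(2048) < 0.12] ^= 1                 # probe code with ~12% disagreeing bits
mask = rng.random(2048) < 0.9                        # bits unoccluded in both images

# Fractional Hamming distance over the valid (unmasked) bits.
hd = np.count_nonzero((code_a ^ code_b)[mask]) / np.count_nonzero(mask)
print(f"Hamming distance: {hd:.3f} -> {'match' if hd < 0.32 else 'non-match'}")
```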

  10. Comparative study on the processing of armour steels with various unconventional technologies

    Science.gov (United States)

    Herghelegiu, E.; Schnakovszky, C.; Radu, M. C.; Tampu, N. C.; Zichil, V.

    2017-08-01

    The aim of the current paper is to analyse the suitability of three unconventional technologies - abrasive water jet (AWJ), plasma and laser - for processing armour steels. In view of this, two materials (Ramor 400 and Ramor 550) were selected to carry out the experimental tests, and the quality of the cuts was quantified by considering the following characteristics: width of the processed surface at the jet inlet (Li), width of the processed surface at the jet outlet (Lo), inclination angle (a), deviation from perpendicularity (u), surface roughness (Ra) and surface hardness. It was found that, in terms of cut quality and environmental impact, the best results are offered by abrasive water jet technology. However, it has the lowest productivity compared to the other two technologies.

  11. Comparative study of submerged and surface culture acetification process for orange vinegar.

    Science.gov (United States)

    Cejudo-Bastante, Cristina; Durán-Guerrero, Enrique; García-Barroso, Carmelo; Castro-Mejías, Remedios

    2018-02-01

    The two main acetification methodologies generally employed in the production of vinegar (surface and submerged cultures) were studied and compared for the production of orange vinegar. Polyphenols (UPLC/DAD) and volatile compounds (SBSE-GC/MS) were considered as the main variables in the comparative study. Sensory characteristics of the obtained vinegars were also evaluated. Seventeen polyphenols and 24 volatile compounds were determined in the samples during both acetification processes. For phenolic compounds, analysis of variance showed significantly higher concentrations when surface culture acetification was employed. However, for the majority of volatile compounds, higher contents were observed for the submerged culture acetification process, which was also reflected in the sensory analysis, with higher scores for the different descriptors. Multivariate statistical analysis, such as principal component analysis, demonstrated the possibility of discriminating the samples according to the type of acetification process. Polyphenols such as an apigenin derivative or ferulic acid and volatile compounds such as 4-vinylguaiacol, decanoic acid, nootkatone, trans-geraniol, β-citronellol or α-terpineol, among others, were the compounds that contributed most to the discrimination of the samples. The acetification process employed in the production of orange vinegar has been demonstrated to be very significant for the final characteristics of the vinegar obtained, so it must be carefully controlled to obtain high-quality products. © 2017 Society of Chemical Industry.

  12. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
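
    The criterion described above is specific to the blind source separation models considered, but the general flavor of likelihood-based model selection can be shown with a short sketch that scores probabilistic PCA models with different numbers of components using BIC. This is only a generic illustration with a deliberately crude parameter count, not the Laplace-approximation criterion proposed by the authors.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n, d, true_q = 500, 10, 3
latent = rng.normal(size=(n, true_q))
mixing = rng.normal(size=(true_q, d))
X = latent @ mixing + 0.3 * rng.normal(size=(n, d))   # synthetic data with 3 true sources

def bic_for_q(X, q):
    """BIC of a probabilistic PCA model with q components (simplified parameter count)."""
    pca = PCA(n_components=q).fit(X)
    loglik = pca.score(X) * X.shape[0]     # PCA.score gives the mean log-likelihood per sample
    k = X.shape[1] * q + 1                 # crude count: loading matrix entries + noise variance
    return -2.0 * loglik + k * np.log(X.shape[0])

bics = {q: bic_for_q(X, q) for q in range(1, 7)}
print("BIC by number of components:", {q: round(v, 1) for q, v in bics.items()})
print("selected number of primitives:", min(bics, key=bics.get))
```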

  13. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
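
    A minimal usage sketch of the library described above follows: it tunes two SVM hyperparameters with the TPE algorithm on a small built-in dataset. The search-space bounds, the dataset and the evaluation budget are illustrative choices made here, not recommendations from the paper.

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Search space: log-uniform priors for the SVM's C and gamma (bounds are assumptions).
space = {
    "C": hp.loguniform("C", np.log(1e-3), np.log(1e3)),
    "gamma": hp.loguniform("gamma", np.log(1e-4), np.log(1e1)),
}

def objective(params):
    """Hyperopt minimizes, so return 1 - cross-validated accuracy as the loss."""
    clf = SVC(C=params["C"], gamma=params["gamma"])
    acc = cross_val_score(clf, X, y, cv=3).mean()
    return {"loss": 1.0 - acc, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=25, trials=trials)
print("best hyperparameters found:", best)
```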

  14. A comparative study of electrochemical machining process parameters by using GA and Taguchi method

    Science.gov (United States)

    Soni, S. K.; Thomas, B.

    2017-11-01

    In electrochemical machining, the quality of the machined surface strongly depends on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm (in MATLAB) to maximize the metal removal rate and minimize the surface roughness and overcut. In this paper a comparative study is presented for the drilling of LM6 Al/B4C composites by comparing the significant impact of several machining process parameters, such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz), on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 software for the investigation of the experimental results, and multiobjective optimization with a genetic algorithm was carried out in MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, the comparative results are presented.
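
    To make the genetic-algorithm side of the comparison concrete, the sketch below maximizes a weighted-sum objective over three ECM parameters with a bare-bones real-coded GA. The response surfaces, parameter bounds and weights are invented placeholders; the original work used MATLAB's GA on models fitted to the L27 experiments, so this is only a structural illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
bounds = np.array([[10.0, 30.0],    # electrolyte concentration (g/l), assumed range
                   [8.0, 16.0],     # machining voltage (V), assumed range
                   [40.0, 60.0]])   # pulse frequency (Hz), assumed range

def responses(x):
    """Placeholder response surfaces for MRR, SR and overcut (not fitted to real data)."""
    conc, volt, freq = x
    mrr = 0.02 * conc + 0.05 * volt + 0.001 * freq
    sr = 0.5 + 0.03 * volt - 0.005 * conc
    oc = 0.1 + 0.004 * volt + 0.002 * conc
    return mrr, sr, oc

def fitness(x):
    mrr, sr, oc = responses(x)
    # Maximize MRR, penalize SR and overcut; the weights are assumptions.
    return 0.5 * mrr - 0.3 * sr - 0.2 * oc

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
for _ in range(100):                                       # generations
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-30:]]                   # truncation selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(30, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)        # uniform crossover
        child = child + rng.normal(0.0, 0.5, size=3)       # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("GA-suggested setting [concentration, voltage, frequency]:", np.round(best, 2))
```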

  15. The treatment of municipal solid waste in Malaysia comparing the biothermal process and mass burning

    Energy Technology Data Exchange (ETDEWEB)

    Fogelholm, C.J.; Iso-Tryykari, M.

    1997-12-31

    Mass burning has previously been the most widely used technology for the combustion of municipal solid waste. In mass burning, unsorted waste is burned on a grate. The Biothermal Process is a new, innovative municipal solid waste treatment concept. It consists of front-end treatment, biogasification of the biofraction and fluidized bed combustion of the combustible fraction. The objective of this work is to compare the technical, environmental and economic features of the Biothermal Process and mass burning when constructed in Malaysia. Firstly, technical descriptions of the concepts are presented. Secondly, three cases, namely Kuala Lumpur, Perai and Johor Bahru, are studied. Finally, conclusions are drawn. Economic comparisons revealed that the Biothermal Process is more economical than mass burning. The investment cost for the Biothermal Process is about 30% lower than for a mass burning plant. To achieve an 8% return on investment, the treatment fee for the Biothermal Process is 47-95 MYR per tonne and for mass burning 181-215 MYR per tonne, depending on the case. The sensitivity analysis showed that, independent of variations in the input values, the treatment fee remains much lower for the Biothermal Process. Technical comparisons show that the Biothermal Process has the better waste reduction and recycling rate in all cases. The Biothermal Process has much better electrical efficiency in the Kuala Lumpur and Johor Bahru cases, while mass burning has slightly better electrical efficiency in the Perai case. Both concepts have potential for phased construction, but phasing increases investment costs more for mass burning. The suitability of each concept to differences in the quality of waste depends on local conditions, and both methods have merits. The Biothermal Process produces 45-70% lower air emissions than mass burning, and generates less traffic in Kuala Lumpur and Perai, while traffic generation is equal in the Johor Bahru case. The comparisons show that according

  16. Comparing Sensory Information Processing and Alexithymia between People with Substance Dependency and Normal.

    Science.gov (United States)

    Bashapoor, Sajjad; Hosseini-Kiasari, Seyyedeh Tayebeh; Daneshvar, Somayeh; Kazemi-Taskooh, Zeinab

    2015-01-01

    Sensory information processing and alexithymia are two important factors in determining behavioral reactions. Some studies explain the effect of sensory processing sensitivity and alexithymia on the tendency towards substance abuse. Given that, the aim of the current study was to compare the styles of sensory information processing and alexithymia between substance-dependent people and normal ones. The research method was cross-sectional, and the statistical population of the current study comprised all substance-dependent men present in substance quitting camps of Masal, Iran, in October 2013 (n = 78). 36 persons were selected randomly by a simple random sampling method from this population as the study group, and 36 persons were also selected from the normal population in the same way as the comparison group. Both groups were evaluated using the Toronto alexithymia scale (TAS) and the adult sensory profile, and the multivariate analysis of variance (MANOVA) test was applied to analyze the data. The results showed that there are significant differences between the two groups in low registration and in difficulty describing emotions, and that substance-dependent people process sensory information in a different way than normal people and show more alexithymia features than them.

  17. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    Science.gov (United States)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS; Gneiting et al., 2005) and Bayesian Model Averaging (BMA; Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, and computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, the simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
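
    Of the techniques listed, simple bias correction is the easiest to reproduce; the sketch below estimates the mean error of a raw ETo forecast against station-based ETo over a training window and removes it from subsequent forecasts. The arrays are synthetic placeholders (EMOS and BMA require fitting full predictive distributions and are not shown), so the bias value and RMSE improvement are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 60
observed = 4.0 + rng.normal(0.0, 0.6, days)                   # station-based ETo (mm/day), synthetic
raw_forecast = observed + 0.8 + rng.normal(0.0, 0.4, days)    # NWP-based ETo with an assumed wet bias

# Additive bias correction: estimate the mean error on a training window ...
train, test = slice(0, 45), slice(45, days)
bias = np.mean(raw_forecast[train] - observed[train])

# ... and subtract it from the remaining forecasts.
corrected = raw_forecast[test] - bias

rmse = lambda f, o: np.sqrt(np.mean((f - o) ** 2))
print(f"estimated bias: {bias:.2f} mm/day")
print(f"RMSE raw: {rmse(raw_forecast[test], observed[test]):.2f} mm/day, "
      f"RMSE corrected: {rmse(corrected, observed[test]):.2f} mm/day")
```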

  18. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural sources of fuels and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts and to reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative performance analysis of the structural and parametric optimization of a cogeneration system using stochastic methods: a genetic algorithm and simulated annealing. This research work uses a superstructure, modelled in a process simulator (IPSEpro of SimTech), in which the design options appropriate to the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a consequence of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so as to couple directly to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, is defined. (author)
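
    To make the stochastic-optimization vocabulary concrete, the sketch below shows a bare-bones simulated-annealing loop over a single continuous design variable. The objective is an arbitrary placeholder standing in for a process-simulator evaluation (IPSEpro in the paper), and the cooling schedule and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_cost(x):
    """Stub standing in for a process-simulator evaluation of a design variable x."""
    return (x - 2.7) ** 2 + 0.1 * np.sin(8.0 * x)   # placeholder objective to minimize

x = 1.0                        # initial design value, arbitrary
cost = simulate_cost(x)
temperature = 1.0
for _ in range(2000):
    x_new = x + rng.normal(0.0, 0.2)                # random perturbation of the design
    cost_new = simulate_cost(x_new)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if cost_new < cost or rng.random() < np.exp(-(cost_new - cost) / temperature):
        x, cost = x_new, cost_new
    temperature *= 0.995                            # geometric cooling schedule

print(f"annealed design value: {x:.3f}, objective: {cost:.4f}")
```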

  19. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  20. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
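
    The Gauss-Markov estimation step mentioned in both records above amounts to a weighted least-squares adjustment; the sketch below estimates three adjacent room widths from two noisy width observations plus the building footprint length. The geometry, observation values and standard deviations are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Design matrix of the linear observation model l = A x + noise, where
# x = (w1, w2, w3) are unknown room widths (metres).
A = np.array([[1.0, 0.0, 0.0],     # observed width of room 1
              [0.0, 0.0, 1.0],     # observed width of room 3
              [1.0, 1.0, 1.0]])    # observed building footprint length w1 + w2 + w3
l = np.array([3.95, 5.10, 13.00])                           # observations (invented)
P = np.diag([1 / 0.05 ** 2, 1 / 0.05 ** 2, 1 / 0.02 ** 2])  # weights = 1 / sigma^2

# Gauss-Markov (weighted least squares) estimate: x_hat = (A^T P A)^(-1) A^T P l
normal_matrix = A.T @ P @ A
x_hat = np.linalg.solve(normal_matrix, A.T @ P @ l)
print("estimated room widths (m):", np.round(x_hat, 3))
print("residuals (m):", np.round(l - A @ x_hat, 4))
```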

  1. Motor development and sensory processing: A comparative study between preterm and term infants.

    Science.gov (United States)

    Cabral, Thais Invenção; Pereira da Silva, Louise Gracelli; Tudella, Eloisa; Simões Martinez, Cláudia Maria

    2014-10-16

    Infants born preterm and/or with low birth weight may present a clinical condition of organic instability and usually face a long period of hospitalization in Neonatal Intensive Care Units, being exposed to biopsychosocial risk factors for their development due to decreased spontaneous movement and excessive sensory stimuli. This study assumes that there are relationships between the integration of sensory information in preterm infants, their motor development and its subsequent effects. The objective was to evaluate sensory processing and motor development in preterm infants aged 4-6 months and compare performance data with their peers born at term. This was a cross-sectional, comparative study consisting of a group of preterm infants (n=15) and a group of term infants (n=15), assessed using the Test of Sensory Functions in Infants (TSFI) and the Alberta Infant Motor Scale (AIMS). The results showed no significant association between motor performance on the AIMS scale (total score) and sensory processing in the TSFI (total score). However, all infants who scored as abnormal on the total TSFI score, subdomain 1, and subdomain 5 presented motor performance at or below the 5th percentile on the AIMS scale. Since all infants who presented definite alteration in tolerating deep tactile pressure and poor postural control are at risk of delayed gross motor development, there may be peculiarities not detected by the tests used that nevertheless suggest some relationship between sensory processing and motor development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Pain processing in atypical Parkinsonisms and Parkinson disease: A comparative neurophysiological study.

    Science.gov (United States)

    Avenali, Micol; Tassorelli, Cristina; De Icco, Roberto; Perrotta, Armando; Serrao, Mariano; Fresia, Mauro; Pacchetti, Claudio; Sandrini, Giorgio

    2017-10-01

    Pain is a frequent non-motor feature in Parkinsonism, but mechanistic data on the alteration of pain processing are insufficient to understand the possible causes and to define specifically targeted treatments. We investigated spinal nociception through the neurophysiological measure of the threshold (TR) of the nociceptive withdrawal reflex (NWR) and its temporal summation threshold (TST) comparatively in 12 Progressive Supranuclear Palsy (PSP) subjects, 11 Multiple System Atrophy (MSA) patients, 15 Parkinson's disease (PD) subjects and 24 healthy controls (HC). We also investigated the modulatory effect of L-Dopa in these three parkinsonian groups. We found a significant reduction in the TR of the NWR and in the TST of the NWR in PSP, MSA and PD patients compared with HC. L-Dopa induced an increase in the TR of the NWR in the PSP group, while the TST of the NWR increased in both PSP and PD. Our neurophysiological findings identify a facilitation of nociceptive processing in PSP that is broadly similar to that observed in MSA and PD. Specific peculiarities have emerged for PSP. Our data advance the knowledge of the neurophysiology of nociception in the advanced phases of parkinsonian syndromes and of the role of dopaminergic pathways in the control of pain processing. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  3. Balance of Comparative Advantages in the Processed Food Sector of the Danube Countries

    Directory of Open Access Journals (Sweden)

    Svetlana Ignjatijević

    2015-05-01

    Full Text Available In this paper, we investigated the level of competitiveness of the processed food sector of the Danube region countries, in order to show the existence of comparative advantage and the correlation of exports. We used the Balassa index (RCA, revealed comparative advantage) and trade performance indexes (TPI). At first, using the Pearson and Spearman coefficients, we examined the existence of correlations between the processed food sectors of the Danube countries. Then, we applied the Least Significant Difference (LSD) test to further compare the values and answer the question: between which Danube countries are there significant differences? With the study, we found that the distribution of the RCA index in Bosnia and Herzegovina, Hungary, Moldova and Slovenia deviates from normality. We also found the existence of a strong correlation of the RCA index of the Czech Republic with Romania, of Hungary with Moldova and Serbia, of Moldova with Serbia, and of Bulgaria with Ukraine. Finally, we concluded that the development of trade in the countries of the Danube region requires the participation of all relevant interest groups and could play an important role in providing faster economic development, that is, in achieving sustainable development of the countries with sustainable use of the available resources.
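
    The Balassa index referred to above can be computed directly from trade values: RCA = (X_cj / X_c) / (X_rj / X_r), where X_cj is the country's export of product group j, X_c its total exports, and the r terms are the corresponding reference-region figures. The sketch below uses made-up export values purely to show the arithmetic.

```python
# Hypothetical export values (million USD) for one country and a reference region
# (e.g. the Danube countries as a whole); all figures are invented.
country_food_exports = 1_200.0
country_total_exports = 15_000.0
region_food_exports = 18_000.0
region_total_exports = 400_000.0

# Balassa revealed comparative advantage index; RCA > 1 indicates a revealed
# comparative advantage in the product group.
rca = (country_food_exports / country_total_exports) / (
    region_food_exports / region_total_exports
)
print(f"RCA = {rca:.2f} -> {'advantage' if rca > 1 else 'no advantage'}")
```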

  4. Color, TOC and AOX removals from pulp mill effluent by advanced oxidation processes: A comparative study

    International Nuclear Information System (INIS)

    Catalkaya, Ebru Cokay; Kargi, Fikret

    2007-01-01

    Pulp mill effluent containing toxic chemicals was treated by different advanced oxidation processes (AOPs) consisting of treatments by hydrogen peroxide, Fenton's reagent (H2O2/Fe2+), UV, UV/H2O2, photo-Fenton (UV/H2O2/Fe2+), ozonation and peroxone (ozone/H2O2) in laboratory-scale reactors for color, total organic carbon (TOC) and adsorbable organic halogen (AOX) removal from the pulp mill effluent. The effects of some operating parameters, such as the initial pH and the oxidant and catalyst concentrations, on TOC, color and AOX removals were investigated. Almost every method used resulted in some degree of color removal from the pulp mill effluent. However, the Fenton's reagent utilizing H2O2/Fe2+ resulted in the highest color, TOC and AOX removals under acidic conditions when compared with the other AOPs tested. Approximately 88% TOC, 85% color and 89% AOX removals were obtained with the Fenton's reagent at pH 5 within 30 min. The photo-Fenton process yielded comparable TOC (85%), color (82%) and AOX (93%) removals within 5 min due to oxidation by UV light in addition to the Fenton's reagent. The fast oxidation reactions of the photo-Fenton treatment make this approach more favorable compared to the others used.

  5. Application of analytic hierarchy process for measuring and comparing the global performance of intensive care units.

    Science.gov (United States)

    Hariharan, Seetharaman; Dey, Prasanta K; Chen, Deryk R; Moseley, Harley S L; Kumar, Areti Y

    2005-06-01

    To develop a model for the global performance measurement of intensive care units (ICUs) and to apply that model to compare the services for quality improvement. The analytic hierarchy process, a multiple-attribute decision-making technique, is used in this study to develop such a model. The steps consisted of identifying the critical success factors for the best performance of an ICU, identifying the subfactors that influence the critical factors, comparing them pairwise, deriving their relative importance and ratings, and calculating the cumulative performance according to the attributes of a given ICU. Every step in the model was derived by group discussions, brainstorming, and consensus among intensivists. The model was applied to 3 ICUs, 1 each in Barbados, Trinidad, and India, in tertiary care teaching hospitals with similar settings. The cumulative performance rating of the Barbados ICU was 1.17, compared with 0.82 and 0.75 for the Trinidad and Indian ICUs, respectively, showing that the Trinidad and Indian ICUs performed at 70% and 64% of the level of the Barbados ICU. The model also enabled identification of specific areas where the ICUs did not perform well, which helped to improve those areas. The analytic hierarchy process is a very useful model for measuring the global performance of an ICU.
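
    The weight-derivation step of the analytic hierarchy process can be sketched as follows: the principal eigenvector of a reciprocal pairwise-comparison matrix yields the criterion weights, and the consistency ratio checks the coherence of the judgments. The 3x3 matrix below is an invented example on Saaty's 1-9 scale, not the intensivists' actual judgments.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical ICU criteria; values invented.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

# Consistency check (random index RI = 0.58 for a 3x3 matrix, from Saaty's tables).
lambda_max = eigvals.real[k]
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58
print("criterion weights:", np.round(weights, 3))
print(f"consistency ratio: {cr:.3f} (commonly considered acceptable if < 0.1)")
```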

  6. Comparative effects of processing methods on the feeding value of maize in feedlot cattle.

    Science.gov (United States)

    Zinn, R A; Barreras, A; Corona, L; Owens, F N; Plascencia, A

    2011-12-01

    The primary reason for processing maize is to enhance its feeding value. Total tract starch digestion is similar for coarsely processed (dry rolled, cracked) dry maize. Enhancements in starch digestion due to dry rolling maize v. feeding maize whole may be greater in light-weight calves than in yearlings, and when DM intake is restricted (<1.5% of body weight). The net energy for maintenance (NEm) and net energy for gain (NEg) values for whole maize are 8.83 and 6.02 MJ (2.11 and 1.44 Mcal)/kg, respectively. Compared with conventional dry processing (i.e. coarse rolled, cracked), finely processing maize may increase the initial rate of digestion, but does not improve total tract starch digestion. Tempering before rolling (without the addition of steam) may enhance the growth performance response and the NE value of maize. Average total tract starch digestion is similar for high-moisture and steam-flaked maize. However, the proportion of starch digested ruminally is greater (about 8%) for high-moisture maize. The growth performance response of feedlot cattle to the feeding of high-moisture maize is highly variable. Although the NEm and NEg values of whole high-moisture maize were slightly less than those of dry processed maize (averaging 9.04 and 6.44 MJ (2.16 and 1.54 Mcal)/kg, respectively), grinding or rolling high-moisture maize before ensiling increased (6%) its NE value. Substituting steam-flaked maize for dry processed maize increases average daily gain (6.3%) and decreases DM intake (5%). The comparative NEm and NEg values for steam-flaked maize at optimal processing (density = 0.34 kg/l) are 10.04 and 7.07 MJ (2.40 and 1.69 Mcal)/kg, respectively. These NE values are greater (3%) than current tabular values (National Research Council, 2000), being more consistent with earlier standards (National Research Council, 1984). When maize is the primary or sole source of starch in the diet, the concentration of starch in faeces (faecal starch, % of

  7. ENZYMATIC HYDROLYSIS AS AN ENVIRONMENTALLY FRIENDLY PROCESS COMPARED TO THERMAL HYDROLYSIS FOR INSTANT COFFEE PRODUCTION

    Directory of Open Access Journals (Sweden)

    I. J. Baraldi

    Full Text Available Conventional production of instant coffee is based on the solubilisation of polysaccharides present in roasted coffee. Higher process temperatures increase the solubilisation yield, but also lead to carbohydrate degradation and the formation of undesirable volatile compounds. Enzymatic hydrolysis of roasted coffee is an alternative to minimize carbohydrate degradation. In this work, products obtained from the thermal and enzymatic processes were compared in terms of carbohydrate and volatile composition. Roasted coffee was extracted with water at 125 °C, and the spent coffee was processed by thermal (180 °C) or enzymatic hydrolysis. Enzymatic hydrolysis experiments were carried out at 50 °C using the commercial enzyme preparations Powercell (Prozyn), Galactomannanase (HBI-Enzymes), and Ultraflo XL (Novozymes). These formulations were previously selected from eleven different commercial enzyme preparations, and their main enzymatic activities included cellulase, galactomannanase, galactanase, and β-glucanase. The enzymatic hydrolysis yield was 18% (dry basis), similar to the extraction yield at 125 °C (20%), but lower than the thermal hydrolysis yield at 180 °C (28%). Instant coffee produced by enzymatic hydrolysis had a low content of undesirable volatile compounds and 21% (w/w) of total carbohydrates. These results point to the enzymatic process as a feasible alternative for instant coffee production, with benefits including improved quality as well as reduced energy consumption.

  8. A comparative concept analysis of centring vs. opening meditation processes in health care.

    Science.gov (United States)

    Birx, Ellen

    2013-08-01

    To report an analysis and comparison of the concepts centring and opening meditation processes in health care. Centring and opening meditation processes are included in nursing theories and frequently recommended in health care for stress management. These meditation processes are integrated into emerging psychotherapy approaches and there is a rapidly expanding body of neuroscience research distinguishing brain activity associated with different types of meditation. Currently, there is a lack of theoretical and conceptual clarity needed to guide meditation research in health care. A search of healthcare literature between 2006-2011 was conducted using Alt HealthWatch, CINAHL, PsychNET and PubMed databases using the keywords 'centring' and 'opening' alone and in combination with the term 'meditation.' For the concept centring, 10 articles and 11 books and for the concept opening 13 articles and 10 books were included as data sources. Rodgers' evolutionary method of concept analysis was used. Centring and opening are similar in that they both involve awareness in the present moment; both use a gentle, effortless approach; and both have a calming effect. Key differences include centring's focus on the individual's inner experience compared with the non-dual, spacious awareness of opening. Centring and opening are overlapping, yet distinct meditation processes. The term meditation cannot be used in a generic way in health care. The differences between centring and opening have important implications for the further development of unitary-transformative nursing theories. © 2012 Blackwell Publishing Ltd.

  9. Early-Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques.

    Science.gov (United States)

    Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc

    2016-09-08

    Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  10. Comparative study of thermochemical processes for hydrogen production from biomass fuels.

    Science.gov (United States)

    Biagini, Enrico; Masoni, Lorenzo; Tognotti, Leonardo

    2010-08-01

    Different thermochemical configurations (gasification, combustion, electrolysis and syngas separation) are studied for producing hydrogen from biomass fuels. The aim is to provide data for the production unit and the subsequent optimization of the "hydrogen chain" (from energy source selection to hydrogen utilization) in the framework of the Italian project "Filiera Idrogeno". The project focuses on a regional scale (Tuscany, Italy), renewable energies and automotive hydrogen. Decentralized and small production plants are required to solve the logistic problems of biomass supply and to match the limited hydrogen infrastructure. Different options (gasification with air, oxygen or steam/oxygen mixtures, combustion, electrolysis) and conditions (varying the ratios of biomass and gas input) are studied by developing process models with uniform hypotheses to compare the results. The results obtained in this work concern the operating parameters, process efficiencies, and material and energy needs, and are fundamental to optimizing the entire hydrogen chain. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Comparative Study of Powdered Ginger Drink Processed by Different Method:Traditional and using Evaporation Machine

    Science.gov (United States)

    Apriyana, Wuri; Taufika Rosyida, Vita; Nur Hayati, Septi; Darsih, Cici; Dewi Poeloengasih, Crescentiana

    2017-12-01

    Ginger drink is a traditional beverage that has become one of the products of interest to consumers in Indonesia. This drink is believed to have excellent properties for the health of the body. In this study, we compared the moisture content, ash content, metal content and the identified compounds of the product processed with the traditional technique and with an evaporator machine. The results show that both products fulfilled some parameters of the Indonesian National Standard for traditional powdered drinks. GC-MS analysis showed the compounds identified in both products. The major hydrocarbon groups that influence the flavor, such as zingiberene, camphene, beta-phellandrene, beta-sesquiphellandrene, curcumene, and beta-bisabolene, were found at higher levels in the ginger drink powder processed with the machine than in that processed traditionally.

  12. Sensory and Quality Evaluation of Traditional Compared with Power Ultrasound Processed Corn (Zea Mays) Tortilla Chips.

    Science.gov (United States)

    Janve, Bhaskar; Yang, Wade; Sims, Charles

    2015-06-01

    Power ultrasound reduces the traditional corn steeping time from 18 to 1.5 h during tortilla chip dough (masa) processing. This study sought to examine consumer (n = 99) acceptability and quality of tortilla chips made from masa produced by traditional compared with ultrasonic methods. Overall appearance, flavor, and texture acceptability scores were evaluated using a 9-point hedonic scale. The baked chips (process intermediate) before and after frying (finished product) were analyzed using a texture analyzer and machine vision. The texture values were determined using the 3-point bend test, reporting breaking force gradient (BFG), peak breaking force (PBF), and breaking distance (BD). The fracturing properties were determined by the crisp fracture support rig, reporting fracture force gradient (FFG), peak fracture force (PFF), and fracture distance (FD). Machine vision evaluated the total surface area, lightness (L), color difference (ΔE), hue (°h), and chroma (C*). The results were evaluated by analysis of variance and means were separated using Tukey's test. Machine vision values of L and °h were higher (P < 0.05) and ΔE was lower (P < 0.05) for fried chips, and L and °h were significantly (P < 0.05) higher for baked chips produced with ultrasonication as compared to traditional processing. Baked chip texture values for ultrasonication were significantly higher (P < 0.05) in BFG, BPD, PFF, and FD. Fried tortilla chip texture values were significantly higher (P < 0.05) in BFG and PFF for ultrasonication than for traditional processing. However, the instrumental differences were not detected in the sensory analysis, suggesting that power ultrasound is a potential tortilla chip processing aid. © 2015 Institute of Food Technologists®

  13. A Comparative Analysis of Two Software Development Methodologies: Rational Unified Process and Extreme Programming

    Directory of Open Access Journals (Sweden)

    Marcelo Rafael Borth

    2014-01-01

    Full Text Available Software development methodologies were created to meet the great market demand for innovation, productivity, quality and performance. With the use of a methodology, it is possible to reduce the cost, the risk and the development time, and even increase the quality of the final product. This article compares two of these development methodologies: the Rational Unified Process and Extreme Programming. The comparison shows the main differences and similarities between the two approaches, and highlights and comments on some of their predominant features.

  14. On-line process failure diagnosis: The necessity and a comparative review of the methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kim, I.S.

    1991-01-01

    Three basic approaches to process failure management are defined and discussed to elucidate the role of diagnosis in the operation of nuclear power plants. The rationale for the necessity of diagnosis is given from various perspectives. A comparative review of some representative diagnostic methodologies is presented and their shortcomings are discussed. Based on the insights from the review, the desirable characteristics of advanced diagnostic methodologies are derived from the viewpoints of failure detection, diagnosis, and correction. 11 refs.

  15. On-line process failure diagnosis: The necessity and a comparative review of the methodologies

    International Nuclear Information System (INIS)

    Kim, I.S.

    1991-01-01

    Three basic approaches to process failure management are defined and discussed to elucidate the role of diagnosis in the operation of nuclear power plants. The rationale for the necessity of diagnosis is given from various perspectives. A comparative review of some representative diagnostic methodologies is presented and their shortcomings are discussed. Based on the insights from the review, the desirable characteristics of advanced diagnostic methodologies are derived from the viewpoints of failure detection, diagnosis, and correction. 11 refs

  16. From arrest to sentencing: A comparative analysis of the criminal justice system processing for rape crimes

    Directory of Open Access Journals (Sweden)

    Joana Domingues Vargas

    2008-01-01

    Full Text Available The current article is intended to demonstrate the advantages of prioritizing an analysis of court caseload processing for a given type of crime and proceeding to a comparison of the results obtained from empirical studies in different countries. The article draws on a study I performed on rape cases tried by the court system in Campinas, São Paulo State, and the study by Gary LaFree on rape cases in the United States, based on data in Indianapolis, Indiana. The comparative analysis of determinants of victims' and law enforcement agencies' decisions concerning the pursuit of legal action proved to be productive, even when comparing two different systems of justice. This allowed greater knowledge of how the Brazilian criminal justice system operates, both in its capacity to identify, try, and punish sex offenders, and in terms of the importance it ascribes to formal legal rules in trying rape cases, in comparison to the American criminal justice system.

  17. Comparative Physicochemical Evaluation of Kharekhasak (Tribulus terrestris Linn.) Before and After Mudabbar Process

    Science.gov (United States)

    Tauheed, Abdullah; Hamiduddin; Khanam, Salma; Ali, Mohd Akhtar; Zaigham, Mohammad

    2017-01-01

    Background and Objectives: Mudabbar/Tadbeere advia refers to the processes performed on drugs to detoxify, purify, and enhance therapeutic action and to reduce their doses before making the formulations in Unani medicine. It improves the quality of drugs either by optimizing their desirable characteristics or minimizing the undesirable ones; it makes the drug effective, safe, and specific. There is a need for comparative evaluation to understand its significance. Tadbeer of Kharekhasak (KK) khurd (Tribulus terrestris Linn. fruit) is described by Rabban Al-Tabari in Firdausul Hikmat, Akbar Arzani in Qarabadeene Qadri, etc., during the compounding of aphrodisiac formulations. Mudabbar Kharekhasak (MKK) used in Safoofe Kharekhasak mentioned in Al-Qarabadeene was evaluated in this work. Methods: The Mudabbar/Tadbeer process was carried out by blending fresh KK juice with powdered dry KK and drying the mixture under the sun. The juice used for the process is thrice the weight of the dry KK powder. The KK before and after the process was evaluated using physicochemical tests: powder characterization, extractive value, alcohol- and water-soluble matter, ash value, loss on drying (LOD) at 105°C, pH, high-performance thin layer chromatography (HPTLC) fingerprinting, and diosgenin content. Results: Powder characteristics were established. An increase in successive and nonsuccessive extractive values in various solvents, water/alcohol-soluble content, total ash, acid-insoluble ash, water-soluble ash, and sulfated ash of MKK was noted in comparison with KK. A decrease in LOD at 105°C and pH of MKK powder was observed. HPTLC fingerprinting data were developed for identification and evaluation. The diosgenin content increased to 432.1 g/g in MKK as compared to 144.5 g/g in KK, suggesting a significant increase in saponin content. Conclusion: The data obtained clearly indicated changes in MKK validating the classical Mudabbar process, probably to enhance/modify the action of the drug. Standards for crude

  18. Comparative Physicochemical Evaluation of Kharekhasak (Tribulus terrestris Linn.) Before and After Mudabbar Process.

    Science.gov (United States)

    Tauheed, Abdullah; Hamiduddin; Khanam, Salma; Ali, Mohd Akhtar; Zaigham, Mohammad

    2017-01-01

    Mudabbar/Tadbeere advia refers to the processes performed on drugs to detoxify, purify, and enhance therapeutic action and to reduce their doses before making the formulations in Unani medicine. It improves the quality of drugs either by optimizing their desirable characteristics or minimizing the undesirable ones; it makes the drug effective, safe, and specific. There is a need for comparative evaluation to understand its significance. Tadbeer of Kharekhasak (KK) khurd (Tribulus terrestris Linn. fruit) is described by Rabban Al-Tabari in Firdausul Hikmat, Akbar Arzani in Qarabadeene Qadri, etc., during the compounding of aphrodisiac formulations. Mudabbar Kharekhasak (MKK) used in Safoofe Kharekhasak mentioned in Al-Qarabadeene was evaluated in this work. The Mudabbar/Tadbeer process was carried out by blending fresh KK juice with powdered dry KK and drying the mixture under the sun. The juice used for the process is thrice the weight of the dry KK powder. The KK before and after the process was evaluated using physicochemical tests: powder characterization, extractive value, alcohol- and water-soluble matter, ash value, loss on drying (LOD) at 105°C, pH, high-performance thin layer chromatography (HPTLC) fingerprinting, and diosgenin content. Powder characteristics were established. An increase in successive and nonsuccessive extractive values in various solvents, water/alcohol-soluble content, total ash, acid-insoluble ash, water-soluble ash, and sulfated ash of MKK was noted in comparison with KK. A decrease in LOD at 105°C and pH of MKK powder was observed. HPTLC fingerprinting data were developed for identification and evaluation. The diosgenin content increased to 432.1 g/g in MKK as compared to 144.5 g/g in KK, suggesting a significant increase in saponin content. The data obtained clearly indicated changes in MKK validating the classical Mudabbar process, probably to enhance/modify the action of the drug. Standards for crude and MKK were established for future reference. Mudabbar

  19. Y-TZP ceramic processing from coprecipitated powders: a comparative study with three commercial dental ceramics.

    Science.gov (United States)

    Lazar, Dolores R R; Bottino, Marco C; Ozcan, Mutlu; Valandro, Luiz Felipe; Amaral, Regina; Ussui, Valter; Bressiani, Ana H A

    2008-12-01

    (1) To synthesize 3 mol% yttria-stabilized zirconia (3Y-TZP) powders via a coprecipitation route, (2) to obtain zirconia ceramic specimens and analyze their surface characteristics and mechanical properties, and (3) to compare the processed material with three reinforced dental ceramics. A coprecipitation route was used to synthesize a 3 mol% yttria-stabilized zirconia ceramic processed by uniaxial compaction and pressureless sintering. Commercially available alumina or alumina/zirconia ceramics, namely Procera AllCeram (PA), In-Ceram Zirconia Block (CAZ) and In-Ceram Zirconia (IZ), were chosen for comparison. All specimens (6 mm × 5 mm × 5 mm) were polished and ultrasonically cleaned. Qualitative phase analysis was performed by XRD and apparent densities were measured on the basis of the Archimedes principle. The ceramics were also characterized using SEM, TEM and EDS. Hardness measurements were made employing the Vickers hardness test. Fracture toughness (K(IC)) was calculated. Data were analyzed using one-way analysis of variance (ANOVA) and Tukey's test (alpha = 0.05). ANOVA revealed that the Vickers hardness (p < 0.0001) and fracture toughness (p < 0.0001) were affected by the composition of the ceramic materials. It was confirmed that the PA ceramic was constituted of a rhombohedral alumina matrix, so-called alpha-alumina. Both CAZ and IZ ceramics presented a mixture of tetragonal zirconia and alpha-alumina phases. The SEM/EDS analysis confirmed the presence of aluminum in the PA ceramic. In the IZ and CAZ ceramics, grains containing aluminum, zirconium and cerium surrounded by a second phase containing aluminum, silicon and lanthanum were identified. PA showed significantly higher mean Vickers hardness values (H(V)) (18.4 ± 0.5 GPa) compared to the vitreous CAZ (10.3 ± 0.2 GPa) and IZ (10.6 ± 0.4 GPa) ceramics. Experimental Y-TZP showed significantly lower results than the other monophased ceramic (PA) (p < 0.05) but it showed significantly higher fracture toughness (6.0 ± 0.2 MPa·m(1/2)) values when compared to the

  20. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  1. Mental health policy process: a comparative study of Ghana, South Africa, Uganda and Zambia

    Directory of Open Access Journals (Sweden)

    Kigozi Fred

    2010-08-01

    Full Text Available Abstract Background Mental illnesses are increasingly recognised as a leading cause of disability worldwide, yet many countries lack a mental health policy or have an outdated, inappropriate policy. This paper explores the development of appropriate mental health policies and their effective implementation. It reports comparative findings on the processes for developing and implementing mental health policies in Ghana, South Africa, Uganda and Zambia as part of the Mental Health and Poverty Project. Methods The study countries and respondents were purposively selected to represent different levels of mental health policy and system development to allow comparative analysis of the factors underlying the different forms of mental health policy development and implementation. Data were collected using semi-structured interviews and document analysis. Data analysis was guided by a conceptual framework developed for this purpose. A framework approach to analysis was used, incorporating themes that emerged from the data and from the conceptual framework. Results Mental health policies in Ghana, South Africa, Uganda and Zambia are weak, in draft form or non-existent. Mental health remained low on the policy agenda due to stigma and a lack of information, as well as low prioritisation by donors, low political priority and low grassroots demand. Progress with mental health policy development varied and respondents noted a lack of consultation and insufficient evidence to inform policy development. Furthermore, policies were poorly implemented, due to factors including insufficient dissemination and operationalisation of policies and a lack of resources. Conclusions Mental health policy processes in all four countries were inadequate, leading to either weak or non-existent policies, with an impact on mental health services. Recommendations are provided to strengthen mental health policy processes in these and other African countries.

  2. Comparative biology approaches for charged particle exposures and cancer development processes

    Science.gov (United States)

    Kronenberg, Amy; Gauny, Stacey; Kwoh, Ely; Sudo, Hiroko; Wiese, Claudia; Dan, Cristian; Turker, Mitchell

    Comparative biology studies can provide useful information for the extrapolation of results between cells in culture and the more complex environment of the tissue. In other circumstances, they provide a method to guide the interpretation of results obtained for cells from different species. We have considered several key cancer development processes following charged particle exposures using comparative biology approaches. Our particular emphases have been mutagenesis and genomic instability. Carcinogenesis requires the accumulation of mutations and most of these mutations occur on autosomes. Two loci provide the greatest avenue for the consideration of charged particle-induced mutation involving autosomes: the TK1 locus in human cells and the APRT locus in mouse cells. Each locus can provide information on a wide variety of mutational changes, from small intragenic mutations through multilocus deletions and extensive tracts of mitotic recombination. In addition, the mouse model can provide a direct measurement of chromosome loss which cannot be accomplished in the human cell system. Another feature of the mouse APRT model is the ability to compare effects for cells exposed in vitro with those obtained for cells exposed in situ. We will provide a comparison of the results obtained for the TK1 locus following 1 GeV/amu Fe ion exposures to human lymphoid cells with those obtained for the APRT locus for mouse kidney epithelial cells (in vitro or in situ). Substantial conservation of mechanisms is found amongst these three exposure scenarios, with some differences attributable to the specific conditions of exposure. A similar approach will be applied to the consideration of proton-induced autosomal mutations in the three model systems. A comparison of the results obtained for Fe ions vs. protons in each case will highlight LET-specific differences in response. Another cancer development process that is receiving considerable interest is genomic instability. We

  3. Comparative proteome and transcriptome analysis of lager brewer's yeast in the autolysis process.

    Science.gov (United States)

    Xu, Weina; Wang, Jinjing; Li, Qi

    2014-12-01

    The autolysis of brewer's yeast during beer production has a significant effect on the quality of the final product. In this work, we performed proteome and transcriptome studies on brewer's yeast to examine changes in protein and mRNA levels in the process of autolysis. Protein and RNA samples of the strain Qing2 at two different autolysis stages were obtained for further study. In all, 49 kinds of proteins were considered to be involved in the autolysis response, eight of which were up-regulated and 41 down-regulated. Seven new kinds of proteins emerged during autolysis. Results of comparative analyses showed that important changes had taken place as an adaptive response to autolysis. Functional analysis showed that carbohydrate and energy metabolism, cellular amino acid metabolic processes, cell response to various stresses (such as oxidative stress, salt stress, and osmotic stress), translation and transcription were repressed by the down-regulation of the corresponding proteins, and starvation and DNA damage responses could be induced. The comparison of data on transcriptomes with proteomes demonstrated that most autolysis-response proteins as well as new proteins showed a general correlation between mRNA and protein levels. Thus these proteins were thought to be transcriptionally regulated. These findings provide important information about how brewer's yeast acts to cope with autolysis at molecular levels, which might enhance global understanding of the autolysis process. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  4. A Systematic Survey Instrument Translation Process for Multi-Country, Comparative Health Workforce Studies

    Science.gov (United States)

    Squires, Allison; Aiken, Linda H.; van den Heede, Koen; Sermeus, Walter; Bruyneel, Luk; Lindqvist, Rikard; Schoonoven, Lisette; Stromseng, Ingeborg; Busse, Reinhard; Brozstek, Tomas; Ensio, Anneli; Moreno-Casbas, Mayte; Rafferty, Anne Marie; Schubert, Maria; Zikos, Dimitris

    2012-01-01

    Background As health services research (HSR) expands across the globe, researchers will adopt health services and health worker evaluation instruments developed in one country for use in another. This paper explores the cross-cultural methodological challenges involved in translating HSR in the language and context of different health systems. Objectives To describe the pre-data collection systematic translation process used in a twelve country, eleven language nursing workforce survey. Design & Settings We illustrate the potential advantages of Content Validity Indexing (CVI) techniques to validate a nursing workforce survey developed for RN4CAST, a twelve country (Belgium, England, Finland, Germany, Greece, Ireland, Netherlands, Norway, Poland, Spain, Sweden, and Switzerland), eleven language (with modifications for regional dialects, including Dutch, English, Finnish, French, German, Greek, Italian, Norwegian, Polish, Spanish, and Swedish), comparative nursing workforce study in Europe. Participants Expert review panels comprised of practicing nurses from twelve European countries who evaluated cross-cultural relevance, including translation, of a nursing workforce survey instrument developed by experts in the field. Methods The method described in this paper used Content Validity Indexing (CVI) techniques with chance correction and provides researchers with a systematic approach for standardizing language translation processes while simultaneously evaluating the cross-cultural applicability of a survey instrument in the new context. Results The cross-cultural evaluation process produced CVI scores for the instrument ranging from .61 to .95. The process successfully identified potentially problematic survey items and errors with translation. Conclusions The translation approach described here may help researchers reduce threats to data validity and improve instrument reliability in multinational health services research studies involving comparisons across

  5. A systematic survey instrument translation process for multi-country, comparative health workforce studies.

    Science.gov (United States)

    Squires, Allison; Aiken, Linda H; van den Heede, Koen; Sermeus, Walter; Bruyneel, Luk; Lindqvist, Rikard; Schoonhoven, Lisette; Stromseng, Ingeborg; Busse, Reinhard; Brzostek, Tomasz; Ensio, Anneli; Moreno-Casbas, Mayte; Rafferty, Anne Marie; Schubert, Maria; Zikos, Dimitris; Matthews, Anne

    2013-02-01

    As health services research (HSR) expands across the globe, researchers will adopt health services and health worker evaluation instruments developed in one country for use in another. This paper explores the cross-cultural methodological challenges involved in translating HSR in the language and context of different health systems. To describe the pre-data collection systematic translation process used in a twelve country, eleven language nursing workforce survey. We illustrate the potential advantages of Content Validity Indexing (CVI) techniques to validate a nursing workforce survey developed for RN4CAST, a twelve country (Belgium, England, Finland, Germany, Greece, Ireland, Netherlands, Norway, Poland, Spain, Sweden, and Switzerland), eleven language (with modifications for regional dialects, including Dutch, English, Finnish, French, German, Greek, Italian, Norwegian, Polish, Spanish, and Swedish), comparative nursing workforce study in Europe. Expert review panels comprised of practicing nurses from twelve European countries who evaluated cross-cultural relevance, including translation, of a nursing workforce survey instrument developed by experts in the field. The method described in this paper used Content Validity Indexing (CVI) techniques with chance correction and provides researchers with a systematic approach for standardizing language translation processes while simultaneously evaluating the cross-cultural applicability of a survey instrument in the new context. The cross-cultural evaluation process produced CVI scores for the instrument ranging from .61 to .95. The process successfully identified potentially problematic survey items and errors with translation. The translation approach described here may help researchers reduce threats to data validity and improve instrument reliability in multinational health services research studies involving comparisons across health systems and language translation. Copyright © 2012 Elsevier Ltd. All rights

  6. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
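
    As a rough illustration of the kind of models being compared in this record, the sketch below (in Python, my own construction rather than the authors' code) simulates a two-dimensional correlated random walk and summarises a track with the metrics named above: translational speed, turn speed and meandering index. All parameter values are illustrative assumptions.

    import numpy as np

    def simulate_crw(n_steps=200, dt=30.0, mean_speed=5.0, speed_sd=1.5,
                     turn_sd=0.6, seed=None):
        """Correlated random walk: each step keeps the previous heading,
        perturbed by a Gaussian turn angle (radians); speeds are Gaussian."""
        rng = np.random.default_rng(seed)
        heading = rng.uniform(0.0, 2.0 * np.pi)
        pos = np.zeros((n_steps + 1, 2))
        for i in range(n_steps):
            heading += rng.normal(0.0, turn_sd)            # persistence via small turns
            speed = max(rng.normal(mean_speed, speed_sd), 0.0)
            pos[i + 1] = pos[i] + speed * dt * np.array([np.cos(heading), np.sin(heading)])
        return pos

    def track_metrics(pos, dt=30.0):
        """Translational speed, mean turn speed and meandering index of one track."""
        steps = np.diff(pos, axis=0)
        step_len = np.linalg.norm(steps, axis=1)
        headings = np.arctan2(steps[:, 1], steps[:, 0])
        turns = np.angle(np.exp(1j * np.diff(headings)))   # wrap angles to (-pi, pi]
        path_len = step_len.sum()
        net_disp = np.linalg.norm(pos[-1] - pos[0])
        return {
            "translational_speed": step_len.mean() / dt,
            "turn_speed": np.abs(turns).mean() / dt,
            "meandering_index": net_disp / path_len if path_len > 0 else 0.0,
        }

    if __name__ == "__main__":
        track = simulate_crw(seed=1)
        print(track_metrics(track))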

  7. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This
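
    The general workflow described in this record (fitting candidate model parameters to noisy data with a population-based optimizer, then ranking the candidates with the Akaike Information Criterion) can be sketched as follows. This is not the authors' Swarm-based Chemical Reaction Optimization; scipy's differential_evolution is used as a stand-in optimizer and the data are synthetic.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 40)
    y_obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)   # noisy synthetic data

    # Candidate models: prediction function plus parameter bounds
    models = {
        "exponential": (lambda t, p: p[0] * np.exp(-p[1] * t), [(0, 5), (0, 2)]),
        "linear":      (lambda t, p: p[0] + p[1] * t,          [(-5, 5), (-2, 2)]),
    }

    def aic(rss, n, k):
        # Gaussian-likelihood AIC up to a constant: n*ln(RSS/n) + 2k
        return n * np.log(rss / n) + 2 * k

    results = {}
    for name, (f, bounds) in models.items():
        sse = lambda p: np.sum((y_obs - f(t, p)) ** 2)   # residual sum of squares
        fit = differential_evolution(sse, bounds, seed=1)
        results[name] = (fit.x, aic(fit.fun, t.size, len(bounds)))

    best = min(results, key=lambda m: results[m][1])
    for name, (params, score) in results.items():
        print(f"{name:12s} params={np.round(params, 3)} AIC={score:.1f}")
    print("selected model:", best)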

  8. An improved swarm optimization for parameter estimation and biological model selection.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete

  9. Virtually simulated social pressure influences early visual processing more in low compared to high autonomous participants.

    Science.gov (United States)

    Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried

    2014-02-01

    In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100  ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA participants. Copyright © 2013 Society for Psychophysiological Research.

  10. Composting on Mars or the Moon: I. Comparative evaluation of process design alternatives

    Science.gov (United States)

    Finstein, M. S.; Strom, P. F.; Hogan, J. A.; Cowan, R. M.; Janes, H. W. (Principal Investigator)

    1999-01-01

    As a candidate technology for treating solid wastes and recovering resources in bioregenerative Advanced Life Support, composting potentially offers such advantages as compactness, low mass, near ambient reactor temperatures and pressures, reliability, flexibility, simplicity, and forgiveness of operational error or neglect. Importantly, the interactions among the physical, chemical, and biological factors that govern composting system behavior are well understood. This article comparatively evaluates five Generic Systems that describe the basic alternatives to composting facility design and control. These are: 1) passive aeration; 2) passive aeration abetted by mechanical agitation; 3) forced aeration--O2 feedback control; 4) forced aeration--temperature feedback control; 5) forced aeration--integrated O2 and temperature feedback control. Each of the five has a distinctive pattern of behavior and process performance characteristics. Only Systems 4 and 5 are judged to be viable candidates for ALS on alien worlds, though which is better suited in this application is yet to be determined.
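
    The control logic behind "System 4" (forced aeration switched by temperature feedback) amounts to turning the blower on when the compost matrix overheats and off once it has cooled below a setpoint band. The toy sketch below illustrates that idea only; the setpoint, hysteresis and readings are arbitrary assumptions, not values from the paper.

    def temperature_feedback_aeration(temps_c, setpoint_c=60.0, hysteresis_c=2.0):
        """Return blower on/off decisions for a series of matrix temperature
        readings, using simple hysteresis around the setpoint."""
        decisions, blower_on = [], False
        for temp in temps_c:
            if temp >= setpoint_c:                    # too hot: remove heat by forced aeration
                blower_on = True
            elif temp <= setpoint_c - hysteresis_c:   # cooled below the band: stop blowing
                blower_on = False
            decisions.append(blower_on)
        return decisions

    if __name__ == "__main__":
        readings = [52, 55, 58, 61, 63, 60, 57, 59, 62]   # hypothetical readings, °C
        print(temperature_feedback_aeration(readings))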

  11. Comparing mesophilic and thermophilic anaerobic digestion of chicken manure: Microbial community dynamics and process resilience

    International Nuclear Information System (INIS)

    Niu, Qigui; Takemura, Yasuyuki; Kubota, Kengo; Li, Yu-You

    2015-01-01

    Highlights: • Microbial community dynamics and process functional resilience were investigated. • The TAN threshold of the mesophilic reactor was higher than that of the thermophilic reactor. • The recoverable archaeal community dynamics sustained the process resilience. • Methanosarcina was more sensitive than Methanoculleus to ammonia inhibition. • TAN and FA clearly affected the dynamics of hydrolytic and acidogenic bacteria. - Abstract: While methane fermentation is considered the most successful bioenergy treatment for chicken manure, the relationship between operational performance and the dynamic transition of archaeal and bacterial communities remains poorly understood. Two continuous stirred-tank reactors were investigated under thermophilic and mesophilic conditions, fed with 10% TS. The tolerance of the thermophilic reactor to total ammonia nitrogen (TAN) was found to be 8000 mg/L with free ammonia (FA) of 2000 mg/L, compared to 16,000 mg/L (FA 1500 mg/L) for the mesophilic reactor. Biomethane production was 0.29 L/gVS in the steady stage and decreased as TAN increased. After serious inhibition, the mesophilic reactor was recovered successfully by a dilution and washing strategy, whereas the thermophilic reactor could not be recovered. The relationship between the microbial community structure, the bioreactor performance and inhibitors such as TAN, FA, and volatile fatty acids was evaluated by canonical correspondence analysis. The methanogenic activity and substrate removal efficiency changed significantly, correlating with the community evenness and phylogenetic structure. A resilient archaeal community was found even after serious inhibition in both reactors. Obvious dynamics of bacterial communities were observed in acidogenic and hydrolytic functional bacteria following TAN variation in the different stages

  12. Processing of vocalizations in humans and monkeys: A comparative fMRI study

    International Nuclear Information System (INIS)

    Joly, Olivier; Orban, Guy A.; Pallier, Christophe; Ramus, Franck; Pressnitzer, Daniel; Vanduffel, Wim

    2012-01-01

    Humans and many other animals use acoustical signals to mediate social interactions with conspecifics. The evolution of sound-based communication is still poorly understood and its neural correlates have only recently begun to be investigated. In the present study, we applied functional MRI to humans and macaque monkeys listening to identical stimuli in order to compare the cortical networks involved in the processing of vocalizations. At the first stages of auditory processing, both species showed similar fMRI activity maps within and around the lateral sulcus (the Sylvian fissure in humans). Monkeys showed remarkably similar responses to monkey calls and to human vocal sounds (speech or otherwise), mainly in the lateral sulcus and the adjacent superior temporal gyrus (STG). In contrast, a preference for human vocalizations and especially for speech was observed in the human STG and superior temporal sulcus (STS). The STS and Broca's region were especially responsive to intelligible utterances. The evolution of the language faculty in humans appears to have recruited most of the STS. It may be that in monkeys, a much simpler repertoire of vocalizations requires less involvement of this temporal territory. (authors)

  13. Combined compared to dissociated oral and intestinal sucrose stimuli induce different brain hedonic processes

    Science.gov (United States)

    Clouard, Caroline; Meunier-Salaün, Marie-Christine; Meurice, Paul; Malbert, Charles-Henri; Val-Laillet, David

    2014-01-01

    The characterization of brain networks contributing to the processing of oral and/or intestinal sugar signals in a relevant animal model might help to understand the neural mechanisms related to the control of food intake in humans and suggest potential causes for impaired eating behaviors. This study aimed at comparing the brain responses triggered by oral and/or intestinal sucrose sensing in pigs. Seven animals underwent brain single photon emission computed tomography (99mTc-HMPAO) further to oral stimulation with neutral or sucrose artificial saliva paired with saline or sucrose infusion in the duodenum, the proximal part of the intestine. Oral and/or duodenal sucrose sensing induced differential cerebral blood flow changes in brain regions known to be involved in memory, reward processes and hedonic (i.e., pleasure) evaluation of sensory stimuli, including the dorsal striatum, prefrontal cortex, cingulate cortex, insular cortex, hippocampus, and parahippocampal cortex. Sucrose duodenal infusion only and combined sucrose stimulation induced similar activity patterns in the putamen, ventral anterior cingulate cortex and hippocampus. Some brain deactivations in the prefrontal and insular cortices were only detected in the presence of oral sucrose stimulation. Finally, activation of the right insular cortex was only induced by combined oral and duodenal sucrose stimulation, while specific activity patterns were detected in the hippocampus and parahippocampal cortex with oral sucrose dissociated from caloric load. This study sheds new light on the brain hedonic responses to sugar and has potential implications to unravel the neuropsychological mechanisms underlying food pleasure and motivation. PMID:25147536

  14. Comprehensive comparative genomic and transcriptomic analyses of the legume genes controlling the nodulation process

    Directory of Open Access Journals (Sweden)

    Zhenzhen eQiao

    2016-01-01

    Full Text Available Nitrogen is one of the most essential plant nutrients and one of the major factors limiting crop productivity. Having the goal to perform a more sustainable agriculture, there is a need to maximize biological nitrogen fixation, a feature of legumes. To enhance our understanding of the molecular mechanisms controlling the interaction between legumes and rhizobia, the symbiotic partner fixing and assimilating the atmospheric nitrogen for the plant, researchers took advantage of genetic and genomic resources developed across different legume models (e.g. Medicago truncatula, Lotus japonicus, Glycine max and Phaseolus vulgaris) to identify key regulatory genes of the nodulation process. In this study, we are presenting the results of a comprehensive comparative genomic analysis to highlight orthologous and paralogous relationships between the legume genes controlling nodulation. Mining large transcriptomic datasets, we also identified several orthologous and paralogous genes characterized by the induction of their expression during nodulation across legume plant species. This comprehensive study prompts new insights into the evolution of the nodulation process in legume plants and will benefit the scientific community interested in the transfer of functional genomic information between species.

  15. Man Versus Machine: Comparing Double Data Entry and Optical Mark Recognition for Processing CAHPS Survey Data.

    Science.gov (United States)

    Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew

    Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation of capital equipment, and costs of noncapital equipment. The cost savings attributed to this method were negated by the operational efficiency of OMR. There was a statistically significant difference in arbitration rates between DDE and OMR; however, this statistical difference did not amount to a practical difference.

  16. [Comparative analysis of natural uranium mobility and concentration process in ecosystems of the Pechora river basin].

    Science.gov (United States)

    Rachkova, N G; Shuktomova, I I

    2013-01-01

    Natural uranium mobility and its concentration processes were compared in water ecosystems of the Pechora river basin situated in areas with increased uranium concentrations in rocks and in the zone around a radioactive waste repository. The study investigated the influence of environmental factors on the uranium distribution in water reservoirs. In the studied ecosystems, Fe-bearing compounds are the major sorbents of uranium during migration and concentration. Nitrate ions increase uranium mobility in the ecosystems. The influence of sulfate, phosphate and carbonate complexation on the uranium distribution between water and bottom sediments was not pronounced in the ecosystems with high natural radioactivity, but was significant for the radioactively contaminated water reservoirs. Uranium geochemical mobility is higher in contaminated water ecosystems. The uranium content in the water from this area substantially exceeds the background value for the region and the toxicity limits for hydrophytes. Comparison of the current data with earlier data shows that the uranium concentration in the water has decreased, while its specific activity in sediments has increased. The uranium concentration in dry hygrophyte biomass has not changed.

  17. Funding Decisions for Newborn Screening: A Comparative Review of 22 Decision Processes in Europe

    Directory of Open Access Journals (Sweden)

    Katharina Elisabeth Fischer

    2014-05-01

    Full Text Available Decision-makers need to make choices to improve public health. Population-based newborn screening (NBS) is considered as one strategy to prevent adverse health outcomes and address rare disease patients' needs. The aim of this study was to describe key characteristics of decisions for funding new NBS programmes in Europe. We analysed past decisions using a conceptual framework. It incorporates indicators that capture the steps of decision processes by health care payers. Based on an internet survey, we compared 22 decisions for which answers from two respondents were validated for each observation. The frequencies of indicators were calculated to elicit key characteristics. All decisions resulted in positive, mostly unrestricted funding. Stakeholder participation was diverse, focusing on information provision or voting. Often, decisions were not fully transparent. Assessment of NBS technologies concentrated on expert opinion, literature review and rough cost estimates. The most important appraisal criteria were effectiveness (i.e., health gain from testing for the children being screened), disease severity and availability of treatments. Some common and diverging key characteristics were identified. Although no evidence of explicit healthcare rationing was found, processes may be improved with respect to transparency and the scientific rigour of assessment.

  18. Comprehensive Comparative Genomic and Transcriptomic Analyses of the Legume Genes Controlling the Nodulation Process.

    Science.gov (United States)

    Qiao, Zhenzhen; Pingault, Lise; Nourbakhsh-Rey, Mehrnoush; Libault, Marc

    2016-01-01

    Nitrogen is one of the most essential plant nutrients and one of the major factors limiting crop productivity. Having the goal to perform a more sustainable agriculture, there is a need to maximize biological nitrogen fixation, a feature of legumes. To enhance our understanding of the molecular mechanisms controlling the interaction between legumes and rhizobia, the symbiotic partner fixing and assimilating the atmospheric nitrogen for the plant, researchers took advantage of genetic and genomic resources developed across different legume models (e.g., Medicago truncatula, Lotus japonicus, Glycine max, and Phaseolus vulgaris) to identify key regulatory protein coding genes of the nodulation process. In this study, we are presenting the results of a comprehensive comparative genomic analysis to highlight orthologous and paralogous relationships between the legume genes controlling nodulation. Mining large transcriptomic datasets, we also identified several orthologous and paralogous genes characterized by the induction of their expression during nodulation across legume plant species. This comprehensive study prompts new insights into the evolution of the nodulation process in legume plant and will benefit the scientific community interested in the transfer of functional genomic information between species.

  19. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    Full Text Available This paper aims to show the importance of using financial metrics in the selection of credit scoring models. To achieve this, we considered an automatic approval system approach and carried out a performance analysis of the financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed based on four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: Credit risk; Model selection; Statistical learning.
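
    To make the contrast between classification accuracy and financial metrics concrete, the sketch below computes total accuracy, a simple asymmetric error cost and a Sharpe-like ratio for the portfolio of approved loans. The cost weights, margin, loss given default and mock scores are hypothetical assumptions, not the paper's exact definitions.

    import numpy as np

    def portfolio_metrics(y_true, y_prob, threshold=0.5,
                          loss_given_default=0.6, margin=0.08, risk_free=0.02):
        y_pred_bad = y_prob >= threshold           # predicted defaulters are rejected
        approved = ~y_pred_bad
        accuracy = np.mean(y_pred_bad == y_true)   # plain classification accuracy

        # Asymmetric error cost: approving a bad loan costs more than rejecting a good one
        false_approve = approved & (y_true == 1)
        false_reject = y_pred_bad & (y_true == 0)
        error_cost = 5.0 * false_approve.sum() + 1.0 * false_reject.sum()

        # Per-loan returns of the approved portfolio: margin if repaid, loss if default
        returns = np.where(y_true[approved] == 1, -loss_given_default, margin)
        sharpe = (returns.mean() - risk_free) / returns.std() if returns.size > 1 else np.nan
        return {"total_accuracy": accuracy, "error_cost": error_cost, "sharpe": sharpe}

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        y_true = rng.binomial(1, 0.3, 500)                                 # 1 = default
        y_prob = np.clip(0.3 * y_true + rng.normal(0.3, 0.2, 500), 0, 1)   # mock scores
        print(portfolio_metrics(y_true, y_prob))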

  20. Comparative evaluation of different wavelet thresholding methods for neural signal processing.

    Science.gov (United States)

    Barabino, Gianluca; Baldazzi, Giulia; Sulas, Eleonora; Carboni, Caterina; Raffo, Luigi; Pani, Danilo

    2017-07-01

    Neural signal decoding is the basis for the development of neuroprosthetic devices and systems. Depending on the part of the nervous system these signals are picked up from, different signal-to-noise ratios (SNR) can be experienced. Wavelet denoising is often adopted due to its capability of reducing, to some extent, the noise falling within the signal spectrum. Several variables influence the denoising quality, but usually the focus is on the selection of the best performing mother wavelet. However, the threshold definition and the way it is applied to the signal have a significant impact on the denoising quality, determining the amount of noise removed and the distortion introduced into the signal. This work presents a comparative analysis of different threshold definitions and thresholding mechanisms on neural signals, some widely adopted for neural signal processing and some not. In order to evaluate the quality of the denoising in terms of the introduced distortion, which is important when decoding is implemented through spike-sorting algorithms, a synthetic dataset built on real action potentials was used, creating signals with different SNR and characterized by additive white Gaussian noise (AWGN). The obtained results reveal the superiority of an approach, originally conceived for noisy non-linear time series, over the more typical ones. When compared to the original signal, a correlation above 0.9 was obtained, while in terms of root mean square error (RMSE) an improvement of 13% and 33% was reported with respect to the Minimax and Universal thresholds respectively.
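
    One of the baselines compared in this record, the Universal (VisuShrink) threshold with soft thresholding, can be sketched with PyWavelets as below. The synthetic spike-like signal, noise level and wavelet choice are illustrative assumptions; the evaluation mirrors the correlation and RMSE criteria mentioned above.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 4096
    clean = np.zeros(n)
    for start in rng.integers(100, n - 100, 30):         # crude action-potential-like bumps
        clean[start:start + 20] += np.hanning(20)
    noisy = clean + rng.normal(0, 0.2, n)                # additive white Gaussian noise

    def universal_soft_denoise(x, wavelet="db4", level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from finest scale
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))      # Universal (VisuShrink) threshold
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[: len(x)]

    den = universal_soft_denoise(noisy)
    rmse = np.sqrt(np.mean((den - clean) ** 2))
    corr = np.corrcoef(den, clean)[0, 1]
    print(f"RMSE={rmse:.4f}  correlation={corr:.3f}")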

  1. Neural processes of vocal social perception: Dog-human comparative fMRI studies.

    Science.gov (United States)

    Andics, Attila; Miklósi, Ádám

    2018-02-01

    In this review we focus on the exciting new opportunities in comparative neuroscience to study neural processes of vocal social perception by comparing dog and human neural activity using fMRI methods. The dog is a relatively new addition to this research area; however, it has a large potential to become a standard species in such investigations. Although there has been great interest in the emergence of human language abilities, in case of fMRI methods, most research to date focused on homologue comparisons within Primates. By belonging to a very different clade of mammalian evolution, dogs could give such research agendas a more general mammalian foundation. In addition, broadening the scope of investigations into vocal communication in general can also deepen our understanding of human vocal skills. Being selected for and living in an anthropogenic environment, research with dogs may also be informative about the way in which human non-linguistic and linguistic signals are represented in a mammalian brain without skills for language production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Convergence on Self - Generated vs. Crowdsourced Ideas in Crisis Response: Comparing Social Exchange Processes and Satisfaction with Process

    DEFF Research Database (Denmark)

    Seeber, Isabella; Merz, Alexander B.; Maier, Ronald

    2017-01-01

    engage in social exchange processes to converge on a few promising ideas. Traditionally, teams work on self-generated ideas. However, in a crowdsourcing scenario, such as public participation in crisis response, teams may have to process crowd-generated ideas. To better understand this new practice......, it is important to investigate how converging on crowdsourced ideas affects the social exchange processes of teams and resulting outcomes. We conducted a laboratory experiment in which small teams working in a crisis response setting converged on self-generated or crowdsourced ideas in an emergency response...

  3. Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  4. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    ... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies. The... COMMISSION Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is...

  5. Model Selection and Risk Estimation with Applications to Nonlinear Ordinary Differential Equation Systems

    DEFF Research Database (Denmark)

    Mikkelsen, Frederik Vissing

    Broadly speaking, this thesis is devoted to model selection applied to ordinary differential equations and risk estimation under model selection. A model selection framework was developed for modelling time course data by ordinary differential equations. The framework is accompanied by the R software...... effective computational tools for estimating unknown structures in dynamical systems, such as gene regulatory networks, which may be used to predict downstream effects of interventions in the system. A recommended algorithm based on the computational tools is presented and thoroughly tested in various...... simulation studies and applications. The second part of the thesis also concerns model selection, but focuses on risk estimation, i.e., estimating the error of mean estimators involving model selection. An extension of Stein's unbiased risk estimate (SURE), which applies to a class of estimators with model
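
    For readers unfamiliar with SURE itself (the classical estimate that this thesis extends), the sketch below computes Stein's unbiased risk estimate for a soft-thresholding estimator of a Gaussian mean vector and uses it to pick a threshold. The sparse test problem and threshold grid are illustrative assumptions, not material from the thesis.

    import numpy as np

    def sure_soft_threshold(y, lam, sigma=1.0):
        """SURE for the soft-threshold estimator with threshold lam,
        assuming y_i ~ N(mu_i, sigma^2) independently."""
        n = y.size
        clipped = np.minimum(np.abs(y), lam)
        n_small = np.sum(np.abs(y) <= lam)
        return n * sigma**2 + np.sum(clipped**2) - 2 * sigma**2 * n_small

    rng = np.random.default_rng(42)
    mu = np.concatenate([np.zeros(180), rng.normal(0, 4, 20)])   # sparse true means
    y = mu + rng.normal(0, 1.0, mu.size)

    grid = np.linspace(0.0, 4.0, 81)
    risks = [sure_soft_threshold(y, lam) for lam in grid]
    best = grid[int(np.argmin(risks))]
    true_risk = [np.sum((np.sign(y) * np.maximum(np.abs(y) - lam, 0) - mu) ** 2) for lam in grid]
    print(f"lambda minimising SURE: {best:.2f}")
    print(f"true risk at that lambda: {true_risk[int(np.argmin(risks))]:.1f}, "
          f"minimum true risk: {min(true_risk):.1f}")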

  6. Combined compared to dissociated oral and intestinal sucrose stimuli induce different brain hedonic processes

    Directory of Open Access Journals (Sweden)

    Caroline eClouard

    2014-08-01

    Full Text Available The characterization of brain networks contributing to the processing of oral and/or intestinal sugar signals in a relevant animal model might help to understand the neural mechanisms related to the control of food intake in humans and suggest potential causes for impaired eating behaviors. This study aimed at comparing the brain responses triggered by oral and/or intestinal sucrose sensing in pigs. Seven animals underwent brain single photon emission computed tomography (99mTc-HMPAO) further to oral stimulation with neutral or sucrose artificial saliva paired with saline or sucrose infusion in the duodenum, the proximal part of the intestine. Oral and/or duodenal sucrose sensing induced differential cerebral blood flow (CBF) changes in brain regions known to be involved in memory, reward processes and hedonic (i.e. pleasure) evaluation of sensory stimuli, including the dorsal striatum, prefrontal cortex, cingulate cortex, insular cortex, hippocampus and parahippocampal cortex. Sucrose duodenal infusion only and combined sucrose stimulation induced similar activity patterns in the putamen, ventral anterior cingulate cortex and hippocampus. Some brain deactivations in the prefrontal and insular cortices were only detected in the presence of oral sucrose stimulation. Finally, activation of the right insular cortex was only induced by combined oral and duodenal sucrose stimulation, while specific activity patterns were detected in the hippocampus and parahippocampal cortex with oral sucrose dissociated from caloric load. This study sheds new light on the brain hedonic responses to sugar and has potential implications to unravel the neuropsychological mechanisms underlying food pleasure and motivation.

  7. Life Cycle Assessment (LCA) used to compare two different methods of ripe table olive processing

    Directory of Open Access Journals (Sweden)

    Russo, Carlo

    2010-06-01

    Full Text Available The aim of the present study is to analyze the most common method used for processing ripe table olives: the "California style". Life Cycle Assessment (LCA) was applied to detect the "hot spots" of the system under examination. The LCA results also allowed us to compare the traditional "California style", here called "method A", with another "California style", here called "method B". We were interested in this latter method, because the European Union is considering introducing it into the product specification of the Protected Denomination of Origin (PDO) "La Bella della Daunia". It was also possible to compare the environmental impacts of the two "California style" methods with those of the "Spanish style" method. From the comparison it is clear that "method B" has a greater environmental impact than "method A" because greater amounts of water and electricity are required, whereas "Spanish style" processing has a lower environmental impact than the "California style" methods.


  8. Comparative reliability of structured versus unstructured interviews in the admission process of a residency program.

    Science.gov (United States)

    Blouin, Danielle; Day, Andrew G; Pavlov, Andrey

    2011-12-01

    Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains

  9. Comparing ecohydrological processes in alien vs. native ranges: perspectives from the endangered shrub Myricaria germanica

    Science.gov (United States)

    Michielon, Bruno; Campagnaro, Thomas; Porté, Annabel; Hoyle, Jo; Picco, Lorenzo; Sitzia, Tommaso

    2017-04-01

    Comparing the ecology of woody species in their alien and native ranges may provide interesting insights for theoretical ecology, invasion biology, restoration ecology and forestry. The literature which describes the biological evolution of successful plant invaders is rich and increasing. However, no general theories have been developed about the geomorphic settings which may limit or favour the expansion of alien woody species along rivers. The aim of this contribution is to explore the research opportunities in the comparison of ecohydrological processes occurring in the alien vs. the native ranges of invasive tree and shrub species along the riverine corridor. We use the endangered shrub Myricaria germanica as an example. Myricaria germanica is a Euro-Asiatic pioneer species that, in its native range, develops along natural, wide and dynamic rivers. These conditions are increasingly limited by anthropogenic constraints in most European rivers. This species has been recently introduced in New Zealand, where it is spreading in some natural rivers of the Canterbury region (South Island). We present the current knowledge about the natural and anthropogenic factors influencing this species in its native range. We compare this information with the current knowledge about the same factors influencing M. germanica invasiveness and the invasibility of riparian habitats in New Zealand. We stress the need to identify potential factors which could drive divergence in life traits and growth strategies, and which may hinder the application of existing ecohydrological knowledge from native ranges to alien ranges. Moreover, the pattern of expansion of the alien range of species endangered in their native ranges opens new windows for research.

  10. Comparative energy consumption analyses of an ultra high frequency induction heating system for material processing applications

    Directory of Open Access Journals (Sweden)

    Taştan, Mehmet

    2015-09-01

    Full Text Available This study compares the energy consumption results of Ti-6Al-4V-based material processing under 900 kHz induction heating for different cases. By this means, total power consumption and energy consumption per sample and amount have been analyzed. Experiments have been conducted with a 900 kHz, 2.8 kW ultra-high frequency induction system. Two cases are considered in the study. In the first case, Ti-6Al-4V samples have been heated up to 900 °C with the classical heating method, which is used in industrial applications, and then they have been cooled down by water. Afterwards, the samples have been heated up to 600 °C, 650 °C and 700 °C respectively and a stress relieving process has been applied through natural cooling. During these processes, energy consumptions for each defined process have been measured. In the second case, unlike the first, five different samples have been heated up to various temperatures between 600 °C and 1120 °C and energy consumptions have been measured for these processes. Thereby, the effect of the temperature increase of each sample on the energy cost has been analyzed. It has been seen that, as a result of heating the titanium bulk materials used in the experiment with ultra high frequency induction, a temperature increase also increases the energy consumption. But it has been revealed that the increase rate in the energy consumption is more than the increase rate of the temperature. In this study the energy consumption of processing Ti-6Al-4V by 900 kHz induction is compared. The total power consumed and the energy consumed per sample have been analyzed. The experiments were carried out in an ultra-high frequency induction system at 900 kHz and 2.8 kW. Two cases were considered: in the first, Ti-6Al-4V was heated to 900 °C by the classical method used in industry and cooled in water; subsequently the samples were heated to 600, 650 and 700 °C and

  11. Neural activation in cognitive motor processes: comparing motor imagery and observation of gymnastic movements.

    Science.gov (United States)

    Munzert, Jörn; Zentgraf, Karen; Stark, Rudolf; Vaitl, Dieter

    2008-07-01

    The simulation concept suggested by Jeannerod (Neuroimage 14:S103-S109, 2001) defines the S-states of action observation and mental simulation of action as action-related mental states lacking overt execution. Within this framework, similarities and neural overlap between S-states and overt execution are interpreted as providing the common basis for the motor representations implemented within the motor system. The present brain imaging study compared activation overlap and differential activation during mental simulation (motor imagery) with that while observing gymnastic movements. The fMRI conjunction analysis revealed overlapping activation for both S-states in primary motor cortex, premotor cortex, and the supplementary motor area as well as in the intraparietal sulcus, cerebellar hemispheres, and parts of the basal ganglia. A direct contrast between the motor imagery and observation conditions revealed stronger activation for imagery in the posterior insula and the anterior cingulate gyrus. The hippocampus, the superior parietal lobe, and the cerebellar areas were differentially activated in the observation condition. In general, these data corroborate the concept of action-related S-states because of the high overlap in core motor as well as in motor-related areas. We argue that differential activity between S-states relates to task-specific and modal information processing.

  12. Comparing Two Modes of Instruction in English Passive Structures (Processing and Meaning-Based Output Instruction

    Directory of Open Access Journals (Sweden)

    Asma Dabiri

    2018-04-01

    Full Text Available This research compared the effects of two types of instruction, Processing Instruction (PI) and Meaning-based Output Instruction (MOI), on the interpretation and production of English passive structures. Ninety EFL intermediate tertiary level female students (PI group = 30, MOI group = 30 and control group = 30) participated in this study. The instruments were a proficiency test, a test to assess English passive structures and two instructional materials (PI and MOI). The data were analyzed by running one-way analysis of variance (ANOVA) and mixed between-within ANOVA. The study indicated the effectiveness of PI and MOI on English passive structures. PI showed considerable enhancement on interpretation tasks throughout. This supports the use of PI rather than traditional instruction in which mechanical components are emphasized. Also, PI and MOI had long-term effects on the interpretation and production of English passive sentences. This study supported the use of PI and MOI rather than traditional instruction (TI) in EFL settings. The implication, particularly for classroom teaching, is that successful grammar instruction has to be related to ultimate learning outcomes. Also, creating communicative tasks to offer opportunities for teaching grammar can lead to long-lasting learning effects.

  13. The town in Serbia and Bulgaria: A comparative reading of current processes. Introduction

    Directory of Open Access Journals (Sweden)

    Zlatanović Sanja

    2015-01-01

    Full Text Available The topic of this volume results from The Contemporary City in Serbia and Bulgaria: Processes and Changes, a bilateral project of the Institute of Ethnography of the Serbian Academy of Sciences and Arts and the Institute of Ethnology and Folklore Studies with Ethnographic Museum of the Bulgarian Academy of Sciences (2014-2016). The six papers offer a comparative view of current social processes in two neighbouring Balkan countries, linked by numerous historical and political experiences. Comparative research into societal trends enables a more thorough understanding and monitoring of global processes. In today’s increasingly globalised and glocalised world, towns experience sudden changes and it is in the towns that these changes are most vividly to be seen. The focus of our research is on the dynamism of the contemporary town, on processuality and changes in societal practices. Ana Luleva examines life in the small town of Nessebar in southeast Bulgaria, which has been on the UNESCO World Heritage list since 1983. The protection, management and presentation of Nessebar’s cultural heritage are highly complex issues, further complicated by the problem of collision with the interests of the inhabitants. The author analyses the relations between the various factors: the state administration, municipal authorities and the local population. Here the tourist industry, investment interests, corrupt institutions and civil society all play their part. Ivanka Petrova chose to research Belogradchik, a small town in northwest Bulgaria. Petrova investigates how local social and cultural resources are used in the work of a family tourist enterprise. The author looks for answers to questions such as: how its members identify with the town and its culture and how the work of the enterprise fits into the Belogradchik local context. At the focus of her paper are current societal practices: the local urban economy and the production of images and symbols

  14. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  15. The Peace Processes of Colombia and El Salvador: A Comparative Study

    National Research Council Canada - National Science Library

    Gantiva, Diego

    1997-01-01

    Colombia and El Salvador, two Latin American countries, have developed similar counterinsurgency processes and started similar processes of peace negotiations between the insurgent armies and the forces of order...

  16. Evaluation methodology for comparing memory and communication of analytic processes in visual analytics

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric D [ORNL; Goodall, John R [ORNL

    2014-01-01

    Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.

  17. Organizational Recruitment as a Two-Stage Process: A Comparative Analysis of Detroit and Yokohama.

    Science.gov (United States)

    Marx, Jonathan

    1988-01-01

    After examining recruitment as a dual process in Detroit and Yokohama (Japan), the author states that discrepant findings result in part from different temporal focus in the recruitment process. He advocates study of the entire recruitment process and the need to isolate contingencies that influence employee selection. (Author/CH)

  18. COMPARATIVE STUDY OF THE USE OF ICT IN ENGLISH TEACHING-LEARNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Abbas ZARE-EE

    2010-04-01

    Full Text Available The use of Information Communication Technologies (ICT) in cultural, political, social, economic, and academic activities has recently attracted the attention of many researchers and it should now be an important component of the comparative study of education. The present study was conducted to compare the amount and quality of ICT use in English teaching-learning processes among the faculty members of medical and non-medical universities in Kashan, Iran and to explore the dimensions in which the two groups can benefit from one another and from ICT training in this respect. Out of a total of 255 full-time university teachers teaching at medical and non-medical universities in the region, 193 were chosen to participate in the study using a simple random sampling technique and the Krejcie and Morgan table for sample selection. A researcher-made 5-point Likert scale questionnaire containing 50 items was used to collect the necessary data on the amount of access to and use of ICT in the two environments. The Cronbach's alpha reliability for this instrument was shown to be 0.8. To answer the research questions, t-tests and analysis of variance were used and the differences in ICT use for learning and teaching were analyzed. The results of the analyses showed that there was a significant difference in the amount of ICT use among the faculty members of medical and non-medical universities. For reasons considered at length, teachers at medical universities used ICT significantly less than the other group. Results also indicated that there was a significant difference between the two types of universities with regard to the availability of computers and the amount of ICT training and use. No significant effects on the use of ICT in education were observed for age, teaching experience, and university degree. University teachers with different fields of study showed significant differences only in non-medical universities. Based on the findings of the study

  19. Evaluating the Impacts of Urbanization on Hydrological Processes and Water Resources by Comparing Two Neighboring Basins

    Science.gov (United States)

    Shao, M.; Zhao, G.; Gao, H.

    2017-12-01

    Texas, the fastest growing state in the US, has seen significant land cover/land use change due to urbanization over the past decades. With most of the region being arid/semi-arid, water issues are unprecedentedly pressing. Among the 15 major river basins, two adjacent river basins located in south-central Texas, the San Antonio River Basin (SARB) and the Guadalupe River Basin (GRB), form an ideal testbed for evaluating the impacts of urbanization on both hydrological processes and water resources. These two basins are similar in size and in climate pattern, but differ in terms of urbanization progress. In SARB, where the city of San Antonio is located, the impervious area has increased from 0.6% (1929) to 7.8% (2011). In contrast, there is little land cover change in the GRB. With regard to the underground components, both basins intersect with the Edwards Aquifer (more than 15% of basin area in both cases). The Edwards Aquifer acts as one of the major municipal water supplies for San Antonio, and as the water source for local agricultural uses (and for the surrounding habitat). This aquifer is highly sensitive to changes in surface water conditions, such as a declining groundwater table due to overexploitation. In this study, a distributed hydrologic model, DHSVM (the Distributed Hydrology Soil Vegetation Model), is used to compare the hydrologic characteristics (and their impacts on water resources) over the two basins. With a 200 m spatial resolution, the model is calibrated and validated during the historical period over both basins. The objectives of the comparisons are two-fold: First, the urbanization effects on peak flows are evaluated for selected extreme rainfall events; Second, the Edwards Aquifer recharge rate from surface water under flood and/or drought conditions within the two basins is analyzed. Furthermore, future urbanization scenarios are tested to provide information relevant to decision making.

  20. Pathways to Medical Home Recognition: A Qualitative Comparative Analysis of the PCMH Transformation Process.

    Science.gov (United States)

    Mendel, Peter; Chen, Emily K; Green, Harold D; Armstrong, Courtney; Timbie, Justin W; Kress, Amii M; Friedberg, Mark W; Kahn, Katherine L

    2017-12-15

    To understand the process of practice transformation by identifying pathways for attaining patient-centered medical home (PCMH) recognition. The CMS Federally Qualified Health Center (FQHC) Advanced Primary Care Practice Demonstration was designed to help FQHCs achieve NCQA Level 3 PCMH recognition and improve patient outcomes. We used a stratified random sample of 20 (out of 503) participating sites for this analysis. We developed a conceptual model of structural, cultural, and implementation factors affecting PCMH transformation based on literature and initial qualitative interview themes. We then used conventional cross-case analysis, followed by qualitative comparative analysis (QCA), a cross-case method based on Boolean logic algorithms, to systematically identify pathways (i.e., combinations of factors) associated with attaining (or not attaining) Level 3 recognition. Site-level indicators were derived from semistructured interviews with site leaders at two points in time (mid- and late-implementation) and administrative data collected prior to and during the demonstration period. The QCA results identified five distinct pathways to attaining PCMH recognition and four distinct pathways to not attaining recognition by the end of the demonstration. Across these pathways, one condition (change leader capacity) was common to all pathways for attaining recognition, and another (previous improvement or recognition experience) was absent in all pathways for not attaining recognition. In general, sites could compensate for deficiencies in one factor with capacity in others, but they needed a threshold of strengths in cultural and implementation factors to attain PCMH recognition. Future efforts at primary care transformation should take into account multiple pathways sites may pursue. Sites should be assessed on key cultural and implementation factors, in addition to structural components, in order to differentiate interventions and technical assistance. © Health

  1. Comparative evaluation of short-term leach tests for heavy metal release from mineral processing waste

    Science.gov (United States)

    Al-Abed, S. R.; Hageman, P.L.; Jegadeesan, G.; Madhavan, N.; Allen, D.

    2006-01-01

    Evaluation of metal leaching using a single leach test such as the Toxicity Characteristic Leaching Procedure (TCLP) is often questionable. The pH, redox potential (Eh), particle size and contact time are critical variables in controlling metal stability, not accounted for in the TCLP. This paper compares the leaching behavior of metals in mineral processing waste via short-term extraction tests such as TCLP, Field Leach Test (FLT) used by USGS and deionized water extraction tests. Variation in the extracted amounts was attributed to the use of different particle sizes, extraction fluid and contact time. In the controlled pH experiments, maximum metal extraction was obtained at acidic pH for cationic heavy metals such as Cu, Pb and Zn, while desorption of Se from the waste resulted in high extract concentrations in the alkaline region. Precipitation of iron, caused by a pH increase, probably resulted in co-precipitation and immobilization of Cu, Pb and Zn in the alkaline pH region. A sequential extraction procedure was performed on the original waste and the solid residue from the Eh-pH experiments to determine the chemical speciation and distribution of the heavy metals. In the as-received waste, Cu existed predominantly in water soluble or sulfidic phases, with no binding to carbonates or iron oxides. Similar characteristics were observed for Pb and Zn, while Se existed mostly associated with iron oxides or sulfides. Adsorption/co-precipitation of Cu, Se and Pb on precipitated iron hydroxides was observed in the experimental solid residues, resulting in metal immobilization above pH 7.

  2. Ground-water transport model selection and evaluation guidelines

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1983-01-01

    Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references

  3. Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology.

    Science.gov (United States)

    Halford, G S; Wilson, W H; Phillips, S

    1998-12-01

    Working memory limits are best defined in terms of the complexity of the relations that can be processed in parallel. Complexity is defined as the number of related dimensions or sources of variation. A unary relation has one argument and one source of variation; its argument can be instantiated in only one way at a time. A binary relation has two arguments, two sources of variation, and two instantiations, and so on. Dimensionality is related to the number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Studies of working memory limits suggest that there is a soft limit corresponding to the parallel processing of one quaternary relation. More complex concepts are processed by "segmentation" or "conceptual chunking." In segmentation, tasks are broken into components that do not exceed processing capacity and can be processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Neural net models of relational representations show that relations with more arguments have a higher computational cost, which coincides with experimental findings on higher processing loads in humans. Relational complexity is related to processing load in reasoning and sentence comprehension and can distinguish between the capacities of higher species. The complexity of relations processed by children increases with age. Implications for neural net models and theories of cognition and cognitive development are discussed.

  4. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
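
    As a rough, self-contained illustration of this kind of criterion (and not the author's estimator), the Python sketch below scores candidate embedding dimensions for a toy stochastic series with a k-nearest-neighbour predictive density and keeps the dimension with the lowest mean negative log-predictive likelihood; the AR(2) series, the neighbour count and the train/test split are illustrative assumptions.

```python
# Sketch: choose a delay-embedding dimension by minimizing the negative
# log-predictive likelihood of a simple k-nearest-neighbour predictor.
# The AR(2) toy series, k, and the train/test split are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic series (an AR(2) process plus noise); stands in for observed data.
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.5)

def neg_log_pred_likelihood(series, dim, k=20, train_frac=0.7):
    """Mean negative log-predictive likelihood of a k-NN predictor for dimension `dim`."""
    n_rows = len(series) - dim
    emb = np.column_stack([series[i:i + n_rows] for i in range(dim)])  # delay vectors
    target = series[dim:]                                              # next value
    split = int(train_frac * n_rows)
    tr_e, tr_y = emb[:split], target[:split]
    te_e, te_y = emb[split:], target[split:]
    nll = 0.0
    for e, y in zip(te_e, te_y):
        d = np.linalg.norm(tr_e - e, axis=1)
        nbr = tr_y[np.argsort(d)[:k]]                 # targets of the k nearest neighbours
        mu, sigma = nbr.mean(), max(nbr.std(), 1e-3)  # Gaussian predictive density
        nll += 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)
    return nll / len(te_y)

scores = {p: neg_log_pred_likelihood(x, p) for p in range(1, 6)}
best = min(scores, key=scores.get)
print({p: round(s, 3) for p, s in scores.items()}, "-> selected dimension:", best)
```

    For a genuinely second-order process the score should typically bottom out near dimension 2, which is the sense in which such a criterion selects a predictively optimal embedding.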

  5. A Comparative Case Study of Implementation of Writing as a Process.

    Science.gov (United States)

    Mol, Anne Marie

    Implementation of a new program is a complex process of putting ideas into action. Program implementation can be characterized through the identification of interrelated factors which determine the success or failure of implementation of an innovation. Writing as a process has been perceived as a successful teaching methodology for many years, but…

  6. Extensive separations (CLEAN) processing strategy compared to TRUEX strategy and sludge wash ion exchange

    International Nuclear Information System (INIS)

    Knutson, B.J.; Jansen, G.; Zimmerman, B.D.; Seeman, S.E.; Lauerhass, L.; Hoza, M.

    1994-08-01

    Numerous pretreatment flowsheets have been proposed for processing the radioactive wastes in Hanford's 177 underground storage tanks. The CLEAN Option is examined along with two other flowsheet alternatives to quantify the trade-off of greater capital equipment and operating costs for aggressive separations with the reduced waste disposal costs and decreased environmental/health risks. The effect on the volume of HLW glass product and radiotoxicity of the LLW glass or grout product is predicted with current assumptions about waste characteristics and separations processes using a mass balance model. The prediction is made on three principal processing options: washing of tank wastes with removal of cesium and technetium from the supernatant, with washed solids routed directly to the glass (referred to as the Sludge Wash C processing strategy); the previous steps plus dissolution of the solids and removal of transuranic (TRU) elements, uranium, and strontium using solvent extraction processes (referred to as the Transuranic Extraction Option C (TRUEX-C) processing strategy); and an aggressive yet feasible processing strategy for separating the waste components to meet several main goals or objectives (referred to as the CLEAN Option processing strategy), such as the LLW is required to meet the US Nuclear Regulatory Commission Class A limits; concentrations of technetium, iodine, and uranium are reduced as low as reasonably achievable; and HLW will be contained within 1,000 borosilicate glass canisters that meet current Hanford Waste Vitrification Plant glass specifications

  7. Extensive separations (CLEAN) processing strategy compared to TRUEX strategy and sludge wash ion exchange

    Energy Technology Data Exchange (ETDEWEB)

    Knutson, B.J.; Jansen, G.; Zimmerman, B.D.; Seeman, S.E. [Westinghouse Hanford Co., Richland, WA (United States); Lauerhass, L.; Hoza, M. [Pacific Northwest Lab., Richland, WA (United States)

    1994-08-01

    Numerous pretreatment flowsheets have been proposed for processing the radioactive wastes in Hanford's 177 underground storage tanks. The CLEAN Option is examined along with two other flowsheet alternatives to quantify the trade-off of greater capital equipment and operating costs for aggressive separations with the reduced waste disposal costs and decreased environmental/health risks. The effect on the volume of HLW glass product and radiotoxicity of the LLW glass or grout product is predicted with current assumptions about waste characteristics and separations processes using a mass balance model. The prediction is made on three principal processing options: washing of tank wastes with removal of cesium and technetium from the supernatant, with washed solids routed directly to the glass (referred to as the Sludge Wash C processing strategy); the previous steps plus dissolution of the solids and removal of transuranic (TRU) elements, uranium, and strontium using solvent extraction processes (referred to as the Transuranic Extraction Option C (TRUEX-C) processing strategy); and an aggressive yet feasible processing strategy for separating the waste components to meet several main goals or objectives (referred to as the CLEAN Option processing strategy), such as the LLW is required to meet the US Nuclear Regulatory Commission Class A limits; concentrations of technetium, iodine, and uranium are reduced as low as reasonably achievable; and HLW will be contained within 1,000 borosilicate glass canisters that meet current Hanford Waste Vitrification Plant glass specifications.

  8. Comparing Fenton Oxidation with Conventional Coagulation Process for RR198 Dye Removal from Aqueous Solutions

    Directory of Open Access Journals (Sweden)

    Behnaz Esrafili

    2017-10-01

    Discussion: Although under optimal conditions the efficiency of the coagulation process with a coagulant aid was only 4% lower than the efficiency of the Fenton process, Fenton oxidation offers advantages, including the absence of excess sludge production, and a higher efficiency was gained at large doses of dye.

  9. Model Selection in the Analysis of Photoproduction Data

    Science.gov (United States)

    Landay, Justin

    2017-01-01

    Scattering experiments provide one of the most powerful and useful tools for probing matter to better understand its fundamental properties governed by the strong interaction. As the spectroscopy of the excited states of nucleons enters a new era of precision ushered in by improved experiments at Jefferson Lab and other facilities around the world, traditional partial-wave analysis methods must be adjusted accordingly. In this poster, we present a rigorous set of statistical tools and techniques that we implemented; most notably, the LASSO method, which serves for the selection of the simplest model, allowing us to avoid overfitting. In the case of establishing the spectrum of excited baryons, it avoids overpopulation of the spectrum and thus the occurrence of false positives. This is a prerequisite to reliably compare theories like lattice QCD or quark models to experiments. Here, we demonstrate the principle by simultaneously fitting three observables in neutral pion photo-production, namely the differential cross section, beam asymmetry and target polarization, across thousands of data points. Other authors include Michael Doring, Bin Hu, and Raquel Molina.
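
    To make the model-selection step concrete, here is a minimal Python sketch of LASSO-style term selection on synthetic data: a cross-validated L1 penalty zeroes out basis terms the data do not require. The Legendre basis, the "true" coefficients and the noise level are invented for illustration; this is not the photoproduction fit itself.

```python
# Sketch: LASSO-style model selection on synthetic data, illustrating how an
# L1 penalty switches off superfluous terms.  The Legendre-like basis, the
# "true" coefficients, and the noise level are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
cos_theta = rng.uniform(-1.0, 1.0, 300)

# Design matrix of Legendre polynomials P_0..P_7 in cos(theta).
max_order = 7
X = np.column_stack([legendre.legval(cos_theta, np.eye(max_order + 1)[k])
                     for k in range(max_order + 1)])

# Synthetic "observable": only P_0, P_1 and P_3 truly contribute.
true_coef = np.array([1.0, 0.8, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0])
y = X @ true_coef + rng.normal(scale=0.05, size=len(cos_theta))

# Cross-validated LASSO picks the penalty and zeroes out unneeded orders.
fit = LassoCV(cv=5, fit_intercept=False).fit(X, y)
selected = [k for k, c in enumerate(fit.coef_) if abs(c) > 1e-3]
print("non-zero Legendre orders:", selected)
print("coefficients:", np.round(fit.coef_, 3))
```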

  10. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has recently attracted much interest due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
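
    As a loose analogy rather than the algorithms proposed in the paper, the sketch below selects the number of clusters automatically with a Bayesian nonparametric mixture fitted to node attribute vectors only, ignoring graph edges; the synthetic attributes, the truncation level and the concentration prior are assumptions.

```python
# Sketch: automatic selection of the number of clusters with a Bayesian
# nonparametric mixture (Dirichlet-process-style truncation).  This works on
# node attribute vectors only and ignores graph structure, so it is an analogy
# to "clustering without fixing K in advance", not the paper's method.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import BayesianGaussianMixture

# Synthetic attributes for 300 "nodes" drawn from 4 latent communities.
attrs, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

bgm = BayesianGaussianMixture(
    n_components=10,                       # generous upper bound on K
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    random_state=0,
).fit(attrs)

labels = bgm.predict(attrs)
effective_k = len(np.unique(labels))
print("effective number of clusters:", effective_k)
print("mixture weights:", np.round(bgm.weights_, 3))
```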

  11. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant efficiently, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and addresses an adequate model-selection support method that applies AI (Artificial Intelligence) techniques so that a simulation is executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics, for the case of a cold-leg small-break loss-of-coolant accident in a PWR plant, is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)
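
    A toy sketch of the rule-based flavour of such support is given below; the case attributes, rules and suggested model names are invented placeholders meant to show the pattern, not the knowledge base of the system described above.

```python
# Sketch: a toy rule-based helper in the spirit of a model-selection support
# system for thermal-hydraulic simulations.  The rules and attributes below are
# invented placeholders illustrating the pattern, not the expert system itself.
from dataclasses import dataclass

@dataclass
class SimulationCase:
    break_size: str          # "small" or "large"
    phases_expected: int     # 1 or 2
    need_phase_slip: bool    # does relative phase velocity matter?

def suggest_fluid_model(case: SimulationCase) -> str:
    """Return a suggested fluid-model family for the given case."""
    if case.phases_expected == 1:
        return "single-phase homogeneous model"
    if case.break_size == "large" and case.need_phase_slip:
        # Strong non-equilibrium: treat each phase with its own field equations.
        return "two-fluid (six-equation) model"
    if case.need_phase_slip:
        # Drift-flux captures slip with one mixture momentum equation.
        return "drift-flux model (with constitutive relations for drift velocity)"
    return "homogeneous equilibrium two-phase model"

case = SimulationCase(break_size="small", phases_expected=2, need_phase_slip=True)
print(suggest_fluid_model(case))
```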

  12. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
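
    The contrast between conventional and percentile-based indices can be sketched in a few lines of Python; the lognormal sample and the specification limits are assumptions, and empirical quantiles stand in here for the fitted Clements/Burr percentiles discussed above.

```python
# Sketch: conventional vs percentile-based capability indices on skewed data.
# Spec limits and the lognormal sample are illustrative assumptions; the
# percentile variant uses empirical quantiles as a stand-in for the fitted
# (Clements/Burr) percentiles.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.4, size=5000)   # skewed "process" data
LSL, USL = 0.3, 2.5                                  # assumed spec limits

# Conventional indices (strictly valid only under normality).
mu, sigma = x.mean(), x.std(ddof=1)
cp = (USL - LSL) / (6 * sigma)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# Percentile-based indices: replace mu +/- 3 sigma with the 0.135 % and
# 99.865 % points and the mean with the median.
p_lo, med, p_hi = np.percentile(x, [0.135, 50, 99.865])
cp_pct = (USL - LSL) / (p_hi - p_lo)
cpk_pct = min((USL - med) / (p_hi - med), (med - LSL) / (med - p_lo))

print(f"conventional  Cp={cp:.2f}  Cpk={cpk:.2f}")
print(f"percentile    Cp={cp_pct:.2f}  Cpk={cpk_pct:.2f}")
```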

  13. Comparative analysis of different process simulation settings of a micro injection molded part featuring conformal cooling

    DEFF Research Database (Denmark)

    Marhöfer, David Maximilian; Tosello, Guido; Islam, Aminul

    2015-01-01

    Process simulations are applied in all fields of engineering in order to support and optimize the design and quality of products and their manufacturing processes. Micro injection molding is not an exception in this regard. Simulations make it possible to investigate the process and the part quality.... In the reported work, process simulations using Autodesk Moldflow Insight 2015® are applied to a micro mechanical part to be fabricated by micro injection molding and with overall dimensions of 12.0 × 3.0 × 0.8 mm³ and micro features (micro hole, diameter of 580 μm, and sharp radii down to 100 μm). Three... of the implementation of the actual mold block, conventional cooling, and conformal cooling. In the comparison, characteristic quality criteria for injection molding are studied, such as the filling behavior of the cavity, the injection pressure, the temperature distribution, and the resulting part warpage...

  14. Comparative costs of hydrogen produced from photovoltaic electrolysis and from photoelectrochemical processes

    International Nuclear Information System (INIS)

    Block, D.L.

    1998-01-01

    The need for hydrogen produced from renewable energy sources is the key element to the world's large-scale usage of hydrogen and to the hydrogen economy envisioned by the World Hydrogen Energy Association. Renewables-produced hydrogen is also the most technically difficult problem to be solved. Hydrogen will never achieve large-scale usage until it can be competitively produced from renewable energy. One of the important questions that has to be addressed is: What are the economics of present and expected future technologies that will be used to produce hydrogen from renewables? The objective of this study is to give an answer to this question by determining the cost of hydrogen (in U.S.$/MBtu) from competing renewable production technologies. It should be noted that the costs and efficiencies assumed in this paper are assumptions of the author, and that the values are expected to be achieved after additional research on photoelectrochemical process technologies. The cost analysis performed is for three types of hydrogen (H2) produced from five different types of renewable processes: photovoltaic (PV) electrolysis, three photoelectrochemical (PEC) processes and higher temperature electrolysis (HTE). The costs and efficiencies for PV, PEC and HTE processes are established for present day, and for expected costs and efficiencies 10 years into the future. A second objective of this analysis is to set base case costs of PV electrolysis. For any other renewable process, the costs for PV electrolysis, which is existing technology, set the numbers which the other processes must better. (author)

  15. Comparative analysis of intracellular metabolites of Cephalosporium acremonium in pilot and industrial fermentation processes.

    Science.gov (United States)

    Yang, Yang; Lu, Hua; Ding, Ming-Zhu; Jiang, Jing; Chen, Yao; Yuan, Ying-Jin

    2012-01-01

    To get a better understanding of the characteristics of Cephalosporium acremonium with higher productivity, C. acremonium cells in pilot and industrial fermentation processes were analyzed using metabolite profiles. The different metabolic features of cells in the pilot and industrial processes were caused by the different fermentation environments. The hierarchical cluster analysis of the metabolic profiling data revealed that the concentrations of most of the metabolites were higher in the industrial process than in the pilot one, especially at the cephalosporin C accumulation stage. The analysis of important metabolites of primary metabolism indicated that the capacity for cephalosporin C biosynthesis was higher in the industrial process than in the pilot one in C. acremonium. The analysis of the variations of cephalosporin C precursors and the amino acids related to these precursors suggested that metabolic flux shifts of α-aminoadipic acid and cysteine occur between the primary metabolism and the cephalosporin biosynthetic pathway in the industrial process. Furthermore, metabolites of C. acremonium, such as proline, spermine, inositol phosphate, and glycerol, were shown to respond to the fermentation environmental stress. These findings provide insights into the intracellular metabolite characteristics and a feasible regulation scheme to improve the titer of cephalosporin C in the industrial process. Copyright © 2012 International Union of Biochemistry and Molecular Biology, Inc.

  16. Implications of allometric model selection for county-level biomass mapping

    Directory of Open Access Journals (Sweden)

    Laura Duncanson

    2017-10-01

    Full Text Available Abstract Background Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Results Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For. Sci. 49(1): 12–35, 2003), Chojnacky et al. (Forestry 87(1): 129–151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both a field plot level and a total county level there was a ~20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high biomass areas with high canopy covers and relatively moderate heights (25–45 m). The CRM models, although on average ~20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (> 60 m), while Jenkins generally produces higher estimates of biomass in forests < 50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that inclusion of height in allometric models is not primarily driving discrepancies. Models developed using all three allometric models underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more
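
    A minimal sketch of how allometric model choice propagates to plot-level totals is given below; it uses the generic ln-ln form biomass = exp(b0 + b1 ln(dbh)) shared by national-scale equations of this kind, but the coefficient pairs and tree diameters are placeholders, not the published Jenkins, Chojnacky or CRM values.

```python
# Sketch: comparing two allometric biomass estimates of the common ln-ln form
# biomass = exp(b0 + b1*ln(dbh)).  The coefficient pairs below are placeholders,
# not published values; the point is how model choice propagates to plot totals.
import numpy as np

def biomass_kg(dbh_cm, b0, b1):
    """Aboveground biomass (kg) from diameter at breast height via a ln-ln model."""
    return np.exp(b0 + b1 * np.log(dbh_cm))

# Diameters (cm) of trees on one hypothetical plot.
dbh = np.array([12.0, 18.5, 25.0, 33.2, 41.0, 58.7])

model_a = biomass_kg(dbh, b0=-2.0, b1=2.4)   # placeholder "model A"
model_b = biomass_kg(dbh, b0=-2.2, b1=2.5)   # placeholder "model B"

plot_a, plot_b = model_a.sum(), model_b.sum()
print(f"plot total, model A: {plot_a:,.0f} kg")
print(f"plot total, model B: {plot_b:,.0f} kg")
print(f"relative difference: {100 * (plot_b - plot_a) / plot_a:.1f} %")
```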

  17. Same but different: Comparative modes of information processing are implicated in the construction of perceptions of autonomy support.

    Science.gov (United States)

    Lee, Rebecca Rachael; Chatzisarantis, Nikos L D

    2017-11-01

    An implicit assumption behind tenets of self-determination theory is that perceptions of autonomy support are a function of absolute modes of information processing. In this study, we examined whether comparative modes of information processing were implicated in the construction of perceptions of autonomy support. In an experimental study, we demonstrated that participants employed comparative modes of information processing in evaluating receipt of small, but not large, amounts of autonomy support. In addition, we found that social comparison processes influenced a number of outcomes that are empirically related to perceived autonomy support such as sense of autonomy, positive affect, perceived usefulness, and effort. Findings shed new light upon the processes underpinning construction of perceptions related to autonomy support and yield new insights into how to increase the predictive validity of models that use autonomy support as a determinant of motivation and psychological well-being. © 2017 The British Psychological Society.

  18. Comparing internal and alliance-based new product development processes: case studies in the food industry

    OpenAIRE

    Olsen, Nina Veflen; Gripsrud, Geir

    2011-01-01

    Companies may simultaneously pursue different new product development (NPD) strategies. This article reports a comparative two-case design study of in-house NPD projects as well as alliance-based NPD projects in a food company. Two contradicting propositions on the efficiency of NPD in an alliance compared to NPD performed internally are stated, and the findings indicate that the alliance-based NPD solution creates a better context for NPD than the in-house solution. Less forwarding of unsol...

  19. Comparing internal and alliance-based new product development processes: case studies in the food industry

    OpenAIRE

    Olsen, Nina Veflen; Gripsrud, Geir

    2011-01-01

    This is the authors’ final, accepted and refereed manuscript of the article. Companies may simultaneously pursue different new product development (NPD) strategies. This article reports a comparative two-case design study of in-house NPD projects as well as alliance-based NPD projects in a food company. Two contradicting propositions on the efficiency of NPD in an alliance compared to NPD performed internally are stated, and the findings indicate that the alliance-based NPD solution create...

  20. A single theoretical framework for circular features processing in humans: orientation and direction of motion compared

    Directory of Open Access Journals (Sweden)

    Tzvetomir eTzvetanov

    2012-05-01

    Full Text Available Common computational principles underlie the processing of various visual features in the cortex. They are considered to create similar patterns of contextual modulations in behavioral studies for different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. One unique simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities in the two domains, while different feature population characteristics explained well the differences in modulation on both experimental probes. These results demonstrate how a single computational principle underlies processing of various features across the cortices.
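
    The decoding step mentioned above, a standard vector average over a population of circularly tuned units, can be sketched as follows; the von-Mises-like tuning curves, the number of units and the Poisson noise are illustrative assumptions rather than the model fitted in the study.

```python
# Sketch: standard vector-average (population vector) decoding of a circular
# feature such as motion direction.  Tuning curves, unit count and noise are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

n_units = 32
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)  # preferred directions

def population_response(stimulus_dir, kappa=3.0, gain=10.0):
    """Noisy tuning-curve responses of all units to one stimulus direction."""
    tuning = np.exp(kappa * (np.cos(stimulus_dir - preferred) - 1.0))
    return rng.poisson(gain * tuning)

def vector_average(responses):
    """Decode direction as the angle of the response-weighted unit vectors."""
    z = np.sum(responses * np.exp(1j * preferred))
    return np.angle(z) % (2 * np.pi)

true_dir = np.deg2rad(75.0)
decoded = vector_average(population_response(true_dir))
print(f"true: {np.rad2deg(true_dir):.1f} deg, decoded: {np.rad2deg(decoded):.1f} deg")
```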

  1. Matching reality in the arts: self-referential neural processing of naturalistic compared to surrealistic images.

    Science.gov (United States)

    Silveira, Sarita; Graupmann, Verena; Frey, Dieter; Blautzik, Janusch; Meindl, Thomas; Reiser, Maximilian; Chen, Cheng; Wang, Yizhou; Bao, Yan; Pöppel, Ernst; Gutyrchik, Evgeny

    2012-01-01

    How are works of art that present scenes that match potential expectations processed in the brain, in contrast to such scenes that can never occur in real life because they would violate physical laws? Using functional magnetic resonance imaging, we investigated the processing of surrealistic and naturalistic images in visual artworks. Looking at naturalistic paintings leads to a significantly higher activation in the visual cortex and in the precuneus. Humans apparently own a sensitive mechanism even for artistic representations of the visual world to separate the impossible from what potentially matches physical reality. The observation reported here also suggests that sensory input corresponding to a realistic representation of the visual world elicits higher self-referential processing.

  2. "Active" and "Passive" Lava Resurfacing Processes on Io: A Comparative Study of Loki Patera and Prometheus

    Science.gov (United States)

    Davies, A. G.; Matson, D. L.; Leone, G.; Wilson, L.; Keszthelyi, L. P.

    2004-01-01

    Studies of Galileo Near Infrared Mapping Spectrometer (NIMS) data and ground-based data of volcanism at Prometheus and Loki Patera on Io reveal very different mechanisms of lava emplacement at these two volcanoes. Data analyses show that the periodic nature of Loki Patera's volcanism from 1990 to 2001 is strong evidence that Loki's resurfacing over this period resulted from the foundering of a crust on a lava lake. This process is designated "passive", as there is no reliance on sub-surface processes: the foundering of the crust is inevitable. Prometheus, on the other hand, displays an episodicity in its activity which we designate "active". Like Kilauea, a close analog, Prometheus's effusive volcanism is dominated by pulses of magma through the near-surface plumbing system. Each system affords views of lava resurfacing processes through modelling.

  3. A Life Cycle Assessment of Silica Sand: Comparing the Beneficiation Processes

    Directory of Open Access Journals (Sweden)

    Anamarija Grbeš

    2015-12-01

    Full Text Available Silica sand or quartz sand is a mineral resource with a wide variety of application; glass industry, construction and foundry are the most common examples thereof. The Republic of Croatia has reserves of 40 million tons of silica sand and a long tradition of surface mining and processing. The average annual production of raw silica sand in Croatia in the period from 2006 to 2011 amounted to 150 thousand tons. This paper presents cradle to gate LCA results of three different types of beneficiation techniques: electrostatic separation; flotation; gravity concentration. The aim of this research is to identify and quantify the environmental impacts of the silica sand production, to learn the range of the impacts for different processing methods, as well as to identify the major contributors and focus for further process design development.

  4. Three column intermittent simulated moving bed chromatography: 1. Process description and comparative assessment.

    Science.gov (United States)

    Jermann, Simon; Mazzotti, Marco

    2014-09-26

    The three column intermittent simulated moving bed (3C-ISMB) process is a new type of multi-column chromatographic process for binary separations and can be regarded as a modification of the I-SMB process commercialized by Nippon Rensui Corporation. In contrast to conventional I-SMB, this enables the use of only three instead of four columns without compromising product purity and throughput. The novel mode of operation is characterized by intermittent feeding and product withdrawal as well as by partial recycling of the weakly retained component from section III to section I. Due to the smaller number of columns with respect to conventional I-SMB, higher internal flow rates can be applied without violating pressure drop constraints. Therefore, the application of 3C-ISMB allows for a higher throughput whilst using a smaller number of columns. As a result, we expect that the productivity given in terms of throughput per unit time and unit volume of stationary phase can be significantly increased. In this contribution, we describe the new process concept in detail and analyze its cyclic steady state behavior through an extensive simulation study. The latter shows that 3C-ISMB can be easily designed by Triangle Theory even under highly non-linear conditions. The simple process design is an important advantage to other advanced SMB-like processes. Moreover, the simulation study demonstrates the superior performance of 3C-ISMB, namely productivity increases by roughly 60% with respect to conventional I-SMB without significantly sacrificing solvent consumption. Copyright © 2014 Elsevier B.V. All rights reserved.
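
    The design rule referred to above can be illustrated with the classic Triangle Theory condition for linear isotherms, which requires H_B < m2 < m3 < H_A for complete separation, where m2 and m3 are the flow-rate ratios in sections II and III; the Henry constants and operating points in this sketch are assumptions, not 3C-ISMB data.

```python
# Sketch: the Triangle Theory check used to design SMB-like units under linear
# isotherms: complete separation requires H_B < m2 < m3 < H_A.  The Henry
# constants and operating points below are illustrative assumptions.
def in_separation_triangle(m2: float, m3: float, h_strong: float, h_weak: float) -> bool:
    """True if (m2, m3) lies in the complete-separation region for linear isotherms."""
    return h_weak < m2 < m3 < h_strong

H_A, H_B = 3.0, 1.5          # Henry constants of the strongly/weakly retained solutes
operating_points = [(1.8, 2.6), (1.2, 2.6), (2.0, 3.4)]

for m2, m3 in operating_points:
    ok = in_separation_triangle(m2, m3, H_A, H_B)
    print(f"m2={m2:.1f}, m3={m3:.1f}: {'complete separation' if ok else 'off-spec'}")
```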

  5. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    Science.gov (United States)

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
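
    For orientation, the traditional (D, z) model referred to above can be written down in a few lines; the D and z values and the time-temperature profile below are hypothetical numbers for illustration, are not the almond validation data, and do not include the humidity correction of the modified model.

```python
# Sketch: the traditional D-z model, integrated over a time-temperature history
# to estimate log reduction.  D_REF, T_REF, Z and the heating profile are
# hypothetical numbers for illustration only.
import numpy as np

D_REF = 0.8     # min, decimal reduction time at T_REF (assumed)
T_REF = 80.0    # deg C, reference temperature (assumed)
Z = 10.0        # deg C, temperature rise that cuts D tenfold (assumed)

def log_reduction(times_min, temps_c):
    """Accumulated log10 reduction for a sampled time-temperature profile."""
    d_t = D_REF * 10.0 ** ((T_REF - np.asarray(temps_c)) / Z)   # D(T) at each sample
    dt = np.diff(times_min, prepend=times_min[0])               # time step per sample
    return float(np.sum(dt / d_t))

# Product heats from 25 C toward ~120 C over 10 minutes (sampled every 0.5 min).
t = np.arange(0, 10.5, 0.5)
temp = 25 + (120 - 25) * (1 - np.exp(-t / 3.0))
print(f"predicted reduction: {log_reduction(t, temp):.1f} log10 CFU/g")
```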

  6. COMPARATIVE ANALYSIS OF KIRLIANOGRAFIIA IMAGES GLOW OF BIOLOGICAL TISSUES WITH BIOCHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    L. A. Pisotska

    2015-12-01

    the investigated samples. For the kirlianographic studies an experimental device, RIVERS 1, developed by the Ukrainian Scientific Research Institute of Mechanical Engineering Technologies (Dnepropetrovsk), was used. Mathematical processing of the results was performed with the Matlab program. The growing shortage of ATP causes the disruption and termination of ion exchange and increases reactive oxygen generation, and lipid peroxidation destroys cell membranes. The process of self-digestion (autolysis) of tendon tissue, as shown by the results of the experiments, exhibited cyclical changes in metabolism: enzyme activity (ALT), carbohydrates (LDH), nucleotides, total protein and micronutrients.

  7. Science Teachers' Information Processing Behaviours in Nepal: A Reflective Comparative Study

    Science.gov (United States)

    Acharya, Kamal Prasad

    2017-01-01

    This study investigates the information processing behaviours of secondary level science teachers. It is based on data collected from 50 secondary level school science teachers working in the Kathmandu valley. Simple random sampling and the Cognitive Style Inventory have been used, respectively, as the sampling technique and the tool to…

  8. An innovative process for treatment of municipal wastewater with superior characteristics compared to traditional technologies

    DEFF Research Database (Denmark)

    Schmidt, Jens Ejbye; Fitsios, E.; Angelidaki, Irini

    2002-01-01

    An innovative treatment process for municipal sewage, which results in low sludge production, low energy consumption, high COD removal and high energy and nutrient recovery, is described. The organic matter will primarily be removed through anaerobic degradation using high-flow reactors. For nitr...

  9. Learning in the Process of Industrial Work--A Comparative Study of Finland, Sweden and Germany

    Science.gov (United States)

    Kira, Mari

    2007-01-01

    By combining a positivistic and an interpretive approach, this research investigates the learning opportunities that contemporary industrial work processes and workplaces offer for employees individually and collectively. The research explores how employees can become trained through their work and how individual development may expand to…

  10. Real-World Experimentation Comparing Time-Sharing and Batch Processing in Teaching Computer Science,

    Science.gov (United States)

    effectiveness of time-sharing and batch processing in teaching computer science. The experimental design was centered on direct, 'real world' comparison...ALGOL). The experimental sample involved all introductory computer science courses with a total population of 415 cadets. The results generally

  11. Studies on thermal processing of Tuna-A comparative study in tin ...

    African Journals Online (AJOL)

    Tin-free steel cans are an ideal alternative to open top sanitary tin cans (OTS) for thermal processing of little tuna (Ethynnus affinis) in curry used as filling media. The effects of heat penetration on physical, biochemical and sensory characteristics of the canned tuna product were studied. The chemical analysis of raw tuna fish showed a ...

  12. Selection and study performance : comparing three admission processes within one medical school

    NARCIS (Netherlands)

    Schripsema, Nienke R.; van Trigt, Anke M.; Borleffs, Jan C. C.; Cohen-Schotanus, Janke

    2014-01-01

    ObjectivesThis study was conducted to: (i) analyse whether students admitted to one medical school based on top pre-university grades, a voluntary multifaceted selection process, or lottery, respectively, differed in study performance; (ii) examine whether students who were accepted in the

  13. Processing the ground vibration signal produced by debris flows: the methods of amplitude and impulses compared

    Science.gov (United States)

    Arattano, M.; Abancó, C.; Coviello, V.; Hürlimann, M.

    2014-12-01

    Ground vibration sensors have been increasingly used and tested, during the last few years, as devices to monitor debris flows, and they have also been proposed as one of the more reliable devices for the design of debris flow warning systems. The need to process the output of ground vibration sensors, to diminish the amount of data to be recorded, is usually due to the reduced storage capabilities and the limited power supply, normally provided by solar panels, available in the high mountain environment. Different methods can be found in the literature to process the ground vibration signal produced by debris flows. In this paper we will discuss the two most commonly employed: the method of impulses and the method of amplitude. These two methods of data processing are analyzed by describing their origin and their use, presenting examples of applications and their main advantages and shortcomings. The two methods are then applied to process the raw ground vibration data produced by a debris flow that occurred in the Rebaixader Torrent (Spanish Pyrenees) in 2012. The results of this work will support decision-making by researchers and technicians facing the task of designing a debris flow monitoring installation or debris flow warning equipment based on the use of ground vibration detectors.

  14. Competency-Based Training in International Perspective: Comparing the Implementation Processes Towards the Achievement of Employability

    Science.gov (United States)

    Boahin, Peter; Eggink, Jose; Hofman, Adriaan

    2014-01-01

    This article undertakes a comparison of competency-based training (CBT) systems in a number of countries with the purpose of drawing lessons to support Ghana and other countries in the process of CBT implementation. The study focuses on recognition of prior learning and involvement of industry since these features seem crucial in achieving…

  15. A comparative study of cellulose nanofibrils disintegrated via multiple processing approaches

    Science.gov (United States)

    Yan Qing; Ronald Sabo; J.Y. Zhu; Umesh Agarwal; Zhiyong Cai; Yiqiang Wu

    2013-01-01

    Various cellulose nanofibrils (CNFs) created by refining and microfluidization, in combination with enzymatic or 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) oxidized pretreatment, were compared. The morphological properties, degree of polymerization, and crystallinity for the obtained nanofibrils, as well as physical and mechanical properties of the corresponding films...

  16. The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study

    Science.gov (United States)

    Bacivarov, Ioan C.; Ilian, Virgil L. M.

    2012-01-01

    This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Even if a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…

  17. PRELIMINARY COMPARATIVE STUDY OF METHODS TO EXTRACT VIRUS FROM RAW AND PROCESSED SEWAGE SLUDGES

    Science.gov (United States)

    Two simple virus extraction techniques were compared to an EPA standard method for detection of human enteric viruses in raw sewage sludge and class A biosolids. The techniques were used to detect both indigenous and seeded virus from a plant that distributes class A material pr...

  18. Theory for Explaining and Comparing the Dynamics of Education in Transitional Processes

    Science.gov (United States)

    van der Walt, Johannes L.

    2016-01-01

    Countries all over the world find themselves in the throes of revolution, change, transition or transformation. Because of the complexities of these momentous events, it is no simple matter to describe and evaluate them. This paper suggests that comparative educationists apply a combination of three theories as a lens through which such national…

  19. How Multilevel Societal Learning Processes Facilitate Transformative Change: A Comparative Case Study Analysis on Flood Management

    Directory of Open Access Journals (Sweden)

    Claudia Pahl-Wostl

    2013-12-01

    Full Text Available Sustainable resources management requires a major transformation of existing resource governance and management systems. These have evolved over a long time under an unsustainable management paradigm, e.g., the transformation from the traditionally prevailing technocratic flood protection toward the holistic integrated flood management approach. We analyzed such transformative changes using three case studies in Europe with a long history of severe flooding: the Hungarian Tisza and the German and Dutch Rhine. A framework based on societal learning and on an evolutionary understanding of societal change was applied to identify drivers and barriers for change. Results confirmed the importance of informal learning and actor networks and their connection to formal policy processes. Enhancing a society's capacity to adapt is a long-term process that evolves over decades, and in this case, was punctuated by disastrous flood events that promoted windows of opportunity for change.

  20. Tool wear monitoring using neuro-fuzzy techniques: a comparative study in a turning process

    OpenAIRE

    Gajate, Agustín; Haber Guerra, Rodolfo E.; Toro Matamoros, Raúl Mario del; Vega, Pastora; Bustillo, Andrés

    2012-01-01

    Tool wear detection is a key issue for tool condition monitoring. The maximization of useful tool life is frequently related with the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring on the basis of neuro-fuzzy techniques. The use of a neuro-fuzzy hybridization to design a tool wear monitoring system is aiming at exploiting the synergy of neural networks and fuzzy logic, by combining human reasoning with learning and connectionist st...

  1. Processing German petroleum: (comparative tests of several oils found in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Hupfer, H.; Leonhardt, P.; Kroenig, W.

    1943-02-05

    Topping residues from several German petroleum types were processed in a 10-liter oven at both 600 and 250 atm, using coke catalysts. Tables in the article gave the data comparisons. Tests on three types of petroleum (Wietze-heavy, Nienhagen, and Reitbrook) were conducted at 350° and 325°C. Notable was the nearly complete decomposition of asphalt in most of the test runs. The best yields were obtained with the Wietze products, but the same conditions did not produce similar results with the other products. It was found that, as with bituminous coals, when lower pressure was used, the oil yield had a higher content of hydrogen; this was traced back to low reaction temperature, since some processing tests for an oil yield were run at 325°C instead of 350°C. Tests using Reitbrook oil, with a molybdenum-coke catalyst at this same temperature, resulted in a better yield. Octane numbers of gasoline made using the iron catalyst were higher than those of gasoline made using the molybdenum catalyst. None of the oils satisfied the specifications for fuel oil. Undiluted sludges from the oils were not filterable on a technical scale. The composition of the hydrocarbon vapor was, in comparison to coal, lower in ethane but higher in butane. Isobutane yield from processed petroleum appeared to be variable. In comparison to processing coal, the petroleum substances had to be preheated more strongly at 250 atm, nearly to the oven operating temperature. Also, coking was discovered; it was traced back to procedural problems. Nevertheless, there seemed to be no insurmountable difficulties associated with hydrogenating the various petroleum types. 16 tables

  2. Comparative study of liberalization process of passengers railway market in Spain and England

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez Morote, G.; Ortuño Padilla, A.; Fernandez Aracil, P.

    2016-07-01

    This article provides an overview of the privatization of the railway passenger market in Spain through a review of the most relevant case studies in Europe, particularly the liberalization process in England. The English case study is a paradigmatic example for assessing how the liberalization process developed and its effects at present. This assessment has been undertaken to analyse the railway franchise structure, ticketing measures, the role of national and regional authorities, etc., and possible analogies to be adapted to the case of Spain. Firstly, this article reviews the origin of the privatization of the railway market in both Spain and England, describing every phase of the liberalization and the success of every stage. Secondly, a critical assessment exposes those market failures of the liberalization process in England that caused negative impacts on society. In addition, the role of the Government is analysed to measure its implication in solving that situation. Furthermore, the paper presents a wide-ranging analysis of rail ticketing in England and its effects on increasing passenger numbers. Finally, this article proposes some measures to be followed in the privatization of the passenger rail market in Spain, as well as some key concepts to guarantee its success, as taken from the case studies that have been reviewed. (Author)

  3. Comparative analysis of automation of production process with industrial robots in Asia/Australia and Europe

    Directory of Open Access Journals (Sweden)

    I. Karabegović

    2017-01-01

    Full Text Available The term "INDUSTRY 4.0" or "fourth industrial revolution" was first introduced at the fair in 2011 in Hannover. It comes from the high-tech strategy of the German Federal Government that promotes automation-computerization towards complete smart automation, meaning the introduction of methods of self-automation, self-configuration, self-diagnosis and problem fixing, knowledge and intelligent decision-making. Any automation, including smart automation, cannot be imagined without industrial robots. Along with the fourth industrial revolution, a 'robotic revolution' is taking place in Japan. The robotic revolution refers to the development and research of robotic technology with the aim of using robots in all production processes, and the use of robots in real life, to be of service to people in daily life. Based on these facts, an analysis was conducted of the representation of industrial robots in production processes on the two continents of Europe and Asia/Australia, as well as of whether industry is ready for the introduction of intelligent automation with the goal of establishing future smart factories. The paper presents the automation of production processes in Europe and Asia/Australia, with predictions for the future.

  4. Comparative Study of the Scaling Effect on Pressure Profiles in Capillary Underfill Process

    Science.gov (United States)

    Ng, Fei Chong; Abas, Aizat; Abdullah, M. Z.; Ishak, M. H. H.; Yuen Chong, Gean

    2017-05-01

    Optimization of the capillary underfill (CUF) encapsulation process is vital to enhance the package’s reliability. Therefore, the design and sizing of the newly developed ball grid array (BGA) device must be considered so that it is compatible with the CUF process. The scaling effect of BGA on CUF flow and its dynamic properties is thoroughly investigated by means of fluid-structure interaction (FSI) numerical simulation. This paper generally highlighted the differences in CUF flow behaviours, together with the pressure distributions between the actual industrial size BGA and the scaled up models for large BGA setup. While flow front profiles appeared to be similar across BGA of various sizes at relative error less than 10%, the CUF filling time gradually increases as the BGA become larger. The scaling limit is found to be at 20, based on the analysis of dimensionless number. The entrant pressure however decreases when the BGA device being scaled up. These findings will assist in the future BGA designs for various sizes used in the CUF encapsulation process.

  5. THE BOLOGNA PROCESS AND THE DYNAMICS OF ACADEMIC MOBILITY: A COMPARATIVE APPROACH TO ROMANIA AND TURKEY

    Directory of Open Access Journals (Sweden)

    Monica ROMAN

    2008-12-01

    Full Text Available Recent changes that have occurred in the European higher education system are grounded in the choice of continental countries, expressed in the Bologna Declaration, to achieve a single European space in this field by the year 2010. The purpose of this paper is to develop a better understanding of student mobility in the process of internationalization of higher education in a South European context. The rationale of the study is that student mobility has long been the most important dimension of the process of internationalization of higher education. At the moment there is increasing demand for higher education, as a consequence of demographic trends and the need for new degrees and diploma programs. The article focuses on two countries from South-Eastern Europe, Romania and Turkey. Both countries have a very dynamic higher education system, in terms of numbers of students and staff, integrating into the Bologna process. They are also primarily perceived as student-sending countries. The key findings are linked to the obstacles to mobility and the solutions to overcome them. The article also stresses the necessity for the two higher education systems to be more involved in attracting European students.

  6. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    Science.gov (United States)

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
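
    For orientation, the psychometric functions discussed in this record are commonly modelled as a cumulative Gaussian of the comparison level; the snippet below is a generic illustration with arbitrary parameter values and is not the Sensation Weighting or Internal Reference Model itself.

```python
import numpy as np
from scipy.stats import norm

def psychometric(comparison, pse, sigma):
    """Probability of judging the comparison as longer than the standard,
    modelled as a cumulative Gaussian with point of subjective equality `pse`."""
    return norm.cdf((np.asarray(comparison, dtype=float) - pse) / sigma)

durations_ms = np.array([400, 450, 500, 550, 600])   # hypothetical comparison durations
print(psychometric(durations_ms, pse=510.0, sigma=40.0).round(3))
```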

  7. Exergy destruction and losses on four North Sea offshore platforms: A comparative study of the oil and gas processing plants

    DEFF Research Database (Denmark)

    Voldsund, Mari; Nguyen, Tuong-Van; Elmegaard, Brian

    2014-01-01

    The oil and gas processing plants of four North Sea offshore platforms are analysed and compared, based on the exergy analysis method. Sources of exergy destruction and losses are identified and the findings for the different platforms are compared. Different platforms have different working conditions, which implies that some platforms need less heat and power than others. Reservoir properties and composition vary over the lifetime of an oil field, and therefore maintaining a high efficiency of the processing plant is challenging. The results of the analysis show that 27%-57% of the exergy...

  8. Comparing population patterns to processes: abundance and survival of a forest salamander following habitat degradation.

    Directory of Open Access Journals (Sweden)

    Clint R V Otto

    Full Text Available Habitat degradation resulting from anthropogenic activities poses immediate and prolonged threats to biodiversity, particularly among declining amphibians. Many studies infer amphibian response to habitat degradation by correlating patterns in species occupancy or abundance with environmental effects, often without regard to the demographic processes underlying these patterns. We evaluated how retention of vertical green trees (CANOPY) and coarse woody debris (CWD) influenced terrestrial salamander abundance and apparent survival in recently clearcut forests. Estimated abundance of unmarked salamanders was positively related to CANOPY (β_Canopy = 0.21; 95% CI 0.02-1.19), but not CWD (β_CWD = 0.11; -0.13-0.35), within 3,600 m2 sites, whereas estimated abundance of unmarked salamanders was not related to CANOPY (β_Canopy = -0.01; -0.21-0.18) or CWD (β_CWD = -0.02; -0.23-0.19) for 9 m2 enclosures. In contrast, apparent survival of marked salamanders within our enclosures over 1 month was positively influenced by both CANOPY and CWD retention (β_Canopy = 0.73; 95% CI 0.27-1.19, and β_CWD = 1.01; 0.53-1.50). Our results indicate that environmental correlates to abundance are scale dependent, reflecting habitat selection processes and organism movements after a habitat disturbance event. Our study also provides a cautionary example of how scientific inference is conditional on the response variable(s) and scale(s) of measure chosen by the investigator, which can have important implications for species conservation and management. Our research highlights the need for joint evaluation of population state variables, such as abundance, and population-level processes, such as survival, when assessing anthropogenic impacts on forest biodiversity.

  9. A comparative life cycle assessment of hybrid osmotic dilution desalination and established seawater desalination and wastewater reclamation processes.

    Science.gov (United States)

    Hancock, Nathan T; Black, Nathan D; Cath, Tzahi Y

    2012-03-15

    The purpose of this study was to determine the comparative environmental impacts of coupled seawater desalination and water reclamation using a novel hybrid system that consists of an osmotically driven membrane process and established membrane desalination technologies. A comparative life cycle assessment methodology was used to differentiate between a novel hybrid process consisting of forward osmosis (FO) operated in osmotic dilution (ODN) mode and seawater reverse osmosis (SWRO), and two other processes: a stand-alone conventional SWRO desalination system, and a combined SWRO and dual-barrier impaired water purification system consisting of nanofiltration followed by reverse osmosis. Each process was evaluated using ten baseline impact categories. It was demonstrated that from a life cycle perspective two hurdles exist to further development of the ODN-SWRO process: module design of FO membranes and cleaning intensity of the FO membranes. System optimization analysis revealed that doubling FO membrane packing density, tripling FO membrane permeability, and optimizing system operation, all of which are technically feasible at the time of this publication, could reduce the environmental impact of the hybrid ODN-SWRO process compared to SWRO by more than 25%; yet, novel hybrid nanofiltration-RO treatment of seawater and wastewater can achieve almost similar levels of environmental impact. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. RUNON a hitherto little noticed factor - Field experiments comparing RUNOFF/RUNON processes

    Science.gov (United States)

    Kohl, Bernhard; Achleitner, Stefan; Lumassegger, Simon

    2017-04-01

    When ponded water moves downslope as overland flow, an important process called runon manifests itself, but it is often ignored in rainfall-runoff studies (Nahar et al. 2004), which link infiltration exclusively to rainfall. Runon effects on infiltration have not yet, or only scarcely, been evaluated (e.g. Zheng et al. 2000). Runoff-runon occurs when spatially variable infiltration capacities result in runoff generated in one location potentially infiltrating further downslope in an area with higher infiltration capacity (Jones et al. 2013). Numerous studies report inverse relationships between unit area volumes of overland flow and plot lengths (Jones et al. 2016). This is an indication that the effects of rainfall and runon often become blurred. We use a coupled hydrological/2D hydrodynamic model to simulate surface runoff and pluvial flooding including the associated infiltration process. In the frame of the research project SAFFER-CC (sensitivity assessment of critical conditions for local flash floods - evaluating the recurrence under climate change), the influence of land use and soil conservation on pluvial flash flood modeling is assessed. Field experiments are carried out with a portable irrigation spray installation at different locations with a plot size of 5 m width and 10 m length. The test plots were first subjected to rainfall with a constant intensity of 100 mm/h for one hour. Subsequently, a super-intense, one-hour, mid-accentuated rainfall hydrograph ranging from 50 mm/h to 200 mm/h was applied to the same plots after 30 minutes. Finally, runon was simulated by upstream feeding of the test plots using two different inflow intensities. The irrigation tests showed the expected differences in runoff coefficients depending on the agricultural management. However, these runoff coefficients change with the applied process (rainfall or runon). While a decrease was observed on a plot with a closed litter layer, the runoff coefficient from runon increases on poor

  11. Structures and processes in spontaneous ADR reporting systems: a comparative study of Australia and Denmark.

    Science.gov (United States)

    Aagaard, Lise; Stenver, Doris Irene; Hansen, Ebba Holme

    2008-10-01

    To explore the organisational structure and processes of the Danish and Australian spontaneous ADR reporting systems with a view to how information is generated about new ADRs. The Danish and Australian spontaneous ADR reporting systems. Qualitative analyses of documentary material, descriptive interviews with key informants, and observations were made. We analysed the organisational structure of the Danish and Australian ADR reporting systems with respect to structures and processes, including information flow and exchange of ADR data. The analysis was made based on Scott's adapted version of Leavitt's diamond model, with the components: goals/tasks, social structure, technology and participants, within a surrounding environment. The main differences between the systems were: (1) PARTICIPANTS: Outsourcing of ADR assessments to the pharmaceutical companies complicates maintenance of scientific skills within the Danish Medicines Agency (DKMA), as it leaves the handling of spontaneous ADR reports purely administrative within the DKMA, and the knowledge creation process remains with the pharmaceutical companies, while in Australia senior scientific staff work with evaluation of the ADR report; (2) Goals/tasks: In Denmark, resources are targeted at evaluating Periodic Safety Update Reports (PSUR) submitted by the companies, while the resources in Australia are focused on single case assessment resulting in faster and more proactive medicine surveillance; (3) Social structure: Discussions between scientific staff about ADRs take place in Australia, while the Danish system primarily focuses on entering and forwarding ADR data to the relevant pharmaceutical companies; (4) Technology: The Danish system exchanges ADR data electronically with pharmaceutical companies and the other EU countries, while Australia does not have a system for electronic exchange of ADR data; and (5) ENVIRONMENT: The Danish ADR system is embedded in the routines of cooperation within European

  12. A comparative analysis of pre-processing techniques in colour retinal images

    International Nuclear Information System (INIS)

    Salvatelli, A; Bizai, G; Barbosa, G; Drozdowicz, B; Delrieux, C

    2007-01-01

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key for any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images where these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means, and the local Hurst (self-correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising
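
    Homomorphic filtering of the kind used above for illumination compensation can be sketched with plain FFT operations; the snippet below is a simplified, generic version (cutoff and gain values are arbitrary placeholders, and the morphological steps used by the authors are omitted).

```python
import numpy as np

def homomorphic_filter(img, cutoff=30.0, gain_low=0.5, gain_high=1.5):
    """Attenuate slowly varying illumination and boost reflectance detail.
    img: 2-D array of positive intensities (e.g., the green channel)."""
    log_img = np.log1p(img.astype(float))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    # Gaussian-shaped high-emphasis transfer function
    h = (gain_high - gain_low) * (1 - np.exp(-dist2 / (2 * cutoff ** 2))) + gain_low

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)

# Usage with a hypothetical image array: corrected = homomorphic_filter(green_channel)
```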

  13. Comparing Proteolytic Fingerprints of Antigen-Presenting Cells during Allergen Processing

    Directory of Open Access Journals (Sweden)

    Heidi Hofer

    2017-06-01

    Full Text Available Endolysosomal processing has a critical influence on immunogenicity as well as immune polarization of protein antigens. In industrialized countries, allergies affect around 25% of the population. For the rational design of protein-based allergy therapeutics for immunotherapy, a good knowledge of T cell-reactive regions on allergens is required. Thus, we sought to analyze endolysosomal degradation patterns of inhalant allergens. Four major allergens from ragweed, birch, as well as house dust mites were produced as recombinant proteins. Endolysosomal proteases were purified by differential centrifugation from dendritic cells, macrophages, and B cells, and combined with allergens for proteolytic processing. Thereafter, endolysosomal proteolysis was monitored by protein gel electrophoresis and mass spectrometry. We found that the overall proteolytic activity of specific endolysosomal fractions differed substantially, whereas the degradation patterns of the four model allergens obtained with the different proteases were extremely similar. Moreover, previously identified T cell epitopes were assigned to endolysosomal peptides and indeed showed a good overlap with known T cell epitopes for all four candidate allergens. Thus, we propose that the degradome assay can be used as a predictor to determine antigenic peptides as potential T cell epitopes, which will help in the rational design of protein-based allergy vaccine candidates.

  14. A comparative analysis of pre-processing techniques in colour retinal images

    Energy Technology Data Exchange (ETDEWEB)

    Salvatelli, A [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Bizai, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Barbosa, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Drozdowicz, B [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Delrieux, C [Electric and Computing Engineering Department, Universidad Nacional del Sur, Alem 1253, BahIa Blanca, (Partially funded by SECyT-UNS) (Argentina)], E-mail: claudio@acm.org

    2007-11-15

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key for any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images where these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means, and the local Hurst (self-correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.

  15. Product Development and its Comparative Analysis by SLA, SLS and FDM Rapid Prototyping Processes

    Science.gov (United States)

    Choudhari, C. M.; Patil, V. D.

    2016-09-01

    The pressure to capture markets and meet deadlines has increased the scope for new methods in product design and development. Industries continuously strive to optimize development cycles with high-quality and cost-efficient products to maintain market competitiveness. Thus Rapid Prototyping Techniques (RPT) have started to play a pivotal role in the rapid product development cycle for complex products. Dimensional accuracy and surface finish are the cornerstones of Rapid Prototyping (RP), especially if the parts are used for mould development. The paper deals with the development of a part made with the help of the Selective Laser Sintering (SLS), Stereo-lithography (SLA) and Fused Deposition Modelling (FDM) processes to benchmark and investigate various parameters like material shrinkage rate, dimensional accuracy, time, cost and surface finish. This helps to establish which processes are effective and efficient in mould development. In this research work emphasis was also given to the design stage of product development to obtain an optimum design solution for an existing product.

  16. Comparing Germany's and California's Interconnection Processes for PV Systems (White Paper)

    Energy Technology Data Exchange (ETDEWEB)

    Tweedie, A.; Doris, E.

    2011-07-01

    Establishing interconnection to the grid is a recognized barrier to the deployment of distributed energy generation. This report compares interconnection processes for photovoltaic projects in California and Germany. This report summarizes the steps of the interconnection process for developers and utilities, the average length of time utilities take to process applications, and paperwork required of project developers. Based on a review of the available literature, this report finds that while the interconnection procedures and timelines are similar in California and Germany, differences in the legal and regulatory frameworks are substantial.

  17. Photonic processing and realization of an all-optical digital comparator based on semiconductor optical amplifiers

    Science.gov (United States)

    Singh, Simranjit; Kaur, Ramandeep; Kaler, Rajinder Singh

    2015-01-01

    A module of an all-optical 2-bit comparator is analyzed and implemented using semiconductor optical amplifiers (SOAs). By employing SOA-based cross phase modulation, the optical XNOR logic is used to obtain the A=B output signal, whereas the AB¯ and A¯B logic operations are used to realize the A>B and A<B outputs. The logic operation results, along with wide-open eye diagrams, are obtained. It is suggested that the proposed system would be promising in all-optical high-speed networks and computing systems.
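
    The Boolean relations behind the comparator outputs described above (XNOR for A = B, A AND NOT B for A > B, NOT A AND B for A < B) can be verified with a short truth-table sketch; this is plain digital logic, not a model of the SOA-based optical implementation.

```python
def compare_bits(a: bool, b: bool):
    """Single-bit comparator outputs: (A > B, A = B, A < B)."""
    greater = a and (not b)        # A AND (NOT B)
    equal = not (a ^ b)            # XNOR
    less = (not a) and b           # (NOT A) AND B
    return greater, equal, less

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), [int(x) for x in compare_bits(a, b)])
```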

  18. Traditional versus commercial food processing techniques - A comparative study based on chemical analysis of selected foods consumed in rural Zimbabwe.

    Directory of Open Access Journals (Sweden)

    Abraham I. C. Mwadiwa

    2012-01-01

    Full Text Available With the advent of industrialisation, food processors are constantly looking for ways to cut costs, increase production and maximise profits, at the expense of quality. Commercial food processors have since shifted their focus from endogenous ways of processing food to more profitable commercial food processing techniques. The aim of this study was to investigate the holistic impact of commercial food processing techniques on nutrition by comparing commercially (industrially) processed food products and endogenously processed food products through chemical analysis of selected foods. Eight food samples, which included commercially processed peanut butter, mealie-meal, dried vegetables (mufushwa) and rice and endogenously processed peanut butter, mealie-meal, dried vegetables (mufushwa) and rice, were randomly sampled from rural communities in the south-eastern and central provinces of Zimbabwe. They were analysed for ash, zinc, iron, copper, magnesium, protein, fat, carbohydrate, energy, crude fibre, vitamin C and moisture contents. The results of the chemical analysis indicate that endogenously processed mealie-meal, dried vegetables and rice contained higher ash values of 2.00 g/100g, 17.83 g/100g, and 3.28 g/100g respectively than commercially processed mealie-meal, dried vegetables and rice, which had ash values of 1.56 g/100g, 15.25 g/100g and 1.46 g/100g respectively. The results also show that endogenously processed foods have correspondingly higher iron, zinc and magnesium contents and, on the whole, a higher protein content. The results also indicate that commercially processed foods have higher fat and energy contents and are therefore likely to pose a higher risk of adverse health conditions, such as obesity and cardiovascular diseases, to susceptible individuals. Based on these findings, it can be concluded that endogenously processed foods have a better nutrient value and health implications

  19. Measurements of Corneal Thickness in Eyes with Pseudoexfoliation Syndrome: Comparative Study of Different Image Processing Protocols

    Directory of Open Access Journals (Sweden)

    Katarzyna Krysik

    2017-01-01

    Full Text Available Purpose. Comparative analysis of central and peripheral corneal thickness in PEX patients using three different imaging systems: the Pentacam Scheimpflug device, time-domain optical coherence tomography (OCT Visante), and swept-source OCT (Casia). Materials and Methods. 128 eyes of 80 patients with diagnosed PEX were examined and compared with 112 normal, non-PEX eyes of 72 cataract patients. The study parameters included 5 measured zones: central and 4 peripheral (superior, inferior, nasal, and temporal). Results. The mean CCT in eyes with PEX syndrome measured with all three instruments was thicker than that in normal eyes. Corneal thickness measurements in the PEX group were statistically significantly different between Pentacam and OCT Casia: central corneal thickness (p=0.04), inferior corneal zone (p=0.01), and nasal and temporal corneal zones (p<0.01). Between Pentacam and OCT Visante, the inferior, nasal and temporal corneal zones were statistically significantly different (p<0.01). Between OCT Casia and OCT Visante, there were no statistically significant differences in the measured parameter values. Conclusion. The central corneal thickness in eyes with PEX syndrome measured with three different independent methods is higher than that in the non-PEX group, and despite variable peripheral corneal thickness, this parameter remains crucial in intraocular pressure measurements.

  20. Processes of Urban and Rural Development: a Comparative Analysis of Europe and China.

    Directory of Open Access Journals (Sweden)

    Andrea Raffaele Neri

    2014-03-01

    Full Text Available China, in its construction fever, has imported from Europe a great range of architectural and design features. The planning systems of China and of most European countries are based on functional zoning, allowing meaningful comparison. Nonetheless, the process and goals of spatial planning differ markedly, and China largely ignores the distinctive progress achieved in the field in Europe. Across Europe, the model of planning has been undergoing important transformations in recent decades, gradually making decisions concerning land use more participatory, flexible and sustainable, and safeguarding the rural dimension. In contrast, the planning system of China is primarily focused on promoting urban GDP growth and is still based on a top-down approach. The inclusion of some key elements of European planning into the Chinese system, with particular reference to laws establishing national standards and comprehensive environmental protection, would benefit China by reducing the internal inequalities between cities and countryside and safeguarding its natural assets.

  1. Process of Judging Significant Modifications for Different Transportation Systems compared to the Approach for Nuclear Installations

    Directory of Open Access Journals (Sweden)

    Nicolas Petrek

    2015-12-01

    Full Text Available The implementation of the CSM regulation by the European Commission in 2009 harmonized the risk assessment process and introduced a rather new concept of judging changes within the European railway industry. This circumstance has raised the question of how other technology sectors handle the aspect of modifications and alterations. The paper discusses the approaches for judging the significance of modifications within the three transport sectors of European railways, aviation and maritime transportation, and the procedure that is used in the area of nuclear safety. We will outline the similarities and differences between these four methods and discuss the underlying reasons. Finally, we will take into account the role of the European legislator and the fundamental idea of a harmonization of the different approaches.

  2. Standardization of data processing and statistical analysis in comparative plant proteomics experiment.

    Science.gov (United States)

    Valledor, Luis; Romero-Rodríguez, M Cristina; Jorrin-Novo, Jesus V

    2014-01-01

    Two-dimensional gel electrophoresis remains the most widely used technique for protein separation in plant proteomics experiments. Despite the continuous technical advances and improvements in current 2-DE protocols, adequate and correct experimental design and statistical analysis of the data tend to be ignored or not properly documented in the current literature. Both proper experimental design and appropriate statistical analysis are required in order to confidently discuss our results and to draw conclusions from experimental data. In this chapter, we describe a model procedure for a correct experimental design and a complete statistical analysis of a proteomic dataset. Our model procedure covers all of the steps in data mining and processing, starting with data preprocessing (transformation, missing value imputation, definition of outliers) and univariate statistics (parametric and nonparametric tests), and finishing with multivariate statistics (clustering, heat-mapping, PCA, ICA, PLS-DA).
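
    A minimal version of the preprocessing-to-multivariate workflow outlined above might look as follows. This is a generic sketch with simulated data, arbitrary thresholds, and common Python libraries; it is not the authors' documented procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spots = rng.lognormal(mean=2.0, sigma=0.5, size=(12, 200))   # 12 gels x 200 spots
spots[rng.random(spots.shape) < 0.05] = np.nan               # simulate missing values

# 1. log-transform; 2. impute missing values with the spot (column) mean
log_spots = np.log2(spots)
col_mean = np.nanmean(log_spots, axis=0)
filled = np.where(np.isnan(log_spots), col_mean, log_spots)

# 3. flag possible outlier gels by robust z-score of total intensity
total = filled.sum(axis=1)
mad = np.median(np.abs(total - np.median(total)))
robust_z = (total - np.median(total)) / (1.4826 * mad)
print("possible outlier gels:", np.where(np.abs(robust_z) > 3)[0])

# 4. multivariate overview with PCA on mean-centred data
scores = PCA(n_components=2).fit_transform(filled - filled.mean(axis=0))
print(scores[:3].round(2))
```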

  3. Separating macroecological pattern and process: comparing ecological, economic, and geological systems.

    Directory of Open Access Journals (Sweden)

    Benjamin Blonder

    Full Text Available Theories of biodiversity rest on several macroecological patterns describing the relationship between species abundance and diversity. A central problem is that all theories make similar predictions for these patterns despite disparate assumptions. A troubling implication is that these patterns may not reflect anything unique about organizational principles of biology or the functioning of ecological systems. To test this, we analyze five datasets from ecological, economic, and geological systems that describe the distribution of objects across categories in the United States. At the level of functional form ('first-order effects'), these patterns are not unique to ecological systems, indicating they may reveal little about biological process. However, we show that mechanism can be better revealed in the scale-dependency of first-order patterns ('second-order effects'). These results provide a roadmap for biodiversity theory to move beyond traditional patterns, and also suggest ways in which macroecological theory can constrain the dynamics of economic systems.

  4. A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models

    Science.gov (United States)

    Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.

    2016-12-01

    Due to the wide variety of choices that need to be made during the development of dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf sea scales at midlatitude. The bathymetry, initial conditions and external forcings are defined analytically so that any model developer or user could reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, the river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and the CROCO (Coastal and Regional Ocean Community model) models. Those two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-lagrangian vertical coordinate, a coupled space-time approach for advective terms, and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-eulerian vertical coordinate, a method of lines is used for advective terms, and tracer advection satisfies the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies, 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case can be an interesting experiment to continue research in numerical approaches as well as an efficient tool to allow intercomparison between structured-grid and unstructured-grid approaches. Reference: Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F.: The numerics of hydrostatic structured-grid coastal ocean models: state of

  5. The Dunes and Rivers of Titan and Earth : An overview of comparative landscapes and processes

    Science.gov (United States)

    Lorenz, R. D.

    2006-12-01

    Cassini has shown Titan to have a strikingly varied and Earth-like landscape with extensive regions modified by aeolian and fluvial sediment transport. The formation of large linear sand dunes, apparently occupying most of the low-latitude low-albedo regions such as Shangri-La and Belet, is something of a surprise, given how weak thermally-driven winds were expected to be. The explanation appears to be the gravitational tide due to Saturn, which may be the dominant driver of near-surface winds. The linear dunes observed with the Cassini RADAR are strikingly similar to such dunes seen in areas with seasonally-changing winds on Earth, such as Namibia and Arabia. Instructive comparisons may be made as excellent spaceborne radar images and in-situ studies exist of these features, giving us a window into how Titan works. The weak solar flux implies average rainfall on Titan is low, perhaps only 1cm per year. Yet like many terrestrial arid regions, the landscape is nonetheless significantly altered (at least in some places) by pluvial and fluvial processes, because when it does rain, it does so violently. Models of thunderstorms, and of sediment transport, integrate neatly with Cassini observations of fluvial networks. A recent development is the detection of probable lakes : these demand an understanding of littoral processes and, in turn, wind-wave generation. Titan is an energy-poor environment, but one in which it is easier to transport materials. Many of the factors in sediment generation and transport appear to cancel out. It remains to be seen whether Titan, as an exotic laboratory, will teach us more about Earth, or whether our home planet, as an accessible analog, will teach us more about Titan. The comparisons only make both worlds seem all the more intriguing.

  6. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems
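
    To make the evidence-estimation idea above concrete, the sketch below implements a bare-bones nested sampling loop on a toy one-dimensional problem. The constrained replacement step is naive rejection from the prior rather than the HMC/SEM machinery of the record, and the final live-point contribution is ignored for brevity, so this is only an illustration of the core bookkeeping.

```python
import numpy as np

def nested_sampling_log_evidence(log_like, prior_sample, n_live=100, n_iter=600, seed=0):
    """Bare-bones nested sampling estimate of log Z using the deterministic
    prior-volume approximation X_i = exp(-i / n_live)."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    live_ll = np.array([log_like(x) for x in live])
    log_z = -np.inf
    for i in range(n_iter):
        worst = int(np.argmin(live_ll))
        # shell weight: w_i = X_{i-1} - X_i
        log_w = np.log(np.exp(-i / n_live) - np.exp(-(i + 1) / n_live))
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        threshold = live_ll[worst]
        while True:                      # draw a prior point above the likelihood threshold
            cand = prior_sample(rng)
            if log_like(cand) > threshold:
                live[worst], live_ll[worst] = cand, log_like(cand)
                break
    return log_z

# Toy problem: standard normal likelihood, uniform prior on [-5, 5]; exact log Z ~ log(0.1)
log_like = lambda x: -0.5 * x * x - 0.5 * np.log(2.0 * np.pi)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0)
print(nested_sampling_log_evidence(log_like, prior_sample))
```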

  7. APPLICATION OF THE ANALYTIC HIERARCHY PROCESS TO COMPARE ALTERNATIVES FOR THE LONG-TERM MANAGEMENT OF SURPLUS MERCURY

    Science.gov (United States)

    This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decision-a...

  8. Pathways to an East Asian Higher Education Area: A Comparative Analysis of East Asian and European Regionalization Processes

    Science.gov (United States)

    Chao, Roger Y., Jr.

    2014-01-01

    The Author argues that historical regional developments in Europe and East Asia greatly influence the formation of an East Asian Higher Education Area. As such, this article compares European and East Asian regionalization and higher education regionalization processes to show this path dependency in East Asian regionalization of higher education…

  9. Designing Training for Temporal and Adaptive Transfer: A Comparative Evaluation of Three Training Methods for Process Control Tasks

    Science.gov (United States)

    Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina

    2010-01-01

    Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication…

  10. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. We discuss to which degree the posterior BMA mean outperformed the prior BMA
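
    The core of the BMA weighting described above can be reduced to a normalization of (approximate) model evidences times model priors; the snippet below is a generic sketch with invented log-evidence values, not results from the Nellingen case study.

```python
import numpy as np

def bma_weights(log_evidences, prior_probs=None):
    """Posterior model probabilities w_k proportional to p(D | M_k) * p(M_k)."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if prior_probs is None:
        priors = np.full(log_ev.size, 1.0 / log_ev.size)
    else:
        priors = np.asarray(prior_probs, dtype=float)
    log_post = log_ev + np.log(priors)
    log_post -= np.max(log_post)          # numerical stability before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

# Hypothetical log-evidences for four crop models (illustrative numbers only)
print(bma_weights([-120.4, -118.9, -119.7, -125.2]).round(3))
```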

  11. Comparative economic factors on the use of radionuclide or electrical sources for food processing with ionizing radiation

    International Nuclear Information System (INIS)

    Lagunas-Solar, M.C.

    1985-01-01

    Food irradiation is a promising addition to conventional food processing techniques. However, as is the case with most new technologies, its economic suitability will be determined by comparison to current methods. Assuming that current food processing facilities are adaptable to the incorporation of a food irradiation capability, an analysis of cost is made for several different optional systems able to process up to 100 Mrad ton/day (1 MGy ton/day; or 1,000 ton/day at 100 krad). Both radionuclide sources and electrical accelerators are compared as sources of ionizing radiation. The cost of irradiation is shown to be competitive with most other treatments, including fumigation, low-temperature storage, and controlled atmosphere. A proper figure-of-merit for comparing the different sources is defined and used as a basis for an economic evaluation of food irradiation. (author)
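
    The throughput figure quoted above can be translated into a minimum absorbed-power requirement, because 1 Gy equals 1 J/kg. The sketch below performs only that unit conversion; source utilization efficiency and all economic factors discussed in the paper are ignored, so this is a lower bound for illustration.

```python
dose_gy = 1.0e6             # 1 MGy (i.e., 100 Mrad)
mass_kg_per_day = 1000.0    # 1 ton of product per day at that dose
seconds_per_day = 86400.0

absorbed_energy_j = dose_gy * mass_kg_per_day            # Gy = J/kg
mean_power_kw = absorbed_energy_j / seconds_per_day / 1000.0
print(f"minimum absorbed power: {mean_power_kw:.1f} kW")  # about 11.6 kW
```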

  12. Comparative and phylogenetic perspectives of the cleavage process in tailed amphibians.

    Science.gov (United States)

    Desnitskiy, Alexey G; Litvinchuk, Spartak N

    2015-10-01

    The order Caudata includes about 660 species and displays a variety of important developmental traits such as cleavage pattern and egg size. However, the cleavage process of tailed amphibians has never been analyzed within a phylogenetic framework. We use published data on the embryos of 36 species concerning the character of the third cleavage furrow (latitudinal, longitudinal or variable) and the magnitude of synchronous cleavage period (up to 3-4 synchronous cell divisions in the animal hemisphere or a considerably longer series of synchronous divisions followed by midblastula transition). Several species from basal caudate families Cryptobranchidae (Andrias davidianus and Cryptobranchus alleganiensis) and Hynobiidae (Onychodactylus japonicus) as well as several representatives from derived families Plethodontidae (Desmognathus fuscus and Ensatina eschscholtzii) and Proteidae (Necturus maculosus) are characterized by longitudinal furrows of the third cleavage and the loss of synchrony as early as the 8-cell stage. By contrast, many representatives of derived families Ambystomatidae and Salamandridae have latitudinal furrows of the third cleavage and extensive period of synchronous divisions. Our analysis of these ontogenetic characters mapped onto a phylogenetic tree shows that the cleavage pattern of large, yolky eggs with short series of synchronous divisions is an ancestral trait for the tailed amphibians, while the data on the orientation of third cleavage furrows seem to be ambiguous with respect to phylogeny. Nevertheless, the midblastula transition, which is characteristic of the model species Ambystoma mexicanum (Caudata) and Xenopus laevis (Anura), might have evolved convergently in these two amphibian orders.

  13. Comparative 187Re-187Os systematics of chondrites: Implications regarding early solar system processes

    Science.gov (United States)

    Walker, R.J.; Horan, M.F.; Morgan, J.W.; Becker, H.; Grossman, J.N.; Rubin, A.E.

    2002-01-01

    A suite of 47 carbonaceous, enstatite, and ordinary chondrites are examined for Re-Os isotopic systematics. There are significant differences in the 187Re/188Os and 187Os/188Os ratios of carbonaceous chondrites compared with ordinary and enstatite chondrites. The average 187Re/188Os for carbonaceous chondrites is 0.392 ± 0.015 (excluding the CK chondrite, Karoonda), compared with 0.422 ± 0.025 and 0.421 ± 0.013 for ordinary and enstatite chondrites (1σ standard deviations). These ratios, recast into elemental Re/Os ratios, are as follows: 0.0814 ± 0.0031, 0.0876 ± 0.0052 and 0.0874 ± 0.0027 respectively. Correspondingly, the 187Os/188Os ratios of carbonaceous chondrites average 0.1262 ± 0.0006 (excluding Karoonda), and ordinary and enstatite chondrites average 0.1283 ± 0.0017 and 0.1281 ± 0.0004, respectively (1σ standard deviations). The new results indicate that the Re/Os ratios of meteorites within each group are, in general, quite uniform. The minimal overlap between the isotopic compositions of ordinary and enstatite chondrites vs. carbonaceous chondrites indicates long-term differences in Re/Os for these materials, most likely reflecting chemical fractionation early in solar system history. A majority of the chondrites do not plot within analytical uncertainties of a 4.56-Ga reference isochron. Most of the deviations from the isochron are consistent with minor, relatively recent redistribution of Re and/or Os on a scale of millimeters to centimeters. Some instances of the redistribution may be attributed to terrestrial weathering; others are most likely the result of aqueous alteration or shock events on the parent body within the past 2 Ga. The 187Os/188Os ratio of Earth's primitive upper mantle has been estimated to be 0.1296 ± 8. If this composition was set via addition of a late veneer of planetesimals after core formation, the composition suggests the veneer was dominated by materials that had Re/Os ratios most similar to ordinary and
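
    The 4.56-Ga reference isochron mentioned above follows from the standard 187Re-187Os decay relation, written out below for orientation; the decay constant is the commonly cited literature value, quoted here as background rather than taken from this record.

$$
\left(\frac{^{187}\mathrm{Os}}{^{188}\mathrm{Os}}\right)_{\mathrm{measured}}
= \left(\frac{^{187}\mathrm{Os}}{^{188}\mathrm{Os}}\right)_{\mathrm{initial}}
+ \frac{^{187}\mathrm{Re}}{^{188}\mathrm{Os}}\left(e^{\lambda t}-1\right),
\qquad
\lambda_{^{187}\mathrm{Re}} \approx 1.666\times 10^{-11}\ \mathrm{a^{-1}}.
$$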

  14. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    Full Text Available The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
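
    For orientation only, the following sketch (not Peixoto's implementation; the description-length values are invented) shows how a posterior odds ratio between two candidate generative models can be read off from their minimum description lengths, with the shorter description length indicating the preferred model.

        import math

        def posterior_odds_from_dl(sigma_a: float, sigma_b: float) -> float:
            """Posterior odds in favour of model A, given description lengths in nats.

            With equal prior model probabilities, exp(-(Sigma_a - Sigma_b)) plays the
            role of a posterior odds ratio between models A and B.
            """
            return math.exp(sigma_b - sigma_a)

        # Hypothetical description lengths of a degree-corrected and a plain model:
        odds = posterior_odds_from_dl(sigma_a=1.52e4, sigma_b=1.58e4)
        print(f"the first model is favoured by a factor of about {odds:.3g}")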

  15. Comparative study on aerosol removal by natural processes in containment in severe accident for AP1000 reactor

    International Nuclear Information System (INIS)

    Sun, Xiaohui; Cao, Xinrong; Shi, Xingwei; Yan, Jin

    2017-01-01

    Highlights: • Characteristics of aerosol distribution in containment are obtained. • Aerosol removal by natural processes is comparatively studied by two methods. • The traditional rapid assessment method is conservative and can be applied to the AP1000 reactor. - Abstract: Focusing on aerosol removal by naturally occurring processes in the containment during a severe accident for AP1000, the integral severe accident code MELCOR and the rapid assessment method described in NUREG/CR-6189 are used, respectively, to study aerosol removal by natural processes. Three typical severe accidents, induced by a large break loss of coolant accident (LBLOCA), a small break loss of coolant accident (SBLOCA) and a steam generator tube rupture (SGTR), respectively, are selected for the study. The results obtained by the two methods were then compared in the following aspects: the efficiency of aerosol removal by natural processes, the peak time of aerosol suspended in the containment atmosphere, the peak amount of aerosol suspended in the containment atmosphere, and the time at which the aerosol removal efficiency by natural processes reaches 99.9%. It was further concluded that the results obtained by the rapid assessment method, which requires a shorter calculation process, are more conservative. The analysis results provide a reference for selecting a severe accident source term assessment method for the AP1000 nuclear emergency.

  16. Comparing the Effect of Pentoxifylline Administration on Mast Cells Maturing Process in a Diabetic and Normoglycemic Rat Wound Healing

    Directory of Open Access Journals (Sweden)

    Saeid Babaei

    2014-12-01

    Full Text Available Background: Wound healing is a complicated process that is influenced by many factors. Studies at the molecular level on human and animal models have revealed several molecular changes related to the effect of diabetes on the wound healing process. An increasing number of studies implicate the influence of mast cells on skin wound healing. The present experimental study was conducted to compare the effect of systemic pentoxifylline administration on the maturing process of mast cells during skin wound healing in diabetic and normoglycemic rats. Materials and Methods: In this experimental study, 48 Wistar rats were divided into two groups, normoglycemic and diabetic, and each group was divided into experimental and control subgroups. The experimental groups received intraperitoneal pentoxifylline (25 mg/kg, twice a day) and the control groups received distilled water. The number of mast cells and their maturing process were evaluated by microscopic counting of the mast cell types (types 1, 2, 3) using stereological methods on days 3 and 7 after surgery. Results: In all experimental groups receiving pentoxifylline there was a significant difference in the number of total mast cells when comparing the normoglycemic groups (p<0.05), and during the wound healing process pentoxifylline increased the number of type 2 mast cells in all experimental groups (p<0.05). Conclusion: In all pentoxifylline-treated groups a delay in the conversion of type 2 into type 3 mast cells was seen. Pentoxifylline decreases mast cell degranulation during the wound healing process.

  17. Inversion factor in the comparative analysis of dynamical processes in radioecology

    Energy Technology Data Exchange (ETDEWEB)

    Zarubin, O.; Zarubina, N. [Institute for Nuclear Research of National Academy of Sciences of Ukraine (Ukraine)

    2014-07-01

    We have studied the levels of specific activity of radionuclides in fish and fungi of the Kiev region of Ukraine from 1986 to 2013, including the 30-km exclusion zone of the Chernobyl Nuclear Power Plant (ChNPP) after the accident. The dynamics of radionuclide specific activity were analysed over this period for 10 species of freshwater fish of different trophic levels and for 7 species of higher fungi. Repeated measurements of the specific activity of radionuclides in fish were carried out on the Kanevskoe reservoir and the cooling pond of ChNPP, and in fungi on 6 test areas situated 2 to 150 km from ChNPP. The main attention was given to the accumulation of ¹³⁷Cs. We established that the dynamics of the specific activity of ¹³⁷Cs in different fish species of the same reservoir are not identical. The dynamics of the specific activity of ¹³⁷Cs in various fungal species of the same test area are likewise not identical, and the dynamics in the investigated objects of the various dry-land and water test areas also vary. The authors suggest an inversion factor for comparing the dynamics of the specific activity of ¹³⁷Cs, which in biota is a nonlinear process: K_inv = A_0 / A_t, where A_0 stands for the specific activity of the radionuclide at time 0 and A_t for the specific activity of the radionuclide at time t. K_inv therefore reflects the ratio (inversion) of the specific activity of the radionuclide to its starting value as a function of time, where K_inv > 1 corresponds to an increase in the radionuclide's specific activity and K_inv < 1 corresponds to a decrease. For example, K_inv of ¹³⁷Cs in the fish Rutilus rutilus in 1987-1996 was 0.57 in the Kanevskoe reservoir and 13.33 in the cooling pond of ChNPP, and 0.95 and 29.61, respectively, for Blicca bjoerkna. In 1987-2011 the K_inv of ¹³⁷Cs for R. rutilus in the Kanevskoe reservoir
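
    As a minimal numerical companion to the definition quoted above (a hypothetical sketch, not the authors' code; the activity values below are invented), the inversion factor is simply the ratio of the starting specific activity to the activity at time t.

        def inversion_factor(a0: float, at: float) -> float:
            """Inversion factor K_inv = A_0 / A_t as defined in the record above."""
            return a0 / at

        # Hypothetical specific activities (Bq/kg) at time 0 and at time t:
        k_inv = inversion_factor(a0=1200.0, at=2100.0)
        print(f"K_inv = {k_inv:.2f}")  # < 1 here, because A_t exceeds A_0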

  18. Comparing and Exploring the Sensory Processing Patterns of Higher Education Students With Attention Deficit Hyperactivity Disorder and Autism Spectrum Disorder.

    Science.gov (United States)

    Clince, Maria; Connolly, Laura; Nolan, Clodagh

    2016-01-01

    Research regarding sensory processing and adults with attention deficit hyperactivity disorder (ADHD) or autism spectrum disorder (ASD) is limited. This study aimed to compare sensory processing patterns of groups of higher education students with ADHD or ASD and to explore the implications of these disorders for their college life. The Adolescent/Adult Sensory Profile was administered to 28 students with ADHD and 27 students with ASD. Students and professionals were interviewed. The majority of students received scores that differed from those of the general population. Students with ADHD received significantly higher scores than students with ASD in relation to sensation seeking; however, there were no other major differences. Few differences exist between the sensory processing patterns of students with ADHD and ASD; however, both groups differ significantly from the general population. Occupational therapists should consider sensory processing patterns when designing supports for these groups. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  19. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  20. How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection.

    Science.gov (United States)

    Soch, Joram; Haynes, John-Dylan; Allefeld, Carsten

    2016-11-01

    Voxel-wise general linear models (GLMs) are a standard approach for analyzing functional magnetic resonance imaging (fMRI) data. An advantage of GLMs is that they are flexible and can be adapted to the requirements of many different data sets. However, the specification of first-level GLMs leaves the researcher with many degrees of freedom, which is problematic given recent efforts to ensure robust and reproducible fMRI data analysis. Formal model comparisons that allow a systematic assessment of GLMs are only rarely performed. On the one hand, too simple models may underfit data and leave real effects undiscovered. On the other hand, too complex models might overfit data and also reduce statistical power. Here we present a systematic approach termed cross-validated Bayesian model selection (cvBMS) that allows one to decide which GLM best describes a given fMRI data set. Importantly, our approach allows for non-nested model comparison, i.e. comparing more than two models that do not just differ by adding one or more regressors. It also allows for spatially heterogeneous modelling, i.e. using different models for different parts of the brain. We validate our method using simulated data and demonstrate potential applications to empirical data. The increased use of model comparison and model selection should increase the reliability of GLM results and reproducibility of fMRI studies. Copyright © 2016 Elsevier Inc. All rights reserved.
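
    The following sketch is not the cvBMS toolbox itself; it only illustrates the cross-validation idea behind it for a single voxel. Each candidate design matrix is scored by its out-of-sample Gaussian log-likelihood over leave-one-run-out folds, and the model with the highest score would be selected. Variable names and data shapes are assumptions.

        import numpy as np

        def cv_log_likelihood(y_runs, X_runs):
            """Leave-one-run-out predictive log-likelihood of one GLM for one voxel.

            y_runs: list of per-run data vectors; X_runs: list of per-run design matrices.
            """
            score = 0.0
            for k in range(len(y_runs)):
                train = [i for i in range(len(y_runs)) if i != k]
                X_tr = np.vstack([X_runs[i] for i in train])
                y_tr = np.concatenate([y_runs[i] for i in train])
                beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # training fit
                resid = y_tr - X_tr @ beta
                sigma2 = resid @ resid / len(y_tr)                   # training noise variance
                e = y_runs[k] - X_runs[k] @ beta                     # held-out run residuals
                score += -0.5 * (len(e) * np.log(2 * np.pi * sigma2) + e @ e / sigma2)
            return score

        # The design matrix (model) with the largest cv_log_likelihood is preferred.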

  1. Synthesizing Equivalence Indices for the Comparative Evaluation of Technoeconomic Efficiency of Industrial Processes at the Design/Re-engineering Level

    Science.gov (United States)

    Fotilas, P.; Batzias, A. F.

    2007-12-01

    The equivalence indices synthesized for the comparative evaluation of the technoeconomic efficiency of industrial processes are of critical importance since they serve both as (i) positive/analytic descriptors of the physicochemical nature of the process and (ii) measures of effectiveness, especially helpful for investigating competitiveness in the industrial/energy/environmental sector of the economy. In the present work, a new algorithmic procedure has been developed, which initially standardizes a real industrial process, then analyzes it as a compromise between two ideal processes, and finally synthesizes the index that can represent/reconstruct the real process as the result of the trade-off between the two ideal processes taken as parental prototypes. The same procedure performs fuzzy multicriteria ranking within a set of pre-selected industrial processes for two reasons: (a) to analyze the process most representative of the production/treatment under consideration, and (b) to use the `second best' alternative as a dialectic pole in the absence of the two ideal processes mentioned above. An implementation of this procedure is presented, concerning a facility for biological wastewater treatment with six alternatives: activated sludge through (i) continuous-flow incompletely-stirred tank reactors in series, (ii) a plug flow reactor with dispersion, (iii) an oxidation ditch, and biological processing through (iv) a trickling filter, (v) rotating contactors, (vi) shallow ponds. The criteria used for fuzzy ranking (to account for uncertainty) are capital cost, operating cost, environmental friendliness, reliability, flexibility, and extendibility. Two complementary indices were synthesized for the (ii)-alternative, which ranked first, and their quantitative expressions were derived, covering a variety of kinetic models as well as recycle/bypass conditions. Finally, an analysis of estimating the optimal values of these indices at maximum technoeconomic efficiency is presented and the implications

  2. A comparative analysis of the domestic and foreign licensing processes for power and non-power reactors

    International Nuclear Information System (INIS)

    Joe, J. C.; Youn, Y. K.; Kim, W. S.; Kim, H. J.

    2003-01-01

    The System-integrated Modular Advanced Reactor (SMART), a small to medium sized integral type Pressurized Water Reactor (PWR), has been developed in Korea. Now, SMART-P, a 1/5 scaled-down version of SMART, is being developed for the purpose of demonstrating the safety and performance of the SMART design. SMART-P is a first-of-a-kind reactor which is utilized for the research and development of a power reactor. Since the licensing process for such a reactor is not clearly specified in the current Atomic Energy Act, a comparative survey and analysis of domestic and foreign licensing processes for power and non-power reactors has been carried out to develop the rationale and technical basis for establishing the licensing process for such a reactor. The domestic and foreign licensing processes for power and non-power reactors have been surveyed and compared, including those of the U.S.A., Japan, France, the U.K., Canada, and the IAEA. The general trends in nuclear reactor classification, licensing procedures, regulatory technical requirements, and other licensing requirements and regulations have been investigated. The results of this study will be used as the rationale and technical basis for establishing the licensing process for reactors at the development stage such as SMART-P

  3. A comparative study of the probabilistic fracture mechanics and the stochastic Markovian process approaches for structural reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Stavrakakis, G.; Lucia, A.C.; Solomos, G. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1990-01-01

    The two computer codes COVASTOL and RELIEF, developed for the modeling of cumulative damage processes in the framework of probabilistic structural reliability, are compared. They are based respectively on the randomisation of a differential crack growth law and on the theory of discrete Markov processes. The codes are applied for fatigue crack growth predictions using two sets of data of crack propagation curves from specimens. The results are critically analyzed and an extensive discussion follows on the merits and limitations of each code. Their transferability for the reliability assessment of real structures is investigated. (author).

  4. Effects of petrophysical uncertainty in Bayesian hydrogeophysical inversion and model selection

    Science.gov (United States)

    Brunetti, Carlotta; Linde, Niklas

    2017-04-01

    Hydrogeophysical studies rely on petrophysical relationships that link geophysical properties to hydrological properties and state variables of interest; these relationships are frequently assumed to be perfect (i.e., a one-to-one relation). Using first-arrival traveltime data from a synthetic crosshole ground-penetrating radar (GPR) experiment, we investigate the role of petrophysical uncertainty on porosity estimates from Markov chain Monte Carlo (MCMC) inversion and on Bayes factors (i.e., ratios of the evidences, or marginal likelihoods, of two competing models) used in Bayesian model selection. The petrophysical errors (PE) are conceptualized by a correlated zero-mean multi-Gaussian field with horizontal anisotropy with a resulting correlation coefficient of 0.8 between porosity and radar wave speed. We consider four different cases: (1) no PE are present (i.e., they are not used to generate the synthetic data) and they are not inferred in the MCMC inversion, (2) the PE are inferred for but they are not present in the data, (3) the PE are present in the data, but not inferred for and (4) the PE are present in the data and inferred for. To obtain appropriate acceptance ratios (i.e., between 35% and 45%), it is necessary to infer the PE as model parameters with a proper proposal distribution (simple Monte Carlo sampling of the petrophysical errors within Metropolis leads to very small acceptance rates). Case 4 provides consistent porosity field estimates (no bias) and the correlation coefficient between the "true" and posterior mean porosity field decreases from 0.9 for case 1 to 0.75. For case 2, we find that the variance of the posterior mean porosity field is too low and the porosity range is underestimated (i.e., some of the variance is accounted for by the inferred petrophysical uncertainty). Correspondingly, the porosity range is too wide for case 3 as it is used to account for petrophysical errors in the data. When comparing three different conceptual

  5. A Short Introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length (MDL)

    NARCIS (Netherlands)

    Nannen, Volker

    2010-01-01

    The concept of overfitting in model selection is explained and demonstrated. After providing some background information on information theory and Kolmogorov complexity, we provide a short explanation of Minimum Description Length and error minimization. We conclude with a discussion of the typical
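
    A toy example (not taken from the report) of the overfitting problem that MDL addresses: choosing a polynomial degree by a crude two-part code length, L(model) + L(data | model), rather than by training error alone, which would always favour the highest degree.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 30)
        y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

        def two_part_code_length(degree: int) -> float:
            coeffs = np.polyfit(x, y, degree)
            resid = y - np.polyval(coeffs, x)
            sigma2 = max(resid @ resid / x.size, 1e-12)
            data_bits = 0.5 * x.size * np.log2(2 * np.pi * np.e * sigma2)  # -log2 likelihood
            model_bits = 0.5 * (degree + 1) * np.log2(x.size)              # cost of the parameters
            return data_bits + model_bits

        best = min(range(1, 10), key=two_part_code_length)
        print("degree selected by the two-part code:", best)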

  6. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...
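
    A minimal sketch of the Dynamic Model Selection recursion (not the authors' code; the forgetting-factor formulation follows the standard DMA literature, and the inputs are assumed): model probabilities are exponentially "forgotten" and then updated with each model's one-step predictive likelihood, and DMS picks the top model at each date.

        import numpy as np

        def dms_path(pred_like: np.ndarray, alpha: float = 0.99) -> np.ndarray:
            """pred_like: (T, K) array of one-step predictive likelihoods for K models."""
            T, K = pred_like.shape
            prob = np.full(K, 1.0 / K)
            selected = np.empty(T, dtype=int)
            for t in range(T):
                prior = prob**alpha / np.sum(prob**alpha)   # forgetting step
                prob = prior * pred_like[t]
                prob /= prob.sum()                          # updated model probabilities
                selected[t] = prob.argmax()                 # DMS selects the top model
            return selected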

  7. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.
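
    As a point of reference for the gradient-field-based methods compared in the paper (a simplified sketch, not the proposed RSVD algorithm), the dominant orientation of a small fringe patch can be taken from the leading singular vector of its stacked gradient vectors; the fringes run perpendicular to the dominant gradient direction.

        import numpy as np

        def local_orientation(patch: np.ndarray) -> float:
            """Return the fringe orientation (radians, in [0, pi)) of an image patch."""
            gy, gx = np.gradient(patch.astype(float))
            G = np.column_stack([gx.ravel(), gy.ravel()])    # n x 2 gradient field
            _, _, vt = np.linalg.svd(G, full_matrices=False)
            gx_dom, gy_dom = vt[0]                           # dominant gradient direction
            return (np.arctan2(gy_dom, gx_dom) + np.pi / 2) % np.pi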

  8. Children's resilience and trauma-specific cognitive behavioral therapy: Comparing resilience as an outcome, a trait, and a process.

    Science.gov (United States)

    Happer, Kaitlin; Brown, Elissa J; Sharma-Patel, Komal

    2017-11-01

    Resilience, which is associated with relatively positive outcomes following negative life experiences, is an important research target in the field of child maltreatment (Luthar et al., 2000). The extant literature contains multiple conceptualizations of resilience, which hinders development in research and clinical utility. Three models emerge from the literature: resilience as an immediate outcome (i.e., behavioral or symptom response), resilience as a trait, and resilience as a dynamic process. The current study compared these models in youth undergoing trauma-specific cognitive behavioral therapy. Results provide the most support for resilience as a process, in which increase in resilience preceded associated decrease in posttraumatic stress and depressive symptoms. There was partial support for resilience conceptualized as an outcome, and minimal support for resilience as a trait. Results of the models are compared and discussed in the context of existing literature and in light of potential clinical implications for maltreated youth seeking treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Comparing equivalent thermal, high pressure and pulsed electric field processes for mild pasteurization of orange juice. Part I: Impact on overall quality attributes

    NARCIS (Netherlands)

    Timmermans, R.A.H.; Mastwijk, H.C.; Knol, J.J.; Quataert, M.C.J.; Vervoort, L.; Plancken, van der I.; Hendrickx, M.E.; Matser, A.M.

    2011-01-01

    Mild heat pasteurization, high pressure processing (HP) and pulsed electric field (PEF) processing of freshly squeezed orange juice were comparatively evaluated examining their impact on microbial load and quality parameters immediately after processing and during two months of storage. Microbial

  10. Comparative analysis between the SPIF and DPIF variants for die-less forming process for an automotive workpiece

    Directory of Open Access Journals (Sweden)

    Adrian José Benitez Lozano

    2015-07-01

    Full Text Available Over time, the die-less incremental forming process has been developed in many ways to meet the need for flexible production with no investment in tooling and low production costs. Two of its configurations are the SPIF (single point incremental forming) and DPIF (double point incremental forming) techniques. The aim of this study is to compare both techniques in order to expose their advantages and disadvantages in the production of industrial parts, as well as to present die-less forming as an alternative manufacturing process. Experiments with the exhaust pipe cover of a vehicle are performed, the main process parameters are described, and formed workpieces without evidence of defects are achieved. Significant differences between the two techniques in terms of production times and accuracy with respect to the original model are also detected. Finally, it is suggested when it is more convenient to use each of them.

  11. Random Secure Comparator Selection Based Privacy-Preserving MAX/MIN Query Processing in Two-Tiered Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hua Dai

    2016-01-01

    Full Text Available Privacy-preserving data queries for wireless sensor networks (WSNs) have drawn much attention recently. This paper proposes a privacy-preserving MAX/MIN query processing approach based on random secure comparator selection in two-tiered sensor networks, denoted RSCS-PMQ. The secret comparison model is built on the basis of the secure comparator, which is defined by 0-1 encoding and HMAC. The MaxRSC algorithm for generating the minimal set of highest secure comparators is proposed, which is the key to realizing RSCS-PMQ. In the data collection procedure, the sensor node randomly selects a secure comparator generated from the maximum data item and includes it in the ciphertext submitted to the nearby master node. In the query processing procedure, the master node utilizes the MaxRSC algorithm to determine the corresponding minimal set of candidate ciphertexts containing the query results and returns it to the base station. The base station then obtains the plaintext query result through decryption. The theoretical analysis and experimental results indicate that RSCS-PMQ can preserve the privacy of sensor data and query results from master nodes even if they are compromised, and it has a better performance in terms of network communication cost than the existing approaches.
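
    To make the secure comparator concrete, the sketch below (hypothetical, not the RSCS-PMQ code) shows the underlying 0-1 encoding trick: x > y exactly when the 1-encoding of x and the 0-encoding of y share an element, and applying a keyed HMAC to every element lets the test run on digests without revealing the values themselves.

        import hashlib
        import hmac

        def one_encoding(x: int, bits: int) -> set:
            b = format(x, f"0{bits}b")
            return {b[: i + 1] for i in range(bits) if b[i] == "1"}

        def zero_encoding(y: int, bits: int) -> set:
            b = format(y, f"0{bits}b")
            return {b[:i] + "1" for i in range(bits) if b[i] == "0"}

        def mac_set(strings: set, key: bytes) -> set:
            return {hmac.new(key, s.encode(), hashlib.sha256).hexdigest() for s in strings}

        key = b"shared-secret"          # assumed to be shared by the sensor nodes
        x, y, bits = 42, 17, 8
        greater = bool(mac_set(one_encoding(x, bits), key) & mac_set(zero_encoding(y, bits), key))
        print(greater)                  # True: 42 > 17, decided from HMAC digests only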

  12. Comparing the neural bases of self-referential processing in typically developing and 22q11.2 adolescents.

    Science.gov (United States)

    Schneider, Maude; Debbané, Martin; Lagioia, Annalaura; Salomon, Roy; d'Argembeau, Arnaud; Eliez, Stephan

    2012-04-01

    The investigation of self-reflective processing during adolescence is relevant, as this period is characterized by deep reorganization of the self-concept. It may be the case that an atypical development of brain regions underlying self-reflective processing increases the risk for psychological disorders and impaired social functioning. In this study, we investigated the neural bases of self- and other-related processing in typically developing adolescents and youths with 22q11.2 deletion syndrome (22q11DS), a rare neurogenetic condition associated with difficulties in social interactions and increased risk for schizophrenia. The fMRI paradigm consisted in judging if a series of adjectives applied to the participant himself/herself (self), to his/her best friend or to a fictional character (Harry Potter). In control adolescents, we observed that self- and other-related processing elicited strong activation in cortical midline structures (CMS) when contrasted with a semantic baseline condition. 22q11DS exhibited hypoactivation in the CMS and the striatum during the processing of self-related information when compared to the control group. Finally, the hypoactivation in the anterior cingulate cortex was associated with the severity of prodromal positive symptoms of schizophrenia. The findings are discussed in a developmental framework and in light of their implication for the development of schizophrenia in this at-risk population. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Comparing the difference of measured GFR of ectopic pelvic kidney between anterior and posterior imaging processing in renal dynamic imaging

    International Nuclear Information System (INIS)

    Li Baojun; Zhao Deshan

    2014-01-01

    Objective: To compare and analyze the difference in the measured glomerular filtration rate (GFR) of ectopic pelvic kidneys between anterior and posterior image processing in renal dynamic imaging. Methods: Ten patients with ectopic kidneys in the pelvic cavity, confirmed by ultrasound, CT, renal dynamic imaging and other imaging modalities, were collected retrospectively. All images of the ectopic kidneys in renal dynamic imaging were processed by the anterior and posterior methods respectively. Only the ectopic kidney was processed in anterior imaging; the ectopic kidney and the contralateral normal kidney were processed in posterior imaging. Total GFR, defined as the sum of the GFR of the normal kidney in posterior imaging and the GFR of the ectopic kidney in anterior imaging, was compared with the total GFR of the two kidneys in posterior imaging and with the GFR from the two-sample method. Correlation analyses were performed between the GFRs from the three methods and all patients were followed up. Paired t-tests and bivariate correlation analyses were used. Results: The mean GFR of the ectopic kidney in anterior imaging was (27.48 ± 12.24) ml/(min·1.73 m²). It exceeded the GFR in posterior imaging [(10.71 ± 4.74) ml/(min·1.73 m²)] by more than 46% (t=5.481, P<0.01). There was no significant difference (t=-2.238, P>0.05), but a good correlation (r=0.704, P<0.05), between the total GFR based on anterior imaging and the GFR from the two-sample method. There was a significant difference (t=4.629, P<0.01) and a poorer correlation (r=0.576, P>0.05) between the total GFR in posterior imaging and the GFR from the two-sample method. Conclusion: Compared with the GFR from posterior imaging, the GFR from anterior imaging more truly reflects the functional condition of an ectopic pelvic kidney in renal dynamic imaging. (authors)

  14. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    Science.gov (United States)

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Pretreatment of furfural industrial wastewater by Fenton, electro-Fenton and Fe(II)-activated peroxydisulfate processes: a comparative study.

    Science.gov (United States)

    Yang, C W; Wang, D; Tang, Q

    2014-01-01

    The Fenton, electro-Fenton and Fe(II)-activated peroxydisulfate (PDS) processes have been applied for the treatment of actual furfural industrial wastewater in this paper. Through the comparative study of the three processes, a suitable pretreatment technology for actual furfural wastewater treatment was obtained, and the mechanism and kinetic process of this technology are discussed. The experimental results show that Fenton technology has a good and stable effect without adjusting the pH of the furfural wastewater. At the optimal conditions, which were 40 mmol/L H₂O₂ initial concentration and 10 mmol/L Fe²⁺ initial concentration, the chemical oxygen demand (COD) removal rate can reach 81.2% after 90 min of reaction at a temperature of 80 °C. The PDS process also has a good performance. The COD removal rate could attain 80.3% when the Na₂S₂O₈ initial concentration was 4.2 mmol/L, the Fe²⁺ initial concentration was 0.1 mol/L, the temperature remained at 70 °C, and the pH value remained at 2.0. The electro-Fenton process was not able to treat the high-temperature furfural industrial wastewater, and only 10.2% of the COD was degraded at 80 °C in the optimal conditions (2.25 mA/cm² current density, 4 mg/L Na₂SO₄, 0.3 m³/h aeration rate). For the Fenton, electro-Fenton and PDS processes in the pretreatment of furfural wastewater, the kinetic processes follow a pseudo-first-order kinetics law. The pretreatment pathways of furfural wastewater degradation are also investigated in this study. The results show that furfural and furan formic acid in furfural wastewater are preferentially degraded by Fenton technology. Furfural can be degraded into low-toxicity or nontoxic compounds by the Fenton pretreatment technology, which could make furfural wastewater harmless and even reusable.
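
    The pseudo-first-order law mentioned above can be checked with a few lines of code: ln(C0/Ct) should grow linearly with time, and the slope is the apparent rate constant. The COD values below are invented for illustration and are not data from the study.

        import numpy as np

        t = np.array([0, 15, 30, 45, 60, 90], dtype=float)            # reaction time, min
        cod = np.array([1000, 720, 520, 380, 270, 145], dtype=float)  # hypothetical COD, mg/L
        k, intercept = np.polyfit(t, np.log(cod[0] / cod), 1)
        print(f"apparent pseudo-first-order rate constant k = {k:.4f} 1/min")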

  16. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness

    Directory of Open Access Journals (Sweden)

    Doug eRoberts-Wolfe

    2012-02-01

    Full Text Available Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory biases) in relation to both clinical symptomatology and well-being, in comparison to active control conditions. Methods: Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Results: Meditators showed greater increases in positive word recall compared to controls [F(1, 56) = 6.6, p = .02]. The meditation group increased significantly more on measures of well-being [F(1, 56) = 6.6, p = .01], with a marginal decrease in depression and anxiety [F(1, 56) = 3.0, p = .09] compared to controls. Increased positive word recall was associated with increased psychological well-being [r = 0.31, p = .02] and decreased clinical symptoms [r = -0.29, p = .03]. Conclusion: Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing.

  17. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher in Criticareaverage3s. In eight patients, Criticareaverage3s produced 20 false alarms (p signal processing compared with the Criticare monitor with the longer averaging time of 21 seconds.

  18. Processing Adipose-Rich Mohs Samples: A Comparative Study of Effectiveness of Pretreatment With Liquid Nitrogen Versus Flash Freezing Spray.

    Science.gov (United States)

    Reserva, Jeave; Kozel, Zachary; Krol, Cindy; Speiser, Jodi; Adams, William; Tung, Rebecca

    2017-11-01

    Processing of adipose-rich Mohs micrographic surgery (MMS) specimens poses challenges that may preclude complete margin evaluation. In this setting, the value of additional freezing methods using various cooling agents has not been previously investigated. The aim of this study is to compare the frozen section quality of high-adipose Mohs specimens processed without additional cooling treatments versus those pretreated with 1,1,1,2-tetrafluoroethane (TFE) or liquid nitrogen (LN2). A set of 3 sections were each taken from 24 adipose-rich Mohs micrographic surgery specimens. A section from each set was subjected to either no additional cooling treatment (control), two 10-second pulse sprays of 1,1,1,2-tetrafluoroethane, or three 2-second pulse sprays of LN2. After staining, 2 blinded raters evaluated slide quality based on the presence or absence of the following features: margin completeness, nuclear clearing, epidermal or adipose folding, holes, or venetian blind-like artifacts. Pretreatment of the sample with LN2 produced a significantly (P < 0.001) greater number of high-quality slides (19/24) compared to pretreatment with 1,1,1,2-tetrafluoroethane (1/24) and no additional treatment (0/24). The adjunctive use of LN2 spray before tissue embedding circumvents the challenges of processing "thick" (high-adipose) specimens and facilitates the production of high-quality frozen section slides during Mohs micrographic surgery.

  19. Radioactive waste management: a comparative study of national decision-making processes. Final report, September 15, 1978-December 31, 1979

    International Nuclear Information System (INIS)

    Zinberg, D.S.; Deese, D.

    1980-01-01

    A report is presented resulting from a comparative study of national decision-making processes in radioactive waste management. By seeking out the variations among the socio-political and institutional components of the nuclear power and radioactive waste policies in ten countries, the authors have attempted to identify means to improve national and international responses to a seemingly intractable problem, the management of wastes from military and commercial nuclear programs worldwide. Efforts were focused on evaluation of comparative national policy formulation processes. Mapping national programs in conjunction with social, political and administrative structure and comparing the similarities and differences among them has revealed six major issues: (1) technological bias in decision-making; (2) lack of national strategies for the RWM programs; (3) fragmentation of governmental power structures; (4) crippled national regulatory bodies; (5) complex and competing relations among local, state and federal levels of government; and (6) increased importance of non-governmental actors and public participation. The first two issues are overarching, encompassing the fundamental approach to policy, whereas the last four describe more specific aspects of the decision-making structures.

  20. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    Science.gov (United States)

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  1. Bayesian model selection without evidences: application to the dark energy equation-of-state

    Science.gov (United States)

    Hee, S.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.

    2016-01-01

    A method is presented for Bayesian model selection without explicitly computing evidences, by using a combined likelihood and introducing an integer model selection parameter n so that Bayes factors, or more generally posterior odds ratios, may be read off directly from the posterior of n. If the total number of models under consideration is specified a priori, the full joint parameter space (θ, n) of the models is of fixed dimensionality and can be explored using standard Markov chain Monte Carlo (MCMC) or nested sampling methods, without the need for reversible jump MCMC techniques. The posterior on n is then obtained by straightforward marginalization. We demonstrate the efficacy of our approach by application to several toy models. We then apply it to constraining the dark energy equation of state using a free-form reconstruction technique. We show that Λ cold dark matter is significantly favoured over all extensions, including the simple w(z) = constant model.
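
    A minimal sketch of the combined-likelihood idea described above (not the authors' code; the toy models and data are invented): an integer model index n is sampled alongside the parameters in a single Metropolis chain, and the posterior odds are read off from how often each value of n appears.

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(0.5, 1.0, size=50)              # synthetic observations

        def log_like(theta, n):
            mu = 0.0 if n == 0 else theta                 # model 0: fixed mean; model 1: free mean
            return -0.5 * np.sum((data - mu) ** 2)

        theta, n, chain_n = 0.0, 0, []
        logp = log_like(theta, n) - 0.5 * theta**2        # standard-normal prior on theta
        for _ in range(20000):
            theta_p = theta + 0.2 * rng.standard_normal()
            n_p = int(rng.integers(2))                    # uniform prior over the two models
            logp_p = log_like(theta_p, n_p) - 0.5 * theta_p**2
            if np.log(rng.random()) < logp_p - logp:
                theta, n, logp = theta_p, n_p, logp_p
            chain_n.append(n)
        p1 = np.mean(chain_n)
        print("posterior odds P(n=1)/P(n=0) ~", p1 / (1 - p1))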

  2. A comparative study of sensory processing in children with and without Autism Spectrum Disorder in the home and classroom environments.

    Science.gov (United States)

    Fernández-Andrés, Ma Inmaculada; Pastor-Cerezuela, Gemma; Sanz-Cervera, Pilar; Tárraga-Mínguez, Raúl

    2015-03-01

    Sensory processing and higher integrative functions impairments are highly prevalent in children with ASD. Context should be considered in analyzing the sensory profile and higher integrative functions. The main objective of this study is to compare sensory processing, social participation and praxis in a group of 79 children (65 males and 14 females) from 5 to 8 years of age (M=6.09) divided into two groups: ASD Group (n=41) and Comparison Group (n=38). The Sensory Processing Measure (SPM) was used to evaluate the sensory profile of the children: parents reported information about their children's characteristics in the home environment, and teachers reported information about the same characteristics in the classroom environment. The ASD Group obtained scores that indicate higher levels of dysfunction on all the assessed measures in both environments, with the greatest differences obtained on the social participation and praxis variables. The most affected sensory modalities in the ASD Group were hearing and touch. Only in the ASD Group were significant differences found between the information reported by parents and what was reported by teachers: specifically, the teachers reported greater dysfunction than the parents in social participation (p=.000), touch (p=.003) and praxis (p=.010). These results suggest that the context-specific qualities found in children with ASD point out the need to receive information from both parents and teachers during the sensory profile assessment process, and use context-specific assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Comparative study of the bioconversion process using R-(+)- and S-(-)-limonene as substrates for Fusarium oxysporum 152B.

    Science.gov (United States)

    Molina, Gustavo; Bution, Murillo L; Bicas, Juliano L; Dolder, Mary Anne Heidi; Pastore, Gláucia M

    2015-05-01

    This study compared the bioconversion process of S-(-)-limonene into limonene-1,2-diol with the already established biotransformation of R-(+)-limonene into α-terpineol using the same biocatalyst in both processes, Fusarium oxysporum 152B. The bioconversion of the S-(-)-isomer was tested on cell permeabilisation under anaerobic conditions and using a biphasic system. When submitted to permeabilisation trials, this biocatalyst has shown a relatively high resistance; still, no production of limonene-1,2-diol and a loss of activity of the biocatalyst were observed after intense cell treatment, indicating a complete loss of cell viability. Furthermore, the results showed that this process can be characterised as an aerobic system that was catalysed by limonene-1,2-epoxide hydrolase, had an intracellular nature and was cofactor-dependent because the final product was not detected by an anaerobic process. Finally, this is the first report to characterise the bioconversion of R-(+)- and S-(-)-limonene by cellular detoxification using ultra-structural analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Comparative antioxidant potential of Withania somnifera based herbal formulation prepared by traditional and non-traditional fermentation processes.

    Science.gov (United States)

    Manwar, Jagdish; Mahadik, Kakasaheb; Sathiyanarayanan, L; Paradkar, Anant; Patil, Sanjay

    2013-06-01

    Ashwagandharishtha is a liquid polyherbal formulation traditionally prepared by fermentation process using the flowers of Woodfordia fruticosa. It contains roots of Withania somnifera as a major crude drug. Alcohol generated during the fermentation causes the extraction of water insoluble phytoconstituents. Yeasts present on the flowers are responsible for this fermentation. Total nine formulations of ashwagandharishtha were prepared by fermentation process using traditional Woodfordia fruticosa flowers (ASG-WFS) and using yeasts isolated from the same flowers. During fermentation, kinetic of alcohol generation, sugar consumption, changes in pH and withanolides extraction were studied. All the formulations were tested for in vitro antioxidant potential by 1,1-diphenyl-2-picryl-hydrazyl (DPPH) free radical scavenging, hydrogen peroxide scavenging and total reducing power assay. The results were compared with standard ascorbic acid. Traditional formulation (ASG-WFS) showed the highest activity (p < 0.001) relative to other formulations and standard ascorbic acid. ASG-WFS showed significant (DPPH) free radical scavenging (78.75%) and hydrogen peroxide scavenging (69.62%) at the concentration of 1000 μg/mL and 100 μg/mL, respectively. Traditional process is the best process for preparing ashwagandharishtha to obtain significant antioxidant activity.

  5. The comparative study of pressing and extrusion like processes of construction ceramic products in the Metropolitan Area of Cucuta

    International Nuclear Information System (INIS)

    Gelves, J. F.; Monroy, R.; Sanchez, J.; Ramirez, R. P.

    2013-01-01

    The present work studies the principal control variables in the manufacturing of construction pieces in the Metropolitan Area of San Jose de Cucuta using extrusion and pressing as forming techniques. The investigation was carried out using clayey samples from the two principal geological formations of the region from which the raw material is taken for processing at an industrial level. Milling of the clayey samples was performed both dry and wet, and the particle size was measured. Subsequently, the forming process was carried out using a hydraulic press and an extruder with a vacuum system, both at laboratory scale; the shaped pieces were dried and fired between 980 °C and 1180 °C. At the end of the process, tests were performed to determine the water absorption, contraction and mass loss on firing of the pieces. The results of the study show that the extrusion technique allowed faster vitrification of the region's clays compared with the pressing technique, while the drying and firing contractions are less marked for the pressing technique, with standard deviations much lower than in extrusion. (Author) 13 refs.

  6. Different signal processing techniques of ratio spectra for spectrophotometric resolution of binary mixture of bisoprolol and hydrochlorothiazide; a comparative study

    Science.gov (United States)

    Elzanfaly, Eman S.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2015-04-01

    Five signal processing techniques were applied to ratio spectra for the quantitative determination of bisoprolol (BIS) and hydrochlorothiazide (HCT) in their binary mixture. The proposed techniques are Numerical Differentiation of Ratio Spectra (ND-RS), Savitsky-Golay of Ratio Spectra (SG-RS), Continuous Wavelet Transform of Ratio Spectra (CWT-RS), Mean Centering of Ratio Spectra (MC-RS) and Discrete Fourier Transform of Ratio Spectra (DFT-RS). The linearity of the proposed methods was investigated in the ranges of 2-40 and 1-22 μg/mL for BIS and HCT, respectively. The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations, and the standard deviation was less than 1.5. The five signal processing techniques were compared to each other and validated according to the ICH guidelines; accuracy, precision, repeatability and robustness were found to be within the acceptable limits.
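
    To illustrate the ratio-spectra idea behind ND-RS (a synthetic sketch; the Gaussian bands and concentrations are invented and unrelated to the real BIS/HCT spectra), dividing the mixture spectrum by the spectrum of one component turns that component's contribution into a constant, which then vanishes on differentiation.

        import numpy as np

        wavelength = np.linspace(200, 320, 601)

        def band(center, width):
            # hypothetical Gaussian absorption band
            return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

        spec_bis, spec_hct = band(225, 12), band(270, 15)
        mixture = 0.7 * spec_bis + 0.4 * spec_hct        # invented concentrations

        ratio = mixture / spec_hct                       # ratio spectrum (divisor: HCT)
        nd_rs = np.gradient(ratio, wavelength)           # numerical differentiation
        # nd_rs no longer depends on the HCT contribution, because the constant
        # 0.4 term of the ratio spectrum disappears on differentiation.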

  7. Comparing ICD9-encoded diagnoses and NLP-processed discharge summaries for clinical trials pre-screening: a case study.

    Science.gov (United States)

    Li, Li; Chase, Herbert S; Patel, Chintan O; Friedman, Carol; Weng, Chunhua

    2008-11-06

    The prevalence of electronic medical record (EMR) systems has made mass-screening for clinical trials viable through secondary uses of clinical data, which often exist in both structured and free text formats. The tradeoffs of using information in either data format for clinical trials screening are understudied. This paper compares the results of clinical trial eligibility queries over ICD9-encoded diagnoses and NLP-processed textual discharge summaries. The strengths and weaknesses of both data sources are summarized along the following dimensions: information completeness, expressiveness, code granularity, and accuracy of temporal information. We conclude that NLP-processed patient reports supplement important information for eligibility screening and should be used in combination with structured data.

  8. Inference of the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes from comparative gene mapping.

    Directory of Open Access Journals (Sweden)

    Yoshinobu Uno

    Full Text Available Comparative genome analysis of non-avian reptiles and amphibians provides important clues about the process of genome evolution in tetrapods. However, there is still only limited information available on the genome structures of these organisms. Consequently, the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes in tetrapods remain poorly understood. We constructed chromosome maps of functional genes for the Chinese soft-shelled turtle (Pelodiscus sinensis), the Siamese crocodile (Crocodylus siamensis), and the Western clawed frog (Xenopus tropicalis) and compared them with genome and/or chromosome maps of other tetrapod species (salamander, lizard, snake, chicken, and human). This is the first report on the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes inferred from comparative genomic analysis of vertebrates, which cover all major non-avian reptilian taxa (Squamata, Crocodilia, Testudines). The eight largest macrochromosomes of the turtle and chicken were equivalent, and 11 linkage groups had also remained intact in the crocodile. Linkage groups of the chicken macrochromosomes were also highly conserved in X. tropicalis, two squamates, and the salamander, but not in human. Chicken microchromosomal linkages were conserved in the squamates, which have fewer microchromosomes than chicken, and also in Xenopus and the salamander, which both lack microchromosomes; in the latter, the chicken microchromosomal segments have been integrated into macrochromosomes. Our present findings open up the possibility that the ancestral amniotes and tetrapods had at least 10 large genetic linkage groups and many microchromosomes, which corresponded to the chicken macro- and microchromosomes, respectively. The turtle and chicken might retain the microchromosomes of the amniote protokaryotype almost intact. The decrease in number and/or disappearance of microchromosomes by repeated

  9. Modelling uncertainty due to imperfect forward model and aerosol microphysical model selection in the satellite aerosol retrieval

    Science.gov (United States)

    Määttä, Anu; Laine, Marko; Tamminen, Johanna

    2015-04-01

    This study aims to characterize the uncertainty related to the aerosol microphysical model selection and the modelling error due to approximations in the forward modelling. Many satellite aerosol retrieval algorithms rely on pre-calculated look-up tables of model parameters representing various atmospheric conditions. In the retrieval we need to choose the most appropriate aerosol microphysical models from the pre-defined set of models by fitting them to the observations. The aerosol properties, e.g. AOD, are then determined from the best models. This choice of an appropriate aerosol model constitutes a notable part of the AOD retrieval uncertainty. The motivation in our study was to account for these two sources in the total uncertainty budget: uncertainty in selecting the most appropriate model, and uncertainty resulting from the approximations in the pre-calculated aerosol microphysical model. The systematic model error was analysed by studying the behaviour of the model residuals, i.e. the differences between modelled and observed reflectances, by statistical methods. We utilised Gaussian processes to characterize the uncertainty related to approximations in aerosol microphysics modelling due to use of look-up tables and other non-modelled systematic features in the Level 1 data. The modelling error is described by a non-diagonal covariance matrix parameterised by correlation length, which is estimated from the residuals using computational tools from spatial statistics. In addition, we utilised Bayesian model selection and model averaging methods to account for the uncertainty due to aerosol model selection. By acknowledging the modelling error as a source of uncertainty in the retrieval of AOD from observed spectral reflectance, we allow the observed values to deviate from the modelled values within limits determined by both the measurement and modelling errors. This results in a more realistic uncertainty level of the retrieved AOD. The method is illustrated by both
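
    A minimal sketch (not the retrieval code; the numbers and the exponential kernel choice are assumptions) of the non-diagonal error model described above: uncorrelated measurement noise plus a modelling-error term whose correlation decays with the separation between wavelengths, controlled by a correlation length.

        import numpy as np

        def error_covariance(wavelengths, sigma_meas, sigma_model, corr_length):
            """Measurement + modelling error covariance with an exponential kernel."""
            d = np.abs(wavelengths[:, None] - wavelengths[None, :])
            model_part = sigma_model**2 * np.exp(-d / corr_length)
            meas_part = np.diag(np.full(wavelengths.size, sigma_meas**2))
            return meas_part + model_part

        # Hypothetical reflectance error levels on a 30-channel wavelength grid (nm):
        cov = error_covariance(np.linspace(340, 500, 30), 0.002, 0.005, 20.0)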

  10. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
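
    The core idea can be sketched in Python, here approximated with plain importance sampling rather than the full bridge-sampling GMIS estimator: a Gaussian mixture is fitted to posterior samples (standing in for DREAM output) and used as the importance density for the evidence integral. The toy prior, likelihood and sample values are all invented for illustration.

      import numpy as np
      from scipy import stats
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)

      # Toy posterior samples (stand-in for a DREAM chain) for a 2-parameter model
      posterior_samples = rng.normal([1.0, -0.5], [0.3, 0.2], size=(5000, 2))

      # Fit a Gaussian mixture to the posterior samples to use as importance density
      gm = GaussianMixture(n_components=3, random_state=0).fit(posterior_samples)

      def log_prior(theta):
          # Independent N(0, 2) priors on both parameters (illustrative choice)
          return stats.norm(0, 2).logpdf(theta).sum(axis=-1)

      def log_likelihood(theta):
          # Stand-in likelihood; in practice this would evaluate the simulation model
          return stats.multivariate_normal([1.0, -0.5], np.diag([0.09, 0.04])).logpdf(theta)

      # Plain importance sampling with the mixture as proposal:
      # Z = E_proposal[ prior * likelihood / proposal ]
      draws, _ = gm.sample(20000)
      log_w = log_prior(draws) + log_likelihood(draws) - gm.score_samples(draws)
      log_evidence = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
      print(f"log evidence approx. {log_evidence:.2f}")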

  11. EPR: Comparative approach of the French and Finnish reviewing process and optimization of radiation-protection at the design phase

    International Nuclear Information System (INIS)

    Couasnon, Olivier; Bouchez, Emmanuelle; Gourram, Hakim; Evrard, Jean-Michel; Riihiluoma, Veli; Beneteau, Yannick; Foret, Jean-Luc

    2008-01-01

    Following the assessment of the EPR preliminary safety analysis report in France, the purpose of this paper is to present a comparative approach of the French and Finnish reviewing processes. The overall picture drawn on this occasion is intended: 1) To recall the history of the EPR (from the decision to launch studies in the 1990s, to the French-German cooperation, and finally to the construction of one unit in Finland and another in France); 2) To compare the French and Finnish safety evaluation systems: in France, the safety authority in charge of the authorization process is not directly linked to its technical support organization, which leads the technical review; in Finland, the safety authority is in charge of the evaluation of safety analysis, and in this process Technical Support Organizations (TSO) can be requested, for example, for comparative calculations; 3) To present the dose targets (calculated reference doses) planned by the nuclear operators at the design phase, as well as the global radiation-protection optimization process. In France, for example, EDF performed a detailed optimization analysis on selected tasks known to be major contributors to the annual average collective dose (thermal insulation, logistics, valve maintenance, opening/closing of the vessel, preparation and checks of steam generators, on-site spent fuel management and waste management). In France, the optimization process is based on an iterative method. In Finland, the optimization of the annual collective dose has to be described in a separate topical report, and in every phase of system description the radiation-protection aspects have to be taken into account to meet the requirements stated in specific regulatory guides; 4) As a conclusion, to draw a comparison between the EPR collective dose target and the doses received on other pressurized water reactors that are close to the EPR design (Konvoi of German design, 'best French units'). This paper has been jointly written by the French operator (EDF

  12. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  13. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
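
    Once log evidences are available for the competing parameterizations, converting them into posterior model weights is a direct application of Bayes' theorem; the sketch below (with made-up log-evidence values) uses the log-sum-exp trick for numerical stability and assumes equal prior model probabilities unless given.

      import numpy as np

      def model_weights(log_evidences, prior_probs=None):
          """Convert per-model log evidences into posterior model probabilities."""
          log_z = np.asarray(log_evidences, dtype=float)
          if prior_probs is None:
              prior_probs = np.full(log_z.shape, 1.0 / log_z.size)   # equal prior model probabilities
          log_post = log_z + np.log(prior_probs)
          log_post -= log_post.max()                                  # stabilise the exponentials
          w = np.exp(log_post)
          return w / w.sum()

      # Illustrative log evidences for three parameterizations of a flow model
      print(model_weights([-120.4, -118.9, -125.7]))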

  14. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    Science.gov (United States)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic material) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low and represents a successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Taguchi methods require more experiments and consume more time than the Shainin System. The Shainin System is less complicated and easier to implement, whereas Taguchi methods are statistically more reliable for the optimization of process parameters. Finally, the experiments showed that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.
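
    For a smaller-the-better response such as the rejection rate discussed above, the Taguchi signal-to-noise ratio is SN = -10 log10(mean(y^2)); the short sketch below illustrates the calculation on hypothetical replicate runs (the values are not from the study).

      import numpy as np

      def sn_smaller_is_better(y):
          """Taguchi signal-to-noise ratio for a smaller-the-better response
          (e.g. a defect or rejection rate): SN = -10 * log10(mean(y**2))."""
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y**2))

      # Hypothetical rejection fractions from replicate runs at two factor settings
      print(sn_smaller_is_better([0.40, 0.38, 0.42]))   # lower SN -> worse setting
      print(sn_smaller_is_better([0.09, 0.08, 0.10]))   # higher SN -> better setting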

  15. Comparative effect of high pressure processing and traditional thermal treatment on the physicochemical, microbiology, and sensory analysis of olive jam

    Directory of Open Access Journals (Sweden)

    Delgado-Adamez, J.

    2013-09-01

    Full Text Available In the present work the effect of processing by high hydrostatic pressure (HPP) was assessed as an alternative to thermal pasteurization of olive jam. The effects of both treatments on the product after processing were compared, and stability during storage under refrigeration was assessed through the characterization of physicochemical, microbiological and sensory aspects. To assess the effect of processing, two HPP treatments (450 and 600 MPa) and thermal pasteurization (80 °C for 20 min) were applied and compared with the unprocessed product. Compared with the other treatments, HPP at 600 MPa showed a reduction in microorganisms, greater clarity, less browning, and better sensory acceptance. The shelf-life of the refrigerated product indicates the feasibility of applying HPP technology to obtain food with a shelf-life similar to that achieved with the traditional pasteurization treatment, but with better sensory quality.

  16. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    Directory of Open Access Journals (Sweden)

    Sveiczer Akos

    2006-03-01

    Full Text Available Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply for all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results: Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion: A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data-range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
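
    The model-selection step can be illustrated with a small Python sketch: an exponential and a smooth (softplus-blended) bilinear curve are fitted to a synthetic single-cell length trajectory and compared via AIC. The data, noise level and the particular smooth-bilinear form are illustrative stand-ins, not the published LinBiExp model or the original measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def aic(rss, n, k):
          """AIC for least-squares fits with Gaussian errors (up to an additive constant)."""
          return n * np.log(rss / n) + 2 * k

      def exponential(t, y0, mu):
          return y0 * np.exp(mu * t)

      def bilinear(t, y0, r1, r2, tc, delta):
          """Smooth bilinear curve: slope r1 before tc, r2 after, blended over width delta."""
          return y0 + r1 * t + (r2 - r1) * delta * np.logaddexp(0.0, (t - tc) / delta)

      # Synthetic single-cell length trajectory (illustrative, not the published data)
      rng = np.random.default_rng(1)
      t = np.linspace(0, 120, 60)                          # minutes
      y = bilinear(t, 8.0, 0.02, 0.05, 70.0, 5.0) + rng.normal(0, 0.08, t.size)

      p_exp, _ = curve_fit(exponential, t, y, p0=[8.0, 0.005])
      p_bil, _ = curve_fit(bilinear, t, y, p0=[8.0, 0.02, 0.05, 60.0, 5.0], maxfev=20000)

      rss_exp = np.sum((y - exponential(t, *p_exp))**2)
      rss_bil = np.sum((y - bilinear(t, *p_bil))**2)
      print("AIC exponential:", aic(rss_exp, t.size, 2))
      print("AIC bilinear:   ", aic(rss_bil, t.size, 5))   # lower AIC -> better supported model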

  17. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness.

    Science.gov (United States)

    Roberts-Wolfe, Douglas; Sacchet, Matthew D; Hastings, Elizabeth; Roth, Harold; Britton, Willoughby

    2012-01-01

    While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory) biases in relation to both clinical symptomatology and well-being in comparison to active control conditions. Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Meditators showed greater increases in positive word recall compared to controls [F(1, 56) = 6.6, p = 0.02]. The meditation group increased significantly more on measures of well-being [F(1, 56) = 6.6, p = 0.01], with a marginal decrease in depression and anxiety [F(1, 56) = 3.0, p = 0.09] compared to controls. Increased positive word recall was associated with increased psychological well-being (r = 0.31, p = 0.02) and decreased clinical symptoms (r = -0.29, p = 0.03). Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing. Future research with a fully randomized design will be

  18. Mindfulness Training Alters Emotional Memory Recall Compared to Active Controls: Support for an Emotional Information Processing Model of Mindfulness

    Science.gov (United States)

    Roberts-Wolfe, Douglas; Sacchet, Matthew D.; Hastings, Elizabeth; Roth, Harold; Britton, Willoughby

    2012-01-01

    Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory) biases in relation to both clinical symptomatology and well-being in comparison to active control conditions. Methods: Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a “meditation laboratory” or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Results: Meditators showed greater increases in positive word recall compared to controls [F(1, 56) = 6.6, p = 0.02]. The meditation group increased significantly more on measures of well-being [F(1, 56) = 6.6, p = 0.01], with a marginal decrease in depression and anxiety [F(1, 56) = 3.0, p = 0.09] compared to controls. Increased positive word recall was associated with increased psychological well-being (r = 0.31, p = 0.02) and decreased clinical symptoms (r = −0.29, p = 0.03). Conclusion: Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing. Future

  19. Comparative Proteomic Analysis of the Graft Unions in Hickory (Carya cathayensis) Provides Insights into Response Mechanisms to Grafting Process

    Directory of Open Access Journals (Sweden)

    Daoliang Yan

    2017-04-01

    Full Text Available Hickory (Carya cathayensis), a tree with high nutritional and economic value, is widely cultivated in China. Grafting greatly reduces the length of the juvenile phase and makes the large-scale cultivation of hickory possible. To reveal the response mechanisms of this species to grafting, we employed a proteomics-based approach to identify differentially expressed proteins in the graft unions during the grafting process. Our study identified 3723 proteins, of which 2518 were quantified. A total of 710 differentially expressed proteins (DEPs) were quantified, and these were involved in various molecular functions and biological processes. Among these DEPs, 341 were up-regulated and 369 were down-regulated at 7 days after grafting compared with the control. Four auxin-related proteins were down-regulated, which was in agreement with the transcription levels of their encoding genes. The Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis showed that the ‘Flavonoid biosynthesis’ pathway and ‘starch and sucrose metabolism’ were both significantly up-regulated. Interestingly, five flavonoid biosynthesis-related proteins, a flavanone 3-hydroxylase, a cinnamate 4-hydroxylase, a dihydroflavonol-4-reductase, a chalcone synthase, and a chalcone isomerase, were significantly up-regulated. Further experiments verified a significant increase in the total flavonoid contents in scions, which suggests that graft union formation may activate flavonoid biosynthesis to increase the content of a series of downstream secondary metabolites. This comprehensive analysis provides fundamental information on the candidate proteins and secondary metabolism pathways involved in the grafting process for hickory.

  20. Comparing and combining implicit ligand sampling with multiple steered molecular dynamics to study ligand migration processes in heme proteins.

    Science.gov (United States)

    Forti, Flavio; Boechi, Leonardo; Estrin, Dario A; Marti, Marcelo A

    2011-07-30

    The ubiquitous heme proteins perform a wide variety of tasks that rely on the subtle regulation of their affinity for small ligands like O2, CO, and NO. Ligand affinity is characterized by kinetic association and dissociation rate constants, which partially depend on ligand migration between the solvent and the active site, mediated by the presence of internal cavities or tunnels. Different computational methods have been developed to study these processes, which can be roughly divided into two strategies: costly methods in which the ligand is treated explicitly during the simulations and the free energy landscape of the process is computed; and faster methods that use previously computed Molecular Dynamics simulations without the ligand and incorporate it afterwards, called implicit ligand sampling (ILS) methods. To compare the performance of both approaches and to provide a combined protocol for studying ligand migration in heme proteins, we performed ILS and multiple steered molecular dynamics (MSMD) free energy calculations of the ligand migration process in three representative, theoretically and experimentally well-studied cases that cover a wide range of complex situations, presenting a challenging benchmark for the aim of the present work. Our results show that ILS provides a good description of the tunnel topology and a reasonable approximation to the free energy landscape, while MSMD provides a more accurate and detailed free energy profile of each tunnel. Based on these results, a combined strategy is presented for the study of internal ligand migration in heme proteins. Copyright © 2011 Wiley Periodicals, Inc.

  1. Evaluation of the radiographic process using an experimental monobath solution compared with normal (Kodak) and rapid (RAY) developer solutions

    International Nuclear Information System (INIS)

    Baratieri, N.M.M.

    1985-01-01

    A comparative evaluation of the radiographic image quality of two dental X-ray films (Kodak's EP-21 and Agfa-Gevaert DOS-1) when processed in a normal (Kodak), a rapid (Ray), and an experimental monobath solution is presented. These films, processed in those solutions, had their development time, temperature and agitation performance checked by sensitometry; pH and color by routine methods; and hypo residues by spectrophotometry. The radiographs were also analysed by qualified professionals regarding the best development time. The data so obtained allowed the conclusion that the best development time for the monobath was 3 minutes at 20 °C, although 25 or 30 °C also gave acceptable results at shorter times. Agitation for 10 seconds every minute is an important factor concerning image quality. pH and color alter rapidly but with little influence on the final result. A certain amount of residual chemical compounds was found; these were not identified but are not hypo components, and it is important to note that they do not seem to act upon the emulsion for at least one year after processing. (author) [pt

  2. Comparative Evaluation of the Ostium After External and Nonendoscopic Endonasal Dacryocystorhinostomy Using Image Processing (Matlabs and Image J) Softwares.

    Science.gov (United States)

    Ganguly, Anasua; Kaza, Hrishikesh; Kapoor, Aditya; Sheth, Jenil; Ali, Mohammad Hasnat; Tripathy, Devjyoti; Rath, Suryasnata

    The purpose of this study was to compare the characteristics of the ostium after external dacryocystorhinostomy and nonendoscopic endonasal dacryocystorhinostomy (NEN-DCR). This cross-sectional study included patients who underwent a successful external dacryocystorhinostomy or NEN-DCR and had ≥1 month of follow up. Pictures of the ostium were captured with a nasal endoscope (4 mm, 30°) after inserting a lacrimal probe premarked at 2 mm. Image analyses were performed using the Image J and Contour softwares. Of the 113 patients included, the external dacryocystorhinostomy group had 53 patients and the NEN-DCR group had 60 patients. The mean age of patients in the NEN-DCR group (38 years) was significantly lower than that in the external dacryocystorhinostomy group. There were no significant differences (p > 0.05) in mean follow up (6 vs. 4 months), maximum diameter of the ostium (8 vs. 7 mm), the perpendicular drawn to it (4 vs. 4 mm), area of the ostium (43 vs. 36 mm²), or the minimum distance between the common internal punctum and the edge of the ostium (1 vs. 1 mm) between the external and NEN-DCR groups. Image processing softwares offer a simple and objective method to measure the ostium. While the ostia are comparable in size, their relative position differs, with posteriorly placed ostia in external DCR compared with inferiorly placed ostia in NEN-DCR.

  3. Information processing, attention and visual-motor function of adolescents born after in vitro fertilization compared with spontaneous conception.

    Science.gov (United States)

    Wagenaar, K; van Weissenbruch, M M; Knol, D L; Cohen-Kettenis, P T; Delemarre-van de Waal, H A; Huisman, J

    2009-04-01

    Adverse conditions during prenatal life are associated with changes in physical and mental functioning in later life, as shown in children born preterm or small for gestational age. While cardiometabolic differences have recently been demonstrated in IVF children, there might also be risks of disturbances in cognitive function. Therefore, this study examined information processing, attention and visual-motor function in pubertal IVF children compared with spontaneously conceived controls from subfertile parents. Results for these cognitive functions were then related to cardiometabolic measures to explore whether both can be explained by changes in fetal programming due to IVF. A total of 139 IVF and 143 control adolescents underwent various neuropsychological tests to measure information processing, attention and visual-motor function. The results were then related to data on blood pressure and glucose levels previously obtained from the same groups. No differences between IVF and control adolescents were found in the various test results for information processing and attention. A slight difference was found between the groups for motor speed, but these scores were within the normal range for the test. No direct relation was found between cognitive measures and cardiometabolic outcome. Comparison of IVF adolescents and controls revealed no disturbances in information processing, attention and visual-motor function. In addition, these cognitive functions were not directly related to cardiometabolic outcome. Therefore, these results do not support the hypothesis that cognition is influenced by IVF conception or by altered programming of metabolic systems due to IVF, and they indicate that cognitive abilities in IVF children, as measured by the tasks assessed, appear to develop normally.

  4. Live Imaging-Based Model Selection Reveals Periodic Regulation of the Stochastic G1/S Phase Transition in Vertebrate Axial Development

    Science.gov (United States)

    Kurokawa, Hiroshi; Sakaue-Sawano, Asako; Imamura, Takeshi; Miyawaki, Atsushi; Iimura, Tadahiro

    2014-01-01

    In multicellular organism development, a stochastic cellular response is observed even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescent cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of the

  5. Legislative processes in transition : comparative study of the legislative processes in Finland, Slovenia and the United Kingdom as a source of inspiration for enhancing the efficiency of the Dutch legislative process

    NARCIS (Netherlands)

    Voermans, W.; Napel, H.-M. ten; Diamant, M.; Groothuis, M.; Steunenberg, B.; Passchier, R.; Pack, S.

    2012-01-01

    The main research question of the current study is whether the efficiency of the Dutch legislative procedure for parliamentary acts indeed constitutes a problem, in particular when compared to the achievements of legislative processes in several other European countries and, if that turns out to

  6. Natural Language Processing-Enabled and Conventional Data Capture Methods for Input to Electronic Health Records: A Comparative Usability Study.

    Science.gov (United States)

    Kaufman, David R; Sheehan, Barbara; Stetson, Peter; Bhatt, Ashish R; Field, Adele I; Patel, Chirag; Maisel, James Mark

    2016-10-28

    The process of documentation in electronic health records (EHRs) is known to be time consuming, inefficient, and cumbersome. The use of dictation coupled with manual transcription has become an increasingly common practice. In recent years, natural language processing (NLP)-enabled data capture has become a viable alternative for data entry. It enables the clinician to maintain control of the process and potentially reduce the documentation burden. The question remains how this NLP-enabled workflow will impact EHR usability and whether it can meet the structured data and other EHR requirements while enhancing the user's experience. The objective of this study is to evaluate the comparative effectiveness of an NLP-enabled data capture method using dictation and data extraction from transcribed documents (NLP Entry) in terms of documentation time, documentation quality, and usability versus standard EHR keyboard-and-mouse data entry. This formative study investigated the results of using 4 combinations of NLP Entry and Standard Entry methods ("protocols") of EHR data capture. We compared a novel dictation-based protocol using MediSapien NLP (NLP-NLP) for structured data capture against a standard structured data capture protocol (Standard-Standard) as well as 2 novel hybrid protocols (NLP-Standard and Standard-NLP). The 31 participants included neurologists, cardiologists, and nephrologists. Participants generated 4 consultation or admission notes using 4 documentation protocols. We recorded the time on task, documentation quality (using the Physician Documentation Quality Instrument, PDQI-9), and usability of the documentation processes. A total of 118 notes were documented across the 3 subject areas. The NLP-NLP protocol required a median of 5.2 minutes per cardiology note, 7.3 minutes per nephrology note, and 8.5 minutes per neurology note compared with 16.9, 20.7, and 21.2 minutes, respectively, using the Standard-Standard protocol and 13.8, 21.3, and 18.7 minutes

  7. Comparative technoeconomic analysis of a softwood ethanol process featuring posthydrolysis sugars concentration operations and continuous fermentation with cell recycle.

    Science.gov (United States)

    Schneiderman, Steven J; Gurram, Raghu N; Menkhaus, Todd J; Gilcrease, Patrick C

    2015-01-01

    Economical production of second generation ethanol from Ponderosa pine is of interest due to widespread mountain pine beetle infestation in the western United States and Canada. The conversion process is limited by low glucose and high inhibitor concentrations resulting from conventional low-solids dilute acid pretreatment and enzymatic hydrolysis. Inhibited fermentations require larger fermentors (due to reduced volumetric productivity) and low sugars lead to low ethanol titers, increasing distillation costs. In this work, multiple effect evaporation (MEE) and nanofiltration (NF) were evaluated to concentrate the hydrolysate from 30 g/l to 100, 150, or 200 g/l glucose. To ferment this high gravity, inhibitor containing stream, traditional batch fermentation was compared with continuous stirred tank fermentation (CSTF) and continuous fermentation with cell recycle (CSTF-CR). Equivalent annual operating cost (EAOC = amortized capital + yearly operating expenses) was used to compare these potential improvements for a local-scale 5 MGY ethanol production facility. Hydrolysate concentration via evaporation increased EAOC over the base process due to the capital and energy intensive nature of evaporating a very dilute sugar stream; however, concentration via NF decreased EAOC for several of the cases (by 2 to 15%). NF concentration to 100 g/l glucose with a CSTF-CR was the most economical option, reducing EAOC by $0.15 per gallon ethanol produced. Sensitivity analyses on NF options showed that EAOC improvement over the base case could still be realized for even higher solids removal requirements (up to two times higher centrifuge requirement for the best case) or decreased NF performance. © 2015 American Institute of Chemical Engineers.

  8. Can Process Understanding Help Elucidate The Structure Of The Critical Zone? Comparing Process-Based Soil Formation Models With Digital Soil Mapping.

    Science.gov (United States)

    Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.

    2017-12-01

    There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas in particular are highly uncertain, because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of the processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain the variability in soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
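
    A digital-soil-mapping step of the kind described above can be sketched with a random forest regressor; the covariates and carbon-stock values below are synthetic placeholders, so only the workflow (cross-validated explained variance and covariate importances) mirrors the study.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)

      # Hypothetical covariates for 67 sampling locations: solar radiation, NDVI, slope
      X = np.column_stack([
          rng.uniform(2.0, 8.0, 67),     # annual solar radiation (arbitrary units)
          rng.uniform(0.2, 0.8, 67),     # NDVI
          rng.uniform(0.0, 40.0, 67),    # slope (degrees)
      ])
      # Hypothetical soil organic carbon stocks (kg m-2), partly driven by radiation and NDVI
      y = 6.0 - 0.4 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 1.0, 67)

      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      r2 = cross_val_score(rf, X, y, cv=5, scoring="r2")
      rf.fit(X, y)

      print("cross-validated R^2 (synthetic data):", r2.mean().round(2))
      print("covariate importances:", rf.feature_importances_.round(2))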

  9. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes infeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
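
    The brute-force Monte Carlo reference mentioned above amounts to averaging the likelihood over draws from the prior. The sketch below does this for a toy conjugate Gaussian case, where the exact evidence is available for comparison; all numbers are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      # Toy setting: data y ~ N(theta, 1) with prior theta ~ N(0, 2); BME has an exact answer here
      y_obs = np.array([1.2, 0.8, 1.5, 0.9])

      def log_likelihood(theta):
          # theta: array of prior draws; returns log p(y | theta) for each draw
          return stats.norm(theta[:, None], 1.0).logpdf(y_obs).sum(axis=1)

      # Brute-force Monte Carlo: BME = E_prior[ p(y | theta) ]
      theta = rng.normal(0.0, 2.0, size=100_000)
      log_l = log_likelihood(theta)
      log_bme = np.log(np.mean(np.exp(log_l - log_l.max()))) + log_l.max()

      # Exact reference for this conjugate case: y is jointly Gaussian with covariance I + 4*J
      n = y_obs.size
      exact = stats.multivariate_normal(np.zeros(n), np.eye(n) + 4.0 * np.ones((n, n))).logpdf(y_obs)
      print(f"MC log BME approx. {log_bme:.3f}, exact = {exact:.3f}")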

  10. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
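
    A minimal sketch of a quantile count analysis, assuming synthetic trap data and a simple uniform jitter to break ties in the discrete response (in the spirit of Machado and Santos Silva), might look as follows; the predictor names and effect sizes are invented.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)

      # Hypothetical nightly trap data: captures vs. rainfall and minimum temperature
      n = 300
      df = pd.DataFrame({
          "rainfall": rng.gamma(1.5, 4.0, n),            # mm
          "min_temp": rng.uniform(15, 28, n),            # deg C
      })
      lam = np.exp(0.5 + 0.03 * df["rainfall"] + 0.05 * df["min_temp"])
      df["captures"] = rng.poisson(lam)

      # Quantile count models handle many tied counts by jittering the discrete response;
      # a plain uniform jitter is used here purely to illustrate the idea.
      df["captures_jit"] = df["captures"] + rng.uniform(0, 1, n)

      for q in (0.1, 0.5, 0.9):                          # lower tail, median, upper tail
          res = smf.quantreg("captures_jit ~ rainfall + min_temp", df).fit(q=q)
          print(f"q={q}: rainfall coefficient = {res.params['rainfall']:.3f}")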

  11. A comparison of regression methods for model selection in individual-based landscape genetic analysis.

    Science.gov (United States)

    Shirk, Andrew J; Landguth, Erin L; Cushman, Samuel A

    2018-01-01

    Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate-induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression-based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression-based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  12. Comparative study on the impact of coal and uranium mining, processing, and transportation in the western United States

    International Nuclear Information System (INIS)

    Sandquist, G.M.

    1979-06-01

    A comparative study and quantitative assessment of the impacts, costs and benefits associated with the mining, processing and transportation of coal and uranium within the western states, specifically Arizona, California, Colorado, Montana, New Mexico, Oregon, Utah, Washington and Wyoming, are presented. The western states possess 49% of the US reserve coal base, 67% of the total identified reserves and 82% of the hypothetical reserves. Western coal production has increased at an average annual rate of about 22% since 1970 and should become the major US coal supplier in the 1980s. The Colorado Plateau (in Arizona, Colorado, New Mexico and Utah) and the Wyoming Basin areas account for 72% of the $15/lb U3O8 resources, 76% of the $30/lb, and 75% of the $50/lb resources. It is apparent that the West will serve as the major supplier of domestic US coal and uranium fuels for at least the next several decades. Impacts considered are: environmental impacts (land, water, air quality); health effects of coal and uranium mining, processing, and transportation; risks from transportation accidents; radiological impact of coal and uranium mining; social and economic impacts; and aesthetic impacts (land, air, noise, water, biota, and man-made objects). Economic benefits are discussed

  13. Comparative proteomics of tuber induction, development and maturation reveal the complexity of tuberization process in potato (Solanum tuberosum L.).

    Science.gov (United States)

    Agrawal, Lalit; Chakraborty, Subhra; Jaiswal, Dinesh Kumar; Gupta, Sonika; Datta, Asis; Chakraborty, Niranjan

    2008-09-01

    Tuberization in potato (Solanum tuberosum L.) is a developmental process that serves a double function, as a storage organ and as a vegetative propagation system. It is a multistep, complex process and the underlying mechanisms governing these overlapping steps are not fully understood. To understand the molecular basis of tuberization in potato, a comparative proteomic approach has been applied to monitor differentially expressed proteins at different developmental stages using two-dimensional gel electrophoresis (2-DE). The differentially displayed proteomes revealed 219 protein spots that changed their intensities more than 2.5-fold. The LC-ES-MS/MS analyses led to the identification of 97 differentially regulated proteins, including predicted and novel tuber-specific proteins. Nonhierarchical clustering revealed coexpression patterns of functionally similar proteins. The expression of reactive oxygen species catabolizing enzymes, viz., superoxide dismutase, ascorbate peroxidase and catalase, was induced by more than 2-fold, indicating their possible role during the developmental transition from stolons into tubers. We demonstrate that nearly 100 proteins, some presumably associated with tuber cell differentiation, regulate diverse functions like protein biogenesis and storage, bioenergy and metabolism, and cell defense and rescue, which together impinge on the complexity of tuber development in potato.

  14. Comparative study on the impact of coal and uranium mining, processing, and transportation in the western United States

    Energy Technology Data Exchange (ETDEWEB)

    Sandquist, G.M.

    1979-06-01

    A comparative study and quantitative assessment of the impacts, costs and benefits associated with the mining, processing and transportation of coal and uranium within the western states, specifically Arizona, California, Colorado, Montana, New Mexico, Oregon, Utah, Washington and Wyoming, are presented. The western states possess 49% of the US reserve coal base, 67% of the total identified reserves and 82% of the hypothetical reserves. Western coal production has increased at an average annual rate of about 22% since 1970 and should become the major US coal supplier in the 1980s. The Colorado Plateau (in Arizona, Colorado, New Mexico and Utah) and the Wyoming Basin areas account for 72% of the $15/lb U3O8 resources, 76% of the $30/lb, and 75% of the $50/lb resources. It is apparent that the West will serve as the major supplier of domestic US coal and uranium fuels for at least the next several decades. Impacts considered are: environmental impacts (land, water, air quality); health effects of coal and uranium mining, processing, and transportation; risks from transportation accidents; radiological impact of coal and uranium mining; social and economic impacts; and aesthetic impacts (land, air, noise, water, biota, and man-made objects). Economic benefits are discussed.

  15. Quantification of greenhouse gas emissions from waste management processes for municipalities--a comparative review focusing on Africa.

    Science.gov (United States)

    Friedrich, Elena; Trois, Cristina

    2011-07-01

    The amount of greenhouse gases (GHG) emitted due to waste management in the cities of developing countries is predicted to rise considerably in the near future; however, these countries have a series of problems in accounting and reporting these gases. Some of these problems are related to the status quo of waste management in the developing world and some to the lack of a coherent framework for accounting and reporting of greenhouse gases from waste at municipal level. This review summarizes and compares GHG emissions from individual waste management processes which make up a municipal waste management system, with an emphasis on developing countries and, in particular, Africa. It should be seen as a first step towards developing a more holistic GHG accounting model for municipalities. The comparison between these emissions from developed and developing countries at process level, reveals that there is agreement on the magnitude of the emissions expected from each process (generation of waste, collection and transport, disposal and recycling). The highest GHG savings are achieved through recycling, and these savings would be even higher in developing countries which rely on coal for energy production (e.g. South Africa, India and China) and where non-motorized collection and transport is used. The highest emissions are due to the methane released by dumpsites and landfills, and these emissions are predicted to increase significantly, unless more of the methane is captured and either flared or used for energy generation. The clean development mechanism (CDM) projects implemented in the developing world have made some progress in this field; however, African countries lag behind. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Probabilistic forecasts of extreme local precipitation using HARMONIE predictors and comparing 3 different post-processing methods

    Science.gov (United States)

    Whan, Kirien; Schmeits, Maurice

    2017-04-01

    Statistical post-processing of deterministic weather forecasts allows the full forecast distribution, and thus probabilistic forecasts, to be derived from that deterministic model output. We focus on local extreme precipitation amounts, as these are one predictand used in the KNMI weather warning system. As such, the predictand is based on the maximum hourly calibrated radar precipitation in a 3×3 km2 area within 12 large regions covering The Netherlands in a 6-hour afternoon period in summer (12-18 UTC). We compare three statistical methods for post-processing output from the operational high-resolution forecast model at KNMI, HARMONIE. These methods are 1) extended logistic regression (ELR), 2) an ensemble model output statistics approach in which the parameters of a zero-adjusted gamma (ZAGA) distribution depend on a set of covariates, and 3) quantile random forests (QRF). The set of predictors used as covariates includes model precipitation and indices capturing a variety of processes associated with deep convection. We use stepwise selection to select predictors for ELR and ZAGA based on the AIC. Predictors and coefficients are selected in a cross-validation framework based on two years of training data, and the skill of the forecasts is assessed on one year of test data. The inclusion of additional predictors results in more skilful forecasts, as expected, particularly for higher precipitation thresholds and for forecasts using the QRF method. We also assess the value of using a time-lagged ensemble. Forecasts derived from ZAGA and QRF are generally more skilful, as defined by the Brier Skill Score, than ELR, and lower precipitation amounts are skilfully predicted.
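
    The extended logistic regression component can be sketched by pooling exceedance indicators over several thresholds and adding a transformed threshold as an extra predictor, so that a single logistic model yields probabilities for any threshold. The synthetic forecast-observation pairs, threshold values and square-root transforms below are assumptions for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)

      # Hypothetical training data: model forecast precipitation vs. observed maxima (mm / 6 h)
      n = 2000
      fcst = rng.gamma(1.2, 4.0, n)
      obs = rng.gamma(1.0, 0.8 * (1 + fcst), n)          # observations loosely tied to the forecast

      # Extended logistic regression: pool non-exceedance events over several thresholds,
      # using sqrt(threshold) as an extra predictor so one model covers all thresholds.
      thresholds = np.array([5.0, 10.0, 20.0, 30.0])
      X = np.column_stack([
          np.repeat(np.sqrt(fcst), thresholds.size),
          np.tile(np.sqrt(thresholds), n),
      ])
      y = (np.repeat(obs, thresholds.size) <= np.tile(thresholds, n)).astype(int)

      elr = LogisticRegression().fit(X, y)

      # Probability of exceeding 20 mm given a 15 mm model forecast
      p_below = elr.predict_proba([[np.sqrt(15.0), np.sqrt(20.0)]])[0, 1]
      print(f"P(obs > 20 mm | fcst = 15 mm) approx. {1 - p_below:.2f}")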

  17. Comparative reduction of Giardia cysts, F+ coliphages, sulphite reducing clostridia and fecal coliforms by wastewater treatment processes.

    Science.gov (United States)

    Nasser, Abidelfatah M; Benisti, Neta-Lee; Ofer, Naomi; Hovers, Sivan; Nitzan, Yeshayahu

    2017-01-28

    Advanced wastewater treatment processes are applied to prevent the environmental dissemination of pathogenic microorganisms. Giardia lamblia causes a severe disease called giardiasis and is highly prevalent in untreated wastewater worldwide. Monitoring the microbial quality of wastewater effluents is usually based on testing for the levels of indicator microorganisms in the effluents. This study was conducted to compare the suitability of fecal coliforms, F+ coliphages and sulphite-reducing clostridia (SRC) as indicators for the reduction of Giardia cysts in two full-scale wastewater treatment plants. The treatment process consists of activated sludge, coagulation, high rate filtration and either chlorine or UV disinfection. The results of the study demonstrated that Giardia cysts are highly prevalent in raw wastewater at an average concentration of 3600 cysts/L. Fecal coliforms, F+ coliphages and SRC were also detected at high concentrations in raw wastewater. Giardia cysts were efficiently removed (3.6 log10) by the treatment train. The greatest reduction was observed for fecal coliforms (9.6 log10), whereas the least reduction was observed for F+ coliphages (2.1 log10) following chlorine disinfection. Similar reduction was observed for SRC by filtration and disinfection by either UV (3.6 log10) or chlorine (3.3 log10). Since F+ coliphages and SRC were found to be more resistant than fecal coliforms to the tertiary treatment processes, they may prove to be more suitable as indicators for Giardia. The results of this study demonstrated that advanced wastewater treatment may prove efficient for the removal of Giardia cysts and may prevent its transmission when treated effluents are applied for crop irrigation or stream restoration.

  18. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
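
    A simplified version of this comparison (an ordinary two-component Gaussian mixture rather than Middleton's Class A model) can be reproduced with scikit-learn, whose mixture estimator runs EM internally and reports the AIC directly; the data are simulated for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)

      # Synthetic data drawn from a two-component mixture (the "contaminated" alternative)
      x = np.concatenate([rng.normal(0, 1, 800), rng.normal(4, 0.7, 200)]).reshape(-1, 1)

      # Fit a single Gaussian and a two-component mixture; the latter is estimated via EM
      single = GaussianMixture(n_components=1).fit(x)
      mixture = GaussianMixture(n_components=2, random_state=0).fit(x)

      # sklearn's aic() returns 2k - 2 log L, so the lower value identifies the preferred model
      print("AIC single Gaussian:", round(single.aic(x), 1))
      print("AIC 2-component mix:", round(mixture.aic(x), 1))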

  19. Model selection emphasises the importance of non-chromosomal information in genetic studies.

    Directory of Open Access Journals (Sweden)

    Reda Rawi

    Full Text Available Ever since the case of the missing heritability was highlighted some years ago, scientists have been investigating various possible explanations for the issue. However, none of these explanations include non-chromosomal genetic information. Here we describe explicitly how chromosomal and non-chromosomal modifiers collectively influence the heritability of a trait, in this case, the growth rate of yeast. Our results show that the non-chromosomal contribution can be large, adding another dimension to the estimation of heritability. We also discovered, combining the strength of LASSO with model selection, that the interaction of chromosomal and non-chromosomal information is essential in describing phenotypes.

  20. A finite volume alternate direction implicit approach to modeling selective laser melting

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Mohanty, Sankhya

    2013-01-01

    Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process with a vision to predict thermal stresses, microstructures and the resulting mechanical properties of manufactured products. While a holistic model addressing all involved... is proposed for modeling single-layer and few-layers selective laser melting processes. The ADI technique is implemented and applied for two cases involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D
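
    To give a flavour of the alternate direction implicit idea, the sketch below advances the plain 2D heat equation with a Peaceman-Rachford ADI step on a small grid with zero Dirichlet boundaries; it is a generic heat-conduction toy, not the laser-melting model of the paper.

      import numpy as np

      def adi_step(u, r):
          """One Peaceman-Rachford ADI step for the 2D heat equation on a square grid
          with zero Dirichlet boundaries; u holds interior nodes, r = alpha*dt/dx**2."""
          n = u.shape[0]
          lap = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
          A_impl = np.eye(n) - 0.5 * r * lap          # implicit operator (tridiagonal, solved densely here)
          A_expl = np.eye(n) + 0.5 * r * lap          # explicit operator

          # Half step 1: implicit along x (rows), explicit along y (columns)
          rhs = u @ A_expl.T
          u_half = np.linalg.solve(A_impl, rhs)

          # Half step 2: implicit along y, explicit along x
          rhs = A_expl @ u_half
          return np.linalg.solve(A_impl, rhs.T).T

      # Example: decay of a hot spot (a crude stand-in for a laser-heated patch), 40x40 interior nodes
      u = np.zeros((40, 40))
      u[18:22, 18:22] = 1000.0                        # initial temperature pulse
      for _ in range(50):
          u = adi_step(u, r=0.5)
      print("peak temperature after 50 steps:", round(u.max(), 1))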

  1. Comparing two psychological interventions in reducing impulsive processes of eating behaviour: effects on self-selected portion size.

    Science.gov (United States)

    van Koningsbruggen, Guido M; Veling, Harm; Stroebe, Wolfgang; Aarts, Henk

    2014-11-01

    Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that people select for themselves. Specifically, the use of dieting implementation intentions that reduce behaviour towards palatable food via top-down implementation of a dieting goal was pitted against a stop-signal training that changes the impulse-evoking quality of palatable food from bottom-up. We compared the two interventions using a 2 × 2 factorial design. Participants completed a stop-signal training in which they learned to withhold a behavioural response upon presentation of tempting sweets (vs. control condition) and formed implementation intentions to diet (vs. control condition). Selected portion size was measured in a sweet-shop-like environment (Experiment 1) and through a computerized snack dispenser (Experiment 2). Both interventions reduced the amount of sweets selected in the sweet shop environment (Experiment 1) and the snack dispenser (Experiment 2). On average, participants receiving an intervention selected 36% (Experiment 1) and 51% (Experiment 2) fewer sweets than control participants. In both studies, combining the interventions did not lead to additive effects: Employing one of the interventions appears to successfully eliminate instrumental behaviour towards tempting food, making the other intervention redundant. Both interventions reduce self-selected portion size, which is considered a major contributor to the current obesity epidemic. What is already known on this subject? Exposure to temptations, such as unhealthy palatable food, often frustrates people's attainment of long-term health goals. Current approaches to self-control suggest that this is partly because temptations automatically trigger impulsive or hedonic processes that override the

  2. A Comparative Study of the Quality of Teaching Learning Process at Post Graduate Level in the Faculty of Science and Social Science

    Science.gov (United States)

    Shahzadi, Uzma; Shaheen, Gulnaz; Shah, Ashfaque Ahmed

    2012-01-01

    The study was intended to compare the quality of teaching learning process in the faculty of social science and science at University of Sargodha. This study was descriptive and quantitative in nature. The objectives of the study were to compare the quality of teaching learning process in the faculty of social science and science at University of…

  3. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Model selection for dynamical systems via sparse regression and information criteria.

    Science.gov (United States)

    Mangan, N M; Kutz, J N; Brunton, S L; Proctor, J L

    2017-08-01

    We develop an algorithm for model selection which allows for the consideration of a combinatorially large number of candidate models governing a dynamical system. The innovation circumvents a disadvantage of standard model selection which typically limits the number of candidate models considered due to the intractability of computing information criteria. Using a recently developed sparse identification of nonlinear dynamics algorithm, the sub-selection of candidate models near the Pareto frontier allows feasible computation of Akaike information criteria (AIC) or Bayes information criteria scores for the remaining candidate models. The information criteria hierarchically rank the most informative models, enabling the automatic and principled selection of the model with the strongest support in relation to the time-series data. Specifically, we show that AIC scores place each candidate model in the strong support, weak support or no support category. The method correctly recovers several canonical dynamical systems, including a susceptible-exposed-infectious-recovered disease model, Burgers' equation and the Lorenz equations, identifying the correct dynamical system as the only candidate model with strong support.
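
    A minimal sketch of the AIC-ranking step, assuming a small set of candidate right-hand sides already sub-selected near the Pareto frontier. The data (noisy derivatives of a logistic-growth system) and the candidate libraries are synthetic, and delta-AIC thresholds stand in for the strong/weak/no support categories mentioned above.

        import numpy as np

        # Synthetic measurements of dx/dt for logistic growth, dx/dt = x - x^2, with noise.
        rng = np.random.default_rng(1)
        x = np.linspace(0.05, 0.95, 60)
        dxdt = x - x**2 + rng.normal(0.0, 0.01, x.size)

        # A handful of candidate models (column libraries), standing in for the Pareto-front subset.
        candidates = {
            "x":           np.column_stack([x]),
            "x, x^2":      np.column_stack([x, x**2]),
            "x, x^2, x^3": np.column_stack([x, x**2, x**3]),
        }

        def aicc(rss, n, k):
            # Gaussian-error AIC with the usual small-sample correction.
            return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

        scores = {}
        for name, library in candidates.items():
            coef, _, _, _ = np.linalg.lstsq(library, dxdt, rcond=None)
            rss = float(np.sum((dxdt - library @ coef) ** 2))
            scores[name] = aicc(rss, x.size, library.shape[1])

        best = min(scores.values())
        for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
            # Delta-AIC thresholds (e.g. <2 strong, 2-7 weak, >7 none) give the support categories.
            print(f"{name:12s} AICc = {score:8.1f}   delta = {score - best:6.1f}")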

  5. Process performance and comparative metagenomic analysis during co-digestion of manure and lignocellulosic biomass for biogas production

    International Nuclear Information System (INIS)

    Tsapekos, P.; Kougias, P.G.; Treu, L.; Campanaro, S.; Angelidaki, I.

    2017-01-01

    Highlights: • Pig manure and ensiled meadow grass were examined in co-digestion process. • Mechanical pretreatment increased the methane yield by 6.4%. • Coprothermobacter proteolyticus was firmly bounded to the digested grass. • Clostridium thermocellum was enriched in the firmly attached grass samples. • The abundance of methanogens was higher in the liquid fraction of digestate. - Abstract: Mechanical pretreatment is considered to be a fast and easily applicable method to prepare the biomass for anaerobic digestion. In the present study, the effect of mechanical pretreatment on lignocellulosic silages biodegradability was elucidated in batch reactors. Moreover, co-digestion of the silages with pig manure in continuously fed biogas reactors was examined. Metagenomic analysis for determining the microbial communities in the pig manure digestion system was performed by analysing unassembled shotgun genomic sequences. A comparative analysis allowed to identify the microbial species firmly attached to the digested grass particles and to distinguish them from the planktonic microbes floating in the liquid medium. It was shown that the methane yield of ensiled grass was significantly increased by 12.3% due to mechanical pretreatment in batch experiments. Similarly, the increment of the methane yield in the co-digestion system reached 6.4%. Regarding the metagenomic study, species similar to Coprothermobacter proteolyticus and to Clostridium thermocellum, known for high proteolytic and cellulolytic activity respectively, were found firmly attached to the solid fraction of digested feedstock. Results from liquid samples revealed clear differences in microbial community composition, mainly dominated by Proteobacteria. The archaeal community was found in higher relative abundance in the liquid fraction of co-digestion experiment compared to the solid fraction. Finally, an unclassified Alkaliphilus sp. was found in high relative abundance in all samples.

  6. A comparative evaluation of conceptual models for the Snake River Plain aquifer at the Idaho Chemical Processing Plant, INEL

    International Nuclear Information System (INIS)

    Prahl, C.J.

    1992-01-01

    Geologic and hydrologic data collected by the United States Geological Survey (USGS) are used to evaluate the existing ground water monitoring well network completed in the upper portion of the Snake River Plain aquifer (SRPA) beneath the Idaho Chemical Processing Plant (ICPP). The USGS data analyzed and compared in this study include: (a) lithologic, geophysical, and stratigraphic information, including the conceptual geologic models; (b) intrawell ground water flow measurements (Tracejector tests); and (c) dedicated, submersible sampling pump elevations. Qualitative evaluation of these data indicates that the upper portion of the SRPA is both heterogeneous and anisotropic at the scale of the ICPP monitoring well network. Tracejector test results indicate that the hydraulic interconnection and spatial configuration of water-producing zones is extremely complex within the upper portion of the SRPA. The majority of ICPP monitoring wells currently are equipped to sample ground water only from the upper lithostratigraphic intervals of the SRPA, primarily basalt flow groups E, EF, and F. Depth-specific hydrogeochemical sampling and analysis are necessary to determine if ground water quality varies significantly between the various lithostratigraphic units adjacent to individual sampling pumps.

  7. Comparative study on shelf life of whole milk processed by high-intensity pulsed electric field or heat treatment.

    Science.gov (United States)

    Odriozola-Serrano, I; Bendicho-Porta, S; Martín-Belloso, O

    2006-03-01

    The effects of high-intensity pulsed electric field (HIPEF) processing (35.5 kV/cm for 1,000 or 300 μs with bipolar 7-μs pulses at 111 Hz; the temperature outside the chamber was kept below a set limit) on whole milk were investigated and compared with traditional heat pasteurization (75 °C for 15 s) and with raw milk during storage at 4 °C. A HIPEF treatment of 1,000 μs ensured the microbiological stability of whole milk stored for 5 d under refrigeration. Initial acidity values, pH, and free fatty acid content were not affected by the treatments, and no proteolysis or lipolysis was observed during 1 wk of storage in milk treated by HIPEF for 1,000 μs. The whey proteins (serum albumin, beta-lactoglobulin, and alpha-lactalbumin) in HIPEF-treated milk were retained at 75.5, 79.9, and 60%, respectively, similar to values for milk treated by traditional heat pasteurization.

  8. Secondary Ion Mass Spectrometry Imaging of Molecular Distributions in Cultured Neurons and Their Processes: Comparative Analysis of Sample Preparation

    Science.gov (United States)

    Tucker, Kevin R.; Li, Zhen; Rubakhin, Stanislav S.; Sweedler, Jonathan V.

    2012-11-01

    Neurons often exhibit a complex chemical distribution and topography; therefore, sample preparation protocols that preserve structures ranging from relatively large cell somata to small neurites and growth cones are important factors in secondary ion mass spectrometry (SIMS) imaging studies. Here, SIMS was used to investigate the subcellular localization of lipids and lipophilic species in neurons from Aplysia californica. Using individual neurons cultured on silicon wafers, we compared and optimized several SIMS sampling approaches. After an initial step to remove the high salt culturing media, formaldehyde, paraformaldehyde, and glycerol, and various combinations thereof, were tested for their ability to achieve cell stabilization during and after the removal of extracellular media. These treatments improved the preservation of cellular morphology as visualized with SIMS imaging. For analytes >250 Da, coating the cell surface with a 3.2 nm-thick gold layer increased the ion intensity; multiple analytes previously not observed or observed at low abundance were detected, including intact cholesterol and vitamin E molecular ions. However, once a sample was coated, many of the lower molecular mass (<250 Da) signals were diminished. The optimized protocols enable SIMS imaging of processes of individual cultured neurons over a broad mass range with enhanced image contrast.

  9. Weldability of AA 5052 H32 aluminium alloy by TIG welding and FSW process - A comparative study

    Science.gov (United States)

    Shanavas, S.; Raja Dhas, J. Edwin

    2017-10-01

    Aluminium 5xxx series alloys are the strongest non-heat-treatable aluminium alloys. They find application in automotive components and body structures owing to their good formability, good strength, high corrosion resistance, and weight savings. In the present work, the influence of Tungsten Inert Gas (TIG) welding parameters on the quality of welds in AA 5052 H32 aluminium alloy plates was analyzed, and the mechanical characterization of the joint so produced was compared with that of a friction stir (FS) welded joint. The selected input variable parameters are welding current and inert gas flow rate. Other parameters such as welding speed and arc voltage were kept constant throughout the study, based on the response from several trial runs. The quality of the weld is measured in terms of ultimate tensile strength. Double-sided V-butt joints were fabricated with a double pass on one side to ensure maximum strength of the TIG welded joints. Macro- and microstructural examinations were conducted for both welding processes.

  10. [Comparative study on absorption kinetics in intestines of rats of epimedii foliunm of Xianlinggubao capsules prepared by different processes].

    Science.gov (United States)

    Wu, Huichao; Lu, Yang; Du, Shouying; Chen, Wen; Wang, Yue

    2011-10-01

    To study the characteristics of intestinal absorption of icariin and epimedin C from Xianlinggubao capsules, and to compare the absorption of Xianlinggubao capsules prepared by different processes. The non-everted gut sac method was applied to investigate the influence of absorption site and concentration on icariin and epimedin C, which were determined by HPLC. The absorption rate constants of epimedin C in the duodenum were significantly greater than those in the jejunum and ileum, the absorption rate constants of icariin in the jejunum were significantly lower than those in the duodenum and ileum, and the absorption rate constants of epimedin C and icariin remained at the same level when the drug solution was at high, middle and low concentrations. The Ka of epimedin C at the three levels were 0.040, 0.058 and 0.061 h(-1), respectively, and the Ka of icariin at the three levels were 0.002, 0.007 and 0.003 h(-1), respectively. Intestinal absorption of icariin and epimedin C is not affected by concentration. The absorption rate constants of icariin and epimedin C in the new Xianlinggubao capsules are higher.

  11. Multi-Collinearity Based Model Selection for Landslide Susceptibility Mapping: A Case Study from Ulus District of Karabuk, Turkey

    Science.gov (United States)

    Sahin, E. K.; Colkesen, I., , Dr; Kavzoglu, T.

    2017-12-01

    Identification of localities prone to landslides plays an important role for emergency planning, disaster management and recovery planning. Due to its great importance for disaster management, producing accurate and up-to-date landslide susceptibility maps is essential for hazard mitigation purposes and regional planning. The main objective of the present study was to apply a multi-collinearity based model selection approach for the production of a landslide susceptibility map of Ulus district of Karabuk, Turkey. It is a fact that data do not contain enough information to describe the problem under consideration when the factors are highly correlated with each other. In such cases, choosing a subset of the original features will often lead to better performance. This paper presents a multi-collinearity based model selection approach to deal with the high correlation within the dataset. Two collinearity diagnostic factors (Tolerance (TOL) and the Variance Inflation Factor (VIF)) are commonly used to identify multi-collinearity. Values of VIF that exceed 10.0 and TOL values less than 0.10 are often regarded as indicating multi-collinearity. Five causative factors (slope length, curvature, plan curvature, profile curvature and topographical roughness index) were found to be highly correlated with each other among 15 factors available for the study area. As a result, the five correlated factors were removed from the model estimation, and performances of the models including the remaining 10 factors (aspect, drainage density, elevation, lithology, land use/land cover, NDVI, slope, sediment transport index, topographical position index and topographical wetness index) were evaluated using logistic regression. The performance of the prediction model constructed with 10 factors was compared to that of the 15-factor model. The prediction performance of the two susceptibility maps was evaluated by overall accuracy and the area under the ROC curve (AUC) values. Results showed that overall
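
    A minimal sketch of the TOL/VIF diagnostics described above, computed directly from their definitions (regress each factor on the others; TOL = 1 − R², VIF = 1/TOL). The factor matrix here is synthetic, with one column deliberately built to correlate strongly with another; it is not the study's actual data.

        import numpy as np

        def collinearity_diagnostics(X, names):
            """Return tolerance (TOL) and variance inflation factor (VIF) per column of X."""
            out = {}
            for j, name in enumerate(names):
                y = X[:, j]
                others = np.delete(X, j, axis=1)
                A = np.column_stack([np.ones(len(y)), others])   # regress factor j on the rest
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                resid = y - A @ coef
                r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
                tol = 1.0 - r2
                out[name] = (tol, 1.0 / tol)
            return out

        # Synthetic causative factors; 'slope_length' is constructed to correlate with 'slope'.
        rng = np.random.default_rng(0)
        slope = rng.normal(size=500)
        X = np.column_stack([slope,
                             0.95 * slope + 0.05 * rng.normal(size=500),   # slope_length
                             rng.normal(size=500)])                        # ndvi (independent)
        for name, (tol, vif) in collinearity_diagnostics(X, ["slope", "slope_length", "ndvi"]).items():
            flag = "remove" if vif > 10 or tol < 0.1 else "keep"           # conventional thresholds
            print(f"{name:12s} TOL = {tol:5.2f}  VIF = {vif:6.1f}  -> {flag}")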

  12. The Continual Reassessment Method for Multiple Toxicity Grades: A Bayesian Model Selection Approach

    Science.gov (United States)

    Yuan, Ying; Zhang, Shemin; Zhang, Wenhong; Li, Chanjuan; Wang, Ling; Xia, Jielai

    2014-01-01

    Grade information has been considered in Yuan et al. (2007), wherein they proposed a Quasi-CRM method to incorporate grade toxicity information in phase I trials. A potential problem with the Quasi-CRM model is that the choice of skeleton may dramatically alter the performance of the CRM model, which results in similar consequences for the Quasi-CRM model. In this paper, we propose a new model utilizing a Bayesian model selection approach – the Robust Quasi-CRM model – to tackle the above-mentioned pitfall of the Quasi-CRM model. The Robust Quasi-CRM model inherits the BMA-CRM approach proposed by Yin and Yuan (2009), considering a set of parallel skeletons for the Quasi-CRM. The superior performance of the Robust Quasi-CRM model was demonstrated by extensive simulation studies. We conclude that the proposed method can be freely used in real practice. PMID:24875783
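
    A toy sketch of the skeleton-weighting idea behind this kind of Bayesian model selection: each skeleton's marginal likelihood is computed under a one-parameter power model and converted into a posterior model weight. The skeletons, patient counts and binary toxicities below are hypothetical, and binary outcomes stand in for the equivalent toxicity scores used by the Quasi-CRM; this is not the authors' implementation.

        import numpy as np

        # Two hypothetical skeletons (prior toxicity probabilities at four dose levels).
        skeletons = [np.array([0.05, 0.12, 0.25, 0.40]),
                     np.array([0.10, 0.20, 0.30, 0.45])]

        # Illustrative trial data: patients treated and toxicities observed at each dose.
        n_pat = np.array([3, 6, 6, 3])
        n_tox = np.array([0, 1, 2, 2])

        # One-parameter power model p_d(a) = skeleton_d ** exp(a), with a ~ N(0, sigma^2 = 2).
        a = np.linspace(-3.0, 3.0, 601)
        prior_a = np.exp(-a**2 / (2 * 2.0)) / np.sqrt(2 * np.pi * 2.0)
        da = a[1] - a[0]

        evidence = []
        for skel in skeletons:
            p = skel[None, :] ** np.exp(a)[:, None]                       # shape (grid, doses)
            loglik = np.sum(n_tox * np.log(p) + (n_pat - n_tox) * np.log1p(-p), axis=1)
            evidence.append(np.sum(np.exp(loglik) * prior_a) * da)        # marginal likelihood

        weights = np.array(evidence) / np.sum(evidence)                    # posterior skeleton weights
        print("posterior model (skeleton) weights:", np.round(weights, 3))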

  13. Numerical algebraic geometry for model selection and its application to the life sciences

    KAUST Repository

    Gross, Elizabeth

    2016-10-12

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology.

  14. Numerical algebraic geometry for model selection and its application to the life sciences.

    Science.gov (United States)

    Gross, Elizabeth; Davis, Brent; Ho, Kenneth L; Bates, Daniel J; Harrington, Heather A

    2016-10-01

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. © 2016 The Author(s).

  15. SNP calling using genotype model selection on high-throughput sequencing data

    KAUST Repository

    You, Na

    2012-01-16

    Motivation: A review of the available single nucleotide polymorphism (SNP) calling procedures for Illumina high-throughput sequencing (HTS) platform data reveals that most rely mainly on base-calling and mapping qualities as sources of error when calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for.Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given which accounts for the errors that occur during the preparation of the genomic sample. Simulations and real data analyses indicate that GeMS has the best performance balance of sensitivity and positive predictive value among the tested SNP callers. © The Author 2012. Published by Oxford University Press. All rights reserved.
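
    To illustrate what "genotype model selection" means in general (this is a deliberately simplified sketch, not the GeMS error model, which additionally accounts for sample-preparation errors), the snippet below compares the likelihood of observed base counts at a site under homozygous-reference, heterozygous and homozygous-variant genotype models. The counts, sequencing error rate and genotype priors are illustrative assumptions.

        import numpy as np
        from scipy.stats import binom

        def call_genotype(ref_count, alt_count, err=0.01, priors=(0.998, 0.001, 0.001)):
            """Pick the genotype model with the highest posterior given base counts at a site."""
            n = ref_count + alt_count
            # Probability of observing a non-reference base under each genotype model.
            p_alt = {"hom_ref": err, "het": 0.5, "hom_alt": 1.0 - err}
            log_post = {}
            for (genotype, p), prior in zip(p_alt.items(), priors):
                log_post[genotype] = binom.logpmf(alt_count, n, p) + np.log(prior)
            best = max(log_post, key=log_post.get)
            return best, log_post

        # 18 reference and 9 alternate reads: the heterozygous model is best supported here.
        genotype, scores = call_genotype(ref_count=18, alt_count=9)
        print(genotype, {g: round(s, 1) for g, s in scores.items()})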

  16. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates for expressing a phenomenon. Scientists use various criteria to select one model from among the competing models. Based on the solution of a Fuzzy Decision-Making problem, this paper proposes a new method for model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.
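
    A generic sketch of the fuzzy model-selection idea: each validity criterion is given a membership (possibility) function, the memberships are aggregated, and the best-supported model is chosen. The criterion values, membership shapes and the simple min-aggregation used here are illustrative stand-ins for the paper's specific utility function.

        import numpy as np

        # Hypothetical validity criteria for three candidate models of daily consumption:
        # validation RMSE, number of parameters, and residual autocorrelation.
        models = {
            "AR(2)":      {"rmse": 0.9, "n_params": 3,  "autocorr": 0.30},
            "AR(5)":      {"rmse": 0.7, "n_params": 6,  "autocorr": 0.10},
            "neural net": {"rmse": 0.6, "n_params": 40, "autocorr": 0.05},
        }

        def mu_small(x, good, bad):
            """Membership (possibility) that x is acceptably small: 1 below 'good', 0 above 'bad'."""
            return float(np.clip((bad - x) / (bad - good), 0.0, 1.0))

        def overall_possibility(m):
            # One membership function per criterion; the fuzzy decision aggregates them (min here).
            mus = [mu_small(m["rmse"],     good=0.5,  bad=1.2),
                   mu_small(m["n_params"], good=4,    bad=50),
                   mu_small(m["autocorr"], good=0.05, bad=0.4)]
            return min(mus)

        for name in models:
            print(f"{name:12s} overall possibility = {overall_possibility(models[name]):.2f}")
        best = max(models, key=lambda name: overall_possibility(models[name]))
        print("selected model:", best)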

  17. Statistical Model Selection for Better Prediction and Discovering Science Mechanisms That Affect Reliability

    Directory of Open Access Journals (Sweden)

    Christine M. Anderson-Cook

    2015-08-01

    Full Text Available Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Finally, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
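
    In the spirit of steps three to five of the process described above, the toy sketch below estimates a model for every combination of candidate inputs, scores each with one possible "flexible metric" (cross-validated RMSE of an ordinary least-squares fit) and prints a prioritized short list. The inputs and the reliability response are synthetic placeholders, not stockpile data.

        import itertools
        import numpy as np

        rng = np.random.default_rng(7)
        n = 120
        # Hypothetical age, usage and environmental-exposure summaries for a population of units.
        inputs = {
            "age":      rng.uniform(0, 30, n),
            "usage":    rng.uniform(0, 1, n),
            "exposure": rng.normal(25, 8, n),
        }
        reliability = 0.95 - 0.004 * inputs["age"] - 0.05 * inputs["usage"] + rng.normal(0, 0.01, n)

        def cv_rmse(A, y, folds=5):
            """Cross-validated RMSE of an ordinary least-squares fit."""
            idx = np.arange(len(y))
            errs = []
            for part in np.array_split(idx, folds):
                train = np.setdiff1d(idx, part)
                coef, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
                errs.append(np.mean((y[part] - A[part] @ coef) ** 2))
            return float(np.sqrt(np.mean(errs)))

        # Estimate a model for every combination of inputs and score it.
        results = []
        for r in range(1, len(inputs) + 1):
            for combo in itertools.combinations(inputs, r):
                A = np.column_stack([np.ones(n)] + [inputs[c] for c in combo])
                results.append((cv_rmse(A, reliability), combo))

        # Prioritized list of leading contenders for subject matter experts to review.
        for score, combo in sorted(results)[:3]:
            print(f"CV-RMSE = {score:.4f}   inputs: {', '.join(combo)}")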

  18. Comparative Study of Laboratory-Scale and Prototypic Production-Scale Fuel Fabrication Processes and Product Characteristics

    International Nuclear Information System (INIS)

    Marshall, Douglas W.

    2014-01-01

    An objective of the High Temperature Gas Reactor fuel development and qualification program for the United States Department of Energy has been to qualify fuel fabricated in prototypic production-scale equipment. The quality and characteristics of the tristructural isotropic (TRISO) coatings on fuel kernels are influenced by the equipment scale and processing parameters. The standard deviations of some TRISO layer characteristics were diminished while others have become more significant in the larger processing equipment. The impact on statistical variability of the processes and the products, as equipment was scaled, are discussed. The prototypic production-scale processes produce test fuels meeting all fuel quality specifications. (author)

  19. Simultaneous Model Selection, Robust Data Reconciliation and Outlier Detection with Swarm Intelligence in a Thermal Reactor Power calculation

    International Nuclear Information System (INIS)

    Damianik Valdetaro, Eduardo; Schirru, Roberto

    2011-01-01

    Highlights: → We present a Model Selection, Robust Data Reconciliation and Outlier Detection method. → The novel method directly minimizes the Robust Akaike Information Criteria. → We use Hampel's redescending estimator and a slightly modified objective function. → We obtain good performance using the Particle Swarm Optimization Algorithm. → Simulations are made including a simplified Thermal Reactor Power calculation. - Abstract: Data Reconciliation (DR) and Gross Errors Detection (GED) are techniques of increasing interest in Nuclear Power Plants, used in order to take the Mass and Energy balance into account. These techniques have been extensively studied in the Chemical and Petrochemical Industry due to their benefits, which include closing the mass and energy balance and promising financial results. Many techniques were developed to solve Data Reconciliation and Outlier Detection; some of them use, for example, Quadratic Programming, Lagrange Multipliers or Mixed-Integer NonLinear Programming, and others use Evolutionary Algorithms such as Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). Nowadays, Robust Statistics is also increasing in interest and is being used to surpass some methods' limitations, e.g., the assumption that the errors are Normally Distributed, which does not always reflect real problem situations. In this paper we present a novel method to perform simultaneously: (a) the tuning of the Hampel's Three Part Redescending Estimator (HTPRE) constants; (b) the Robust Data Reconciliation and (c) the Gross Error Detection. The automatic tuning procedure is based on the minimization of the Robust Akaike Criteria and the Particle Swarm Algorithm is used as a global optimization method. Simulations were made considering a nonlinear process commonly used as a benchmark by several authors and also in calculating the Thermal Reactor Power based on a simplified example. The results show the potential use of the technique even in

  20. Comparing equivalent thermal, high pressure and pulsed electric field processes for mild pasteurization of orange juice: Part II: Impact on specific chemical and biochemical quality parameters

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, van der I.; Grauwet, T.; Timmermans, R.A.H.; Mastwijk, H.C.; Matser, A.M.; Hendrickx, M.E.; Loey, van A.

    2011-01-01

    The impact of thermal, high pressure (HP) and pulsed electric field (PEF) processing for mild pasteurization of orange juice was compared on a fair basis, using processing conditions leading to an equivalent degree of microbial inactivation. Examining the effect on specific chemical and biochemical

  1. Polysaccharide production in batch process of Neisseria meningitidis serogroup C comparing Frantz, modified Frantz and Cartlin 6 cultivation media

    OpenAIRE

    Paz Marcelo Fossa da; Baruque-Ramos Júlia; Hiss Haroldo; Vicentin Márcio Alberto; Leal Maria Betania Batista; Raw Isaías

    2003-01-01

    Polysaccharide of N. meningitidis serogroup C constitutes the antigen for the vaccine against meningitis. The goal of this work was to compare three cultivation media for production of this polysaccharide: Frantz, modified Frantz medium (with replacement of glucose by glycerol), and Catlin 6 (a synthetic medium with glucose). The comparative criteria were based on the final polysaccharide concentrations and the cell/polysaccharide yield coefficient (YP/X). The kinetic parameters: pH, substra...

  2. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
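
    A toy sketch of the information-criterion route for a levelling-style congruence problem: a "no deformation" null model is compared against alternatives in which exactly one benchmark has moved, using AIC values computed from least-squares residuals with an assumed known measurement noise. The network, noise level and displacement are synthetic, not the Delft data set.

        import numpy as np

        rng = np.random.default_rng(3)
        # Heights of 5 benchmarks observed at two epochs (synthetic); benchmark 2 subsides by 4 mm.
        truth = np.array([10.000, 12.500, 9.800, 11.200, 10.700])
        epoch1 = truth + rng.normal(0, 0.001, 5)
        epoch2 = truth + np.array([0, 0, -0.004, 0, 0]) + rng.normal(0, 0.001, 5)
        diffs = epoch2 - epoch1                      # observed height changes

        def aic(rss, k, sigma=0.0015):
            # -2 log-likelihood (up to a constant) with assumed known noise, plus the 2k penalty.
            return rss / sigma**2 + 2 * k

        models = {"H0: no deformation": np.zeros(5)}
        for i in range(5):                           # alternatives: exactly one point displaced
            d = np.zeros(5)
            d[i] = diffs[i]                          # displacement estimated from the data (1 parameter)
            models[f"HA: point {i} displaced"] = d

        scores = {name: aic(np.sum((diffs - d) ** 2), k=int(np.count_nonzero(d)))
                  for name, d in models.items()}
        for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
            print(f"{name:28s} AIC = {score:7.2f}")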

  3. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses Bayesian information criteria (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
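
    A minimal sketch of the BIC idea behind this kind of deconvolution: fit models with an increasing number of peaks and keep the one with the lowest BIC. The spectrum below is synthetic, Gaussian lineshapes stand in for whatever lineshape the program actually uses, and none of this is decon1d's own code.

        import numpy as np
        from scipy.optimize import curve_fit

        ppm = np.linspace(-1, 1, 400)

        def peaks(x, *params):                      # sum of Gaussians: (amplitude, centre, width) per peak
            y = np.zeros_like(x)
            for amp, centre, width in zip(params[0::3], params[1::3], params[2::3]):
                y += amp * np.exp(-0.5 * ((x - centre) / width) ** 2)
            return y

        rng = np.random.default_rng(5)
        spectrum = peaks(ppm, 1.0, -0.25, 0.08, 0.6, 0.20, 0.10) + rng.normal(0, 0.02, ppm.size)

        def bic(rss, n, k):
            return n * np.log(rss / n) + k * np.log(n)   # penalizes extra peaks via k*ln(n)

        scores = {}
        for n_peaks in (1, 2, 3):
            p0 = []
            for centre in np.linspace(-0.5, 0.5, n_peaks):   # crude initial guesses
                p0 += [0.8, centre, 0.1]
            popt, _ = curve_fit(peaks, ppm, spectrum, p0=p0, maxfev=20000)
            rss = float(np.sum((spectrum - peaks(ppm, *popt)) ** 2))
            scores[n_peaks] = bic(rss, ppm.size, k=3 * n_peaks)

        best = min(scores, key=scores.get)
        print({k: round(v, 1) for k, v in scores.items()}, "-> model best supported:", best, "peaks")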

  4. Model selection for local and regional meteorological normalisation of background concentrations of tropospheric ozone

    Science.gov (United States)

    Libiseller, Claudia; Grimvall, Anders

    Meteorological normalisation of time series of air quality data aims to extract anthropogenic signals by removing natural fluctuations in the collected data. We showed that the currently used procedures to select normalisation models can cause over-fitting to observed data and undesirable smoothing of anthropogenic signals. A simulation study revealed that the risk of such effects is particularly large when: (i) the observed data are serially correlated, (ii) the normalisation model is selected by leave-one-out cross-validation, and (iii) complex models, such as artificial neural networks, are fitted to data. When the size of the test sets used in the cross-validation was increased, and only moderately complex linear models were fitted to data, the over-fitting was less pronounced. An empirical study of the predictive ability of different normalisation models for tropospheric ozone in Finland confirmed the importance of using appropriate model selection strategies. Moderately complex regional models involving contemporaneous meteorological data from a network of stations were found to be superior to single-site models as well as more complex regional models involving both contemporaneous and time-lagged meteorological data from a network of stations.

  5. Generalized linear discriminant analysis: a unified framework and efficient model selection.

    Science.gov (United States)

    Ji, Shuiwang; Ye, Jieping

    2008-10-01

    High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.

  6. Comparative Study of Laboratory-Scale and Prototypic Production-Scale Fuel Fabrication Processes and Product Characteristics

    International Nuclear Information System (INIS)

    2014-01-01

    An objective of the High Temperature Gas Reactor fuel development and qualification program for the United States Department of Energy has been to qualify fuel fabricated in prototypic production-scale equipment. The quality and characteristics of the tristructural isotropic coatings on fuel kernels are influenced by the equipment scale and processing parameters. Some characteristics affecting product quality were suppressed while others have become more significant in the larger equipment. Changes to the composition and method of producing resinated graphite matrix material has eliminated the use of hazardous, flammable liquids and enabled it to be procured as a vendor-supplied feed stock. A new method of overcoating TRISO particles with the resinated graphite matrix eliminates the use of hazardous, flammable liquids, produces highly spherical particles with a narrow size distribution, and attains product yields in excess of 99%. Compact fabrication processes have been scaled-up and automated with relatively minor changes to compact quality to manual laboratory-scale processes. The impact on statistical variability of the processes and the products as equipment was scaled are discussed. The prototypic production-scale processes produce test fuels that meet fuel quality specifications.

  7. Comparative Study of Laboratory-Scale and Prototypic Production-Scale Fuel Fabrication Processes and Product Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Douglas W. Marshall

    2014-10-01

    An objective of the High Temperature Gas Reactor fuel development and qualification program for the United States Department of Energy has been to qualify fuel fabricated in prototypic production-scale equipment. The quality and characteristics of the tristructural isotropic coatings on fuel kernels are influenced by the equipment scale and processing parameters. Some characteristics affecting product quality were suppressed while others have become more significant in the larger equipment. Changes to the composition and method of producing resinated graphite matrix material has eliminated the use of hazardous, flammable liquids and enabled it to be procured as a vendor-supplied feed stock. A new method of overcoating TRISO particles with the resinated graphite matrix eliminates the use of hazardous, flammable liquids, produces highly spherical particles with a narrow size distribution, and attains product yields in excess of 99%. Compact fabrication processes have been scaled-up and automated with relatively minor changes to compact quality to manual laboratory-scale processes. The impact on statistical variability of the processes and the products as equipment was scaled are discussed. The prototypic production-scale processes produce test fuels that meet fuel quality specifications.

  8. Evaluating experimental design for soil-plant model selection using a Bootstrap Filter and Bayesian model averaging

    Science.gov (United States)

    Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.

    2013-12-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to which degree the posterior mean outperforms the prior mean and all
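
    A toy sketch of the central quantity in this kind of analysis: posterior model weights (and the resulting model-choice uncertainty) recomputed as more observations are conditioned on. The four "models" below are simple synthetic predictors with an assumed Gaussian error model, not the actual crop models or the Bootstrap Filter used in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_obs = 40
        signal = np.sin(np.linspace(0.0, 3.0, n_obs))          # synthetic soil-moisture-like signal
        obs = signal + rng.normal(0.0, 0.05, n_obs)            # noisy field observations
        sigma = 0.05                                           # assumed measurement uncertainty

        # Four competing "models" of different structural adequacy (stand-ins only).
        predictions = {
            "model A": signal,              # structurally adequate
            "model B": signal + 0.03,       # small bias
            "model C": 0.8 * signal,        # wrong amplitude
            "model D": np.zeros(n_obs),     # clearly inadequate
        }

        def bma_weights(k):
            """Posterior model weights after conditioning on the first k observations (equal priors)."""
            loglik = {name: -0.5 * np.sum(((obs[:k] - pred[:k]) / sigma) ** 2)
                      for name, pred in predictions.items()}
            m = max(loglik.values())
            w = {name: np.exp(v - m) for name, v in loglik.items()}
            total = sum(w.values())
            return {name: v / total for name, v in w.items()}

        # Weights sharpen, and model-choice uncertainty drops, as more data become available.
        for k in (5, 10, 20, 40):
            w = bma_weights(k)
            entropy = -sum(p * np.log(p) for p in w.values() if p > 0)
            rounded = {name: round(p, 2) for name, p in w.items()}
            print(f"k = {k:2d}  weights = {rounded}  choice uncertainty (entropy) = {entropy:.2f}")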

  9. A comparative analysis between FinFET Semi-Dynamic Flip-Flop topologies under process variations

    KAUST Repository

    Rabie, Mohamed A.

    2011-11-01

    Semi-Dynamic Flip-Flops are widely used in state-of-the-art microprocessors. Moreover, scaling down traditional CMOS technology faces major challenges, which raises the need for new replacement devices. FinFET technology is a potential replacement due to its similarity to current CMOS technology in both fabrication process and theory of operation. Hence, this paper presents a study of Semi-Dynamic Flip-Flops using both independent-gate and tied-gate FinFET devices in a 32 nm technology node. Furthermore, it studies the performance of these new circuits under process variations. © 2011 IEEE.

  10. Numerical Simulations of Electrokinetic Processes Comparing the Use of a Constant Voltage Difference or a Constant Current as Driving Force

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    materials and the prevention of the reinforced concrete corrosion. The electrical energy applied in an electrokinetic process produces electrochemical reactions at the electrodes. Different electrode processes can occur. When considering inert electrodes in aqueous solutions, the reduction of water...... are transported from the anode to the cathode through the closed electrical circuit of the cell. In the solution, the electrical current is carried by the ions, which move towards the electrode with different charge. Therefore, different authors have studied the system using the circuit theory. Assuming...

  11. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    Directory of Open Access Journals (Sweden)

    O'Callaghan Sean

    2012-05-01

    Full Text Available Abstract Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs

  12. A Comparative Study of Structural and Process Quality in Center-Based and Family-Based Child Care Services

    Science.gov (United States)

    Bigras, Nathalie; Bouchard, Caroline; Cantin, Gilles; Brunson, Liesette; Coutu, Sylvain; Lemay, Lise; Tremblay, Melissa; Japel, Christa; Charron, Annie

    2010-01-01

    This study seeks to determine whether center-based and family-based child care services differ with respect to process quality, as measured by the Educative Quality Observation Scales ("EQOS", Bourgon and Lavallee 2004a, b, c), for groups of children 18 months old and younger. It also seeks to identify structural variables associated…

  13. Comparing military and civilian critical thinking and information processes in operational risk management: what are the lessons?

    Science.gov (United States)

    VanVactor, Jerry D; Gill, Tony

    2010-03-01

    Business continuity has expanded into a discipline that spans most functional areas of large enterprises. Both the military and financial sectors have consistently demonstrated an aptitude to expand the boundaries of continuity planning and crisis mitigation. A comparison of both enterprises is provided to see how their respective methodologies compare. Interestingly, the similarities far outweigh the differences. The paper provides commentary related to comparative insight from risk practitioners' perspectives from within the US Army, one of the largest military organisations in the world, and the Bank of Montreal, one of Canada's leading financial institutions.

  14. Comparing two psychological interventions in reducing impulsive processes of eating behaviour: Effects on self-selected portion size

    NARCIS (Netherlands)

    Koningsbruggen, G.M. van; Veling, H.P.; Stroebe, W.; Aarts, H.A.G.

    2014-01-01

    Objective Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that

  15. Comparing two psychological interventions in reducing impulsive processes of eating behaviour : Effects on self-selected portion size

    NARCIS (Netherlands)

    van Koningsbruggen, G.M.; Veling, H.P.; Stroebe, Wolfgang; Aarts, Henk

    2014-01-01

    Objective Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that

  16. A comparative study of total electron scattering cross sections of plasma processing gasses at intermediate electron energies

    Science.gov (United States)

    Palihawadana, Prasanga; Villela, Gilberto; Ariyasinghe, Wickramasinghe

    2008-11-01

    A comparison is made between the total electron cross sections (TCS) of Tetrafluoromethane (CF4), Trifluoromethane (CHF3), Hexafluoroethane (C2F6), and Octafluorocyclobutane (C4F8) available in the literature and those recently measured in this laboratory using the linear transmission technique. The present measurements are about 0-20% higher than those in the literature. An empirical formula developed to predict the TCS of plasma processing gases, as a function of incident electron energy, will be presented.

  17. A juice extractor can simplify the downstream processing of plant-derived biopharmaceutical proteins compared to blade-based homogenizers

    OpenAIRE

    Buyel, J.F.; Fischer, R.

    2015-01-01

    The production of biopharmaceutical proteins using plant-based systems has recently become economically competitive with conventional expression platforms based on microbes and mammalian cells, but downstream processing remains a significant cost factor. Here we report that, depending on the protein expression level, production costs for biopharmaceuticals made in plants can be reduced by up to 30% if a juice extractor is used instead of a blade-based homogenizer or blender. Although the extr...

  18. Ultra-High Temperature Effect on Bioactive Compounds and Sensory Attributes of Orange Juice Compared with Traditional Processing

    Directory of Open Access Journals (Sweden)

    Zvaigzne Gaļina

    2017-12-01

    Full Text Available Orange juices are an important source of bioactive compounds. Because of its unique combination of sensory attributes and nutritional value, orange juice is the world’s most popular fruit juice. Orange (Citrus sinensis) juice of the Greek Navel variety was used in this study. The impact of Conventional Thermal Pasteurisation (CTP; 94 °C/30') and alternative Ultra-High Temperature (UHT; 130 °C/2') processing on changes in bioactive compounds and antioxidant capacity of fresh Navel orange juice was investigated. Sensory attributes of the processed juices were evaluated. Results showed that both CTP and UHT processing significantly changed the vitamin C concentration of Navel orange juice in comparison with fresh orange juice. The highest concentration of antioxidants (vitamin C, total phenols, hesperidin and carotenoids) was observed in Navel orange juice produced by UHT technology. Sensory results indicated that the characteristics of the orange juice obtained using UHT technology were liked more than those of the CTP heat-treated juice. UHT technology thus emerges as an advantageous alternative process for preserving bioactive compounds in orange juice.

  19. Evaluation of Image Processing Technique for Measuring of Nitrogen and Yield in Paddy Rice and Comparing it with Standard Methods

    Directory of Open Access Journals (Sweden)

    M.R Larijani

    2011-09-01

    Full Text Available In order to use new and low-cost methods in precision agriculture, nitrogen should be supplied to plants precisely and on time. To determine the nitrogen requirement of paddy rice at the clustering stage, a series of experiments was conducted using three different methods: image processing, the Kjeldahl method and a chlorophyll meter (SPAD-502), in a randomized complete block design with three replications during 2010 at the Rice Research Center of Tonekabon, Iran. The four experimental treatments were different levels of fertilizer (urea with 46% nitrogen). At the clustering stage, images of the rice plants were taken vertically with a digital camera and analyzed using an image processing technique. Simultaneously, the chlorophyll index of the plants was measured with the SPAD-502 chlorophyll meter and the percentage of nitrogen was measured using the Kjeldahl laboratory method. The results showed that the three methods of determining the nitrogen content of rice plants were highly correlated. Moreover, the correlations between each of the three methods and crop yield were almost the same. In general, the image processing method has high potential for nitrogen management in the field, as it is low-cost, faster and nondestructive in comparison with the other methods.

  20. A Multicenter, Prospective, Randomized, Pilot Study of Outcomes for Digital Nerve Repair in the Hand Using Hollow Conduit Compared With Processed Allograft Nerve.

    Science.gov (United States)

    Means, Kenneth R; Rinker, Brian D; Higgins, James P; Payne, S Houston; Merrell, Gregory A; Wilgis, E F Shaw

    2016-06-01

    Current repair options for peripheral nerve injuries where tension-free gap closure is not possible include allograft, processed nerve allograft, and hollow tube conduit. Here we report on the outcomes from a multicenter prospective, randomized, patient- and evaluator-blinded, pilot study comparing processed nerve allograft and hollow conduit for digital nerve reconstructions in the hand. Across 4 centers, consented participants meeting inclusion criteria while not meeting exclusion criteria were randomized intraoperatively to either processed nerve allograft or hollow conduit. Standard sensory and safety assessments were conducted at baseline, 1, 3, 6, 9, and 12 months after reconstruction. The primary outcome was static 2-point discrimination (s2PD) testing. Participants and assessors were blinded to treatment. The contralateral digit served as the control. We randomized 23 participants with 31 digital nerve injuries. Sixteen participants with 20 repairs had at least 6 months of follow-up while 12-month follow-up was available for 15 repairs. There were no significant differences in participant and baseline characteristics between treatment groups. The predominant nerve injury was laceration/sharp transection. The mean ± SD length of the nerve gap prior to repair was 12 ± 4 mm (5-20 mm) for both groups. The average s2PD for processed allograft was 5 ± 1 mm (n = 6) compared with 8 ± 5 mm (n = 9) for hollow conduits. The average moving 2PD for processed allograft was 5 ± 1 mm compared with 7 ± 5 mm for hollow conduits. All injuries randomized to processed nerve allograft returned some degree of s2PD as compared with 75% of the repairs in the conduit group. Two hollow conduits and one allograft were lost due to infection during the study. In this pilot study, patients whose digital nerve reconstructions were performed with processed nerve allografts had significantly improved and more consistent functional sensory outcomes compared with hollow conduits.

  1. A Multicenter, Prospective, Randomized, Pilot Study of Outcomes for Digital Nerve Repair in the Hand Using Hollow Conduit Compared With Processed Allograft Nerve

    OpenAIRE

    Means, Kenneth R.; Rinker, Brian D.; Higgins, James P.; Payne, S. Houston; Merrell, Gregory A.; Wilgis, E. F. Shaw

    2016-01-01

    Background: Current repair options for peripheral nerve injuries where tension-free gap closure is not possible include allograft, processed nerve allograft, and hollow tube conduit. Here we report on the outcomes from a multicenter prospective, randomized, patient- and evaluator-blinded, pilot study comparing processed nerve allograft and hollow conduit for digital nerve reconstructions in the hand. Methods: Across 4 centers, consented participants meeting inclusion criteria while not meetin...

  2. The decolorization and mineralization of Acid Orange 6 azo dye in aqueous solution by advanced oxidation processes: A comparative study

    International Nuclear Information System (INIS)

    Hsing, H.-J.; Chiang, P.-C.; Chang, E.-E.; Chen, M.-Y.

    2007-01-01

    The comparison of different advanced oxidation processes (AOPs), i.e. ultraviolet (UV)/TiO2, O3, O3/UV, O3/UV/TiO2, Fenton and electrocoagulation (EC), is of interest to determine the best removal performance for the destruction of the target compound in an Acid Orange 6 (AO6) solution, exploring the most efficient experimental conditions as well; on the other hand, the results may provide baseline information on the combination of different AOPs in treating industrial wastewater. The following conclusions can be drawn: (1) regarding the effects of individual and combined ozonation and photocatalytic UV irradiation, both O3/UV and O3/UV/TiO2 processes exhibit remarkable TOC removal capability, achieving a 65% removal efficiency at pH 7 and an O3 dose of 45 mg/L; (2) the optimum pH and [H2O2]/[Fe2+] ratio found for the Fenton process are pH 4 and [H2O2]/[Fe2+] = 6.58, and the optimum [H2O2] and [Fe2+] under the same HF value are 58.82 and 8.93 mM, respectively; (3) the optimum applied voltage found in the EC experiment is 80 V, and the initial pH affects the AO6 and TOC removal rates in that acidic conditions favour a higher removal rate; (4) a ranking of the AO6 decolorization rates among the O3, O3/UV, O3/UV/TiO2 and Fenton processes was obtained for 30 min of reaction time.

  3. A comparative study of the design and construction process of energy efficient buildings in Germany and Sweden

    International Nuclear Information System (INIS)

    Schade, Jutta; Wallström, Peter; Olofsson, Thomas; Lagerqvist, Ove

    2013-01-01

    Reducing the energy consumption of buildings is an important goal for the European Union. It is therefore of interest to investigate how different member states address this goal. Countries such as Sweden and Germany have developed different strategies for energy conservation within the building sector. A longitudinal comparison of the key energy conservation policy instruments implemented in Sweden and Germany, together with a survey regarding the management of energy requirements in the building process, shows that: –No evidence is found that energy consumption is of great importance for producing competitive offers, either for Swedish or for German clients. –The Swedish market-driven policy has not been as successful as the German regulation policy in decreasing the energy consumption of new buildings. –Building standards and regulations regarding energy performance affect how professionals are educated and the way energy requirements and demands are managed throughout the building process. In conclusion, the client's demand will govern the development of energy-efficient buildings. Therefore, in order for market-driven policies to work, the desired parameters must matter to the customer if the majority of building projects are to be made more energy efficient than national standards and regulations specify. - Highlights: ► Longitudinal comparison of implemented key energy policy instruments. ► A survey regarding the management of energy requirements in the building process. ► The German energy regulation policy has been more successful than the Swedish market orientation. ► The gap between what is technologically possible and what regulations require needs to be balanced.

  4. From Reactionary to Responsive: Applying the Internal Environmental Scan Protocol to Lifelong Learning Strategic Planning and Operational Model Selection

    Science.gov (United States)

    Downing, David L.

    2009-01-01

    This study describes and implements a necessary preliminary strategic planning procedure, the Internal Environmental Scanning (IES), and discusses its relevance to strategic planning and university-sponsored lifelong learning program model selection. Employing a qualitative research methodology, a proposed lifelong learning-centric IES process…

  5. COMPARATIVE ANALYSIS OF DIPYRONE DEGRADATION BY THE PHOTO-FENTON PROCESS USING UV-C LIGHT AND SOLAR RADIATION

    Directory of Open Access Journals (Sweden)

    Daniella Carla Napoleão

    2015-01-01

    Full Text Available The contamination of water bodies is a major concern for scientists in different parts of the world. Domestic and industrial activities cause the daily discharge of various types of pollutants, which are in most cases resistant to conventional water treatments. Among these contaminants, pharmaceuticals are especially noteworthy, as it is found that 50% to 90% of them are discarded without treatment. The concern about these substances is their adverse effects on human and animal health, especially in aquatic environments. Advanced oxidation processes (AOPs) have been studied and applied as an efficient alternative treatment for the degradation of different pollutants, since they generate hydroxyl radicals, which are highly reactive though not very selective. This study evaluated the efficiency of the photo-Fenton process using UV-C radiation and sunlight for the degradation of the drug dipyrone in aqueous solution contaminated with the active ingredient at a concentration of 20 mg.L-1. Assays were performed with 50 mL aliquots of the solution following a 2³ factorial design with a central point, the variables studied being the addition of H2O2, the addition of FeSO4.7H2O and time. The detection and quantification of dipyrone before and after the AOP were performed by high-performance liquid chromatography (HPLC), and it was verified that approximately 100% degradation of the compound was obtained.

  6. A comparative study of solution-processed low- and high-band-gap chalcopyrite thin-film solar cells

    International Nuclear Information System (INIS)

    Park, Se Jin; Moon, Sung Hwan; Min, Byoung Koun; Cho, Yunae; Kim, Ji Eun; Kim, Dong-Wook; Lee, Doh-Kwon; Gwak, Jihye; Kim, Jihyun

    2014-01-01

    Low-cost and printable chalcopyrite thin-film solar cells were fabricated by a precursor solution-based coating method with a multi-step heat-treatment process (oxidation, sulfurization, and selenization). The high-band-gap (1.57 eV) Cu(InxGa1−x)S2 (CIGS) solar cell showed a high open-circuit voltage of 787 mV, whereas the low-band-gap (1.12 eV) Cu(InxGa1−x)(S1−ySey)2 (CIGSSe) cell exhibited a high short-circuit current density of 32.6 mA cm−2. The energy conversion efficiencies were 8.28% for CIGS and 8.81% for CIGSSe under standard irradiation conditions. Despite similar efficiencies, the two samples showed notable differences in grain size, surface morphology, and interfacial properties. Low-temperature transport and admittance characteristics of the samples clearly revealed how their structural differences influenced their photovoltaic and electrical properties. Such analyses provide insight into the enhanced solar cell performance of the solution-processed chalcopyrite thin films. (paper)

  7. Radiation protection on EPR: comparative approach of the French and Finnish regulatory reviewing process and optimization at the design phase

    International Nuclear Information System (INIS)

    Arial, E.; Couasnon, O.; Latil-querrec, N.; Evrard, J.M.; Herviou, K.; Riihiluoma, V.; Beneteau, Y.; Foret, J.L.

    2010-01-01

    Taking the opportunity to evaluate the preliminary safety report of the French EPR reactor built in Flamanville, the IRSN proposes to assess the history of EPR, from the decision to implement studies in the 90's to the French and German cooperation, and finally to the construction of a unit in Finland and in France, and to make a synthesis of the assessment of radiation protection arrangements. This assessment presents the dose targets (calculated reference doses) planned by the nuclear operators in the design phase as well as the global radiation protection optimization process and a comparison of French and Finnish analyses. In France, for example, EDF performed a detailed optimization analysis of selected tasks known to have a major contribution to the annual average collective dose (thermal insulation, logistics, valve maintenance, opening/closing of the vessel, preparation and checks of steam generators, on-site spent fuel management, and waste management). The optimization process is based (in France) on an iterative method. A comparison between the EPR collective dose target and doses received in other pressurized water reactors that are close to the EPR design (Konvoi of German design, French existing units, etc.) is also presented. This synthesis was carried out by the IRSN, the expert body of the French nuclear safety authority, in association with Electricite de France (EDF), the French operator, and the authority for nuclear safety in Finland (STUK). It summarizes more than 15 years of studies and partnership, focusing on radiation protection, in the design phase of the EPR. (authors)

  8. Using the Analytic Network Process (ANP) to assess the distribution of pharmaceuticals in hospitals – a comparative case study of a Danish and American hospital

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Sørup, Christian Michel; Jacobsen, Peter

    2016-01-01

    Pharmaceuticals are a vital part of patient treatment and the timely delivery of pharmaceuticals to patients is therefore important. Hospitals are complex systems that provide a challenging environment for decision making. Implementing process changes and technologies to improve the pharmaceutical distribution process can therefore be a complex and challenging undertaking. A comparative case study was conducted benchmarking the pharmaceutical distribution process at a Danish and a US hospital to identify best practices. Using the ANP method, taking tangible and intangible aspects into consideration, the most suitable solution for pharmaceutical distribution reflecting management preferences was identified.

  9. Comparative metabolomics in vanilla pod and vanilla bean revealing the biosynthesis of vanillin during the curing process of vanilla.

    Science.gov (United States)

    Gu, Fenglin; Chen, Yonggan; Hong, Yinghua; Fang, Yiming; Tan, Lehe

    2017-12-01

    High-performance liquid chromatography-mass spectrometry (LC-MS) was used for comprehensive metabolomic fingerprinting of vanilla fruits prepared from the curing process. In this study, the metabolic changes of vanilla pods and vanilla beans were characterized using MS-based metabolomics to elucidate the biosynthesis of vanillin. The vanilla pods were significantly different from the vanilla beans. Seven pathways of vanillin biosynthesis were constructed, namely the glucovanillin, glucose, cresol, capsaicin, vanillyl alcohol, tyrosine, and phenylalanine pathways. The investigation demonstrated that the glucose, cresol, capsaicin, and vanillyl alcohol pathways are widely distributed in microbial metabolism. Thus, microorganisms might have participated in vanillin biosynthesis during vanilla curing. Furthermore, the ion intensity of glucovanillin was stable, which indicated that glucovanillin participated only in vanillin biosynthesis during the curing of vanilla.

  10. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  11. [A comparative analysis of the dynamics of affective symptoms in overweight patients with depression and eating disorders during treatment process].

    Science.gov (United States)

    Makhortova, I S; Shiryaev, O U

    Eating disorders are linked with depression in patients with a high body mass index (BMI). The aim was to evaluate the dynamics of affective symptoms in overweight patients with depression and eating disorders during treatment with agomelatine. Male (n=15) and female (n=37) overweight patients (n=52, mean age 33.67±2.31 years) were randomly selected and observed. The sample was divided into two groups: the first included individuals with depression and high BMI, and the second those with co-morbid eating disorders of bulimic type. Patients were treated with agomelatine in average therapeutic doses. The presence of an eating disorder significantly influences the clinical symptoms of depression by slowing the onset of the therapeutic effect of agomelatine.

  12. TOURISM IN TIME OF CRISIS AND INFLUENCE IN THE PROCESS OF INCREASE ECONOMIC. COMPARATIVE ANALYSIS ROMANIA-BULGARIA-GREECE

    Directory of Open Access Journals (Sweden)

    Laura-Maria POPESCU

    2013-10-01

    Full Text Available The article presents an analysis of tourist activity during the crisis period, as well as the extent to which this sector contributes to economic growth in the countries concerned. Three areas with different approaches to tourism were taken into account so as to easily highlight the differences between tradition in the case of Greece, exploitation and operation of existing investment in Bulgaria, and development still in progress in the case of Romania. The paper therefore sets out to analyse the determinants underlying competitiveness in tourism from the perspective of the three states in direct competition, and to highlight the effects of communication on competitiveness in the tourism industry. The purpose of this analysis is to provide a series of answers, from the perspective of development strategies and communication, which could explain the very different results of the three national economies in the tourism sector.

  13. A comparative evaluation of acute stress and corticosterone on the process of learning and emotional memory in rat

    Directory of Open Access Journals (Sweden)

    Vafaei AA

    2009-07-01

    Full Text Available "nBackground: Previous studies suggested that stressful events that release Glucocorticoid from adrenal cortex and also injection of agonists of glucocorticoids receptors probably affect emotional learning and memory process and modulate them. The aim of this study was to determine the effects of acute stress and systemic injection of Corticosterone (as agonist of glucocorticoid receptors on acquisition (ACQ, consolidation (CONS and retrieval (RET of emotional memory in rat. "nMethods: In this experimental study we used 180 male Wistar rats (220-250. At the first rats was training in one trial inhibitory avoidance task. On the retention test given 48 h after training, the latency to re-enter the dark compartment of the apparatus (Step-through latency, STL and the time spent in light chamber (TLC were recorded during 10 min test. Intraperitoneal corticosterone in doses of 0.5, 1 and 3mg/kg injected 30min before, immediately after instruction and 30min before retrieval test. Also some groups received 10min stressful stimulation by restrainer at the same time. At the end locomotor's activity was measured for all animals. "nResults: The data indicated that administration of corticosterone 30min before ACQ (1mg/kg, and immediately after CONS (1, 3mg/kg enhance and 30min before RET (1, 3mg/kg impair emotional memory (p<0.05. Acute stress impaired emotional memory in all phases (p<0.05. Also acute stress and injection of Corticosterone have not significantly affect motor activity.  "nConclusions: These findings show that Glucocorticoid receptors in activation dependently plays an important role in modulation of emotional spatial memory processes (ACQ, CONS and RET in new information for emotional events and these effects varies in different phases.

  14. Process performance and comparative metagenomic analysis during co-digestion of manure and lignocellulosic biomass for biogas production

    DEFF Research Database (Denmark)

    Tsapekos, Panagiotis; Kougias, Panagiotis; Treu, Laura

    2017-01-01

    Mechanical pretreatment is considered to be a fast and easily applicable method to prepare biomass for anaerobic digestion. In the present study, the effect of mechanical pretreatment on the biodegradability of lignocellulosic silages was elucidated in batch reactors. Moreover, co-digestion of the silages ... feedstock. Results from liquid samples revealed clear differences in microbial community composition, mainly dominated by Proteobacteria. The archaeal community was found in higher relative abundance in the liquid fraction of the co-digestion experiment compared to the solid fraction. Finally, an unclassified Alkaliphilus sp. was found in high relative abundance in all samples.

  15. Semantic memory processing is enhanced in preadolescents breastfed compared to those formula-fed as infants: An ERP N400 study of sentential semantic congruity

    Science.gov (United States)

    Studies comparing child cognitive development and brain activity during cognitive functions between children who were fed breast milk (BF), milk formula (MF), or soy formula (SF) have not been reported. We recorded event-related scalp potentials reflecting semantic processing (N400 ERP) from 20 homo...

  16. Treatment of cutting fluid: comparative study of different processes of recycling; Tratamiento de fluidos de corte estudio comparativo de los diferentes procesos de reciclaje

    Energy Technology Data Exchange (ETDEWEB)

    Labarta Carreno, C.E.; Ipinar, E.

    1997-12-31

    The environmental concerns about cutting fluids (commonly known in Spain as taladrines) are analyzed in depth in this paper. The authors describe the history of these hazardous effluents, the kinds that exist and their characteristics, and finally compare the different industrial processes for their treatment. (Author) 7 refs.

  17. Comparing the impact of homogenization and heat processing on the properties and in vitro digestion of milk from organic and conventional dairy herds

    Science.gov (United States)

    The effects of homogenization and heat processing on the chemical and in vitro digestion traits of milk from organic and conventional herds were compared. Raw milk from organic (>50% of dry matter intake from pasture) and conventional (no access to pasture) farms were adjusted to commercial whole a...

  18. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    Science.gov (United States)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models that assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, due to computational constraints it is usually not feasible in practice to consider the entire ensemble of climate simulations with all members as input for impact models that provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e. the representation of historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution, and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data sets to determine the related uncertainties and
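    The indicators mentioned above (dry and wet spells, extreme precipitation, general climatology) can be illustrated with a minimal sketch that computes a few such statistics from a daily precipitation series; the 1 mm wet-day threshold and the synthetic series are assumptions for illustration only, not taken from the study.

```python
# Sketch: simple precipitation indicators (wet-day frequency, longest dry
# spell, extreme wet-day amount) from a daily series. The 1 mm wet-day
# threshold and the random example series are assumptions for illustration.
import numpy as np

def indicators(precip_mm_per_day, wet_threshold=1.0):
    p = np.asarray(precip_mm_per_day, dtype=float)
    wet = p >= wet_threshold

    # Longest consecutive run of dry days.
    longest_dry, current = 0, 0
    for is_wet in wet:
        current = 0 if is_wet else current + 1
        longest_dry = max(longest_dry, current)

    wet_amounts = p[wet]
    return {
        "mean_annual_total_mm": float(p.sum()) * 365.25 / p.size,
        "wet_day_frequency": float(wet.mean()),
        "longest_dry_spell_days": longest_dry,
        "p95_wet_day_mm": float(np.percentile(wet_amounts, 95)) if wet_amounts.size else 0.0,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 20-year daily series: ~35% wet days, exponential amounts.
    days = 20 * 365
    example = np.where(rng.random(days) < 0.35, rng.exponential(6.0, days), 0.0)
    print(indicators(example))
```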

  19. Comparative lipid production by oleaginous yeasts in hydrolyzates of lignocellulosic biomass and process strategy for high titers.

    Science.gov (United States)

    Slininger, Patricia J; Dien, Bruce S; Kurtzman, Cletus P; Moser, Bryan R; Bakota, Erica L; Thompson, Stephanie R; O'Bryan, Patricia J; Cotta, Michael A; Balan, Venkatesh; Jin, Mingjie; Sousa, Leonardo da Costa; Dale, Bruce E

    2016-08-01

    Oleaginous yeasts can convert sugars to lipids with fatty acid profiles similar to those of vegetable oils, making them attractive for production of biodiesel. Lignocellulosic biomass is an attractive source of sugars for yeast lipid production because it is abundant, potentially low cost, and renewable. However, lignocellulosic hydrolyzates are laden with byproducts which inhibit microbial growth and metabolism. With the goal of identifying oleaginous yeast strains able to convert plant biomass to lipids, we screened 32 strains from the ARS Culture Collection, Peoria, IL to identify four robust strains able to produce high lipid concentrations from both acid and base-pretreated biomass. The screening was arranged in two tiers using undetoxified enzyme hydrolyzates of ammonia fiber expansion (AFEX)-pretreated cornstover as the primary screening medium and acid-pretreated switch grass as the secondary screening medium applied to strains passing the primary screen. Hydrolyzates were prepared at ∼18-20% solids loading to provide ∼110 g/L sugars at ∼56:39:5 mass ratio glucose:xylose:arabinose. A two stage process boosting the molar C:N ratio from 60 to well above 400 in undetoxified switchgrass hydrolyzate was optimized with respect to nitrogen source, C:N, and carbon loading. Using this process three strains were able to consume acetic acid and nearly all available sugars to accumulate 50-65% of cell biomass as lipid (w/w), to produce 25-30 g/L lipid at 0.12-0.22 g/L/h and 0.13-0.15 g/g or 39-45% of the theoretical yield at pH 6 and 7, a performance unprecedented in lignocellulosic hydrolyzates. Three of the top strains have not previously been reported for the bioconversion of lignocellulose to lipids. The successful identification and development of top-performing lipid-producing yeast in lignocellulose hydrolyzates is expected to advance the economic feasibility of high quality biodiesel and jet fuels from renewable biomass, expanding the market

  20. Identification of light absorbing oligomers from glyoxal and methylglyoxal aqueous processing: a comparative study at the molecular level

    Science.gov (United States)

    Finessi, Emanuela; Hamilton, Jacqueline; Rickard, Andrew; Baeza-Romero, Maria; Healy, Robert; Peppe, Salvatore; Adams, Tom; Daniels, Mark; Ball, Stephen; Goodall, Iain; Monks, Paul; Borras, Esther; Munoz, Amalia

    2014-05-01

    Numerous studies point to the reactive uptake of gaseous low molecular weight carbonyls onto atmospheric waters (clouds/fog droplets and wet aerosols) as an important SOA formation route not yet included in current models. However, the evaluation of these processes is challenging because water provides a medium for a complex array of reactions to take place such as self-oligomerization, aldol condensation and Maillard-type browning reactions in the presence of ammonium salts. In addition to adding to SOA mass, aqueous chemistry products have been shown to include light absorbing, surface-active and high molecular weight oligomeric species, and can therefore affect climatically relevant aerosol properties such as light absorption and hygroscopicity. Glyoxal (GLY) and methylglyoxal (MGLY) are the gaseous carbonyls that have perhaps received the most attention to date owing to their ubiquity, abundance and reactivity in water, with the majority of studies focussing on bulk physical properties. However, very little is known at the molecular level, in particular for MGLY, and the relative potential of these species as aqueous SOA precursors in ambient air is still unclear. We have conducted experiments with both laboratory solutions and chamber-generated particles to simulate the aqueous processing of GLY and MGLY with ammonium sulphate (AS) under typical atmospheric conditions and investigated their respective aging products. Both high performance liquid chromatography coupled with UV-Vis detection and ion trap mass spectrometry (HPLC-DAD-MSn) and high resolution mass spectrometry (FTICRMS) have been used for molecular identification purposes. Comprehensive gas chromatography with nitrogen chemiluminescence detection (GCxGC-NCD) has been applied for the first time to these systems, revealing a surprisingly high number of nitrogen-containing organics (ONs), with a large extent of polarities. GCxGC-NCD proved to be a valuable tool to determine overall amount and rates of

  1. Inference Based on the Best-Fitting Model can Contribute to the Replication Crisis: Assessing Model Selection Uncertainty Using a Bootstrap Approach

    Science.gov (United States)

    Lubke, Gitta H.; Campbell, Ian

    2016-01-01

    Inference and conclusions drawn from model fitting analyses are commonly based on a single “best-fitting” model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth, and a 2-group measurement invariance analysis of the Holzinger-Swineford data. PMID:28663687
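    A minimal sketch of the bootstrap idea described above, refitting candidate models on resampled data and reporting how often each is selected, is given below; the simulated data, the two nested linear models, and the use of BIC as the selection criterion are illustrative assumptions rather than the growth mixture or measurement invariance analyses reported in the paper.

```python
# Sketch: bootstrap model-selection rates. Refit candidate models on many
# bootstrap resamples and record how often each one is chosen; the selection
# rate approximates the replication probability of the "best-fitting" model.
# The simulated data, the two nested linear models, and the use of BIC are
# illustrative assumptions, not the analyses from the paper.
import numpy as np

def fit_bic(X, y):
    """OLS fit; return BIC assuming Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (k + 1) * np.log(n)

rng = np.random.default_rng(1)
n = 120
x = rng.normal(size=n)
y = 0.5 * x + 0.15 * x**2 + rng.normal(size=n)   # weak quadratic effect

designs = {
    "linear":    lambda v: np.column_stack([np.ones_like(v), v]),
    "quadratic": lambda v: np.column_stack([np.ones_like(v), v, v**2]),
}

counts = {name: 0 for name in designs}
n_boot = 2000
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                  # resample with replacement
    xb, yb = x[idx], y[idx]
    bics = {name: fit_bic(make(xb), yb) for name, make in designs.items()}
    counts[min(bics, key=bics.get)] += 1

for name, c in counts.items():
    print(f"{name}: selection rate = {c / n_boot:.2f}")
```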

  2. Increasing biomass utilisation in energy systems: A comparative study of CO2 reduction and cost for different bioenergy processing options

    International Nuclear Information System (INIS)

    Wahlund, Bertil; Yan Jinyue; Westermark, Mats

    2004-01-01

    Emissions of greenhouse gases, such as CO2, need to be greatly reduced to avoid the risk of harmful climate change. One powerful way to mitigate emissions is to switch from fossil fuels to renewable energy, such as biomass. In this paper, we systematically investigate several bioenergy processing options, quantify the reduction rate and calculate the specific cost of reduction. This paper addresses the issue of which option Sweden should concentrate on to achieve the largest CO2 reduction at the lowest cost. The results show that the largest and most long-term sustainable CO2 reduction would be achieved by refining the woody biomass to fuel pellets for coal substitution, as has already been done in Sweden. Refining to motor fuels, such as methanol, DME and ethanol, gives only half of the reduction, and furthermore at a higher specific cost. Biomass refining into pellets enables transportation over long distances and seasonal storage, which is crucial for further utilisation of the woody biomass potential.

  3. The joint flanker effect and the joint Simon effect: On the comparability of processes underlying joint compatibility effects.

    Science.gov (United States)

    Dittrich, Kerstin; Bossert, Marie-Luise; Rothe-Wulf, Annelie; Klauer, Karl Christoph

    2017-09-01

    Previous studies observed compatibility effects in different interference paradigms such as the Simon and flanker task even when the task was distributed across two co-actors. In both Simon and flanker tasks, performance is improved in compatible trials relative to incompatible trials if one actor works on the task alone as well as if two co-actors share the task. These findings have been taken to indicate that actors automatically co-represent their co-actor's task. However, recent research on the joint Simon and joint flanker effect suggests alternative non-social interpretations. To which degree both joint effects are driven by the same underlying processes is the question of the present study, and it was scrutinized by manipulating the visibility of the co-actor. While the joint Simon effect was not affected by the visibility of the co-actor, the joint flanker effect was reduced when participants did not see their co-actors but knew where the co-actors were seated. These findings provide further evidence for a spatial interpretation of the joint Simon effect. In contrast to recent claims, however, we propose a new explanation of the joint flanker effect that attributes the effect to an impairment in the focusing of spatial attention contingent on the visibility of the co-actor.

  4. A comparative study on aromatic profiles of strawberry vinegars obtained using different conditions in the production process.

    Science.gov (United States)

    Ubeda, Cristina; Callejón, Raquel M; Troncoso, Ana M; Moreno-Rojas, Jose M; Peña, Francisco; Morales, M Lourdes

    2016-02-01

    Impact odorants in strawberry vinegars produced in different containers (glass, oak and cherry barrels) were determined by gas chromatography-olfactometry using modified frequency (MF) technique, and dynamic headspace gas chromatography-mass spectrometry. Aromatic profile of vinegar from strawberry cooked must was also studied. All strawberry vinegars retained certain impact odorants from strawberries: 3-nonen-2-one, (E,E)-2,4-decadienal, guaiacol, nerolidol, pantolactone+furaneol, eugenol, γ-dodecalactone and phenylacetic acid. Isovaleric acid, pantolactone+furaneol, p-vinylguaiacol, phenylacetic acid and vanillin were the most important aroma-active compounds in all vinegars. The strawberry cooked must vinegar accounted for the highest number of impact odorants. Wood barrels provided more aroma complexity than glass containers. Impact odorants with grassy characteristics were predominant in vinegar from glass containers, and those with sweet and fruity characteristics in vinegars from wood barrels. Principal component analysis indicated that the production process led to differences in the impact odorants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A comparative analysis for multiattribute selection among renewable energy alternatives using fuzzy axiomatic design and fuzzy analytic hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Kahraman, Cengiz; Kaya, Ihsan; Cebi, Selcuk [Istanbul Technical University, Department of Industrial Engineering, 34367, Macka-Istanbul (Turkey)

    2009-10-15

    Renewable energy is energy generated from natural resources, such as sunlight, wind, rain, tides and geothermal heat, which are naturally replenished. Energy resources are very important for all countries from an economic and political perspective. Hence, the selection of the best alternative plays an important role in any country's energy investments. Among decision-making methodologies, axiomatic design (AD) and the analytic hierarchy process (AHP) are often used in the literature. The fuzzy set theory is a powerful tool to treat the uncertainty arising from incomplete or vague information. In this paper, fuzzy multicriteria decision-making methodologies are suggested for the selection among renewable energy alternatives. The first methodology is based on the AHP, which allows the evaluation scores from experts to be linguistic expressions, crisp, or fuzzy numbers, while the second is based on AD principles under fuzziness, which evaluates the alternatives under objective or subjective criteria with respect to the functional requirements obtained from experts. The originality of the paper comes from the fuzzy AD application to the selection of the best renewable energy alternative and the comparison with fuzzy AHP. In the application of the proposed methodologies the most appropriate renewable energy alternative is determined for Turkey. (author)

  6. Spectroscopic investigations of plasma nitriding processes: A comparative study using steel and carbon as active screen materials

    Science.gov (United States)

    Hamann, S.; Burlacov, I.; Spies, H.-J.; Biermann, H.; Röpcke, J.

    2017-04-01

    Low-pressure pulsed DC H2-N2 plasmas were investigated in the laboratory active screen plasma nitriding monitoring reactor, PLANIMOR, to compare the usage of two different active screen electrodes: (i) a steel screen with the additional usage of CH4 as carbon containing precursor in the feeding gas and (ii) a carbon screen without the usage of any additional gaseous carbon precursor. Applying the quantum cascade laser absorption spectroscopy, the evolution of the concentration of four stable molecular species, NH3, HCN, CH4, and C2H2, has been monitored. The concentrations were found to be in a range of 1012-1016 molecules cm-3. By analyzing the development of the molecular concentrations at variations of the screen plasma power, a similar behavior of the monitored reaction products has been found for both screen materials, with NH3 and HCN as the main reaction products. When using the carbon screen, the concentration of HCN and C2H2 was 30 and 70 times higher, respectively, compared to the usage of the steel screen with an admixture of 1% CH4. Considering the concentration of the three detected hydrocarbon reaction products, a combustion rate of the carbon screen of up to 69 mg h-1 has been found. The applied optical emission spectroscopy enabled the determination of the rotational temperature of the N2+ ion which has been in a range of 650-900 K increasing with the power in a similar way in the plasma of both screens. Also with power the ionic component of nitrogen molecules, represented by the N2+ (0-0) band of the first negative system, as well as the CN (0-0) band of the violet system increase strongly in relation to the intensity of the neutral nitrogen component, i.e., the N2 (0-0) band of the second positive system. In addition, steel samples have been treated with both the steel and the carbon screen resulting in a formation of a compound layer of up to 10 wt. % nitrogen and 10 wt. % carbon, respectively, depending on the screen material.

  7. Comparative study of the osseous healing process following three different techniques of bone augmentation in the mandible: an experimental study.

    Science.gov (United States)

    Benlidayi, M E; Gaggl, A; Buerger, H; Kahraman, O E; Sencar, L; Brandtner, C; Kurkcu, M; Polat, S; Borumandi, F

    2014-11-01

    The aim of this study was to evaluate the osseointegration of three different bone grafting techniques. Forty-eight mature New Zealand rabbits were divided randomly into three groups of 16 each. Horizontal augmentation was performed on the corpus of the mandible using three different techniques: free bone graft (FBG), free periosteal bone graft (PBG), and pedicled bone flap (BF). The animals were sacrificed at postoperative weeks 1, 3, or 8. Specimens were decalcified for histological examination, and histomorphometric measurements were performed. The histological evaluation demonstrated bony fusion between the grafts and the augmented mandibular bone after 8 weeks in all groups. At week 8, the bone volume was significantly greater in the BF group than in the FBG and PBG groups (P=0.001 for PBG), and the trabecular thickness was also significantly greater than in the FBG (P=0.015) and PBG (P=0.015) groups. Trabecular separation was significantly lower in the BF group than in the FBG group at week 8 (P=0.015). BF demonstrated greater osseous healing capacity compared to FBG and PBG. The preserved vascularization in BF improves the bone quality in mandibular bone augmentations. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Multiscale Model Selection for High-Frequency Financial Data of a Large Tick Stock by Means of the Jensen–Shannon Metric

    Directory of Open Access Journals (Sweden)

    Gianbiagio Curato

    2014-01-01

    Full Text Available Modeling financial time series at different time scales is still an open challenge. The choice of a suitable indicator quantifying the distance between the model and the data is therefore of fundamental importance for selecting models. In this paper, we propose a multiscale model selection method based on the Jensen–Shannon distance in order to select the model that is able to better reproduce the distribution of price changes at different time scales. Specifically, we consider the problem of modeling the ultra high frequency dynamics of an asset with a large tick-to-price ratio. We study the price process at different time scales and compute the Jensen–Shannon distance between the original dataset and different models, showing that the coupling between spread and returns is important to model return distribution at different time scales of observation, ranging from the scale of single transactions to the daily time scale.
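    The core computation can be sketched as follows: aggregate price changes to several time scales, bin the empirical and model-generated changes on a common grid, and evaluate the Jensen–Shannon distance at each scale; the synthetic tick series and the Gaussian "model" below are placeholders, not the microstructure models compared in the paper.

```python
# Sketch: multiscale model comparison with the Jensen-Shannon distance.
# Price changes are aggregated at several time scales; empirical and
# model-generated changes are binned on a common grid and compared per scale.
# The synthetic tick series and the Gaussian "model" are placeholders.
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_distance_per_scale(data_changes, model_changes, scales, n_bins=41):
    out = {}
    for s in scales:
        # Aggregate consecutive changes to move to a coarser time scale.
        d = data_changes[: len(data_changes) // s * s].reshape(-1, s).sum(axis=1)
        m = model_changes[: len(model_changes) // s * s].reshape(-1, s).sum(axis=1)
        lo, hi = min(d.min(), m.min()), max(d.max(), m.max())
        bins = np.linspace(lo, hi, n_bins)
        p, _ = np.histogram(d, bins=bins, density=True)
        q, _ = np.histogram(m, bins=bins, density=True)
        out[s] = jensenshannon(p + 1e-12, q + 1e-12, base=2)
    return out

rng = np.random.default_rng(2)
# Placeholder "data": heavy-tailed tick-by-tick price changes on a 1-tick grid.
data = np.clip(np.round(rng.standard_t(df=3, size=200_000)), -50, 50)
# Placeholder "model": Gaussian changes with matched standard deviation.
model = np.round(rng.normal(0.0, data.std(), size=200_000))

print(js_distance_per_scale(data, model, scales=[1, 10, 100, 1000]))
```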

  9. Comparing Two Processing Pipelines to Measure Subcortical and Cortical Volumes in Patients with and without Mild Traumatic Brain Injury.

    Science.gov (United States)

    Reid, Matthew W; Hannemann, Nathan P; York, Gerald E; Ritter, John L; Kini, Jonathan A; Lewis, Jeffrey D; Sherman, Paul M; Velez, Carmen S; Drennon, Ann Marie; Bolzenius, Jacob D; Tate, David F

    2017-07-01

    To compare volumetric results from NeuroQuant® and FreeSurfer in a service member setting. Since the advent of medical imaging, quantification of brain anatomy has been a major research and clinical effort. Rapid advancement of methods to automate quantification and to deploy this information into clinical practice has surfaced in recent years. NeuroQuant® is one such tool that has recently been used in clinical settings. Accurate volumetric data are useful in many clinical indications; therefore, it is important to assess the intermethod reliability and concurrent validity of similar volume quantifying tools. Volumetric data from 148 U.S. service members across three different experimental groups participating in a study of mild traumatic brain injury (mTBI) were examined. Groups included mTBI (n = 71), posttraumatic stress disorder (n = 22), or a noncranial orthopedic injury (n = 55). Correlation coefficients and nonparametric group mean comparisons were used to assess reliability and concurrent validity, respectively. Comparison of these methods across our entire sample demonstrates generally fair to excellent reliability as evidenced by large intraclass correlation coefficients (ICC = .4 to .99), but little concurrent validity as evidenced by significantly different Mann-Whitney U comparisons for 26 of 30 brain structures measured. While reliability between the two segmenting tools is fair to excellent, volumetric outcomes are statistically different between the two methods. As suggested by both developers, structure segmentation should be visually verified prior to clinical use and rigor should be used when interpreting results generated by either method. Copyright © 2017 by the American Society of Neuroimaging.
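    The two kinds of checks reported above, reliability via intraclass correlation and group differences via Mann-Whitney U, can be sketched as follows; the ICC(3,1) variant and the simulated volumes are assumptions, since the abstract does not state which ICC form was used.

```python
# Sketch: reliability vs. concurrent validity for two volumetric pipelines.
# ICC(3,1) (two-way mixed, consistency, single measures) is used here for
# reliability and a Mann-Whitney U test for the method comparison; the
# simulated volumes are placeholders, and the exact ICC variant used in the
# study is not stated in the abstract.
import numpy as np
from scipy.stats import mannwhitneyu

def icc_3_1(ratings):
    """ICC(3,1) for an (n_subjects, k_raters) array."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ms_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_error = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

rng = np.random.default_rng(3)
true_vol = rng.normal(4000, 400, size=148)                 # placeholder volumes, mm^3
method_a = true_vol + rng.normal(0, 60, size=148)          # pipeline A
method_b = 1.05 * true_vol + 80 + rng.normal(0, 60, 148)   # pipeline B: systematic offset

print("ICC(3,1):", round(icc_3_1(np.column_stack([method_a, method_b])), 3))
print("Mann-Whitney U p:", mannwhitneyu(method_a, method_b, alternative="two-sided").pvalue)
```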

  10. Microwave processed bulk and nano NiMg ferrites: A comparative study on X-band electromagnetic interference shielding properties

    Energy Technology Data Exchange (ETDEWEB)

    Chandra Babu Naidu, K., E-mail: chandrababu954@gmail.com [Ceramic Composite Laboratory, Centre for Crystal Growth, SAS, VIT University, Vellore 632014, Tamilnadu (India); Madhuri, W., E-mail: madhuriw12@gmail.com [Ceramic Composite Laboratory, Centre for Crystal Growth, SAS, VIT University, Vellore 632014, Tamilnadu (India); IFW, Leibniz Institute for Solid State and Materials Research, Technische Universität Dresden, 01069 Dresden (Germany)

    2017-02-01

    Bulk and nano Ni{sub 1-x}Mg{sub x}Fe{sub 2}O{sub 4} (x = 0–1) samples were synthesized via microwave double sintering and microwave-assisted hydrothermal techniques, respectively. The diffraction patterns confirmed the formation of cubic spinel phases for both kinds of ferrites. Larger bulk densities were achieved for the bulk ferrites than for the nano ones. In addition, a comparative study of the X-band (8.4–12 GHz) electromagnetic interference shielding properties of the bulk and nanomaterials was carried out. The results showed that the bulk Ni{sub 0.6}Mg{sub 0.4}Fe{sub 2}O{sub 4} composition exhibited the highest total shielding efficiency (SE{sub T}) of ∼17 dB. In comparison, the shielding efficiency values of all bulk compositions were higher than those of the nano samples because of their larger bulk densities. Moreover, the ac electromagnetic parameters, such as the electrical conductivity (σ{sub ac}) and the respective real (ε′ & μ′) and imaginary parts (ε″ & μ″) of the complex permittivity and permeability, were investigated as a function of gigahertz frequency. The bulk ferrites with x = 0.4 & 0.6 showed high ε″ values of 10.26 & 6.71 and μ″ values of 3.65 & 3.09, respectively, at 12 GHz, so they can work as promising microwave absorber materials. Interestingly, the nanoferrites exhibited negative μ″ values at a few frequencies due to geometrical effects, which improve the microwave absorption. - Highlights: • Bulk and nano NiMg ferrites are prepared by microwave and hydrothermal methods. • X-band EMI shielding properties are studied for both bulk and nano ferrites. • Bulk Ni{sub 0.6}Mg{sub 0.4}Fe{sub 2}O{sub 4} revealed the highest SE{sub T} of ∼17 dB at 8.4 GHz. • Bulk x = 0.4 & 0.6 showed high ε″ and μ″ at 12 GHz for absorber applications.

  11. A Comparative Study of Applying Active-Set and Interior Point Methods in MPC for Controlling Nonlinear pH Process

    Directory of Open Access Journals (Sweden)

    Syam Syafiie

    2014-06-01

    Full Text Available A comparative study of Model Predictive Control (MPC) using the active-set method and the interior point method (IPM) is presented as a control technique for a highly non-linear pH process. The process is a strong acid-strong base system: a strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide (NaOH), in the presence of the buffer sodium bicarbonate (NaHCO3), flow into a neutralization reactor. The non-linear pH neutralization model governing this process is represented by multi-linear models. The performance of both controllers is studied by evaluating their set-point tracking and disturbance rejection. In addition, the optimization time of the two methods is compared. Both MPC variants show similar performance, with no overshoot, offset, or oscillation; however, the conventional active-set method gives a shorter control action time for small-scale optimization problems than MPC using the IPM.

  12. A comparative study of drug listing recommendations and the decision-making process in Australia, the Netherlands, Sweden, and the UK.

    Science.gov (United States)

    Salas-Vega, Sebastian; Bertling, Annika; Mossialos, Elias

    2016-10-01

    Drug listing recommendations from health technology assessment (HTA) agencies often fail to coincide with one another. We conducted a comparative analysis of listing recommendations in Australia (PBAC), the Netherlands (CVZ), Sweden (TLV) and the UK (NICE) over time, examined interagency agreement, and explored how process-related factors, including the time delay between HTA evaluations, therapeutic indication and orphan drug status, the measure of health economic value, and the comparator, impacted decision-making in drug coverage. Agreement across HTA agency listing recommendations was poor to moderate, yet it increased as the delay between HTA agency appraisals decreased, when orphan drugs were assessed, and when medicines deemed to provide low value (immunosuppressants, antineoplastics) were removed from the sample. International differences in drug listing recommendations seem to occur in part due to inconsistencies in how the supporting evidence informs assessment, but also due to differences in how domestic priorities shape the value-based decision-making process. Copyright © 2016. Published by Elsevier Ireland Ltd.

  13. Comparative genome analysis of the candidate functional starter culture strains Lactobacillus fermentum 222 and Lactobacillus plantarum 80 for controlled cocoa bean fermentation processes.

    Science.gov (United States)

    Illeghems, Koen; De Vuyst, Luc; Weckx, Stefan

    2015-10-12

    Lactobacillus fermentum 222 and Lactobacillus plantarum 80, isolates from a spontaneous Ghanaian cocoa bean fermentation process, proved to be interesting functional starter culture strains for cocoa bean fermentations. Lactobacillus fermentum 222 is a thermotolerant strain, able to dominate the fermentation process, thereby converting citrate and producing mannitol. Lactobacillus plantarum 80 is an acid-tolerant and facultatively heterofermentative strain that is competitive during cocoa bean fermentation processes. In this study, whole-genome sequencing and comparative genome analysis were used to investigate the mechanisms that allow these strains to dominate the cocoa bean fermentation process. Through functional annotation and analysis of the high-coverage contigs obtained through 454 pyrosequencing, plantaricin production was predicted for L. plantarum 80. For L. fermentum 222, genes encoding a complete arginine deiminase pathway were identified. Further, in-depth functional analysis revealed the capacities of these strains associated with carbohydrate and amino acid metabolism, such as the ability to use alternative external electron acceptors, the presence of an extended pyruvate metabolism, and the occurrence of several amino acid conversion pathways. A comparative genome sequence analysis using publicly available genome sequences of strains of the species L. plantarum and L. fermentum revealed unique features of both strains studied. Indeed, L. fermentum 222 possessed genes encoding additional citrate transporters and enzymes involved in amino acid conversions, whereas L. plantarum 80 is the only member of this species that harboured a gene cluster involved in uptake and consumption of fructose and/or sorbose. In-depth genome sequence analysis of the candidate functional starter culture strains L. fermentum 222 and L. plantarum 80 revealed their metabolic capacities, niche adaptations and functionalities that enable them to dominate the cocoa bean fermentation

  14. Implementation of the Nutrition Care Process and International Dietetics and Nutrition Terminology in a single-center hemodialysis unit: comparing paper vs electronic records.

    Science.gov (United States)

    Rossi, Megan; Campbell, Katrina Louise; Ferguson, Maree

    2014-01-01

    There is little doubt surrounding the benefits of the Nutrition Care Process and International Dietetics and Nutrition Terminology (IDNT) to dietetics practice; however, evidence to support the most efficient method of incorporating these into practice is lacking. The main objective of our study was to compare the efficiency and effectiveness of an electronic and a manual paper-based system for capturing the Nutrition Care Process and IDNT in a single in-center hemodialysis unit. A cohort of 56 adult patients receiving maintenance hemodialysis was followed for 12 months. During the first 6 months, patients received usual standard care, with documentation via a manual paper-based system. During the following 6-month period (months 7 to 12), nutrition care was documented in an electronic system. Workload efficiency, the number of IDNT codes used for nutrition-related diagnoses, interventions, and monitoring and evaluation, nutritional status using the scored Patient-Generated Subjective Global Assessment tool, and quality of life were the main outcome measures. Compared with paper-based documentation of nutrition care, our study demonstrated that the electronic system improved efficiency, reducing the total time spent by the dietitian by 13 minutes per consultation. A greater number of nutrition-related diagnoses were also resolved using the electronic system than with the paper-based documentation. Copyright © Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  15. Quantifying particle dispersal in aquatic sediments at short time scales: model selection

    NARCIS (Netherlands)

    Meysman, F.J.R.; Malyuga, V.; Boudreau, B.P.; Middelburg, J.J.

    2008-01-01

    In a pulse-tracer experiment, a layer of tracer particles is added to the sediment-water interface, and the down-mixing of these particles is followed over a short time scale. Here, we compare different models (biodiffusion, telegraph, CTRW) to analyse the resulting tracer depth profiles. The

  16. Comparing student clinical self-efficacy and team process outcomes for a DEU, blended, and traditional clinical setting: A quasi-experimental research study.

    Science.gov (United States)

    Plemmons, Christina; Clark, Michele; Feng, Du

    2018-03-01

    Clinical education is vital to both the development of clinical self-efficacy and the integration of future nurses into health care teams. The dedicated education unit clinical teaching model is an innovative clinical partnership which promotes skill development, professional growth, clinical self-efficacy, and integration as a team member. Blended clinical teaching models combine features of the dedicated education unit and the traditional clinical model. The aims of this study are to explore how each of three clinical teaching models (dedicated education unit, blended, traditional) affects clinical self-efficacy and attitude toward team process, and to compare the dedicated education unit and blended models to the traditional clinical model. A nonequivalent control-group quasi-experimental design was utilized. The convenience sample of 272 entry-level baccalaureate nursing students included 84 students participating in a dedicated education unit model treatment group, 66 students participating in a blended model treatment group, and 122 students participating in a traditional model control group. Perceived clinical self-efficacy was evaluated by the pretest/posttest scores obtained on the General Self-Efficacy scale. Attitude toward team process was evaluated by the pretest/posttest scores obtained on the TeamSTEPPS® Teamwork Attitude Questionnaire. All three clinical teaching models resulted in significant increases in both clinical self-efficacy (p=0.04) and attitude toward team process (p=0.003). Students participating in the dedicated education unit model (p=0.016) and students participating in the blended model showed significantly greater increases in clinical self-efficacy compared with students participating in the traditional model. These findings support the use of dedicated education unit and blended clinical partnerships as effective alternatives to the traditional model to promote both clinical self-efficacy and team process among entry-level baccalaureate nursing students. Copyright © 2017 Elsevier

  17. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    Directory of Open Access Journals (Sweden)

    Wendi Liu

    2015-01-01

    Full Text Available The aim of the present study is to apply simple ODE models to modeling the spread of emerging infectious diseases and to show the importance of model selection in estimating parameters, the basic reproduction number, the turning point, and the final size. To quantify the plausibility of each model, given the data and the set of four models (Logistic, Gompertz, Rosenzweig, and Richards), Bayes factors are calculated and precise estimates of the best-fitted model parameters and key epidemic characteristics are obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI 1.3506-1.3537), 1.2101 (95% CI 1.2084-1.2119), 3.0234 (95% CI 2.6063-3.4881), and 1.9018 (95% CI 1.8565-1.9478), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI 25630-25958), 3916 (95% CI 3865-3967), 9886 (95% CI 9740-10031), and 12633 (95% CI 12515-12750) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing emerging infectious diseases, and that arbitrarily picking a model without any consideration of alternatives is problematic.
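    The paper's model comparison relies on Bayes factors; as a much simpler illustration of the underlying idea, the sketch below fits one of the candidate models (the three-parameter logistic) to cumulative case counts by least squares and reads off the turning point and final size. The synthetic "observed" counts are placeholders, not the Ebola data, and the Bayes-factor machinery is not reproduced.

```python
# Sketch: fitting a phenomenological growth model to cumulative case counts.
# The paper compares Logistic, Gompertz, Rosenzweig and Richards models with
# Bayes factors; here only a least-squares fit of the logistic model is shown,
# with the turning point and final size read off the fitted parameters.
# The synthetic "observed" counts are placeholders, not Ebola data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    """Cumulative cases: final size K, growth rate r, turning point t_mid."""
    return K / (1.0 + np.exp(-r * (t - t_mid)))

rng = np.random.default_rng(4)
t = np.arange(0, 120)                                    # days since outbreak start
truth = logistic(t, K=25000, r=0.08, t_mid=60)
observed = truth * rng.normal(1.0, 0.03, size=t.size)    # noisy cumulative counts

popt, pcov = curve_fit(logistic, t, observed, p0=[observed[-1] * 2, 0.1, t.mean()])
K_hat, r_hat, t_mid_hat = popt
perr = np.sqrt(np.diag(pcov))

print(f"final size K  = {K_hat:.0f} +/- {perr[0]:.0f}")
print(f"growth rate r = {r_hat:.3f}")
print(f"turning point = day {t_mid_hat:.1f}")
```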

  18. A unifying framework for robust association testing, estimation, and genetic model selection using the generalized linear model.

    Science.gov (United States)

    Loley, Christina; König, Inke R; Hothorn, Ludwig; Ziegler, Andreas

    2013-12-01

    The analysis of genome-wide genetic association studies generally starts with univariate statistical tests of each single-nucleotide polymorphism. The standard approach is the Cochran-Armitage trend test or its logistic regression equivalent, although this approach can lose considerable power if the underlying genetic model is not additive. An alternative is the MAX test, which is robust against the three basic modes of inheritance. Here, the asymptotic distribution of the MAX test is derived using the generalized linear model together with the Delta method and multiple contrasts. The approach is applicable to binary, quantitative, and survival traits. It may be used for unrelated individuals, family-based studies, and matched pairs. The approach provides point and interval effect estimates and allows selecting the most plausible genetic model using the minimum P-value. R code is provided. A Monte-Carlo simulation study shows that the asymptotic MAX test framework meets type I error levels well, has good power, and good model selection properties for minor allele frequencies ≥0.3. Pearson's χ² test is superior for lower minor allele frequencies with low frequencies for the rare homozygous genotype. In these cases, the model selection procedure should be used with caution. The use of the MAX test is illustrated by reanalyzing findings from seven genome-wide association studies including case-control, matched pairs, and quantitative trait data.
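    A rough sketch of the MAX test idea is shown below: the trend statistic is computed under additive, dominant and recessive genotype codings and the maximum is taken; instead of the asymptotic distribution derived in the paper via multiple contrasts, the sketch uses a permutation p-value, and the simulated genotypes and case/control labels are placeholders.

```python
# Sketch: MAX test for a biallelic SNP with a binary trait. The trend
# (score) statistic is computed under additive, dominant and recessive
# codings of the genotype and the maximum is taken; instead of the asymptotic
# distribution derived in the paper, a permutation p-value is used here.
# The simulated genotypes and case/control labels are placeholders.
import numpy as np

CODINGS = {
    "additive":  {0: 0.0, 1: 0.5, 2: 1.0},
    "dominant":  {0: 0.0, 1: 1.0, 2: 1.0},
    "recessive": {0: 0.0, 1: 0.0, 2: 1.0},
}

def trend_z(score, case):
    """Standardized trend statistic for a numeric genotype score vs. a 0/1 trait."""
    s = score - score.mean()
    y = case - case.mean()
    return abs((s * y).sum()) / np.sqrt((s * s).sum() * (y * y).sum() / len(y))

def max_statistic(geno, case):
    return max(trend_z(np.vectorize(c.get)(geno).astype(float), case)
               for c in CODINGS.values())

rng = np.random.default_rng(5)
n = 1500
geno = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])   # MAF ~ 0.3
risk = 1 / (1 + np.exp(-(-0.4 + 0.35 * (geno >= 1))))        # dominant effect
case = (rng.random(n) < risk).astype(float)

observed = max_statistic(geno, case)
perms = np.array([max_statistic(geno, rng.permutation(case)) for _ in range(999)])
p_value = (1 + (perms >= observed).sum()) / (len(perms) + 1)
print(f"MAX statistic = {observed:.3f}, permutation p = {p_value:.3f}")
```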

  19. Overlap and Differences in Brain Networks Underlying the Processing of Complex Sentence Structures in Second Language Users Compared with Native Speakers.

    Science.gov (United States)

    Weber, Kirsten; Luther, Lisa; Indefrey, Peter; Hagoort, Peter

    2016-05-01

    When we learn a second language later in life, do we integrate it with the established neural networks in place for the first language or is at least a partially new network recruited? While there is evidence that simple grammatical structures in a second language share a system with the native language, the story becomes more multifaceted for complex sentence structures. In this study, we investigated the underlying brain networks in native speakers compared with proficient second language users while processing complex sentences. As hypothesized, complex structures were processed by the same large-scale inferior frontal and middle temporal language networks of the brain in the second language, as seen in native speakers. These effects were seen both in activations and task-related connectivity patterns. Furthermore, the second language users showed increased task-related connectivity from inferior frontal to inferior parietal regions of the brain, regions related to attention and cognitive control, suggesting less automatic processing for these structures in a second language.

  20. The temporoammonic input to the hippocampal CA1 region displays distinctly different synaptic plasticity compared to the Schaffer collateral input in vivo: significance for synaptic information processing

    Directory of Open Access Journals (Sweden)

    Ayla eAksoy Aksel

    2013-08-01

    Full Text Available In terms of its sub-regional differentiation, the hippocampal CA1 region receives cortical information directly via the perforant (temporoammonic) path (pp-CA1 synapse) and indirectly via the tri-synaptic pathway, where the last relay station is the Schaffer collateral-CA1 synapse (Sc-CA1 synapse). Research to date on pp-CA1 synapses has been conducted predominantly in vitro and never in awake animals, but these studies hint that information processing at this synapse might be distinct from processing at the Sc-CA1 synapse. Here, we characterized synaptic properties and synaptic plasticity at the pp-CA1 synapse of freely behaving adult rats. We established that field excitatory postsynaptic potentials at the pp-CA1 synapse have longer onset latencies and a shorter time-to-peak compared to the Sc-CA1 synapse. LTP (>24 h) was successfully evoked by tetanic afferent stimulation of pp-CA1 synapses. Low frequency stimulation evoked synaptic depression at Sc-CA1 synapses, but did not elicit LTD at pp-CA1 synapses unless the Schaffer collateral afferents to the CA1 region had been severed. Paired-pulse responses also showed significant differences. Our data suggest that synaptic plasticity at the pp-CA1 synapse is distinct from that at the Sc-CA1 synapse and that this may reflect its specific role in hippocampal information processing.

  1. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    Science.gov (United States)

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
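    A minimal sketch of the approach, fitting several candidate probability density functions to raw habitat-use observations by maximum likelihood and selecting among them quantitatively, is given below; the candidate families, the use of AIC as the selection criterion, and the simulated depths are illustrative assumptions, not the published Klamath River analysis or the R code from the appendix.

```python
# Sketch: habitat suitability criteria as a fitted parametric density.
# Several candidate distributions are fitted to raw depth-use observations by
# maximum likelihood and compared with AIC; the candidate families, the use of
# AIC, and the simulated depths are illustrative assumptions, not the
# Klamath River analysis itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
depths_m = rng.gamma(shape=4.0, scale=0.15, size=300)    # placeholder depth data

candidates = {
    "gamma":     stats.gamma,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depths_m, floc=0)                  # fix location at zero
    loglik = dist.logpdf(depths_m, *params).sum()
    k = len(params) - 1                                  # fixed loc not counted
    results[name] = (2 * k - 2 * loglik, params)

best = min(results, key=lambda n: results[n][0])
for name, (aic, _) in sorted(results.items(), key=lambda kv: kv[1][0]):
    print(f"{name:9s} AIC = {aic:7.2f}")
print("selected HSC family:", best)

# The fitted density, rescaled to a 0-1 maximum, can then serve as the HSC curve.
dist, (_, params) = candidates[best], results[best]
x = np.linspace(0.01, depths_m.max(), 200)
hsc = dist.pdf(x, *params) / dist.pdf(x, *params).max()
```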

  2. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

    The modelling of damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common way to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012. It calculates group-to-group recoil cross section matrices from nuclear data libraries in ENDF format, which contain energy and angular recoil distributions for many reactions. After the NJOY2012 step, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross section matrices with an incident neutron energy spectrum. However, an intercomparison of different processing routes and nuclear data libraries has not been carried out yet. In particular, the higher energy (~5 MeV) of the incident neutrons, compared to fission, opens many reaction channels, which produces a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is that it produces PKAs while keeping energy and momentum conservation in each reaction. It is used for event-by-event analysis in application fields such as soft error analysis in semiconductors, microdosimetry in the human body, and estimation of displacement per atom (DPA) values in metals. The purpose of this work is to quantify the differences in PKA spectra and heating numbers (related to kerma) between calculations using PHITS-EGM and NJOY2012+SPKA6C with the different libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0 for fusion-relevant materials.

  3. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
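    The progressive-sampling part of the method can be illustrated with a successive-halving style sketch that scores candidate configurations on growing training subsets and discards the weaker half at each round; the logistic-regression task, the random candidate grid, and the halving rule are assumptions, and the Bayesian optimization component of the paper is not reproduced here.

```python
# Sketch: progressive-sampling hyper-parameter selection. Candidate
# configurations are scored on growing training subsets and the weaker half is
# dropped after each round, so large-sample training is spent only on
# promising candidates. This illustrates the progressive-sampling idea only;
# the Bayesian optimization used in the paper is not implemented here, and the
# logistic-regression task and candidate grid are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=30, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

rng = np.random.default_rng(7)
candidates = [{"C": float(c)} for c in 10.0 ** rng.uniform(-3, 3, size=16)]

sample_size = 500
while len(candidates) > 1 and sample_size <= len(X_tr):
    idx = rng.choice(len(X_tr), size=min(sample_size, len(X_tr)), replace=False)
    scored = []
    for params in candidates:
        model = LogisticRegression(max_iter=2000, **params).fit(X_tr[idx], y_tr[idx])
        scored.append((model.score(X_val, y_val), params))
    scored.sort(key=lambda t: t[0], reverse=True)      # best validation accuracy first
    candidates = [p for _, p in scored[: max(1, len(scored) // 2)]]
    sample_size *= 2

print("selected configuration:", candidates[0])
```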

  4. A comparative analysis of teacher-authored websites in high school honors and Advanced Placement physics for Web-design and NSES content and process standards

    Science.gov (United States)

    Persin, Ronald C.

    The purpose of this study was to investigate whether statistically significant differences existed between high school Honors Physics websites and those of Advanced Placement (AP) Physics in terms of Web-design, National Science Education Standards (NSES) Physics content, and NSES Science Process standards. The procedure began with the selection of 152 sites comprising two groups with equal sample sizes of 76 for Honors Physics and 76 for Advanced Placement Physics. The websites used in the study were accumulated using the Google(TM) search engine. To find Honors Physics websites, the search words "honors physics high school" were entered as the query into the search engine. To find sites for Advanced Placement Physics, the query "advanced placement physics high school" was entered into the search engine. Each website was evaluated using an instrument developed by the researcher based on three attributes: Web-design, NSES Physics content, and NSES Science Process standards. A "1" was scored if the website was found to have each attribute; otherwise a "0" was given. This process continued until all 76 websites were evaluated for each of the two types of physics websites, Honors and Advanced Placement. Subsequently the data were processed using Excel functions and the SPSS statistical software program. The mean and standard deviation were computed individually for the three attributes under consideration. Three 2-tailed, independent-samples t tests were performed to compare the two groups of physics websites separately on the basis of Web-design, Physics Content, and Science Process. The results of the study indicated that there was only one statistically significant difference between high school Honors Physics websites and those of AP Physics, namely in terms of National Science Education Standards Physics content: Advanced Placement Physics websites were found to contain more NSES Physics content than Honors Physics websites.
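
    The statistical comparison described above (binary attribute scores for two independent groups of 76 websites, tested with 2-tailed independent-samples t tests) has the outline sketched below; the score vectors are randomly generated placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Placeholder 0/1 content scores for 76 Honors and 76 AP Physics websites
    # (illustrative only; the study used researcher-scored attributes).
    honors_content = rng.binomial(1, 0.45, size=76)
    ap_content = rng.binomial(1, 0.65, size=76)

    # Two-tailed independent-samples t test, as in the study design.
    t_stat, p_value = stats.ttest_ind(honors_content, ap_content)
    print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")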

  5. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to these data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce the global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data both at the fine scale and in terms of reproducing the global dynamics. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  6. Model Selection and Quality Estimation of Time Series Models for Artificial Technical Surface Generation

    Directory of Open Access Journals (Sweden)

    Matthias Eifler

    2017-12-01

    Full Text Available Standard-compliant parameter calculation in surface topography analysis takes the manufacturing process into account. Thus, the measurement technician can be supported with automated suggestions for preprocessing, filtering and evaluation of the measurement data based on the character of the surface topography. Artificial neural networks (ANN) are one approach for the recognition or classification of technical surfaces. However, the required set of training data for an ANN is often not available, especially when data acquisition is time-consuming or expensive, as is the case when measuring surface topography. Thus, the generation of artificial (simulated) data becomes of interest. An approach from time series analysis is chosen and examined regarding its suitability for the description of technical surfaces: the ARMAsel model, an approach for time series modelling which is capable of choosing the statistical model with the smallest prediction error and the best number of coefficients for a certain surface. With a reliable model which features the relevant stochastic properties of a surface, the generation of training data for classifiers of artificial neural networks is possible. Based on the ARMA coefficients determined with the ARMAsel approach, many different artificial surfaces can be generated from only a few measured datasets, and these can be used for training the classifiers of an artificial neural network. In doing so, an improved calculation of the model input data for the generation of artificial surfaces is possible, as the training data generation is based on actual measurement data. The trained artificial neural network is tested with actual measurement data of surfaces that were manufactured with varying manufacturing methods, and a recognition rate for the corresponding manufacturing principle of between 60% and 78% is determined. This means that, based on only a few measured datasets, stochastic surface information of various manufacturing principles can be extracted.
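
    As a rough illustration of the generation step described above, the sketch below simulates artificial profiles from a fixed set of ARMA coefficients using statsmodels; the coefficients are invented stand-ins for values that ARMAsel would estimate from measured topography, and the ARMAsel order selection itself is not reproduced here.

    import numpy as np
    from statsmodels.tsa.arima_process import ArmaProcess

    # Hypothetical ARMA(2,1) coefficients standing in for values estimated from
    # measured profiles (statsmodels convention: lag polynomials include the
    # leading 1, and AR coefficients enter with opposite sign).
    ar = np.array([1.0, -0.75, 0.25])
    ma = np.array([1.0, 0.4])
    process = ArmaProcess(ar, ma)

    # Generate several artificial surface profiles with the same stochastic
    # character; profiles like these could serve as ANN training data.
    profiles = [process.generate_sample(nsample=2048, scale=0.05) for _ in range(10)]
    print(len(profiles), profiles[0][:5])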

  7. Advanced pulse oximeter signal processing technology compared to simple averaging. II. Effect on frequency of alarms in the postanesthesia care unit.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.
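
    The relative risks quoted above follow from a standard 2 x 2 comparison of the proportions of patients with at least one false alarm; a minimal sketch using the reported counts (1 of 199 versus 17 of 197) is shown below. The large-sample log-method confidence interval in the sketch is a common choice, but the study may have used a different interval method, so the bounds need not reproduce the published ones.

    import math

    # Patients with at least one false alarm (figures quoted in the abstract).
    a, n1 = 1, 199    # Nellcor Symphony N-3000
    b, n2 = 17, 197   # conventional oximeter, 12-second averaging

    rr = (a / n1) / (b / n2)

    # Large-sample (log-method) 95% CI for the relative risk.
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")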

  8. Experiments on the Model Testing of the 2nd Phase of Die Casting Process Compared with the Results of Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Dańko R.

    2015-12-01

    Full Text Available Experiments on filling the cavities of model moulds of various inner shapes, inserted in the rectangular cavity of the casting die (dimensions: 280 mm (height) × 190 mm (width) × 10 mm (depth)), by applying model liquids of various density and viscosity are presented in the paper. The influence of die venting, as well as of the inlet system area and inlet velocity, on the volumetric rate of filling with the model liquid was tested by filming the process in a cold-chamber casting die system. The experiments are also compared with the results of simulations performed by means of the Novacast calculation module (NovaFlow&Solid) for the various selected casting conditions.

  9. Comparative mechanical evaluation of two 2,5D C/SiC composites processed via chemical vapor infiltration and powder infiltration/polymer injection routes

    Energy Technology Data Exchange (ETDEWEB)

    Sudre, O.; Parlier, M. [ONERA, Chatillon (France); Bouillon, E. [SEP, Saint Medard-en-Jalles (France)

    1995-12-01

    Ceramic matrix composites were processed using two matrix infiltration techniques: chemical vapor infiltration (CVI) and powder infiltration/polymer injection. However, the two composites were elaborated from an identical fiber preform, and with a similar pyrocarbon interphase deposited onto the fibers by CVI. They reached comparable densification levels and had an equivalent monotonic tensile behavior, although the CVI technique gave a higher modulus and a 10% higher tensile strength. The main differences were found in the details of the mechanical behavior (Young's modulus evolution, residual strain and unloading-loading loops) and in some fatigue behaviors. These differences were related to the matrix modulus and microstructure. Merits of the resulting composites and the two techniques were discussed.

  10. Clinical trial and in-vitro study comparing the efficacy of treating bony lesions with allografts versus synthetic or highly-processed xenogeneic bone grafts

    DEFF Research Database (Denmark)

    Kubosch, Eva Johanna; Bernstein, Anke; Wolf, Laura

    2016-01-01

    BACKGROUND: Our study aim was to compare allogeneic cancellous bone (ACB) and synthetic or highly-processed xenogeneic bone substitutes (SBS) in the treatment of skeletal defects in orthopedic surgery. METHODS: 232 patients treated for bony lesions with ACB (n = 116) or SBS (n = 116) within a 10......-year time period were included in this case-control study. Furthermore, both materials were seeded with human osteoblasts (hOB, n = 10) and analyzed by histology, for viability (AlamarBlue®) and protein expression activity (Luminex®). RESULTS: The complication rate was 14.2 %, proportion of defects....... Histological examination revealed similar bone structures, whereas cell remnants were apparent only in the allografts. Both materials were biocompatible in-vitro, and seeded with human osteoblasts. The cells remained vital over the 3-week culture period and produced microscopically typical bone matrix. We...

  11. Comparative Evaluation of the Effect of Manufacturing Process on Distortion of Rotary ProFile and Twisted File: An in Vitro SEM Study

    Directory of Open Access Journals (Sweden)

    Swati Sharma

    2015-12-01

    Full Text Available Background and aims. The manufacturing process of a rotary Ni-Ti file can influence its resistance to fracture. The rotary ProFile (Dentsply-Maillefer, Baillagues, Switzerland) is manufactured by a grinding mechanism, whereas the Twisted File (Sybron Endo, USA) is manufactured with a twisting method. The purpose of this study was to comparatively evaluate the effect of the manufacturing process on distortion of the rotary ProFile and Twisted File using scanning electron microscopy after in vitro use. Materials and methods. Five sets of each type of file were used for this study: rotary ProFile (group A) and Twisted File (group B). Each set was used according to the manufacturer's instructions to prepare 5 mesial canals of extracted mandibular molars. The changes in the files were observed under a scanning electron microscope at ×18, ×100, ×250 and ×500 magnifications. Observations were classified as intact with no discernible distortion, intact but with unwinding, and fractured. Groups A and B were then compared for deformation and fracture using a two-proportion z-test. Results. On SEM observation, used rotary ProFiles showed microfractures along the machining grooves, whereas the Twisted File showed crack propagation that was perpendicular to the machining marks. On statistical analysis, no significant difference was found between the ProFile and the Twisted File for deformation (P=0.642) or fracture (P=0.475). Conclusion. Within the experimental protocol of this study, it was concluded that both the ProFile and the Twisted File exhibited visible signs of distortion before fracture, although the Twisted File gained an edge over the ProFile because of its manufacturing design and unparalleled resistance to breakage.
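
    A two-proportion z-test of the kind used above can be run, for example, with statsmodels; the counts below are invented placeholders, since the abstract reports only the resulting P values.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts of deformed files out of 25 uses per group
    # (illustrative only; the abstract reports P values, not raw counts).
    deformed = np.array([6, 8])   # ProFile, Twisted File
    n_files = np.array([25, 25])

    z_stat, p_value = proportions_ztest(count=deformed, nobs=n_files)
    print(f"z = {z_stat:.3f}, p = {p_value:.3f}")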

  12. Comparative transcripts profiling reveals new insight into molecular processes regulating lycopene accumulation in a sweet orange (Citrus sinensis) red-flesh mutant

    Directory of Open Access Journals (Sweden)

    Zhang Jianchen

    2009-11-01

    Full Text Available Abstract Background Interest in lycopene metabolism and regulation is growing rapidly because accumulative studies have suggested an important role for lycopene in human health promotion. However, little is known about the molecular processes regulating lycopene accumulation in fruits other than tomato so far. Results On a spontaneous sweet orange bud mutant with abnormal lycopene accumulation in fruits and its wild type, comparative transcripts profiling was performed using Massively Parallel Signature Sequencing (MPSS. A total of 6,877,027 and 6,275,309 reliable signatures were obtained for the wild type (WT and the mutant (MT, respectively. Interpretation of the MPSS signatures revealed that the total number of transcribed gene in MT is 18,106, larger than that in WT 17,670, suggesting that newly initiated transcription occurs in the MT. Further comparison of the transcripts abundance between MT and WT revealed that 3,738 genes show more than two fold expression difference, and 582 genes are up- or down-regulated at 0.05% significance level by more than three fold difference. Functional assignments of the differentially expressed genes indicated that 26 reliable metabolic pathways are altered in the mutant; the most noticeable ones are carotenoid biosynthesis, photosynthesis, and citrate cycle. These data suggest that enhanced photosynthesis and partial impairment of lycopene downstream flux are critical for the formation of lycopene accumulation trait in the mutant. Conclusion This study provided a global picture of the gene expression changes in a sweet orange red-flesh mutant as compared to the wild type. Interpretation of the differentially expressed genes revealed new insight into the molecular processes regulating lycopene accumulation in the sweet orange red-flesh mutant.

  13. Comparative analyses of chromatographic fingerprints of the roots of Polygonum multiflorum Thunb. and their processed products using RRLC/DAD/ESI-MS(n).

    Science.gov (United States)

    Liu, Zhenli; Liu, Yuanyan; Wang, Chao; Guo, Na; Song, Zhiqian; Wang, Chun; Xia, Lei; Lu, Aiping

    2011-11-01

    The dried roots of Polygonum multiflorum Thunb. (Heshouwu) and their processed products (Zhi-heshouwu) are widely used in traditional Chinese medicine, yet their therapeutic effects are different. Previous investigations focused mainly on the differences between Heshouwu and Zhi-heshouwu in the contents of several known compounds. In this study, a rapid resolution liquid chromatography-diode array detection/electrospray ionization tandem mass spectrometry (RRLC/DAD/ESI-MS(n)) method was developed for the comparative analysis of the components of Heshouwu and Zhi-heshouwu. A total of 23 compounds were identified or tentatively characterized. We found that 16 batches of Heshouwu and 15 batches of Zhi-heshouwu samples shared eight compounds, including gallic acid; 3,5,4'-tetrahydroxylstilbene-2,3-di-O-glucoside, CIS-2,3,5,4'-tetrahydroxylstilbene-2-O- β-D-glucoside, trans-2,3,5,4'-tetrahydroxylstilbene-2-O- β-D-glucoside, emodin-8-O- β-D-glucoside, physcion-8-O- β-D-glucoside, emodin, and physcion. Nevertheless, the relative amounts of gallic acid, emodin, and physcion were very high in Zhi-heshouwu samples compared to those in Heshouwu samples. Six compounds disappeared after processing and were unique for Heshouwu: catechin, flavanol gallate dimer, polygonimitin B, emodin-1-O-glucoside, emodin-8-O-(6'-O-malonyl)-glucoside, and physcion-8-O-(6'-O-malonyl)-glucoside. Three compounds were unique for Zhi-heshouwu: hydroxymaltol, 2,3-dihydro-3,5-dihydroxy-6-methyl-4(H)-pyran-4-one, and 5-hydroxymethyl furfural. These results suggest that the types and relative amounts of the chemical components of Heshouwu and Zhi-heshouwu are different. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Qualitative and quantitative cell recovery in umbilical cord blood processed by two automated devices in routine cord blood banking: a comparative study.

    Science.gov (United States)

    Solves, Pilar; Planelles, Dolores; Mirabet, Vicente; Blanquer, Amando; Carbonell-Uberos, Francisco

    2013-07-01

    Volume reduction is a widely used procedure in umbilical cord blood banking. It concentrates progenitor cells by reducing plasma and red blood cells, thereby optimising the use of storage space. Sepax and AXP are automated systems specifically developed for umbilical cord blood processing. These systems basically consist of a bag processing set into which cord blood is transferred and a device that automatically separates the different components during centrifugation. The aim of this study was to analyse and compare cell recovery of umbilical cord blood units processed with Sepax and AXP at the Valencia Cord Blood Bank. Cell counts were performed before and after volume reduction with AXP and Sepax. When analysing all the data (n = 1,000 for AXP and n = 670 for Sepax), the percentages of total nucleated cell recovery and red blood cell depletion were 76.76 ± 7.51% and 88.28 ± 5.62%, respectively, for AXP and 78.81 ± 7.25% and 88.32 ± 7.94%, respectively, for Sepax. Cell recovery and viability in umbilical cord blood units were similar with both devices. Mononuclear cell recovery was significantly higher when the Sepax system was used. Both the Sepax and AXP automated systems achieve acceptable total nucleated cell recovery and good CD34(+) cell recovery after volume reduction of umbilical cord blood units and maintain cell viability. It should be noted that total nucleated cell recovery is significantly better with the Sepax system. Both systems deplete red blood cells efficiently, especially AXP, which works without hydroxyethyl starch.

  15. Assessing the influence of knowledge translation platforms on health system policy processes to achieve the health millennium development goals in Cameroon and Uganda: a comparative case study.

    Science.gov (United States)

    Ongolo-Zogo, Pierre; Lavis, John N; Tomson, Goran; Sewankambo, Nelson K

    2018-05-01

    There is a scarcity of empirical data on the influence of initiatives supporting evidence-informed health system policy-making (EIHSP), such as the knowledge translation platforms (KTPs) operating in Africa. To assess whether and how two KTPs housed in government-affiliated institutions in Cameroon and Uganda have influenced: (1) health system policy-making processes and decisions aiming at supporting achievement of the health millennium development goals (MDGs); and (2) the general climate for EIHSP. We conducted an embedded comparative case study of four policy processes in which Evidence Informed Policy Network (EVIPNet) Cameroon and Regional East African Community Health Policy Initiative (REACH-PI) Uganda were involved between 2009 and 2011. We combined a documentary review and semi structured interviews of 54 stakeholders. A framework-guided thematic analysis, inspired by scholarship in health policy analysis and knowledge utilization was used. EVIPNet Cameroon and REACH-PI Uganda have had direct influence on health system policy decisions. The coproduction of evidence briefs combined with tacit knowledge gathered during inclusive evidence-informed stakeholder dialogues helped to reframe health system problems, unveil sources of conflicts, open grounds for consensus and align viable and affordable options for achieving the health MDGs thus leading to decisions. New policy issue networks have emerged. The KTPs indirectly influenced health policy processes by changing how interests interact with one another and by introducing safe-harbour deliberations and intersected with contextual ideational factors by improving access to policy-relevant evidence. KTPs were perceived as change agents with positive impact on the understanding, acceptance and adoption of EIHSP because of their complementary work in relation to capacity building, rapid evidence syntheses and clearinghouse of policy-relevant evidence. This embedded case study illustrates how two KTPs influenced

  16. Comparative genomic analysis of the arthropod muscle myosin heavy chain genes allows ancestral gene reconstruction and reveals a new type of 'partially' processed pseudogene

    Directory of Open Access Journals (Sweden)

    Kollmar Martin

    2008-02-01

    Full Text Available Abstract Background Alternative splicing of mutually exclusive exons is an important mechanism for increasing protein diversity in eukaryotes. The insect Mhc (myosin heavy chain) gene produces all the different muscle myosins as a result of alternative splicing, in contrast to most other organisms of the Metazoa lineage, which have a family of muscle genes with each gene coding for a protein specialized for a functional niche. Results The muscle myosin heavy chain genes of 22 species of the Arthropoda, ranging from the waterflea to wasp and Drosophila, have been annotated. The analysis of the gene structures allowed the reconstruction of an ancient muscle myosin heavy chain gene and showed that during the evolution of the arthropods introns have mainly been lost in these genes, although intron gain might have happened in a few cases. Surprisingly, the genome of Aedes aegypti contains one further muscle myosin heavy chain gene, and that of Culex pipiens quinquefasciatus two further genes, called Mhc3 and Mhc4, that contain only one variant of the corresponding alternative exons of the Mhc1 gene. Mhc3 transcription in Aedes aegypti is documented by EST data. Mhc3 and Mhc4 were inserted in the Aedes and Culex genomes either by gene duplication followed by the loss of all but one variant of the alternative exons, or by incorporation of a transcript from which all other variants had been spliced out while retaining the exon-intron structure. The second and more likely possibility represents a new type of 'partially' processed pseudogene. Conclusion Based on the comparative genomic analysis of the alternatively spliced arthropod muscle myosin heavy chain genes, we propose that the splicing process operates sequentially on the transcript. The process consists of the splicing of the mutually exclusive exons until one exon out of the cluster remains, while retaining the surrounding intronic sequence. In a second step, splicing of introns takes place. A related mechanism could be responsible for

  17. Body hair counts during hair length reduction procedures: a comparative study between Computer Assisted Image Analysis after Manual Processing (CAIAMP) and Trichoscan(™).

    Science.gov (United States)

    Van Neste, D J J

    2015-08-01

    To compare two measurement methods for body hair. Calibration of computer assisted image analysis after manual processing (CAIAMP) showed its measurement variation. Images of body sites with 'good natural contrast between hair and skin' were taken before hair dye, after hair dye or after hair length reduction without hair extraction or destruction. Data from the same targets were compared with Trichoscan™, which is quoted as providing 'unambiguous evaluation of the hair growth after shaving'. CAIAMP detected a total of 337 hairs and showed no statistically significant differences between the three procedures, confirming the 'good natural contrast between hair and skin' and that the reduction methods did not affect hair counts. While CAIAMP found a mean number of 19 thick hairs (≥30 μm) before dye, 18 after dye and 20 after hair reduction, Trichoscan™ found 44, 73 and 61, respectively, in the same sites. The Trichoscan™-generated counts differed statistically significantly from the CAIAMP data. The automated analyses were considered to be non-specifically influenced by the hair medulla and by the natural or artificial skin background. Quality control covering all steps of human intervention and measurement technology is mandatory for body hair measurements during experimental or clinical trials on body hair grooming, shaving or removal. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Survey comparing the team-based learning and lecture teaching methods on the learning-teaching process of nursing students in surgical and internal diseases courses

    Directory of Open Access Journals (Sweden)

    AA Vaezi

    2015-12-01

    Full Text Available Introduction: The effect of teaching methods on the learning process of students helps teachers to improve the quality of teaching by selecting an appropriate method. This study aimed to compare team-based learning and the lecture teaching method on the learning-teaching process of nursing students in surgical and internal diseases courses. Method: This quasi-experimental study was carried out on nursing students in the Schools of Nursing and Midwifery in Yazd and Meybod. The sample comprised all sixth-term students of the Faculty of Nursing in Yazd (48 persons) and of the Faculty of Nursing in Meybod (28 persons). The students' learning through lectures was measured using MCQ tests, and teaching based on the team-based learning (TBL) method was run using MCQ tests (IRAT, GRAT, appeals and group tasks). To examine the students' satisfaction with the TBL method, a translated 22-item questionnaire on a 5-point Likert scale (1 = completely disagree, 2 = disagree, 3 = not effective, 4 = agree, 5 = completely agree) was utilized; the reliability and validity of this translated questionnaire were assessed. The collected data were analyzed with SPSS 17.0 using descriptive and analytical statistics. Result: The results showed that the mean scores in team-based learning were meaningful in the individual assessment (17±84) and the group assessment (17.2±1.17). The mean of the overall scores in the TBL method (17.84±0.98%) was higher than in the lecture teaching method (16±2.31). Most of the students believed that the TBL method had improved their interpersonal and group interaction skills (100%), and 97.7% of the students mentioned that this method (TBL) helped them to understand the course content better. The lowest level of satisfaction related to continuous lifelong learning (51.2%). Conclusion: The results of the present study showed that the TBL method led to improving the communication skills and understanding

  19. Making Steppingstones out of Stumbling Blocks: A New Bayesian Model Evidence Estimator with Application to Groundwater Model Selection

    Science.gov (United States)

    Ye, M.; Elshall, A. S.; Tang, G.; Samani, S.

    2016-12-01

    Bayesian Model Evidence (BME) is the measure of the average fit of the model to the data over all the parameter values that the model can take. By accounting for the trade-off between a model's ability to reproduce the observation data and its complexity, BME estimates of candidate models are used to calculate model weights, which in turn are used for model selection and model averaging. This study shows that accurate estimation of the BME is important for penalizing models with more complexity. To improve the accuracy of BME estimation, we resort to Monte Carlo numerical estimators rather than semi-analytical solutions (such as Laplace approximations, BIC, KIC and others). This study examines prominent numerical estimators of BME: thermodynamic integration (TI) and the importance sampling methods of arithmetic mean (AM), harmonic mean (HM), and steppingstone sampling (SS). The AM estimator (based on prior sampling) and the HM estimator (based on posterior sampling) are straightforward to implement, yet they lead to underestimation and overestimation, respectively. TI and SS improve on this by sampling multiple intermediate distributions that link the prior and the posterior, using Markov chain Monte Carlo (MCMC). TI and SS are theoretically unbiased estimators that are mathematically rigorous. Yet a theoretically unbiased estimator can have a large bias in practice arising from the numerical implementation, because MCMC sampling errors in certain intermediate distributions can introduce bias. We propose an SS variant, namely multiple one-steppingstone sampling (MOSS), which turns these intermediate stumbling "blocks" of SS into steppingstones toward BME estimation. Thus, MOSS is less sensitive to MCMC sampling errors. We evaluate these estimators on a groundwater transport model selection problem. The modeling results show that the SS and MOSS estimators gave the most accurate results. In addition, the results show that the magnitude of the estimation error is a
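
    The arithmetic-mean and harmonic-mean estimators discussed above are easy to state concretely. The sketch below contrasts them on a conjugate normal toy model whose log evidence is known in closed form; the model, prior and sample sizes are assumptions chosen for illustration and have nothing to do with the groundwater application.

    import numpy as np
    from scipy.stats import norm, multivariate_normal
    from scipy.special import logsumexp

    rng = np.random.default_rng(0)

    # Toy conjugate model: y_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2).
    sigma, mu0, tau0, n = 1.0, 0.0, 2.0, 20
    y = rng.normal(1.5, sigma, size=n)

    def loglik(theta):
        # Log-likelihood of the whole sample for each theta in a vector.
        return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

    # Exact log evidence: the data are jointly normal under this model.
    cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
    log_z_exact = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

    m = 20000
    # Arithmetic-mean estimator: average the likelihood over prior draws.
    theta_prior = rng.normal(mu0, tau0, size=m)
    log_z_am = logsumexp(loglik(theta_prior)) - np.log(m)

    # Harmonic-mean estimator: harmonic average of the likelihood over
    # posterior draws (drawn exactly here thanks to conjugacy).
    tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    mu_n = tau_n2 * (mu0 / tau0**2 + y.sum() / sigma**2)
    theta_post = rng.normal(mu_n, np.sqrt(tau_n2), size=m)
    log_z_hm = -(logsumexp(-loglik(theta_post)) - np.log(m))

    print(f"exact {log_z_exact:.3f}  AM {log_z_am:.3f}  HM {log_z_hm:.3f}")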

  20. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Science.gov (United States)

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological

  1. Model selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. They can be applied to several aspects of atmospheric contamination.

  2. A comparative study of disinfection efficiency and regrowth control of microorganism in secondary wastewater effluent using UV, ozone, and ionizing irradiation process

    International Nuclear Information System (INIS)

    Lee, O-Mi; Kim, Hyun Young; Park, Wooshin; Kim, Tae-Hun; Yu, Seungho

    2015-01-01

    Highlights: • The ionizing radiation was applied to inactivate microorganisms and the critical dose to prevent the regrowth was determined. • The seasonal variation of disinfection efficiency observed in on-site UV treatment system was influenced by suspended solid, temperature, and precipitation, whereas, stable values were observed in ionizing radiation. • The electrical power consumption for disinfection using UV and ozone requires higher energy than ionizing radiation. - Abstract: Ionizing radiation technology was suggested as an alternative method to disinfection processes, such as chlorine, UV, and ozone. Although many studies have demonstrated the effectiveness of irradiation technology for microbial disinfection, there has been a lack of information on comparison studies of disinfection techniques and a regrowth of each treatment. In the present study, an ionizing radiation was investigated to inactivate microorganisms and to determine the critical dose to prevent the regrowth. As a result, it was observed that the disinfection efficiency using ionizing radiation was not affected by the seasonal changes of wastewater characteristics, such as temperature and turbidity. In terms of bacterial regrowth after disinfection, the ionizing radiation showed a significant resistance of regrowth, whereas, on-site UV treatment is influenced by the suspended solid, temperature, or precipitation. The electric power consumption was also compared for the economic feasibility of each technique at a given value of disinfection efficiency of 90% (1-log), showing 0.12, 36.80, and 96.53 Wh/(L/day) for ionizing radiation, ozone, and UV, respectively. The ionizing radiation requires two or three orders of magnitude lower power consumption than UV and ozone. Consequently, ionizing radiation can be applied as an effective and economical alternative technique to other conventional disinfection processes

  3. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    Science.gov (United States)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high resolution land cover data with species–specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  4. Dissociation between arithmetic relatedness and distance effects is modulated by task properties: an ERP study comparing explicit vs. implicit arithmetic processing.

    Science.gov (United States)

    Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes

    2014-12-01

    Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Comparative assessment of geodynamic processes in oil and gas production areas on the western and eastern flanks of the South-Caspian depression

    International Nuclear Information System (INIS)

    Zhardecki, A.V; Zhukov, V.S; Poloudin, G.A

    2002-01-01

    Full text: The Alpine geosynclinal belt, including fold mountains from the Carpathians and the Crimean Mountains to the Copetdag and the Pamirs, is divided into two unequal parts by the South-Caspian depression. The Ashgabadian depression on the east side and the Kyrian depression extend and deepen eastward and pass into the South-Caspian depression. A large number of oil and gas deposits and fields are situated in these depressions on the western and eastern flanks of the South Caspian. They have many common characteristics: 1. Anticlinal highs form line-like tectonic structures; the lines branch and the anticlinal highs overlap like shingles. 2. The red-colour reservoir of the depression on the eastern flank and the productive reservoir on the western flank are the main oil- and gas-bearing reservoirs and are stratigraphic analogues of middle Pliocene age. 3. Both flanks of the depression are areas of diapiric folding and mud-volcano activity. 4. There is intensive seismic activity. 5. Marine gryphons, islands and sandbanks sometimes appear and disappear in the littoral area. 6. The Caspian Sea level has changed rapidly both in geological history and at the present time. Thus, it is possible to identify two main factors of activation of geodynamic processes: first, deformation of the terrestrial surface, and second, induced seismic activity. Comparing the above data for the western and eastern flanks of the South-Caspian depression, it is evident that, despite the presence of a large number of identical tectonic features, there are essential differences in the character of the induced geodynamic activity. In the long term, as oil deposits are developed, manifestations of both factors of activation of geodynamic processes can be expected on both flanks of the South-Caspian depression.

  6. Comparative Analysis of a MOOC and a Residential Community Using Introductory College Physics: Documenting How Learning Environments Are Created, Lessons Learned in the Process, and Measurable Outcomes

    Science.gov (United States)

    Olsen, Jack Ryan

    Higher education institutions, such as the University of Colorado Boulder (CU-Boulder), have as a core mission to advance their students' academic performance. On the frontier of education technologies that hold promise for addressing this educational mission are Massively Open Online Courses (MOOCs), which are new enough not to be fully understood or well researched. MOOCs, in theory, have vast potential for being cost-effective and for reaching diverse audiences across the world. This thesis examines the implementation of one MOOC, Physics 1 for Physical Science Majors, implemented in the inaugural round of institutionally sanctioned MOOCs in Fall 2013. While comparatively inexpensive relative to a brick-and-mortar course, and while it initially enrolled an audience of nearly 16,000 students, this MOOC was found to be time-consuming to implement, and only roughly 1.5% of those who enrolled completed the course (approximately 1/4 of those who completed the standard brick-and-mortar course that the MOOC was designed around). An established education technology, residential communities, contrasts with MOOCs by being high-touch and highly humanized, but also expensive and locally based. The Andrews Hall Residential College (AHRC) on the CU campus fosters academic success and retention by engaging and networking students outside of the standard brick-and-mortar courses and by enculturating students into an environment with vertical integration across the different classes: freshman, sophomore, junior, etc. The physics MOOC and the AHRC were studied to determine how the environments were created and what lessons were learned in the process. Student performance was also compared for the physics MOOC, a subset of the AHRC students enrolled in a special physics course, and the standard CU Physics 1 brick-and-mortar course. All yielded similar learning gains in Physics 1 performance for those who completed the courses. These environments are presented together to compare and contrast their

  7. Comparative dynamics of self-consciousness of schizophrenic patients and patients with acute and transient psychotic disorders in the process of compulsory treatment

    Directory of Open Access Journals (Sweden)

    Yur’yeva L.N.

    2013-10-01

    Full Text Available The article presents the results of an empirical study of the life sense orientations, self-attitude and level of claims of schizophrenic patients and of patients with acute and transient psychotic disorders who have committed socially hazardous actions and to whom compulsory measures of a medical character are temporarily applied. Changes in the self-consciousness of patients with schizophrenia were examined over time by comparing the results obtained at the first and fourth stages of the patients' stay in a mental hospital with strict supervision (the first stage being adaptation and diagnostics, the fourth stage being the consolidation of treatment results and preparation of the patient for discharge). The research used D. Leontiev's test of life sense orientations (LSO), the S. Pantileyev and V. Stolin techniques, and Shwarzlander's "Motor test". Differences between the studied groups in the indices of life sense orientations, self-attitude and level of claims were assessed statistically using Student's t-criterion.

  8. Comparative genomic analysis of a neurotoxigenic Clostridium species using partial genome sequence: Phylogenetic analysis of a few conserved proteins involved in cellular processes and metabolism.

    Science.gov (United States)

    Alam, Syed Imteyaz; Dixit, Aparna; Tomar, Arvind; Singh, Lokendra

    2010-04-01

    Clostridial organisms produce neurotoxins, which are generally regarded as the most potent toxic substances of biological origin and potential biological warfare agents. Clostridium tetani produces tetanus neurotoxin and is responsible for the fatal tetanus disease. In spite of the extensive immunization regimen, the disease is an important cause of death, especially among neonates. Strains of C. tetani have not been genetically characterized, except for the complete genome sequencing of strain E88. The present study reports the genetic makeup and phylogenetic affiliations of an environmental strain of this bacterium with respect to C. tetani E88 and other clostridia. A shotgun library was constructed from the genomic DNA of C. tetani drde, isolated from a decaying fish sample. Unique clones were sequenced and the sequences compared with those of its closest relative, C. tetani E88. A total of 275 clones were obtained and 32,457 bases of non-redundant sequence were generated. A total of 150 base changes were observed over the entire length of the sequence obtained, including additions, deletions and base substitutions. Of the total 120 ORFs detected, 48 exhibited closest similarity to E88 proteins, of which three are hypothetical proteins. Eight of the ORFs exhibited similarity with hypothetical proteins from other organisms and 10 aligned with other proteins from unrelated organisms. There is an overall conservation of protein sequences between the two strains of C. tetani. Selected ORFs involved in cellular processes and metabolism were subjected to phylogenetic analysis. Copyright 2009 Elsevier Ltd. All rights reserved.

  9. Religion, ethnic groups and processes of social stratification in the United States. Mexicans and Chinese citizens’ situation from a comparative perspective

    Directory of Open Access Journals (Sweden)

    Rafael Arriaga Martínez

    2008-01-01

    Full Text Available Abstract. This article offers a global view of a theory that explains and contributes to understanding the position of ethnic groups in the United States within the social hierarchy. The research is focused on a comparative analysis of the Mexican and Chinese groups, starting from the following Weberian statements: (a) one that considers the influence of ideas and religious beliefs on the economic behavior of individuals, and (b) one that conceives religions as ethical vehicles liable to inhibit or stimulate the process of social stratification. In sum, this means considering the influence of diverse elements of religious culture on the makeup of certain economic behaviors and contexts capable of vitalizing or hindering a group's dynamism on the social scale. This behavior is notably related to: (a) money in all its modalities (savings, expenses, investments, loans, etc.), (b) work and entrepreneurial business, and (c) family and communitarian solidarity. We also emphasize problems stemming from the practical application of theory and method to the above-mentioned phenomenon, highlighting the productivity of concepts and analytical categories representative of Methodological Individualism and Rational General Theory.

  10. Maturity level and comparison of automotive product PDPs (Automotive product development process: comparing and evaluating the maturity level)

    Directory of Open Access Journals (Sweden)

    Heitor Luiz Murat de Meirelles Quintella

    2007-04-01

    Full Text Available The development of new products that fulfill the needs of the consumer market is a strategic activity for the sustainability of organizations. This article aims to compare and evaluate the maturity level of the product development processes (PDPs) at two automotive plants located in the southern region of the State of Rio de Janeiro, Brazil. Some characteristics of the processes are discussed, the evaluation model, based on the CMMI (Capability Maturity Model Integration) criteria, is presented, and the results of a survey of 47 representatives of the studied companies, which investigated the maturity level, are reported. The study identified gaps in the structuring of the PDPs, with several practices and tools being used in an isolated, non-integrated way; in the view of the executives themselves there is ample room for these processes to be improved and refined, which would make them more complete, comprehensive and powerful in leveraging the market and financial results of the organizations.

  11. Testing general relativity using Bayesian model selection: Applications to observations of gravitational waves from compact binary systems

    International Nuclear Information System (INIS)

    Del Pozzo, Walter; Veitch, John; Vecchio, Alberto

    2011-01-01

    Second-generation interferometric gravitational-wave detectors, such as Advanced LIGO and Advanced Virgo, are expected to begin operation by 2015. Such instruments plan to reach sensitivities that will offer the unique possibility of testing general relativity in the dynamical, strong-field regime and of investigating departures from its predictions, in particular using the signal from coalescing binary systems. We introduce a statistical framework based on Bayesian model selection in which the Bayes factor between two competing hypotheses measures which theory is favored by the data. Probability density functions of the model parameters are then used to quantify the inference on individual parameters. We also develop a method to combine the information coming from multiple independent observations of gravitational waves, and show how much stronger the inference could be. As an introduction and illustration of this framework, and as a practical numerical implementation through the Monte Carlo integration technique of nested sampling, we apply it to gravitational waves from the inspiral phase of coalescing binary systems as predicted by general relativity and by a very simple alternative theory in which the graviton has a nonzero mass. This method can (and should) be extended to more realistic and physically motivated theories.
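
    Under this framework each detection yields an evidence for general relativity and for the alternative theory, and for independent observations the evidences multiply, so log Bayes factors simply add. A minimal illustration with invented per-event log evidences (the nested-sampling runs themselves are not reproduced here):

    import numpy as np

    # Hypothetical per-event log evidences from nested sampling runs under
    # general relativity (GR) and under the massive-graviton alternative (MG).
    log_z_gr = np.array([-1001.2, -987.4, -1010.9])
    log_z_mg = np.array([-1003.0, -988.1, -1013.5])

    # Per-event log Bayes factors in favour of GR.
    log_bf = log_z_gr - log_z_mg

    # For independent observations the evidences multiply, so the combined
    # log Bayes factor is the sum of the per-event values.
    combined_log_bf = log_bf.sum()
    print("per-event log BF:", log_bf, " combined:", combined_log_bf)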

  12. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.
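
    Schematically, the heating number at a given incident energy is the energy-weighted integral of the recoil cross section spectrum; the sketch below shows that integral for a single incident group. The grid and spectrum shape are placeholders rather than library data, and the per-collision normalisation used here is only one of several conventions used by processing codes and data formats.

    import numpy as np

    # Placeholder recoil-energy grid (MeV) and differential recoil cross section
    # dsigma/dT (barn/MeV) at one incident neutron energy; toy values only.
    recoil_energy = np.linspace(0.0, 0.5, 201)
    dsigma_dT = np.exp(-recoil_energy / 0.1)

    dT = recoil_energy[1] - recoil_energy[0]
    sigma_total = np.sum(dsigma_dT) * dT                       # barn
    energy_weighted = np.sum(recoil_energy * dsigma_dT) * dT   # MeV * barn

    # Per-collision heating number (MeV per collision); schematic only, since
    # normalisation conventions differ between processing codes.
    heating_number = energy_weighted / sigma_total
    print(f"heating number ~ {heating_number:.4f} MeV per collision")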

  13. Comparative evaluation of microbial diversity and metabolite profiles in doenjang, a fermented soybean paste, during the two different industrial manufacturing processes.

    Science.gov (United States)

    Lee, Sunmin; Lee, Sarah; Singh, Digar; Oh, Ji Young; Jeon, Eun Jung; Ryu, Hyung SeoK; Lee, Dong Wan; Kim, Beom Seok; Lee, Choong Hwan

    2017-04-15

    Two different doenjang manufacturing processes, the industrial process (IP) and the modified industrial process (mIP) with specific microbial assortments, were subjected to metabolite profiling using liquid chromatography-mass spectrometry (LC-MS) and gas chromatography time-of-flight mass spectrometry (GC-TOF-MS). The multivariate analyses indicated that both primary and secondary metabolites exhibited distinct patterns according to the fermentation processes (IP and mIP). Microbial community analysis for doenjang using denaturing gradient gel electrophoresis (DGGE), exhibited that both bacteria and fungi contributed proportionally for each step in the process viz., soybean, steaming, drying, meju fermentation, cooling, brining, and aging. Further, correlation analysis indicated that Aspergillus population was linked to sugar metabolism, Bacillus spp. with that of fatty acids, whereas Tetragenococcus and Zygosaccharomyces were found associated with amino acids. These results suggest that the components and quality of doenjang are critically influenced by the microbial assortments in each process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Comparing different policy scenarios to reduce the consumption of ultra-processed foods in UK: impact on cardiovascular disease mortality using a modelling approach.

    Science.gov (United States)

    Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin

    2015-01-01

    The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. By the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods. Group 2 comprises processed culinary ingredients. Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed) foods could result in approximately 22,055 fewer CVD related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Group 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, comprising almost 13% mortality reduction. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system
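
    The minimum and maximum estimates quoted above come from a probabilistic sensitivity analysis; the Monte Carlo structure of such an analysis is sketched below. The baseline death count echoes the figure quoted above, but the distribution of the proportional reduction is an invented placeholder, not a parameter of the IMPACT Food Policy model.

    import numpy as np

    rng = np.random.default_rng(42)
    n_draws = 10000

    # Placeholder inputs: projected baseline CVD deaths in 2030 and an uncertain
    # proportional mortality reduction from halving ultra-processed food intake.
    baseline_deaths = 175_000
    relative_reduction = rng.triangular(left=0.06, mode=0.125, right=0.20,
                                        size=n_draws)

    deaths_averted = baseline_deaths * relative_reduction

    # Report the central estimate and an uncertainty interval from the draws.
    central = np.median(deaths_averted)
    low, high = np.percentile(deaths_averted, [2.5, 97.5])
    print(f"deaths averted ~ {central:,.0f} "
          f"(95% interval {low:,.0f} to {high:,.0f})")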

  15. Comparing motivational, self-regulatory and habitual processes in a computer-tailored physical activity intervention in hospital employees - protocol for the PATHS randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Dominika Kwasnicka

    2017-05-01

    Full Text Available Abstract Background Most people do not engage in sufficient physical activity to confer health benefits and to reduce risk of chronic disease. Healthcare professionals frequently provide guidance on physical activity, but often do not meet guideline levels of physical activity themselves. The main objective of this study is to develop and test the efficacy of a tailored intervention to increase healthcare professionals’ physical activity participation and quality of life, and to reduce work-related stress and absenteeism. This is the first study to compare the additive effects of three forms of a tailored intervention using different techniques from behavioural theory, which differ according to their focus on motivational, self-regulatory and/or habitual processes. Methods/Design Healthcare professionals (N = 192) will be recruited from four hospitals in Perth, Western Australia, via email lists, leaflets, and posters to participate in the four group randomised controlled trial. Participants will be randomised to one of four conditions: (1) education only (non-tailored information only), (2) education plus intervention components to enhance motivation, (3) education plus components to enhance motivation and self-regulation, and (4) education plus components to enhance motivation, self-regulation and habit formation. All intervention groups will receive a computer-tailored intervention administered via a web-based platform and will receive supporting text-messages containing tailored information, prompts and feedback relevant to each condition. All outcomes will be assessed at baseline, and at 3-month follow-up. The primary outcome assessed in this study is physical activity measured using activity monitors. Secondary outcomes include: quality of life, stress, anxiety, sleep, and absenteeism. Website engagement, retention, preferences and intervention fidelity will also be evaluated as well as potential mediators and moderators of intervention

  16. Comparing motivational, self-regulatory and habitual processes in a computer-tailored physical activity intervention in hospital employees - protocol for the PATHS randomised controlled trial.

    Science.gov (United States)

    Kwasnicka, Dominika; Vandelanotte, Corneel; Rebar, Amanda; Gardner, Benjamin; Short, Camille; Duncan, Mitch; Crook, Dawn; Hagger, Martin S

    2017-05-26

    Most people do not engage in sufficient physical activity to confer health benefits and to reduce risk of chronic disease. Healthcare professionals frequently provide guidance on physical activity, but often do not meet guideline levels of physical activity themselves. The main objective of this study is to develop and test the efficacy of a tailored intervention to increase healthcare professionals' physical activity participation and quality of life, and to reduce work-related stress and absenteeism. This is the first study to compare the additive effects of three forms of a tailored intervention using different techniques from behavioural theory, which differ according to their focus on motivational, self-regulatory and/or habitual processes. Healthcare professionals (N = 192) will be recruited from four hospitals in Perth, Western Australia, via email lists, leaflets, and posters to participate in the four group randomised controlled trial. Participants will be randomised to one of four conditions: (1) education only (non-tailored information only), (2) education plus intervention components to enhance motivation, (3) education plus components to enhance motivation and self-regulation, and (4) education plus components to enhance motivation, self-regulation and habit formation. All intervention groups will receive a computer-tailored intervention administered via a web-based platform and will receive supporting text-messages containing tailored information, prompts and feedback relevant to each condition. All outcomes will be assessed at baseline, and at 3-month follow-up. The primary outcome assessed in this study is physical activity measured using activity monitors. Secondary outcomes include: quality of life, stress, anxiety, sleep, and absenteeism. Website engagement, retention, preferences and intervention fidelity will also be evaluated as well as potential mediators and moderators of intervention effect. This is the first study to examine a tailored

  17. The Effects of Computer Simulation and Animation (CSA) on Students' Cognitive Processes: A Comparative Case Study in an Undergraduate Engineering Course

    Science.gov (United States)

    Fang, N.; Tajvidi, M.

    2018-01-01

    This study focuses on the investigation of the effects of computer simulation and animation (CSA) on students' cognitive processes in an undergraduate engineering course. The revised Bloom's taxonomy, which consists of six categories in the cognitive process domain, was employed in this study. Five of the six categories were investigated,…

  18. Recruitment of Anterior and Posterior Structures in Lexical-Semantic Processing: An fMRI Study Comparing Implicit and Explicit Tasks

    Science.gov (United States)

    Ruff, Ilana; Blumstein, Sheila E.; Myers, Emily B.; Hutchison, Emmette

    2008-01-01

    Previous studies examining explicit semantic processing have consistently shown activation of the left inferior frontal gyrus (IFG). In contrast, implicit semantic processing tasks have shown activation in posterior areas including the superior temporal gyrus (STG) and the middle temporal gyrus (MTG) with less consistent activation in the IFG.…

  19. Using potential distributions to explore environmental correlates of bat species richness in southern Africa: Effects of model selection and taxonomy

    Directory of Open Access Journals (Sweden)

    M. Corrie SCHOEMAN, F. P. D. (Woody) COTTERILL, Peter J. TAYLOR, Ara MONADJEM

    2013-06-01

    Full Text Available We tested the prediction that at coarse spatial scales, variables associated with climate, energy, and productivity hypotheses should be better predictor(s) of bat species richness than those associated with environmental heterogeneity. Distribution ranges of 64 bat species were estimated with niche-based models informed by 3629 verified museum specimens. The influence of environmental correlates on bat richness was assessed using ordinary least squares regression (OLS), simultaneous autoregressive models (SAR), conditional autoregressive models (CAR), spatial eigenvector-based filtering models (SEVM), and Classification and Regression Trees (CART). To test the assumption of stationarity, Geographically Weighted Regression (GWR) was used. Bat species richness was highest in the eastern parts of southern Africa, particularly in central Zimbabwe and along the western border of Mozambique. We found support for the predictions of both the habitat heterogeneity and climate/productivity/energy hypotheses, and as we expected, support varied among bat families and model selection. Richness patterns and predictors of Miniopteridae and Pteropodidae clearly differed from those of other bat families. Altitude range was the only independent variable that was significant in all models and it was most often the best predictor of bat richness. Standard coefficients of SAR and CAR models were similar to those of OLS models, while those of SEVM models differed. Although GWR indicated that the assumption of stationarity was violated, the CART analysis corroborated the findings of the curve-fitting models. Our results identify where additional data on current species ranges, and future conservation action and ecological work are needed [Current Zoology 59 (3): 279–293, 2013].
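
    As an illustration of the model-comparison step reported above (not the authors' code), the sketch below fits non-spatial OLS models of species richness on synthetic data and compares the heterogeneity and climate/productivity hypotheses via AIC. The variable names are assumptions; the SAR, CAR, SEVM and GWR models used in the study would additionally require a spatial-weights structure and a dedicated library such as PySAL.

        # Illustrative, non-spatial sketch: compare candidate richness models by AIC.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "altitude_range": rng.gamma(2.0, 400.0, n),   # environmental heterogeneity
            "mean_temp": rng.normal(20.0, 3.0, n),        # climate/energy
            "productivity": rng.normal(0.5, 0.15, n),     # NDVI-like productivity
        })
        df["richness"] = (5 + 0.004 * df["altitude_range"] + 0.6 * df["mean_temp"]
                          + rng.normal(0, 2, n))

        heterogeneity = smf.ols("richness ~ altitude_range", data=df).fit()
        climate = smf.ols("richness ~ mean_temp + productivity", data=df).fit()
        combined = smf.ols("richness ~ altitude_range + mean_temp + productivity",
                           data=df).fit()

        for name, model in [("heterogeneity", heterogeneity),
                            ("climate/productivity", climate),
                            ("combined", combined)]:
            print(f"{name:22s} AIC = {model.aic:8.1f}  R2 = {model.rsquared:.2f}")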

  20. A comparative analysis of waste water treatment processes in different towns in the province of Castellon (Spain); Analisis comparativo entre procesos de tratamiento de agua residual de pequeñas poblaciones de la provincia de Castellon

    Energy Technology Data Exchange (ETDEWEB)

    Ferrer, C.; Miguel, D.; Ferrer, L.; Alonso, S.; Sanguesa, I.; Basiero, A.; Bernacer, I.; Morenilla, J. J.

    2008-07-01

    The variety of waste water treatment processes installed in small waste water treatment plants (WWTPs) in the province of Castellon allows a comparative analysis of these systems according to their treatment capacity. We compare factors such as energy efficiency, the efficiency of carbonaceous organic matter removal, and the maintenance and operating costs of these plants. (Author)

  1. "That pulled the rug out from under my feet!" - adverse experiences and altered emotion processing in patients with functional neurological symptoms compared to healthy comparison subjects.

    Science.gov (United States)

    Steffen, Astrid; Fiess, Johanna; Schmidt, Roger; Rockstroh, Brigitte

    2015-06-24

    Medically unexplained movement or sensibility disorders, recently defined in DSM-5 as functional neurological symptoms (FNS), are still insufficiently understood. Stress and trauma have been addressed as relevant factors in FNS genesis, and altered emotion processing has been discussed. The present study screened different types and times of adverse experiences in childhood and adulthood in patients with FNS as well as in healthy individuals. The relationship between stress profile, aspects of emotion processing and symptom severity was examined, with the hypothesis that particularly emotional childhood adversities would have an impact on dysfunctional emotion processing as a mediator of FNS. Adverse childhood experiences (ACE), recent negative life events (LE), alexithymia, and emotion regulation style were assessed in 45 inpatients diagnosed with dissociative disorder expressing FNS, and in 45 healthy comparison subjects (HC). Patients reported more severe FNS, more (particularly emotional) ACE, and more LE than HC. FNS severity varied with emotional ACE and negative LE, and LE partially mediated the relation between ACE and FNS. Alexithymia and a suppressive emotion regulation style were stronger in patients than in HC, and alexithymia varied with FNS severity. Structural equation modeling verified partial mediation of the relationship between emotional ACE and FNS by alexithymia. Early, emotional and accumulating stress shows a substantial impact on FNS-associated emotion processing, influencing FNS. Understanding this complex interplay of stress, emotion processing and the severity of FNS is relevant not only for theoretical models but, as a consequence, also informs diagnostic and therapeutic adjustments.

  2. UHPLC-MS/MS quantification combined with chemometrics for the comparative analysis of different batches of raw and wine-processed Dipsacus asper.

    Science.gov (United States)

    Tao, Yi; Du, Yingshan; Su, Dandan; Li, Weidong; Cai, Baochang

    2017-04-01

    A rapid and sensitive ultra-high performance liquid chromatography with tandem mass spectrometry approach was established for the simultaneous determination of 4-caffeoylquinic acid, loganic acid, chlorogenic acid, loganin, 3,5-dicaffeoylquinic acid, dipsacoside B, asperosaponin VI, and sweroside in raw and wine-processed Dipsacus asper. Chloramphenicol and glycyrrhetinic acid were employed as internal standards. The proposed approach was fully validated in terms of linearity, sensitivity, precision, repeatability as well as recovery. Intra- and interassay variability for all analytes were 2.8-4.9 and 1.7-4.8%, respectively. The standard addition method determined recovery rates for each analyte (96.8-104.6%). In addition, the developed approach was applied to 20 batches of raw and wine-processed samples of Dipsacus asper. Principal component analysis and partial least squares-discriminant analysis revealed a clear separation between the raw group and wine-processed group. After wine-processing, the contents of loganic acid, chlorogenic acid, dipsacoside B, and asperosaponin VI were upregulated, while the contents of 3,5-dicaffeoylquinic acid, 4-caffeoylquinic acid, loganin, and sweroside were downregulated. Our results demonstrated that ultra-high performance liquid chromatography with tandem mass spectrometry quantification combined with chemometrics is a viable method for quality evaluation of the raw Dipsacus asper and its wine-processed products. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
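
    A minimal chemometrics sketch of the multivariate step described above: PCA followed by PLS-DA (PLS regression against a binary class label) applied to a simulated peak-area matrix standing in for the eight quantified analytes. Only the workflow mirrors the abstract; the data and the direction of the simulated content changes are invented for illustration.

        # Illustrative PCA and PLS-DA on simulated raw vs. wine-processed peak areas.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n_per_group, n_analytes = 10, 8

        raw = rng.normal(1.0, 0.1, (n_per_group, n_analytes))
        wine_processed = raw + rng.normal(0.0, 0.1, raw.shape)
        wine_processed[:, :4] += 0.3   # simulated up-regulated analytes
        wine_processed[:, 4:] -= 0.3   # simulated down-regulated analytes

        X = StandardScaler().fit_transform(np.vstack([raw, wine_processed]))
        y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = raw, 1 = wine-processed

        scores_pca = PCA(n_components=2).fit_transform(X)
        plsda = PLSRegression(n_components=2).fit(X, y)
        scores_pls = plsda.transform(X)

        print("PC1 group means   :", scores_pca[y == 0, 0].mean(), scores_pca[y == 1, 0].mean())
        print("PLS LV1 group means:", scores_pls[y == 0, 0].mean(), scores_pls[y == 1, 0].mean())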

  3. Validation of the design of small diameter pulsed columns for the process line DRA. Tests reliability compared with the industrial scale

    International Nuclear Information System (INIS)

    Leybros, J.

    2000-01-01

    As part of the Spin program related to the management of nuclear wastes, studies have been undertaken to develop partitioning processes such as the Diamex process. The process line CCBP/DRA in the Atalante facility is one of the main pieces of equipment devoted to these studies. On this line, industrial apparatus are used, but some, like pulsed columns, need to be adapted because of the specificities of the installation: limited amounts of nuclear material, gaseous waste minimization, safety, limited amounts of new extractants, etc. This article presents the comparison of two air-pulsed columns, one with a standard diameter of 25 (DN25), the other with a reduced diameter of 15 (DN15). This comparison is based on three main criteria: pulsation capability, superficial throughput and mass transfer efficiency. The overall comparison shows that a DN15 pulsed column can be considered a representative research and development tool. In particular, the study demonstrates the possibility of scaling up the results.

  4. Comparative Studies on the Roles of Linguistic Knowledge and Sentence Processing Speed in L2 Listening and Reading Comprehension in an EFL Tertiary Setting

    Science.gov (United States)

    Oh, Eunjou

    2016-01-01

    The present study investigated the relative contributions of vocabulary knowledge, grammar knowledge, and processing speed to second language listening and reading comprehension. Seventy-five Korean university students participated in the study. Results showed the three tested components had a significant portion of shared variance in explaining…

  5. Comparative study of the break in process of post doped and sol–gel high temperature proton exchange membrane fuel cells

    DEFF Research Database (Denmark)

    Vang, Jakob Rabjerg; Andreasen, Søren Juhl; Araya, Samuel Simon

    2014-01-01

    In this paper six High Temperature PEM (HTPEM) MEAs from two manufacturers have been tested. The MEAs are three Dapozol 77 from Danish Power Systems (DPS) with varying electrode composition and two Celtec P2100 and one Celtec P1000 from BASF. The break-in process of the MEAs has been monitored us...

  6. Study and comparative evaluation of radiation lesions and recovery processes in the large intestine affected by radionuclides of the rare earth group

    International Nuclear Information System (INIS)

    Lavrent'ev, L.N.

    1975-01-01

    Prolonged chronic irradiation of rat large intestine tissues with various types of β-radiators revealed a definite dependence between the tissue doses, the damaging effect and the nature of recovery. Alterations in blood vessels and inclusions of microbial flora in the necrotized sites of the tunica mucosa were partly responsible for prolonging the inflammatory and recovery processes.

  7. A Comparative, Holistic, Multiple-Case Study of the Implementation of the Strategic Thinking Protocol© and Traditional Strategic Planning Processes at a Southeastern University

    Science.gov (United States)

    Robinson, Deborah J.

    2012-01-01

    This study explores the strategic thinking and strategic planning efforts in a department, college and university in the Southeastern United States. The goal of the study was to identify elements of strategic planning processes that meet the unique organizational features and complexities of a higher education institution. The study employed a…

  8. Comparing the Effectiveness of Processing Instruction and Production-Based Instruction on L2 Grammar Learning: The Role of Explicit Information

    Science.gov (United States)

    Soruç, Adem; Qin, Jingjing; Kim, YouJin

    2017-01-01

    This article reports on a study that investigated whether processing instruction(PI) or production-based instruction (PBI) is more effective for the teaching of regular past simple verb forms in English. In addition, this study examined whether explicit grammatical information (EI) mediates the effectiveness of PI or PBI. A total of 194 Turkish…

  9. Headspace fingerprinting as an untargeted approach to compare novel and traditional processing technologies: A case-study on orange juice pasteurisation

    NARCIS (Netherlands)

    Vervoort, L.; Grauwet, T.; Kebede, T.; Plancken, van der I.; Timmermans, R.A.H.; Hendrickx, M.; Loey, van A.

    2012-01-01

    As a rule, previous studies have generally addressed the comparison of novel and traditional processing technologies by a targeted approach, in the sense that only the impact on specific quality attributes is investigated. By contrast, this work focused on an untargeted strategy, in order to take

  10. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Science.gov (United States)

    2014-01-01

    Research in psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movements is formed through the visual-system pathways called the dorsal and ventral processing streams. The ventral processing stream is associated with form information extraction, whereas the dorsal processing stream provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty into the form pathway by applying a Gabor-based supervised object recognition method. It creates more biological plausibility along with similarity to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, creating more robustness in the recognition process. The interaction of these pathways is intriguing and has been considered in many studies across various fields. Here, the interaction of the pathways has been investigated to obtain more appropriate results. An extreme learning machine (ELM) has been employed as the classification unit of this model, since it retains the main properties of artificial neural networks while substantially reducing the difficulty of long training times. Two different configurations, interactions using a synergetic neural network and ELM, are compared in terms of accuracy and compatibility. PMID:25276860
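
    Because the extreme learning machine is central to the classification unit described above, the following hedged sketch shows the standard ELM recipe on synthetic data: hidden-layer weights are drawn at random and kept fixed, and only the output weights are solved in closed form via a pseudo-inverse, which is what makes training fast. It is a generic ELM, not the configuration used in the cited study.

        # Minimal extreme learning machine (ELM) classifier on synthetic data.
        import numpy as np

        rng = np.random.default_rng(7)

        def elm_fit(X, y_onehot, n_hidden=50):
            W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
            b = rng.normal(size=n_hidden)                 # random biases (fixed)
            H = np.tanh(X @ W + b)                        # hidden-layer activations
            beta = np.linalg.pinv(H) @ y_onehot           # closed-form output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

        # Two synthetic "motion pattern" classes in a 10-D feature space.
        X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(1.5, 1, (100, 10))])
        y = np.repeat([0, 1], 100)
        y_onehot = np.eye(2)[y]

        W, b, beta = elm_fit(X, y_onehot)
        print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())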

  11. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Directory of Open Access Journals (Sweden)

    Bardia Yousefi

    2014-01-01

    Full Text Available Research in psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movements is formed through the visual-system pathways called the dorsal and ventral processing streams. The ventral processing stream is associated with form information extraction, whereas the dorsal processing stream provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty into the form pathway by applying a Gabor-based supervised object recognition method. It creates more biological plausibility along with similarity to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, creating more robustness in the recognition process. The interaction of these pathways is intriguing and has been considered in many studies across various fields. Here, the interaction of the pathways has been investigated to obtain more appropriate results. An extreme learning machine (ELM) has been employed as the classification unit of this model, since it retains the main properties of artificial neural networks while substantially reducing the difficulty of long training times. Two different configurations, interactions using a synergetic neural network and ELM, are compared in terms of accuracy and compatibility.

  12. Process measures or patient reported experience measures (PREMs) for comparing performance across providers? A study of measures related to access and continuity in Swedish primary care.

    Science.gov (United States)

    Glenngård, Anna H; Anell, Anders

    2018-01-01

    Aim To study (a) the covariation between patient reported experience measures (PREMs) and registered process measures of access and continuity when ranking providers in a primary care setting, and (b) whether registered process measures or PREMs provided more or less information about potential linkages between levels of access and continuity and explaining variables. Access and continuity are important objectives in primary care. They can be measured through registered process measures or PREMs. These measures do not necessarily converge in terms of outcomes. Patient views are affected by factors not necessarily reflecting quality of services. Results from surveys are often uncertain due to low response rates, particularly in vulnerable groups. The quality of process measures, on the other hand, may be influenced by registration practices and is often easier to manipulate. With increased transparency and use of quality measures for management and governance purposes, knowledge about the pros and cons of using different measures to assess performance across providers is important. Four regression models were developed with registered process measures and PREMs of access and continuity as dependent variables. Independent variables were characteristics of providers as well as geographical location and degree of competition facing providers. Data were taken from two large Swedish county councils. Findings Although ranking of providers is sensitive to the measure used, the results suggest that providers performing well with respect to one measure also tended to perform well with respect to the other. As process measures are easier and quicker to collect, they may be looked upon as the preferred option. PREMs were better than process measures when exploring factors that contributed to variation in performance across providers in our study; however, if the purpose of comparison is continuous learning and development of services, a combination of PREMs and
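
    A small illustration of aim (a) above, using made-up provider-level data: the covariation between a ranking based on a registered process measure and a ranking based on a PREM can be summarised with a Spearman rank correlation. The variable names and effect size are assumptions, not results from the Swedish data.

        # Illustrative check of rank covariation between a process measure and a PREM.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)
        n_providers = 40

        # e.g. share of calls answered the same day (process measure)
        process_measure = rng.uniform(0.5, 1.0, n_providers)
        # patient-reported access score, loosely related to the process measure
        prem_score = 0.7 * process_measure + rng.normal(0, 0.08, n_providers)

        rho, p_value = spearmanr(process_measure, prem_score)
        print(f"Spearman rho between provider rankings: {rho:.2f} (p = {p_value:.3f})")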

  13. Examination of the Effects of Dimensionality on Cognitive Processing in Science: A Computational Modeling Experiment Comparing Online Laboratory Simulations and Serious Educational Games

    Science.gov (United States)

    Lamb, Richard L.

    2016-01-01

    Within the last 10 years, new tools for assisting in the teaching and learning of academic skills and content within the context of science have arisen. These new tools include multiple types of computer software and hardware to include (video) games. The purpose of this study was to examine and compare the effect of computer learning games in the…

  14. Factors affecting the decision-making process when choosing an event destination: A comparative approach between Vilamoura (Portugal) and Marbella (Spain)

    Directory of Open Access Journals (Sweden)

    Julie Houdement

    2017-06-01

    Full Text Available Business travel is nowadays a key component of the tourism industry and an important instrument for reducing seasonality. The literature has identified several attributes that affect the decision-making process when choosing a destination to hold an event. The main objective of this research is to determine their importance and how they influence the decision-making process. Vilamoura in Portugal and Marbella in Spain are the destinations under analysis, as they are important seaside destinations where business travel has contributed to a successful meeting industry. In order to achieve the study’s aim, a qualitative methodology based on semi-structured interviews with both event organisers and suppliers has been conducted. The findings confirm the hypothesis that underpinned the study, demonstrating that destination image is the main determining site-selection factor. This investigation, proposed as an exploratory examination for further research, could constitute a useful resource for event professionals to improve their destination promotion and their positioning.

  15. Comparative analysis of the calcretization process in the Marilia formations (Bauru group - Brasil) and Mercedes ( Paysandu group - Uruguay), Upper Cretaceous of the Parana basin

    International Nuclear Information System (INIS)

    Veroslavsky, G.; Etchebehere, M.; Sad, A.; Fulfaro, J.

    1998-01-01

    Pedogenic and non-pedogenic calcrete facies are very common features of the Marilia (Brazil) and Mercedes (Uruguay) formations in the Parana Basin. The non-pedogenic ones constitute massive limestone facies that have recently been interpreted as groundwater calcretes. These limestones are exploited in both countries to supply raw materials for Portland cement and soil conditioners. The two occurrences differ, however, in the origin and age of the calcretization phenomena. In Uruguay, the calcretization process seems to be related to band formation; field relationships and the fossil assemblage point to a Paleocene (or later) age for the calcretization. In Brazilian territory, the groundwater calcretes are supposed to be of Upper Cretaceous age due to the presence of dinosaurs scattered through the Bauru Group, including siliciclastic beds below and above the calcretes. The authors assume that the calcretization processes are similar in both countries (host rocks, intensity, size, textures, geometries and economic potential). The main difference is in the age of the calcretization. (author)

  16. Comparative study of the variables for determining unit processing cost of irradiated food products in developing countries : case study of Ghana

    International Nuclear Information System (INIS)

    Banini, G.K; Emi-Reynolds, G.; Kassapu, S.N.

    1997-01-01

    A method for estimating the unit cost of gamma-treated food products in a developing country like Ghana is presented. The method employs the cost of the cobalt source requirement, capital and operating costs, dose requirements, etc., and relates these variables to various annual throughputs at a gamma processing facility. In situations where the costs of foreign components or devices are required, the assumptions have been based on those of Kunstadt and Steeves; otherwise, the prevailing conditions in Ghana have been used. The study reveals that the unit processing cost for gamma-treated foods in such a facility is between 8.0 and 147.2 US dollars per tonne. (author). 9 refs., 4 figs

  17. A comparative theoretical study of exciton-dissociation and charge-recombination processes in oligothiophene/fullerene and oligothiophene/perylenediimide complexes for organic solar cells

    KAUST Repository

    Yi, Yuanping

    2011-01-01

    The exciton-dissociation and charge-recombination processes in donor-acceptor complexes found in α-sexithienyl/C60 and α-sexithienyl/perylenetetracarboxydiimide (PDI) solar cells are investigated by means of quantum-chemical methods. The electronic couplings and exciton-dissociation and charge-recombination rates have been evaluated for various configurations of the complexes. The results suggest that the decay of the lowest charge-transfer state to the ground state in the PDI-based devices: (i) is faster than that in the fullerene-based devices and (ii) in most cases, can compete with the dissociation of the charge-transfer state into mobile charge carriers. This faster charge-recombination process is consistent with the lower performance observed experimentally for the devices using PDI derivatives as the acceptor. © 2011 The Royal Society of Chemistry.

  18. On the analysis of international migration processes. Macro-quantitative perspectives and a comparative case study on the situation of the Turkish Community in Austria

    OpenAIRE

    Tausch, Arno

    2010-01-01

    The present article first presents a German-language summary of recent quantitative studies by the author and his associates on global development since the end of Communism in up to 175 nations of the world, using 26 predictor variables to evaluate the determinants of 30 processes of development on a global scale. As correctly predicted by quantitative dependency and world system research of the 1980s and 1990s, core capital penetration (MNC penetration) has very significant negativ...

  19. Challenges and insights for situated language processing: Comment on "Towards a computational comparative neuroprimatology: Framing the language-ready brain" by Michael A. Arbib

    Science.gov (United States)

    Knoeferle, Pia

    2016-03-01

    In his review article [19], Arbib outlines an ambitious research agenda: to accommodate within a unified framework the evolution, the development, and the processing of language in natural settings (implicating other systems such as vision). He does so with neuro-computationally explicit modeling in mind [1,2] and inspired by research on the mirror neuron system in primates. Similar research questions have received substantial attention also among other scientists [3,4,12].

  20. COMPARATIVE ANALYSIS OF MECHANICAL CHARACTERISTICS OF THE STEELS, APPLIED FOR PRODUCTION OF CHIPPING KNIVES, RECEIVED BY METHODS OF THERMAL AND THERMOMECHANICAL PROCESSINGS

    Directory of Open Access Journals (Sweden)

    A. V. Alifanov

    2014-01-01

    Full Text Available Results of investigations of the chemical composition of chipping knives from foreign and domestic producers are given in the article. Results of mechanical tests of samples made from the various tool steels and subjected to heat treatment (tempering) or to thermomechanical processing with low tempering are given, with determination of ultimate tensile strength (temporary resistance), percentage elongation, ultimate strength in cross bending, and bend. Recommendations on the use of heat treatment (TO) and thermomechanical processing (TMO) for the investigated steels are given.

  1. Toward practical all-solid-state lithium-ion batteries with high energy density and safety: Comparative study for electrodes fabricated by dry- and slurry-mixing processes

    Science.gov (United States)

    Nam, Young Jin; Oh, Dae Yang; Jung, Sung Hoo; Jung, Yoon Seok

    2018-01-01

    Owing to their potential for greater safety, higher energy density, and scalable fabrication, bulk-type all-solid-state lithium-ion batteries (ASLBs) employing deformable sulfide superionic conductors are considered highly promising for applications in battery electric vehicles. While fabrication of sheet-type electrodes is imperative from the practical point of view, reports on relevant research are scarce. This might be attributable to issues that complicate the slurry-based fabrication process and/or issues with ionic contacts and percolation. In this work, we systematically investigate the electrochemical performance of conventional dry-mixed electrodes and wet-slurry fabricated electrodes for ASLBs, by varying the different fractions of solid electrolytes and the mass loading. This information calls for a need to develop well-designed electrodes with better ionic contacts and to improve the ionic conductivity of solid electrolytes. As a scalable proof-of-concept to achieve better ionic contacts, a premixing process for active materials and solid electrolytes is demonstrated to significantly improve electrochemical performance. Pouch-type 80 × 60 mm2 all-solid-state LiNi0.6Co0.2Mn0.2O2/graphite full-cells fabricated by the slurry process show high cell-based energy density (184 W h kg-1 and 432 W h L-1). For the first time, their excellent safety is also demonstrated by simple tests (cutting with scissors and heating at 110 °C).

  2. New soil composition data for Europe and Australia: demonstrating comparability, identifying continental-scale processes and learning lessons for global geochemical mapping.

    Science.gov (United States)

    Reimann, Clemens; de Caritat, Patrice

    2012-02-01

    New geochemical data from two continental-scale soil surveys in Europe and Australia are compared. Internal project standards were exchanged to assess comparability of analytical results. The total concentration of 26 oxides/elements (Al2O3, As, Ba, CaO, Ce, Co, Cr, Fe2O3, Ga, K2O, MgO, MnO, Na2O, Nb, Ni, P2O5, Pb, Rb, SiO2, Sr, Th, TiO2, V, Y, Zn, and Zr), Loss On Ignition (LOI) and pH are demonstrated to be comparable. Additionally, directly comparable data for 14 elements in an aqua regia extraction (Ag, As, Bi, Cd, Ce, Co, Cs, Cu, Fe, La, Li, Mn, Mo, and Pb) are provided for both continents. Median soil compositions are close, though generally Australian soils are depleted in all elements with the exception of SiO2 and Zr. This is interpreted to reflect the generally longer and, in places, more intense weathering in Australia. Calculation of the Chemical Index of Alteration (CIA) gives a median value of 72% for Australia compared to 60% for Europe. Element concentrations vary over 3 (and up to 5) orders of magnitude. Several elements (total As and Ni; aqua regia As, Co, Bi, Li, Pb) have a lower element concentration by a factor of 2-3 in the soils of northern Europe compared to southern Europe. The break in concentration coincides with the maximum extent of the last glaciation. The younger soils of northern Europe are more similar to the Australian soils than the older soils from southern Europe. In Australia, the central region with especially high SiO2 concentrations is commonly depleted in many elements. The new data define the natural background variation for two continents on both hemispheres based on real data. Judging from the experience of these two continental surveys, it can be concluded that analytical quality is the key requirement for the success of global geochemical mapping. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Landscape and participation: construction of a PhD research problem and an analysis method. Towards the comparative analysis of participatory processes of landscape management projects design on a local scale in the Walloon region (Belgium)

    OpenAIRE

    Droeven, Emilie

    2007-01-01

    A preliminary reflection on the definition of a PhD research problem concerning the concepts of participation, landscape and project led the student to take an interest in the participatory processes of landscape management project design, and in the inhabitants' landscape representations. The method includes the comparative analysis of local processes of project design, and the direct observation of two Walloon landscape management project designs (investigation conducted with stakeholders implied i...

  4. Advanced Nursing Process quality: Comparing the International Classification for Nursing Practice (ICNP) with the NANDA-International (NANDA-I) and Nursing Interventions Classification (NIC).

    Science.gov (United States)

    Rabelo-Silva, Eneida Rejane; Dantas Cavalcanti, Ana Carla; Ramos Goulart Caldas, Maria Cristina; Lucena, Amália de Fátima; Almeida, Miriam de Abreu; Linch, Graciele Fernanda da Costa; da Silva, Marcos Barragan; Müller-Staub, Maria

    2017-02-01

    To assess the quality of the advanced nursing process in nursing documentation in two hospitals. Various standardised terminologies are employed by nurses worldwide, whether for teaching, research or patient care. These systems can improve the quality of nursing records, enable care continuity, consistency in written communication and enhance safety for patients and providers alike. Cross-sectional study. A total of 138 records from two facilities (69 records from each facility) were analysed, one using the NANDA-International and Nursing Interventions Classification terminology (Centre 1) and one the International Classification for Nursing Practice (Centre 2), by means of the Quality of Diagnoses, Interventions, and Outcomes instrument. Quality of Diagnoses, Interventions, and Outcomes scores range from 0-58 points. Nursing records were dated 2012-2013 for Centre 1 and 2010-2011 for Centre 2. Centre 1 had a Quality of Diagnoses, Interventions, and Outcomes score of 35·46 (±6·45), whereas Centre 2 had a Quality of Diagnoses, Interventions, and Outcomes score of 31·72 (±4·62), a statistically significant difference. Centre 2 performed better in the 'Nursing Diagnoses as Process' dimension, whereas in the 'Nursing Diagnoses as Product', 'Nursing Interventions' and 'Nursing Outcomes' dimensions, Centre 1 exhibited superior performance; acceptable reliability values were obtained for both centres, except for the 'Nursing Interventions' domain in Centre 1 and the 'Nursing Diagnoses as Process' and 'Nursing Diagnoses as Product' domains in Centre 2. The quality of nursing documentation was superior at Centre 1, although both facilities demonstrated moderate scores considering the maximum potential score of 58 points. Reliability analyses showed satisfactory results for both standardised terminologies. Nursing leaders should use a validated instrument to investigate the quality of nursing records after implementation of standardised terminologies. © 2016 John Wiley & Sons Ltd.

  5. The Connection between Persistent, Disinfectant-Resistant Listeria monocytogenes Strains from Two Geographically Separate Iberian Pork Processing Plants: Evidence from Comparative Genome Analysis.

    Science.gov (United States)

    Ortiz, Sagrario; López-Alonso, Victoria; Rodríguez, Pablo; Martínez-Suárez, Joaquín V

    2015-10-23

    The aim of this study was to investigate the basis of the putative persistence of Listeria monocytogenes in a new industrial facility dedicated to the processing of ready-to-eat (RTE) Iberian pork products. Quaternary ammonium compounds, which included benzalkonium chloride (BAC), were repeatedly used as surface disinfectants in the processing plant. Clean and disinfected surfaces were sampled to evaluate if resistance to disinfectants was associated with persistence. Of the 14 isolates obtained from product contact and non-product contact surfaces, only five different pulsed-field gel electrophoresis (PFGE) types were identified during the 27-month study period. Two of these PFGE types (S1 and S10-1) were previously identified to be persistent and BAC-resistant (BAC(r)) strains in a geographically separate slaughterhouse belonging to the same company. The remaining three PFGE types, which were first identified in this study, were also BAC(r). Whole-genome sequencing and in silico multilocus sequence typing (MLST) analysis of five BAC(r) isolates of the different PFGE types identified in this study showed that the isolate of the S1 PFGE type belonged to MLST sequence type 31 (ST31), a low-virulence type characterized by mutations in the inlA and prfA genes. The isolates of the remaining four PFGE types were found to belong to MLST ST121, a persistent type that has been isolated in several countries. The ST121 strains contained the BAC resistance transposon Tn6188. The disinfection-resistant L. monocytogenes population in this RTE pork product plant comprised two distinct genotypes with different multidrug resistance phenotypes. This work offers insight into the L. monocytogenes subtypes associated with persistence in food processing environments. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  6. Comparative study of oxidation of dye-Reactive Black B by different advanced oxidation processes: Fenton, electro-Fenton and photo-Fenton

    International Nuclear Information System (INIS)

    Huang Yaohui; Huang Yifong; Chang Poshun; Chen Chuhyung

    2008-01-01

    This study makes a comparison between photo-Fenton and a novel electro-Fenton process called Fered-Fenton to study the mineralization of 10,000 mg/L of dye-Reactive Black B (RBB) aqueous solution, which was chosen as the model dye contaminant. Results indicate that the traditional Fenton process only yields 70% mineralization. This result can be improved by using Fered-Fenton to yield 93% mineralization, resulting from the action of ferrous ion regenerated on the cathode. Furthermore, photo-Fenton allows a faster and more complete destruction of dye solutions and, as a result of the action of ferrous ion regenerated by UV irradiation, yields more than 98% mineralization. In all treatments, the RBB rapidly decays into carboxylic acid intermediates. The major intermediates found are formic acid and oxalic acid. This study finds that formic acid can be completely mineralized by photo-Fenton, but its destruction is problematic using the Fenton method. Oxalic acid is much more difficult to treat than other organic acids; it can be further mineralized with the use of the Fered-Fenton process.

  7. Structure and Magnetic Properties of Bi5Ti3FeO15 Ceramics Prepared by Sintering, Mechanical Activation and Edamm Process. A Comparative Study

    Directory of Open Access Journals (Sweden)

    Jartych E.

    2016-06-01

    Full Text Available Three different methods were used to obtain Bi5Ti3FeO15 ceramics, i.e. solid-state sintering, mechanical activation (MA) with subsequent thermal treatment, and electrical discharge assisted mechanical milling (EDAMM). The structure and magnetic properties of the produced Bi5Ti3FeO15 samples were characterized using X-ray diffraction and Mössbauer spectroscopy. The purest Bi5Ti3FeO15 ceramics was obtained by the standard solid-state sintering method. Mechanical milling methods are attractive because the Bi5Ti3FeO15 compound may be formed at lower temperature or without subsequent thermal treatment. In the case of the EDAMM process, the processing time is also significantly shorter in comparison with the solid-state sintering method. As revealed by Mössbauer spectroscopy, at room temperature the Bi5Ti3FeO15 ceramics produced by the various methods are in a paramagnetic state.

  8. a comparative study of models for correlated binary data with ...

    African Journals Online (AJOL)

    Preferred Customer

    significance. Next, several subsets of predictors are compared through the AIC criterion, whenever applicable. Key words/phrases: Beta-binomial, bootstrap, correlated binary data, model selection, overdispersion.
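
    The comparison sketched in this record can be illustrated as follows (a generic example, not the paper's analysis): a binomial and a beta-binomial model are fitted to clustered binary data by maximum likelihood and compared with AIC, which penalises the extra overdispersion parameter of the beta-binomial.

        # Illustrative AIC comparison of binomial vs. beta-binomial fits to clustered data.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import betaln, gammaln

        rng = np.random.default_rng(11)
        n_clusters, cluster_size = 60, 12
        p_i = rng.beta(4, 6, n_clusters)                 # cluster-specific success probabilities
        successes = rng.binomial(cluster_size, p_i)      # correlated binary data, per cluster
        n = np.full(n_clusters, cluster_size)

        def log_choose(n, k):
            return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

        def nll_binomial(params):
            p = 1 / (1 + np.exp(-params[0]))             # single common probability
            return -np.sum(log_choose(n, successes)
                           + successes * np.log(p) + (n - successes) * np.log(1 - p))

        def nll_betabinomial(params):
            a, b = np.exp(params)                        # keep both shape parameters positive
            return -np.sum(log_choose(n, successes)
                           + betaln(successes + a, n - successes + b) - betaln(a, b))

        fit_bin = minimize(nll_binomial, x0=[0.0])
        fit_bb = minimize(nll_betabinomial, x0=[0.0, 0.0])

        aic_bin = 2 * 1 + 2 * fit_bin.fun
        aic_bb = 2 * 2 + 2 * fit_bb.fun
        print(f"AIC binomial      : {aic_bin:.1f}")
        print(f"AIC beta-binomial : {aic_bb:.1f}  (lower AIC => preferred model)")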

  9. Model Selection for and Partial-Wave Analysis of a Five-Pion Final State at the COMPASS Experiment at CERN

    CERN Document Server

    AUTHOR|(CDS)2073723; Mallot, Gerhard

    The light-meson spectrum is an important ingredient to understand quantum chromodynamics, the theory of strong interaction at low energies, where quarks and gluons are confined into hadrons. However, measuring this spectrum experimentally is very challenging, due to the large number of overlapping resonances it contains. To disentangle this complicated spectrum, partial-wave techniques are used. At the COMPASS experiment at CERN, the light-meson spectrum is studied in diffractive dissociation reactions. One such reaction, π- + p→π- π+ π- π+ π- + p, puts the analysis methodology to the test, because the large number of final-state particles requires a dedicated model-selection procedure and introduces a complicated background situation. In this thesis, such a model-selection procedure is developed and verified on simulated events. In addition to finding a suitable model, it can be used to assess the reliability of the results. The model-selection procedure is then successfully applied t...

  10. Comparing coagulation activity of Selaginella tamariscina before and after stir-frying process and determining the possible active constituents based on compositional variation.

    Science.gov (United States)

    Zhang, Qian; Wang, Ya-Li; Gao, Die; Cai, Liang; Yang, Yi-Yao; Hu, Yuan-Jia; Yang, Feng-Qing; Chen, Hua; Xia, Zhi-Ning

    2018-12-01

    Selaginella tamariscina (P. Beauv.) Spring (Selaginellaceae) (ST) has been widely used in China as a medicine for improving blood circulation. However, its processed product, S. tamariscina carbonisatus (STC), possesses the opposite, haemostatic activity. To comprehensively evaluate the activity of ST and STC on the physiological coagulation system of rats, and seek potential active substances accounting for the activity transformation of ST during processing. The 75% methanol extracts of the whole grass (fine powder) of ST and STC were prepared, respectively. Male Sprague-Dawley rats were randomly divided into five groups: control group, model group, model + ST group, model + STC group and positive control group (model + Yunnanbaiyao). The duration of intragastric administration was 72 h at 12 h intervals. Haemorheology parameters were measured using an LB-2 A cone-plate viscometer and the existing classic methods, respectively. An SC40 semi-automatic coagulation analyzer was employed to determine coagulation indices. Meanwhile, HPLC and LC-MS were applied for chemical analyses of ST and STC extracts. STC shortened tail-bleeding time, increased whole blood viscosity (WBV) and plasma viscosity (PV), decreased erythrocyte sedimentation rate (ESR), reduced activated partial thromboplastin time (APTT) and increased the fibrinogen (FIB) content in the plasma of bleeding model rats. Although ST could shorten APTT and TT, the FIB content was significantly decreased by ST. Dihydrocaffeic acid with increased content in STC vs. ST showed haemostatic activity by promoting the platelet aggregation induced by collagen and TRAP-6, and reducing APTT and PT significantly at a concentration of 171.7 μM in vitro. Amentoflavone with reduced content in STC vs. ST inhibited ADP- and AA-induced platelet aggregation significantly at a concentration of 40.7 μM. As the processed product of ST, STC showed strong haemostatic activity on bleeding rat through regulating

  11. Examination of the Effects of Dimensionality on Cognitive Processing in Science: A Computational Modeling Experiment Comparing Online Laboratory Simulations and Serious Educational Games

    Science.gov (United States)

    Lamb, Richard L.

    2016-02-01

    Within the last 10 years, new tools for assisting in the teaching and learning of academic skills and content within the context of science have arisen. These new tools include multiple types of computer software and hardware to include (video) games. The purpose of this study was to examine and compare the effect of computer learning games in the form of three-dimensional serious educational games, two-dimensional online laboratories, and traditional lecture-based instruction in the context of student content learning in science. In particular, this study examines the impact of dimensionality, or the ability to move along the X-, Y-, and Z-axis in the games. Study subjects (N = 551) were randomly selected using a stratified sampling technique. Independent strata subsamples were developed based upon the conditions of serious educational games, online laboratories, and lecture. The study also computationally models a potential mechanism of action and compares two- and three-dimensional learning environments. F test results suggest a significant difference for the main effect of condition across the factor of content gain score with a large effect. Overall, comparisons using computational models suggest that three-dimensional serious educational games increase the level of success in learning as measured with content examinations through greater recruitment and attributional retraining of cognitive systems. The study supports assertions in the literature that the use of games in higher dimensions (i.e., three-dimensional versus two-dimensional) helps to increase student understanding of science concepts.

  12. DEFINING THE EFFECTIVENESS OF FACTORS IN PROCESS OF DRYING INDUSTRIAL BAKERS YEAST BY USING TAGUCHI METHOD AND REGRESSION ANALYSIS, AND COMPARING THE RESULTS

    Directory of Open Access Journals (Sweden)

    Semra Boran

    2007-09-01

    Full Text Available The Taguchi Method and Regression Analysis have widespread applications in statistical research. It can be said that the Taguchi Method is one of the most frequently used methods, especially in optimization problems, but applications of this method are not common in the food industry. In this study, optimal operating parameters were determined for an industrial-size fluidized bed dryer by using the Taguchi method. Then the effects of the operating parameters on the activity value (the quality characteristic of this problem) were calculated by regression analysis. Finally, the results of the two methods were compared. To summarise, the average activity value was found to be 660 for the 400 kg loading and the average drying time 26 minutes by using the factors and levels taken from the application of the Taguchi Method, whereas in normal conditions (with 600 kg loading) the average activity value was found to be 630 and the drying time 28 minutes. The Taguchi Method application caused a 15% rise in activity value.
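
    The two analyses mentioned above can be sketched on a toy orthogonal-array experiment as follows; the factor names, levels and activity values are invented, not the plant data from the study. The Taguchi step computes a larger-is-better signal-to-noise ratio per run, and the regression step fits ordinary least squares to the replicate-level activity values.

        # Illustrative Taguchi S/N ratios plus a regression fit on a toy L4 design.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # L4 orthogonal array: three two-level factors (coded -1/+1), two replicates per run.
        runs = pd.DataFrame({
            "air_temp": [-1, -1, 1, 1],
            "air_flow": [-1,  1, -1, 1],
            "load":     [-1,  1, 1, -1],
        })
        replicates = np.array([[610, 620], [640, 650], [655, 665], [625, 635]], dtype=float)

        # Taguchi larger-is-better S/N ratio per run: -10 * log10(mean(1 / y^2)).
        runs["sn_ratio"] = -10 * np.log10((1.0 / replicates**2).mean(axis=1))
        print(runs)

        # Regression analysis on the replicate-level data (8 observations).
        long = runs.loc[runs.index.repeat(2), ["air_temp", "air_flow", "load"]].copy()
        long = long.reset_index(drop=True)
        long["activity"] = replicates.ravel()
        print(smf.ols("activity ~ air_temp + air_flow + load", data=long).fit().params)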

  13. Comparative study of the application of microcurrent and AsGa 904 nm laser radiation in the process of repair after calvaria bone excision in rats

    International Nuclear Information System (INIS)

    Mendonça, J S; Neves, L M G; Esquisatto, M A M; Mendonça, F A S; Santos, G M T

    2013-01-01

    This study evaluated the effects of microcurrent stimulation (10 μA/5 min) and 904 nm GaAs laser irradiation (3 J cm−2 for 69 s/day) on excisional lesions created in the calvaria bone of Wistar rats. The results showed significant responses in the reduction of inflammatory cells and an increase in the number of new blood vessels, the number of fibroblasts and the deposition of birefringent collagen fibers when these data were compared with those of samples of the untreated lesions. Both applications, microcurrent and laser at 904 nm, favored tissue repair in the region of bone excisions during the study period, and these techniques can be used as coadjuvants in the repair of bone tissue. (paper)

  14. Using concept mapping in the knowledge-to-action process to compare stakeholder opinions on barriers to use of cancer screening among South Asians.

    Science.gov (United States)

    Lobb, Rebecca; Pinto, Andrew D; Lofters, Aisha

    2013-03-23

    Using the knowledge-to-action (KTA) process, this study examined barriers to use of evidence-based interventions to improve early detection of cancer among South Asians from the perspective of multiple stakeholders. In 2011, we used concept mapping with South Asian residents, and representatives from health service and community service organizations in the region of Peel Ontario. As part of concept mapping procedures, brainstorming sessions were conducted with stakeholders (n = 53) to identify barriers to cancer screening among South Asians. Participants (n = 46) sorted barriers into groups, and rated barriers from lowest (1) to highest (6) in terms of importance for use of mammograms, Pap tests and fecal occult blood tests, and how feasible it would be to address them. Multi-dimensional scaling, cluster analysis, and descriptive statistics were used to analyze the data. A total of 45 unique barriers to use of mammograms, Pap tests, and fecal occult blood tests among South Asians were classified into seven clusters using concept mapping procedures: patient's beliefs, fears, lack of social support; health system; limited knowledge among residents; limited knowledge among physicians; health education programs; ethno-cultural discordance with the health system; and cost. Overall, the top three ranked clusters of barriers were 'limited knowledge among residents,' 'ethno-cultural discordance,' and 'health education programs' across surveys. Only residents ranked 'cost' second in importance for fecal occult blood testing, and stakeholders from health service organizations ranked 'limited knowledge among physicians' third for the feasibility survey. Stakeholders from health services organizations ranked 'limited knowledge among physicians' fourth for all other surveys, but this cluster consistently ranked lowest among residents. The limited reach of cancer control programs to racial and ethnic minority groups is a critical implementation issue that requires attention
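
    The concept-mapping computations described above (sorting, multidimensional scaling, cluster analysis) can be sketched as follows on simulated sorts; the statements, pile counts and cluster number are assumptions, and only the procedure follows the abstract.

        # Illustrative concept-mapping pipeline: co-sort similarity -> MDS -> clustering.
        import numpy as np
        from sklearn.manifold import MDS
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(5)
        n_statements, n_sorters = 45, 46

        # Each sorter assigns every statement to one of a handful of piles.
        sorts = rng.integers(0, 7, size=(n_sorters, n_statements))

        # Similarity = number of sorters who put a pair of statements in the same pile.
        co_occurrence = sum((s[:, None] == s[None, :]).astype(int) for s in sorts)
        distance = 1.0 - co_occurrence / n_sorters
        np.fill_diagonal(distance, 0.0)

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(distance)
        clusters = fcluster(linkage(squareform(distance, checks=False), method="ward"),
                            t=7, criterion="maxclust")

        print("2-D MDS coordinates shape:", coords.shape)
        print("cluster sizes:", np.bincount(clusters)[1:])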

  15. Comparative study of water and ammonia rinsing processes of potassium fluoride-treated Cu(In,Ga)Se2 thin film solar cells

    Science.gov (United States)

    Khatri, Ishwor; Shudo, Kosuke; Matsuura, Junpei; Sugiyama, Mutsumi; Nakada, Tokio

    2017-08-01

    In this work, potassium fluoride (KF)-treated Cu(In,Ga)Se2 (CIGS) thin films were rinsed in ammonia and water solutions before buffer layer (CdS) deposition, and the effects of rinsing on photovoltaic properties were investigated. X-ray photoelectron spectroscopy (XPS) and secondary ion mass spectrometry (SIMS) measurements revealed that sodium atoms out-diffused to the surface region during KF deposition. Water and ammonia rinsing of the KF-treated CIGS thin films reduced the alkali metal content at the surface. However, sodium in the Cu-depleted surface layer remained at a high concentration, suggesting the occupation of Cu vacancies by sodium atoms. On the other hand, ammonia rinsing removed the Cu-poor region from the surfaces of KF-treated CIGS thin films, affecting the growth (or nucleation) of the CdS layer. The surface coverage of the CdS layer deposited on the ammonia-rinsed KF-treated CIGS thin film was inferior to that of water-rinsed samples, resulting in poorer cell performance due to increased interface recombination.

  16. Process of neovascularization compared with pain intensity in tendinopathy of the long head of the biceps brachii tendon associated with concomitant shoulder disorders, after arthroscopic treatment.

    Science.gov (United States)

    Zabrzyński, Jan; Paczesny, Łukasz; Łapaj, Łukasz; Grzanka, Dariusz; Szukalski, Jacenty

    2017-10-24

    Tendinopathy of the long head of the biceps brachii tendon is one of the most common painful conditions of the anterior part of the shoulder and often coexists with rotator cuff tears. The multifactorial etiopathology of tendinopathy is poorly understood; however, several studies have indicated that it is seen predominantly in areas with decreased vascularity of the tissue; the pathology is also characterized by expansive and abundant neovascular in-growth. The aim of the study was to investigate the relationship between the neovascularization of the proximal part of the long head of the biceps brachii tendon and pain along the bicipital groove. Tissue material was obtained from 28 patients who underwent a shoulder arthroscopy and experienced pain along the bicipital groove measured using the VAS score. CD31 and CD34 molecules were visualized by an immunohistochemical method to assess biceps tendon neovascularization and quantify it based on a Bonar scoring system. Although all patients reported pain prior to arthroscopy (mean VAS score was 7.5), microscopic examination did not reveal neovascularization in all cases. Immunohistochemical staining for CD31 and CD34 allowed for very precise visualization and quantification of neovascularization; however, there was also no correlation between vessel in-growth scores and pain. The obtained data suggest that the neovascularization process in tendinopathy is not directly related to pain; however, further studies are needed to explain its significance in tendinopathy of the long head of the biceps brachii tendon.

  17. Immediate function on the day of surgery compared with a delayed implant loading process in the mandible: a randomized clinical trial over 5 years.

    Science.gov (United States)

    Jokstad, Asbjorn; Alkumru, Hassan

    2014-12-01

    To appraise the feasibility of loading four implants with a pre-existing denture converted to a fixed dental prosthesis (FDP) on the day of implant surgery compared with waiting for 3- to 4-month healing. Patients with an edentulous, fully healed mandible were recruited in a faculty clinic to partake in a blinded two-arm parallel randomized controlled trial (RCT). The participants received four parallel intraforamina mandibular implants with a moderately rough titanium surface (Brånemark System Mk III or Mk IV TiUnite; Nobel Biocare AB, Göteborg, Sweden). The implants were loaded on the same day by converting the participants' pre-existing denture in the experimental group. The implants were placed using a one-stage surgery procedure, and the participants' pre-existing denture were soft-relined in the control group. For both groups, the permanent 10- to 12-unit FDP consisting of a type-3 cast precious alloy veneered with acrylic and artificial teeth was placed 3-4 months after implant surgery. All participants have been recalled annually for 5 years for appraisal of bone loss and registration of adverse events. Thirty-five of the original 42 participants (83%) returned for clinical and radiological examinations at the 5-year follow-up recall. No selective dropout or specific reasons for dropout was identified in the two study arms; leaving n = 17 (Intention-to-treat group, ITT) in the experimental group, alternatively n = 13 as per protocol group (PP), and n = 18 participants in the control group (ITT = PP). At study commencement, five of the participants assigned to the experimental group did not receive their planned intervention. In the control group, one implant failed to osseointegrate and another failed due to bone loss after 5 years. The crestal bone level changes over 5 years were identical in the experimental and control groups, that is, 1.2 mm (SD = 0.7). There were no differences between the two study arms with regard to incidence of biological and

  18. Immediate function on the day of surgery compared with a delayed implant loading process in the mandible: a randomized clinical trial over 5 years

    Science.gov (United States)

    Jokstad, Asbjorn; Alkumru, Hassan

    2014-01-01

    Objectives To appraise the feasibility of loading four implants with a pre-existing denture converted to a fixed dental prosthesis (FDP) on the day of implant surgery, compared with waiting for a 3- to 4-month healing period. Methods Patients with an edentulous, fully healed mandible were recruited in a faculty clinic to partake in a blinded two-arm parallel randomized controlled trial (RCT). The participants received four parallel intraforamina mandibular implants with a moderately rough titanium surface (Brånemark System Mk III or Mk IV TiUnite; Nobel Biocare AB, Göteborg, Sweden). In the experimental group, the implants were loaded on the same day by converting the participants' pre-existing dentures. In the control group, the implants were placed using a one-stage surgical procedure and the participants' pre-existing dentures were soft-relined. For both groups, the permanent 10- to 12-unit FDP, consisting of a type-3 cast precious alloy veneered with acrylic and artificial teeth, was placed 3–4 months after implant surgery. All participants have been recalled annually for 5 years for appraisal of bone loss and registration of adverse events. Results Thirty-five of the original 42 participants (83%) returned for clinical and radiological examinations at the 5-year follow-up recall. No selective dropout or specific reasons for dropout were identified in the two study arms, leaving n = 17 in the experimental group (intention-to-treat, ITT), alternatively n = 13 in the per-protocol group (PP), and n = 18 participants in the control group (ITT = PP). At study commencement, five of the participants assigned to the experimental group did not receive their planned intervention. In the control group, one implant failed to osseointegrate and another failed due to bone loss after 5 years. The crestal bone level changes over 5 years were identical in the experimental and control groups, that is, 1.2 mm (SD = 0.7). There were no differences between the two study arms

  19. A comparative study of the disinfection efficacy of H2O2/ferrate and UV/H2O2/ferrate processes on inactivation of Bacillus subtilis spores by response surface methodology for modeling and optimization.

    Science.gov (United States)

    Matin, Atiyeh Rajabi; Yousefzadeh, Samira; Ahmadi, Ehsan; Mahvi, Amirhossein; Alimohammadi, Mahmood; Aslani, Hassan; Nabizadeh, Ramin

    2018-04-03

    Although chlorination can inactivate most microorganisms in water, protozoan parasites such as C. parvum oocysts and Giardia cysts can resist it. Therefore, many studies have been conducted to find novel methods for water disinfection. The present study evaluated the synergistic effect of H2O2 and ferrate followed by UV radiation to inactivate Bacillus subtilis spores as surrogate microorganisms. Response surface methodology (RSM) was employed to optimize the UV/H2O2/ferrate and H2O2/ferrate processes. Using a central composite design (CCD), the effect of three main parameters, namely contact time and hydrogen peroxide and ferrate concentrations, on process performance was examined. The results showed that the combination of UV, H2O2, and ferrate was the most effective disinfection process compared with H2O2 and ferrate alone. Under the optimum condition (9299 mg/l H2O2 and 0.4 mg/l ferrate after 57 min of contact time), the UV/H2O2/ferrate process achieved about 5.2 log reduction of B. subtilis spores, whereas the H2O2/ferrate process inactivated about 4.7 logs. The results of this research therefore demonstrate that the UV/H2O2/ferrate process is promising for spore inactivation and water disinfection. Copyright © 2018 Elsevier Ltd. All rights reserved.
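
    The abstract names the generic RSM workflow: a central composite design over contact time and the two oxidant doses, followed by fitting a second-order model. As a rough illustration of that workflow (not the authors' code), the sketch below builds a rotatable three-factor CCD in coded units and fits a quadratic response surface by ordinary least squares; the factor coding, number of centre runs, and the simulated log-reduction responses are assumptions for illustration, not the study's data.

```python
import itertools
import numpy as np

k = 3                                  # coded factors: contact time, H2O2 dose, ferrate dose
alpha = (2 ** k) ** 0.25               # rotatable axial distance (~1.682 for k = 3)

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))   # 2^k corner runs
axial = np.vstack([sign * alpha * np.eye(k) for sign in (-1.0, 1.0)])  # 2k axial runs
center = np.zeros((6, k))                                              # replicated centre runs
X = np.vstack([factorial, axial, center])

def second_order_design(X):
    """Design matrix for a full quadratic (second-order) response-surface model."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(k)]                                     # linear terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                                # quadratic terms
    return np.column_stack(cols)

# Hypothetical log-reduction responses; in the study these would be measured values.
rng = np.random.default_rng(0)
y = (4.5 + 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2]
     - 0.2 * X[:, 0] ** 2 + rng.normal(0.0, 0.1, len(X)))

beta, *_ = np.linalg.lstsq(second_order_design(X), y, rcond=None)
print("fitted second-order coefficients:", np.round(beta, 3))
```

    The fitted coefficients of such a model are what an RSM optimization would then search over to locate the conditions (here, the combination of time and oxidant doses) that maximize the predicted log reduction.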

  20. Comparative study on the organic removal effect in liquid rad waste between the two different type of UV lamps for UV photo-oxidation degradation process

    International Nuclear Information System (INIS)

    Park, Se Moon; Kim, Jong Bin; Park, Eun Jung; Lee, Myung Chan

    1999-01-01

    Fundamental experiments on organic removal rates were carried out for two different types of UV lamps. Organic removal rates from liquid laundry waste were compared for UV lamps radiating at a 253.7 nm wavelength and at 180-400 nm. The TOC removal rate at a flow rate of 100 l/h was measured from the radwaste using the two UV lamps with hydrogen peroxide as an oxidizing additive. Both lamps showed that the organic removal rate increased with increasing hydrogen peroxide volume but decreased above a certain volume. The effect of pH was also studied. The TOC removal rate was optimal at pH 9.5 for the 253.7 nm lamp and at pH 10 for the 180-400 nm lamp, although a higher pH value did not always correspond to a higher TOC removal rate. A TOC removal rate of more than 85% for 1 ton/h waste treatment can be expected with 28.6 kW for the 253.7 nm UV lamp and 30 kW for the 180-400 nm lamp. (author)

  1. Inducible Protective Processes in Animal Systems XIII: Comparative Analysis of Induction of Adaptive Response by EMS and MMS in Ehrlich Ascites Carcinoma Cells.

    Science.gov (United States)

    Mahadimane, Periyapatna Vishwaprakash; Vasudev, Venkateshaiah

    2014-01-01

    In order to investigate the presence of an adaptive response in cancerous cells, two monofunctional alkylating agents, namely, ethyl methanesulfonate (EMS) and methyl methanesulfonate (MMS), were employed to treat Ehrlich ascites carcinoma (EAC) cells in vivo. Conditioning doses of 80 mg/kg body weight of EMS or 50 mg/kg body weight of MMS and challenging doses of 240 mg/kg body weight of EMS or 150 mg/kg body weight of MMS were selected by pilot toxicity studies. Conditioned EAC cells challenged after an 8 h time lag showed a significant reduction in chromosomal aberrations compared to the challenging dose of the respective agents alone. As has been demonstrated in earlier studies with normal organisms, an adaptive response to methylating and ethylating agents is present even in cancerous (EAC) cells. Furthermore, it is also interesting to note that in the present study the methylating agent, MMS, was a stronger inducer of the adaptive response than the ethylating agent, EMS.

  2. Inducible Protective Processes in Animal Systems XIII: Comparative Analysis of Induction of Adaptive Response by EMS and MMS in Ehrlich Ascites Carcinoma Cells

    Directory of Open Access Journals (Sweden)

    Periyapatna Vishwaprakash Mahadimane

    2014-01-01

    In order to investigate the presence of an adaptive response in cancerous cells, two monofunctional alkylating agents, namely, ethyl methanesulfonate (EMS) and methyl methanesulfonate (MMS), were employed to treat Ehrlich ascites carcinoma (EAC) cells in vivo. Conditioning doses of 80 mg/kg body weight of EMS or 50 mg/kg body weight of MMS and challenging doses of 240 mg/kg body weight of EMS or 150 mg/kg body weight of MMS were selected by pilot toxicity studies. Conditioned EAC cells challenged after an 8 h time lag showed a significant reduction in chromosomal aberrations compared to the challenging dose of the respective agents alone. As has been demonstrated in earlier studies with normal organisms, an adaptive response to methylating and ethylating agents is present even in cancerous (EAC) cells. Furthermore, it is also interesting to note that in the present study the methylating agent, MMS, was a stronger inducer of the adaptive response than the ethylating agent, EMS.

  3. Evaluation of Federal Energy Savings Performance Contracting -- Methodology for Comparing Processes and Costs of ESPC and Appropriations-Funded Energy Projects

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, P.J.

    2002-10-08

    lower interest rates than the private sector, but appropriations for energy projects are scarce. What are the costs associated with requesting funding and waiting for appropriations? And how is the value of an energy project affected if savings that are not guaranteed do not last? The objective of this study was to develop and demonstrate methods to help federal energy managers take some of the guesswork out of obtaining best value from spending on building retrofit energy improvements. We developed a method for comparing all-inclusive prices of energy conservation measures (ECMs) implemented using appropriated funds and through ESPCs that illustrates how agencies can use their own appropriations-funded project experience to ensure fair ESPC pricing. The second method documented in this report is for comparing life-cycle costs. This method illustrates how agencies can use their experience, and their judgment concerning their prospects for appropriations, to decide between financing and waiting.
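
    The report's second method compares life-cycle costs of implementing an energy conservation measure now through an ESPC against waiting for appropriated funds. The sketch below illustrates that kind of present-value comparison in minimal form; all dollar figures, rates, contract terms, and delays are invented assumptions for illustration and are not values from the report.

```python
# Minimal sketch of an ESPC-versus-appropriations life-cycle cost comparison.
# Every number below is a placeholder assumption, not data from the evaluation.
def present_value(cashflows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) to present value."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

years = 20
annual_savings = 100_000          # assumed energy savings once the ECM is installed
espc_annual_payment = 80_000      # assumed contractor payment over a 10-year term
appropriation_delay = 3           # assumed years spent waiting for funding
capital_cost = 700_000            # assumed up-front cost if appropriations are used
rate = 0.04                       # assumed discount rate

# ESPC: savings start immediately, contractor payments run for the contract term.
espc = [annual_savings - (espc_annual_payment if t < 10 else 0) for t in range(years)]

# Appropriations: nothing happens during the delay, then pay capital and keep savings.
approp = [0.0] * appropriation_delay
approp += [annual_savings - (capital_cost if t == 0 else 0)
           for t in range(years - appropriation_delay)]

print("ESPC net present value:          ", round(present_value(espc, rate)))
print("Appropriations net present value:", round(present_value(approp, rate)))
```

    Changing the assumed delay and discount rate shows the trade-off the report describes: appropriated funds are cheaper per dollar borrowed, but savings forgone while waiting can erode that advantage.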

  4. Comparative Evaluation of Tensile Bond Strength between Silicon Soft Liners and Processed Denture Base Resin Conditioned by Three Modes of Surface Treatment: An Invitro Study.

    Science.gov (United States)

    Surapaneni, Hemchand; Ariga, Padma; Haribabu, R; Ravi Shankar, Y; Kumar, V H C; Attili, Sirisha

    2013-09-01

    Soft denture liners act as a cushion for the denture-bearing mucosa through even distribution of functional load, avoiding local stress concentrations and improving retention of dentures, thereby providing comfort to the patient. The objective of the present study was to compare and evaluate the tensile bond strengths of silicone-based soft lining materials (Ufi Gel P and GC Reline soft) with different surface pre-treatments of heat-cure PMMA denture base acrylic resin. Stainless steel dies measuring 40 mm in length, 10 mm in width, and 10 mm in height (40 × 10 × 10) were machined to prepare standardized polymethyl methacrylate resin blocks. Stainless steel dies measuring 3 mm thick, 10 mm long, and 10 mm wide were prepared as spacers for the resilient liner to ensure uniformity of the soft liner being tested. Two addition-silicone-based soft lining materials (room temperature polymerised soft lining materials, RTPSLM: Ufi Gel P and GC Reline soft) were selected. Ufi Gel P (VOCO, Germany) and GC Reline soft (GC America) are resilient, chairside vinyl polysiloxane denture reliners from two different manufacturers. A total of 80 test samples were prepared, of which 40 specimens were prepared for Group A (Ufi Gel P) and 40 specimens for Group B (GC Reline soft). Based on the pre-treatment of the acrylic resin specimens, each group was subdivided into four subgroups of 10 samples each: subgroup I, without any surface treatment; subgroup II, sandblasted; subgroup III, treated with methyl methacrylate monomer; subgroup IV, treated with the chemical etchant acetone. The results were statistically analysed by the Kruskal-Wallis test, Mann-Whitney U test, and independent t-test. The specimens treated with MMA monomer wetting showed significantly higher bond strength than those obtained by the other surface treatments. The samples belonging to the subgroups of GC Reline soft exhibited higher tensile bond strength than the subgroups of Ufi Gel P. The modes
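
    As a rough illustration of the statistical comparison named in the abstract (a Kruskal-Wallis test across the four surface-treatment subgroups, followed by pairwise Mann-Whitney U tests), the sketch below runs those tests on fabricated placeholder bond-strength values; the group sizes and numbers are assumptions, not data from the study.

```python
# Illustrative nonparametric comparison of bond strengths across surface treatments.
# The values are simulated placeholders, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical tensile bond strengths (MPa) for one material's four subgroups (n = 10 each).
untreated   = rng.normal(1.2, 0.2, 10)
sandblasted = rng.normal(1.4, 0.2, 10)
mma_monomer = rng.normal(1.9, 0.2, 10)
acetone     = rng.normal(1.5, 0.2, 10)

# Overall test across the four surface-treatment subgroups.
h, p_kw = stats.kruskal(untreated, sandblasted, mma_monomer, acetone)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")

# Pairwise follow-up (here MMA monomer vs untreated) with the Mann-Whitney U test.
u, p_mw = stats.mannwhitneyu(mma_monomer, untreated, alternative="two-sided")
print(f"Mann-Whitney U (MMA vs untreated): U = {u:.1f}, p = {p_mw:.4f}")
```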

  5. Hospital Compare

    Data.gov (United States)

    U.S. Department of Health & Human Services — Hospital Compare has information about the quality of care at over 4,000 Medicare-certified hospitals across the country. You can use Hospital Compare to find...

  6. Model Identification of Integrated ARMA Processes

    Science.gov (United States)

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
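
    The SCAN and ESACF identification procedures discussed in the article are SAS-specific and are not reproduced here; as a rough stand-in for automated ARIMA order identification, the sketch below fits a small grid of ARIMA(p, d, q) models to a simulated integrated series with statsmodels and keeps the order with the lowest AIC. The simulated series and the search ranges are assumptions for illustration only.

```python
# AIC grid search as a simple stand-in for automated ARIMA order identification.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
# Simulate an integrated process: cumulative sum of MA(1)-type innovations.
e = rng.normal(size=300)
y = np.cumsum(e[1:] + 0.5 * e[:-1])

best = None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        fit = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue  # skip orders that fail to estimate
    if best is None or fit.aic < best[0]:
        best = (fit.aic, (p, d, q))

print("order selected by AIC:", best[1], "AIC =", round(best[0], 1))
```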

  7. Monitoring and analysis of the change process in curriculum mapping compared to the National Competency-based Learning Objective Catalogue for Undergraduate Medical Education (NKLM) at four medical faculties. Part II: Key factors for motivating the faculty during the process.

    Science.gov (United States)

    Lammerding-Koeppel, Maria; Giesler, Marianne; Gornostayeva, Maryna; Narciss, Elisabeth; Wosnik, Annette; Zipfel, Stephan; Griewatz, Jan; Fritze, Olaf

    2017-01-01

    Objective: After adoption of the National Competency-based Learning Objectives Catalogue in Medicine [Nationaler Kompetenzbasierter Lernzielkatalog Medizin, NKLM], the German medical faculties are asked to test the learning objectives recorded in it and evaluate them critically. The faculties require curricular transparency for competence-oriented transition of present curricula, which is best achieved by systematic curriculum mapping in comparison to the NKLM. Based on this inventory, curricula can then be developed further in a targeted manner. Considerable resistance has to be expected when a complex existing curriculum is to be mapped for the first time and a faculty must be convinced of its usefulness. Headed by Tübingen, the faculties of Freiburg, Heidelberg, Mannheim and Tübingen rose to this task. This two-part article analyses and summarises how NKLM curriculum mapping succeeded at the participating locations despite resistance. Part I presented the resources and structures that supported implementation. Part II focuses on factors that motivate individuals and groups of persons to cooperate in the faculties. Method: Both parts used the same method. In short, the joint project was systematically planned following the steps of project and change management and adjusted in the course of the process. From the beginning of the project, a Grounded-Theory approach was used to systematically collect detailed information on measures and developments at the faculties, to continually analyse them and to draw final conclusions. Results: At all sites, faculties, teachers, students and administrative staff were not per se willing to deal with the NKLM and its contents, and even less to map their present curricula. Analysis of the development reflected a number of factors that had either a negative effect on the willingness to cooperate when missing, or a positive one when present. These were: clear top-down and bottom-up management; continuous information of the faculty; user

  8. Monitoring and analysis of the change process in curriculum mapping compared to the National Competency-based Learning Objective Catalogue for Undergraduate Medical Education (NKLM) at four medical faculties. Part II: Key factors for motivating the faculty during the process

    Directory of Open Access Journals (Sweden)

    Lammerding-Koeppel, Maria

    2017-02-01

    Objective: After adoption of the National Competency-based Learning Objectives Catalogue in Medicine [Nationaler Kompetenzbasierter Lernzielkatalog Medizin, NKLM], the German medical faculties are asked to test the learning objectives recorded in it and evaluate them critically. The faculties require curricular transparency for competence-oriented transition of present curricula, which is best achieved by systematic curriculum mapping in comparison to the NKLM. Based on this inventory, curricula can then be developed further in a targeted manner. Considerable resistance has to be expected when a complex existing curriculum is to be mapped for the first time and a faculty must be convinced of its usefulness. Headed by Tübingen, the faculties of Freiburg, Heidelberg, Mannheim and Tübingen rose to this task. This two-part article analyses and summarises how NKLM curriculum mapping succeeded at the participating locations despite resistance. Part I presented the resources and structures that supported implementation. Part II focuses on factors that motivate individuals and groups of persons to cooperate in the faculties. Method: Both parts used the same method. In short, the joint project was systematically planned following the steps of project and change management and adjusted in the course of the process. From the beginning of the project, a Grounded-Theory approach was used to systematically collect detailed information on measures and developments at the faculties, to continually analyse them and to draw final conclusions. Results: At all sites, faculties, teachers, students and administrative staff were not per se willing to deal with the NKLM and its contents, and even less to map their present curricula. Analysis of the development reflected a number of factors that had either a negative effect on the willingness to cooperate when missing, or a positive one when present. These were: clear top-down and bottom-up management; continuous information of the faculty

  9. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
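
    One of the approaches compared in the paper is model selection over sets of equality-constrained hypotheses about group means using information criteria. The sketch below shows that idea in minimal form for three groups, scoring a few equality-constrained hypotheses by AIC under a normal model; the simulated data, the candidate hypothesis set, and the use of AIC (rather than the paper's full range of criteria or its Bayesian order-restricted methods) are illustrative assumptions.

```python
# Information-criterion selection among equality-constrained hypotheses on group means.
# Data and hypothesis set are simulated placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(7)
groups = [rng.normal(mu, 1.0, 30) for mu in (0.0, 0.3, 0.6)]
y = np.concatenate(groups)
labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])

def aic_for(partition):
    """AIC of a normal model whose mean is constant within each block of groups."""
    resid = y.copy()
    for block in partition:
        mask = np.isin(labels, block)
        resid[mask] -= y[mask].mean()
    sigma2 = np.mean(resid ** 2)                             # MLE of the common variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    n_params = len(partition) + 1                            # one mean per block plus variance
    return 2 * n_params - 2 * loglik

hypotheses = {
    "m1 = m2 = m3":      [[0, 1, 2]],
    "m1 = m2, m3 free":  [[0, 1], [2]],
    "all means free":    [[0], [1], [2]],
}
for name, partition in hypotheses.items():
    print(f"{name:20s} AIC = {aic_for(partition):.1f}")
```

    The hypothesis with the lowest AIC is retained; the confirmatory, order-restricted variants studied in the paper replace this exploratory hypothesis set with a small number of theory-based constraints.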

  10. Lower glucose-dependent insulinotropic polypeptide (GIP) response but similar glucagon-like peptide 1 (GLP-1), glycaemic, and insulinaemic response to ancient wheat compared to modern wheat depends on processing

    DEFF Research Database (Denmark)

    Bakhøj, S; Flint, A.; Holst, Jens Juul

    2003-01-01

    OBJECTIVE: To test the hypothesis that bread made from the ancient wheat Einkorn (Triticum monococcum) reduces the insulin and glucose responses through modulation of the gastrointestinal responses of glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide 1 (GLP-1) compared...... whole grain bread elicit a reduced gastrointestinal response of GIP compared to conventional yeast bread. No differences were found in the glycaemic, insulinaemic and GLP-1 responses. Processing of starchy foods such as wheat may be a powerful tool to modify the postprandial GIP response....